Active · Standard Grant · National Science Foundation (US)

EPSCoR Research Fellows: @NASA: Safety-Directed Abstraction, Verification and Correction of Learning-Enabled Cyber-Physical Systems

$3M USD

Funder National Science Foundation (US)
Recipient Organization University of New Mexico
Country United States
Start Date Jan 01, 2025
End Date Dec 31, 2026
Duration 729 days
Number of Grantees 1
Roles Principal Investigator
Data Source National Science Foundation (US)
Grant ID 2429506
Grant Description

With the rapid development of Artificial Intelligence (AI) and Machine Learning (ML) technologies, more and more Cyber-Physical Systems (CPSs) are equipped with AI/ML models serving as low-level regulators or high-level decision makers. However, safety verification of such systems is more challenging than for general dynamical systems due to the complex interactions among their various components.

This project focuses on developing new formal methods and tools that verify the safety of large-scale Learning-Enabled (LE) CPSs by computing size-reduced abstractions tailored to the safety properties. The methods also produce analytic verification results that can be used to diagnose a system's behavior and generate solutions for improving its safety and robustness.

The project will provide a fellowship to an assistant professor and training for a graduate student at the University of New Mexico (UNM). The research work will be conducted in collaboration with researchers at NASA Marshall Space Flight Center. The developed techniques will be used to prove and improve the safety of the AI-controlled systems built by NASA.

The project will also strengthen the collaboration between UNM and NASA and broaden the participation of students and researchers from underrepresented groups.

This project proposes to develop a series of formal methods for abstracting, verifying, and correcting an LE CPS whose components may or may not be explicitly described by formal models. The research has three core thrusts.

(1) Safety-directed model reduction: An approach will be developed to compute size-reduced formal abstractions of the AI/ML components in an LE CPS with respect to its safety specification. The resulting models are expected to be far less intricate than the originals while still preserving the given safety property.

(2) Safety verification via rigorous reachability analysis: We will develop a rigorous reachability analysis framework for verifying the safety of an abstracted LE CPS under uncertainty. We seek to extend existing Taylor model-based arithmetic with more sophisticated simplification methods and more flexible remainder representations, aiming for a better tradeoff between accuracy and efficiency than the state of the art.

(3) Counterexample interpretation and model correction: An approach will be developed for obtaining analytic counterexample interpretations, each expected to cover all counterexamples of a safety verification task along with their causes. We will also investigate two ways, offline and online, of restricting the outputs of system components so that all counterexamples are avoided.

The developed approaches are expected to greatly improve the applicability of formal methods for analyzing and improving large-scale autonomous systems.
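To give a flavor of the rigorous reachability analysis in thrust (2), here is a minimal sketch using plain interval arithmetic in place of Taylor models (Taylor models additionally carry a polynomial part for tighter enclosures). The dynamics, time step, and safety bounds are all illustrative assumptions, not taken from the project.

```python
# Minimal interval-arithmetic sketch of rigorous reachability analysis.
# Every interval computed below is a guaranteed over-approximation of the
# set of reachable states of the toy discretized system.

def iv_add(a, b):
    """Sum of two intervals (lo, hi)."""
    return (a[0] + b[0], a[1] + b[1])

def iv_scale(a, c):
    """Interval scaled by a real constant."""
    lo, hi = c * a[0], c * a[1]
    return (min(lo, hi), max(lo, hi))

def step(x, u, dt=0.1):
    """One step of the toy update x+ = (1 - dt)*x + dt*u over intervals.
    Factoring the update so x occurs only once avoids the interval
    'dependency problem': evaluating x + dt*(-x + u) naively would count
    the uncertainty in x twice and the enclosure would blow up over many
    steps. Taylor-model simplification methods automate this kind of
    tightening for far more complex dynamics.
    """
    return iv_add(iv_scale(x, 1.0 - dt), iv_scale(u, dt))

x = (0.9, 1.1)      # initial state set
u = (-0.5, 0.5)     # bounded, uncertain control input
for _ in range(50):
    x = step(x, u)

# Safety check: the enclosure of all reachable states stays within bounds.
assert -1.0 < x[0] and x[1] < 1.0
```

Because every operation over-approximates, passing the final check proves safety for every trajectory of the toy system, whereas finite simulation alone could not.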
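The "online" style of output restriction in thrust (3) can be sketched as a runtime shield that projects a learned controller's command onto a set of outputs known from verification to be safe in the current state. The controller, toy dynamics, and safe-set computation below are hypothetical stand-ins.

```python
# Sketch of online output restriction: a runtime shield that clamps a
# learned controller's command into a verified-safe interval, so that no
# counterexample trajectory can be realized at execution time.

def learned_controller(x):
    # Stand-in for an AI/ML controller; it may propose unsafe commands.
    return 2.0 * x

def safe_command_bounds(x):
    # Stand-in for a verified invariant: keep |x + u| <= 1 after one step
    # of the toy dynamics x+ = x + u, so safe commands lie in
    # [-1 - x, 1 - x].
    return (-1.0 - x, 1.0 - x)

def shielded_controller(x):
    lo, hi = safe_command_bounds(x)
    u = learned_controller(x)
    return min(max(u, lo), hi)   # project the command onto the safe interval

x = 0.8
for _ in range(10):
    u = shielded_controller(x)
    x = x + u                    # toy dynamics
    assert abs(x) <= 1.0         # invariant maintained by the shield
```

The "offline" alternative would instead retrain or repair the controller so its raw outputs already satisfy the bounds, removing the need for a runtime monitor.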

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

University of New Mexico
