| Field | Value |
|---|---|
| Funder | National Science Foundation (US) |
| Recipient Organization | West Virginia University Research Corporation |
| Country | United States |
| Start Date | Feb 01, 2022 |
| End Date | Jan 31, 2024 |
| Duration | 729 days |
| Number of Grantees | 1 |
| Roles | Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2132060 |
With the rapid growth of interest in the use of artificial intelligence (AI) in autonomy, it is critical to revolutionize safety validation approaches that reason about the safety behaviors of complex AI-enabled autonomous systems. The goal of this project is to build trust in AI-enabled complex systems for safety-critical applications. This trust can be established through offline or online validation processes.
The drawback of online verification methods is that they require some form of real-world deployment, which can be unsafe and risky. It is therefore of great interest to reveal possible failure scenarios in a simulated environment before deploying an AI-based decision-making system in the real world. Since the space of failure events and corner cases in complex systems is extensive, the validation process can be very time-consuming, as a large number of experiments is required for safety validation.
This project aims to develop approaches that capture information from multiple sources to significantly speed up the validation process and reduce the overall computational cost. Through collaboration with the Stanford Center for AI Safety, the PI and a graduate trainee will gain invaluable training opportunities, helping to build a strong STEM research and education partnership between West Virginia University (WVU) and Stanford.
The overarching objective of this project is to develop algorithms for safety validation of autonomous systems that reason about the safety behaviors of autonomous systems from multiple sources of information. The central philosophy behind this work is that a cyber-physical system (CPS) or a robot can query data from multiple sources, including different levels of granularity in simulation, offline or online real-world data, and/or human expert inputs.
Currently, there is no rigorous mechanism for reasoning about the safety behaviors of a learning-enabled decision-making system that optimally considers data from different sources of information. This research will combine the Stanford Center for AI Safety's expertise in decision making under uncertainty and formal methods with the PI's expertise in machine learning and data-driven optimization to arrive at safety validation frameworks that leverage data from multiple sources.
The PI and his students will develop tools based on data-driven optimization and reinforcement learning to identify failure events from multiple sources of information. The proposed algorithms will be applied to a suite of simulated environments for autonomous driving. This research will significantly extend the tools and open-source software available for the safety validation of autonomous systems.
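To make the idea of searching a simulator for failure events concrete, the following is a minimal illustrative sketch, not the project's actual algorithms. It falsifies a toy car-following controller by randomly sampling disturbance parameters (lead-car braking rate and ego reaction delay, both hypothetical quantities chosen here for illustration) and tracking the worst-case outcome:

```python
import random

def simulate(brake_decel, reaction_delay):
    """Toy car-following scenario: the lead car brakes at `brake_decel` (m/s^2);
    the ego car starts braking at a fixed 6 m/s^2 after `reaction_delay` seconds.
    Returns the final gap in meters, or 0.0 if a collision occurred."""
    gap, ego_v, lead_v, dt = 30.0, 20.0, 20.0, 0.1
    for step in range(200):  # simulate 20 seconds
        t = step * dt
        lead_v = max(0.0, lead_v - brake_decel * dt)
        if t >= reaction_delay:
            ego_v = max(0.0, ego_v - 6.0 * dt)
        gap += (lead_v - ego_v) * dt
        if gap <= 0.0:
            return 0.0  # collision: a failure event
    return gap

def falsify(n_samples=2000, seed=0):
    """Random-search falsification: sample disturbances uniformly and
    keep the sample that drives the safety metric (gap) lowest."""
    rng = random.Random(seed)
    worst_params, worst_gap = None, float("inf")
    for _ in range(n_samples):
        params = (rng.uniform(2.0, 9.0), rng.uniform(0.0, 2.0))
        gap = simulate(*params)
        if gap < worst_gap:
            worst_params, worst_gap = params, gap
    return worst_params, worst_gap

if __name__ == "__main__":
    params, gap = falsify()
    print("worst disturbance:", params, "resulting gap:", gap)
```

In this toy setting, random search finds colliding disturbances quickly because the failure region is large; the multi-source, multi-fidelity methods the project proposes are aimed precisely at the realistic case where failures are rare and each simulation is expensive.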
Through collaboration with the Stanford Center for AI Safety, the PI will maintain ties between academia and industry and open new avenues for joint proposal writing, joint journal publications, and student exchange programs between Stanford and WVU. The project will be integrated into an educational plan that involves undergraduate and graduate students in research and enriches the robotics and engineering curricula at WVU.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.