
Active Continuing Grant

CAREER: Robustness Verification and Certified Defense for Machine Learning Models

$5.16M

Funder National Science Foundation (US)
Recipient Organization University of California-Los Angeles
Country United States
Start Date Mar 15, 2021
End Date Feb 28, 2026
Duration 1,811 days
Number of Grantees 1
Roles Principal Investigator
Data Source National Science Foundation (US)
Grant ID 2048280
Grant Description

Machine learning models perform very well on many important tasks; however, due to their black-box nature, they are not guaranteed to behave safely in all situations. This becomes a critical challenge when deploying models in real-world systems. For example, an aircraft control system has to perform a certain action when it detects a nearby intruder, and a self-driving car has to recognize stop signs even under small perturbations. This project will develop a framework to verify and improve the safety of machine learning models. The proposed verification methods will be efficient and will support a wide range of model structures.

Further, the framework can be used to train models that are guaranteed to satisfy given safety specifications. These capabilities will enable safe models for a much broader range of applications beyond small neural networks. The project supports education and diversity through the recruitment of a diverse research team.

The research results will be integrated into textbooks, courses, and outreach activities on AI safety.

The goal of this project is to enable machine learning verification for more general models and to make it readily usable by practitioners in application domains. To achieve this goal, we will build an automatic verification algorithm based on a convex relaxation framework. In this framework, model verification is posed as an optimization problem, and (convex or linear) relaxations are used to solve it efficiently.

By generalizing this framework to general computational graphs, we will design an automatic verification algorithm that runs for any model a user specifies, without the need to re-derive a verification procedure for each new model. In addition to supporting a wider range of models, the project will enable verification against more complex semantic perturbations.
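To make the relaxation idea concrete, the sketch below implements interval bound propagation, one of the simplest relaxations in this family: input intervals are pushed layer by layer through a small network, yielding certified output bounds. The two-layer network weights and perturbation radius are made-up toy values for illustration, not anything from the funded project.

```python
import numpy as np

def interval_bounds(lower, upper, weights, biases):
    """Propagate elementwise input intervals through a ReLU MLP and
    return sound lower/upper bounds on every output."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Split W into positive and negative parts so each output bound
        # pairs with the input bound that makes it extreme.
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        lower, upper = new_lower, new_upper
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lower, upper = np.maximum(lower, 0), np.maximum(upper, 0)
    return lower, upper

# Toy 2-2-1 network; the region [x - eps, x + eps] is certified when the
# output interval lies entirely on one side of the decision threshold.
weights = [np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([[1.0, 1.0]])]
biases = [np.zeros(2), np.zeros(1)]
x, eps = np.array([1.0, 0.0]), 0.1
lo, hi = interval_bounds(x - eps, x + eps, weights, biases)
```

Tighter relaxations in the same family replace each ReLU's interval with a linear upper/lower envelope, which is what makes the optimization view in the paragraph above pay off.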

The investigator will also study verification of discrete models (e.g., k-nearest neighbors or tree ensembles) within an optimization-based framework. Finally, the proposed research will enable training models with verifiable properties that can be applied to many real-world applications through interdisciplinary collaborations.
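As a toy illustration of why discrete models call for different machinery than gradient-based relaxations, the sketch below certifies a one-dimensional decision-stump ensemble by enumerating the finitely many constant regions its thresholds induce inside the perturbation interval. The stumps and perturbation radius are hypothetical; real tree-ensemble verification handles multi-dimensional trees, which this one-feature example does not.

```python
def certify_stump_ensemble(stumps, x, eps):
    """Check whether the sign of a 1-D stump ensemble's score is the same
    for every input in [x - eps, x + eps].

    Each stump is (threshold t, left value vl, right value vr): it
    contributes vl when the input is <= t and vr otherwise.
    """
    # The score is piecewise constant, so it suffices to test the interval
    # endpoints plus one point on each side of every threshold that falls
    # inside the interval (1e-9 is an arbitrary tie-breaking margin).
    candidates = [x - eps, x + eps]
    for t, _, _ in stumps:
        if x - eps <= t <= x + eps:
            candidates += [t - 1e-9, t + 1e-9]

    def score(z):
        return sum(vl if z <= t else vr for t, vl, vr in stumps)

    signs = {score(z) > 0 for z in candidates}
    return len(signs) == 1  # True => prediction sign cannot flip

stumps = [(0.0, -1.0, 2.0), (0.5, 1.5, -0.5)]
robust = certify_stump_ensemble(stumps, x=0.25, eps=0.1)  # no threshold in (0.15, 0.35)
```

Because the ensemble's output is piecewise constant, checking one point per piece is exact in one dimension; the project instead targets an optimization-based formulation that scales beyond this toy setting.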

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

University of California-Los Angeles
