| Field | Value |
|---|---|
| Funder | National Science Foundation (US) |
| Recipient Organization | Northeastern University |
| Country | United States |
| Start Date | Oct 01, 2024 |
| End Date | Sep 30, 2027 |
| Duration | 1,094 days |
| Number of Grantees | 2 |
| Roles | Principal Investigator; Co-Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2331081 |
Cyber attacks have become increasingly sophisticated, coordinated, and widespread. Automated machine learning (ML) techniques to proactively detect and prevent malicious activities are becoming popular, but these defenses are themselves susceptible to stronger attacks. Adversaries can compromise ML-based systems both at training time (poisoning attacks) and at deployment time (evasion attacks).
This project's novelty lies in creating methods and tools for investigating real-world poisoning attacks and in designing feasible mitigation techniques against them. Its broader significance is to put forward recommendations and create software tools that help practitioners use ML to prevent malicious activities on cyber networks.
The project team has expertise in machine learning and cybersecurity, and plans a set of education and outreach activities that include public release of course materials on ML security, mentoring of undergraduate and graduate students in research projects, and collaboration with industry partners to transfer the developed technologies to practice.
This project comprises three interconnected thrusts addressing different aspects of understanding ML poisoning attacks and building defenses against malicious activities in cyber networks. The first two thrusts seek to understand poisoning attacks against supervised, semi-supervised, and unsupervised learning. The team will adopt techniques such as explanation-based ML methods and generative models to identify stealthy poisoning attacks.
The third thrust introduces novel poisoning-resilient machine learning defenses based on data sanitization and on training ensemble models to achieve certified robustness of ML systems against poisoning attacks. These research thrusts constitute the foundation for creating and transitioning resilient AI/ML models to industry/DoD partners, enabling protection of real-world cyber networks against advanced attacks.
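The ensemble idea behind certified robustness to poisoning is exemplified by partition-based aggregation defenses from the literature (e.g., Deep Partition Aggregation); the abstract does not specify the project's actual method, so the sketch below is a hypothetical, minimal illustration. All function names and the toy nearest-centroid base learner are assumptions; the key property is that deterministic hash partitioning confines each training point to one partition, so the majority-vote margin bounds how many poisoned points the prediction can tolerate.

```python
import hashlib
from collections import Counter, defaultdict

def partition(data, k):
    # Deterministic hash partitioning: each (x, y) training point lands in
    # exactly one of k partitions, so one poisoned point touches one partition.
    parts = defaultdict(list)
    for x, y in data:
        h = int(hashlib.sha256(repr((x, y)).encode()).hexdigest(), 16)
        parts[h % k].append((x, y))
    return [parts[i] for i in range(k)]

def train_centroid(part):
    # Toy base learner: per-class mean of a 1-D feature.
    sums, counts = defaultdict(float), Counter()
    for x, y in part:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in counts}

def predict_one(model, x):
    # Nearest class centroid.
    return min(model, key=lambda y: abs(model[y] - x))

def ensemble_predict(data, x, k=5):
    # Majority vote over per-partition models, plus a DPA-style certificate:
    # modifying one training point can change at most two votes, so the
    # prediction is stable against floor((top - runner_up) / 2) modifications.
    votes = Counter()
    for part in partition(data, k):
        if part:
            votes[predict_one(train_centroid(part), x)] += 1
    (top, n1), *rest = votes.most_common()
    n2 = rest[0][1] if rest else 0
    return top, (n1 - n2) // 2
```

The certificate comes for free from the aggregation structure rather than from analyzing the base learner, which is what makes this style of defense attractive against stealthy poisoning.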
The team will work with its industry partners to help them adopt the techniques created in the project for DoD applications.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.