| Field | Value |
|---|---|
| Funder | National Science Foundation (US) |
| Recipient Organization | University of Florida |
| Country | United States |
| Start Date | Jan 01, 2021 |
| End Date | Dec 31, 2025 |
| Duration | 1,825 days |
| Number of Grantees | 2 |
| Roles | Principal Investigator; Co-Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2055123 |
Surveillanceware (i.e., stalkerware, creepware, spyware, etc.) is a serious and increasingly common cybersecurity threat. In a typical scenario, a malicious individual installs software on a victim's mobile device that tracks the device's location, enabling remote monitoring of its activity. This is not a hypothetical threat: there are reports of intimate partner abusers installing spyware on their victims' smartphones and of journalists, political dissidents, and human rights activists being similarly targeted by repressive regimes.
Traditional defenses such as antivirus software cannot fully counter this threat. While antivirus software may flag and remove surveillanceware, some victims cannot uninstall it because of coercion, such as threats of physical violence. This project seeks to systematically study surveillanceware and develop new artificial intelligence (AI)-based defenses against it.
In doing so, the project helps broaden cybersecurity research to include the concerns of vulnerable individuals and groups (e.g., survivors of intimate partner violence) whose cybersecurity needs have often historically been neglected. To pursue the project, the investigators plan to assemble a diverse team and collaborate with local organizations (e.g., domestic abuse shelters) and international partners (e.g., the Coalition Against Stalkerware).
The focus of this research effort is the design of methods and tools to mitigate the threat of surveillanceware, in particular a deception-based system that uses machine learning techniques and system security mechanisms to produce fake but plausible ("synthetic") data that is fed to surveillanceware's monitoring apparatus in place of the real data. The research is organized into three thrusts, starting with a comprehensive analysis of surveillanceware and its capabilities for the purpose of adversarial modeling.
The second thrust builds on this analysis to develop techniques for creating fake but plausible data to serve as decoys. This requires machine learning techniques, specifically deep generative models. The final thrust involves designing system mechanisms that can be combined with the machinery developed in the previous thrust to ensure the integrity of the defense.
In so doing, the project will advance the understanding of formal adversarial models for surveillanceware; techniques for synthesizing plausible data and for deniable data embedding; and system-level mechanisms that integrate with machine learning techniques to thwart surveillance.
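The award text does not include an implementation. As a loose illustration of what "fake but plausible" decoy data might look like, the sketch below synthesizes a smooth, movement-like GPS trace with a correlated random walk; this is a simple stand-in for the deep generative models the project proposes, and the function name, parameters, and coordinate conversion are illustrative assumptions, not part of the project.

```python
import math
import random

def synthesize_trajectory(start_lat, start_lon, steps=50, speed_m=30.0, seed=None):
    """Generate a plausible-looking decoy GPS trace via a correlated random walk.

    Consecutive points keep a smoothly varying heading, so the trace
    resembles ordinary movement rather than random noise. A real deception
    system would instead sample from a learned generative model.
    """
    rng = random.Random(seed)
    heading = rng.uniform(0, 2 * math.pi)
    lat, lon = start_lat, start_lon
    trace = [(lat, lon)]
    for _ in range(steps):
        heading += rng.gauss(0, 0.3)      # small heading drift -> smooth path
        dy = speed_m * math.cos(heading)  # metres moved north
        dx = speed_m * math.sin(heading)  # metres moved east
        lat += dy / 111_320               # metres -> degrees latitude
        lon += dx / (111_320 * math.cos(math.radians(lat)))
        trace.append((lat, lon))
    return trace
```

A trace produced this way could be handed to the surveillanceware's monitoring channel in place of the device's real location stream, which is the core idea of the deception-based defense described above.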
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.