Completed · Standard Grant · National Science Foundation (US)

I-Corps: Robust and Explainable Artificial Intelligence (AI) Tools for Detecting Deepfakes

$500K USD

Funder National Science Foundation (US)
Recipient Organization Purdue University
Country United States
Start Date Jan 15, 2025
End Date Dec 31, 2025
Duration 350 days
Number of Grantees 1
Roles Principal Investigator
Data Source National Science Foundation (US)
Grant ID 2448500
Grant Description

The broader impact of this I-Corps project is the development of robust and explainable artificial intelligence (AI) tools for detecting deepfakes. Deepfakes, AI-generated images, audio, and videos mimicking real people, have emerged across various sectors and pose significant threats. The deepfake AI market is expected to grow substantially, from a value of $564 million in 2024 to a projected $5,134 million by 2030.

This technology is designed to meet the needs of industries highly vulnerable to deepfake threats. Journalists and news organizations require reliable tools to verify content authenticity, preserving credibility and public trust. In media and entertainment sectors, this technology protects intellectual property and ensures content integrity.

The finance sector benefits from enhanced fraud detection, while cybersecurity firms can use the developed tools to strengthen defenses against sophisticated cyber-attacks. By providing robust and explainable AI tools, this project enhances detection accuracy, builds public trust, and holds significant commercial potential.

This I-Corps project uses experiential learning, coupled with a first-hand investigation of the industry ecosystem, to assess the translation potential of the technology. The solution develops generalizable deepfake detectors by extracting forgery features common to many types of deepfakes. These features are used to train a highly accurate deepfake detector, enhanced with robustness techniques to improve its reliability.
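The general idea of extracting shared forgery cues and thresholding them can be illustrated with a deliberately tiny sketch. Everything here is hypothetical: the feature (mean neighbor-pixel difference), the threshold rule, and all function names are illustrative stand-ins, not the project's actual detector.

```python
# Hypothetical sketch of a generalizable deepfake-detection pipeline:
# extract a simple "common forgery" statistic from an image (here, a
# flat list of pixel values) and score it against a learned threshold.

def extract_forgery_features(pixels):
    """Toy feature: mean absolute difference between neighboring pixels.
    Real detectors extract artifacts shared across many forgery methods."""
    diffs = [abs(a - b) for a, b in zip(pixels, pixels[1:])]
    return sum(diffs) / len(diffs)

def fit_threshold(real_feats, fake_feats):
    """Place the decision threshold midway between the class means."""
    mu_real = sum(real_feats) / len(real_feats)
    mu_fake = sum(fake_feats) / len(fake_feats)
    return (mu_real + mu_fake) / 2, mu_fake > mu_real

def detect(pixels, threshold, fake_is_high):
    """Return True when the sample is flagged as fake."""
    feat = extract_forgery_features(pixels)
    return (feat > threshold) == fake_is_high

# Tiny demo: "real" samples are smooth, "fake" ones have abrupt jumps.
real = [[10, 11, 12, 13], [20, 20, 21, 22]]
fake = [[10, 90, 5, 80], [0, 70, 10, 60]]
thr, hi = fit_threshold([extract_forgery_features(p) for p in real],
                        [extract_forgery_features(p) for p in fake])
print(detect([12, 88, 7, 75], thr, hi))  # flags the jumpy sample: True
```

The point of the sketch is the structure, not the feature: one feature extractor shared across forgery types, one classifier trained on its output.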

This robust detector will be integrated with physical and physiological explainability methods to provide clear explanations for detection results. To achieve this, the project builds a comprehensive library of high-quality, reusable physical and physiological models, making these methods easily accessible through readable, usable, and maintainable code.

To simplify the end user's understanding of detection results, large language models are integrated into the system to provide concise textual explanations and reports.
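To show what a concise textual report might look like, here is a hypothetical illustration. A real system would pass the detector's outputs to a large language model; in this sketch a plain string template stands in for the LLM, and the field names and cues are invented for the example.

```python
# Hypothetical illustration of turning raw detector outputs into a
# concise textual report for the end user. A template stands in for
# the large language model a real system would use.

def build_report(result):
    """result: dict with 'label', 'confidence', and triggered 'cues'."""
    verdict = "likely FAKE" if result["label"] == "fake" else "likely REAL"
    cues = "; ".join(result["cues"]) or "no specific cues"
    return (f"Verdict: {verdict} ({result['confidence']:.0%} confidence). "
            f"Physiological cues: {cues}.")

report = build_report({
    "label": "fake",
    "confidence": 0.93,
    "cues": ["irregular eye-blink rate", "inconsistent head-pose lighting"],
})
print(report)
```

The structured `result` dict mirrors the idea in the abstract: the detector supplies the verdict and the physiological cues, and the language layer only renders them into readable prose.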

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

Purdue University
