| Field | Value |
|---|---|
| Funder | National Science Foundation (US) |
| Recipient Organization | Purdue University |
| Country | United States |
| Start Date | May 01, 2021 |
| End Date | Dec 31, 2024 |
| Duration | 1,340 days |
| Number of Grantees | 3 |
| Roles | Principal Investigator; Co-Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2114974 |
Artificial intelligence (AI) techniques, especially machine learning (ML), show great promise for improving quality of life. However, recent research has demonstrated that AI techniques can be manipulated, evaded, and misled. While progress has been made to better understand the trustworthiness and security of AI techniques, little has been done to translate this knowledge to education and training.
There is a critical need to foster a qualified cybersecurity workforce that understands the usefulness, limitations, and best practices of AI technologies in the cybersecurity domain. This project will address this important issue by designing and implementing a virtual, proactive, and collaborative learning paradigm that can engage learners with different backgrounds.
The approach will benefit a wide range of learners, especially underrepresented students. It will also help the general public understand the security implications of AI. This project has the potential to transform education at the intersection of cybersecurity and AI/ML; shed light on explainable AI in cybersecurity; and grow a cybersecurity workforce that possesses AI competencies.
Products, including the research findings and curriculum, will be disseminated through a variety of mechanisms, such as workshops, peer-reviewed conferences, and journals.
This project builds research and education capacity through the formation of a multidisciplinary team with expertise in cybersecurity, AI, and statistics. The team will systematically investigate two cohesive research and education goals. First, an immersive learning environment will be developed to motivate students to explore AI/ML development in the context of real-world cybersecurity scenarios by constructing learning models with tangible objects.
The proposed learning environment enables an AI/ML mechanism that will provide personalized explanations of the AI/ML outputs by considering the distinct background knowledge of individual learners. Second, the team will design a proactive education paradigm that encourages students to collaboratively identify new AI/ML-specific threats in the cybersecurity domain and develop innovative and trustworthy AI/ML solutions.
The learning paradigm will ultimately enable effective retention and transfer of multidisciplinary AI-cybersecurity knowledge.
This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored, collaborations between the fields of cybersecurity, artificial intelligence, and education. The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.