
Status Completed
Grant Type Continuing Grant

CAREER: Taming Networks in the Wild: A Safety-Centric Network Learning Framework

$1.98M USD

Funder National Science Foundation (US)
Recipient Organization Brandeis University
Country United States
Start Date Jul 01, 2024
End Date Apr 30, 2025
Duration 303 days
Number of Grantees 1
Roles Principal Investigator
Data Source National Science Foundation (US)
Grant ID 2340346
Grant Description

Despite recent successes, Artificial Intelligence (AI) has also been shown to fall short in areas such as safety. It is therefore imperative to treat safety as a fundamental concern across all domains of AI research. Notably, because network (graph) data are ubiquitous, network learning techniques have strongly shaped AI over the past decade and are widely deployed in applications such as social networks, healthcare, and cybersecurity.

However, real-world networks often suffer from data quality issues and unforeseen environmental hazards, which make the AI techniques built on them risky and their outcomes unsafe. Existing approaches to safe network learning lack versatility, efficiency, and comprehensive integration across safety dimensions: they struggle to deliver timely, safe predictions in practical scenarios and fail to address the data, model, and usage aspects holistically.

These challenges collectively hinder their capability and effectiveness, and there is currently no holistic framework that adequately tackles safety issues in network learning. To bridge this gap, the goal of this project is to design, develop, and evaluate a novel Safety-centric Network Learning (SNL) framework for safe decision-making on networks in the wild.

The project outcomes will substantially impact network learning research and offer advanced solutions to address challenges across diverse domains such as public health, cybersecurity, and social media. Additionally, the project will foster interdisciplinary collaborations and facilitate technology transfer to industry. The project outcomes will be made publicly accessible and broadly disseminated.

Moreover, the project will integrate research with education through novel curriculum development and student mentoring activities with an emphasis on underrepresented groups, aiming to train and educate future generations in effectively developing and utilizing AI while also ensuring AI safety.

The project will involve comprehensive efforts to develop SNL that prioritizes the critical safety dimensions of reliability, stability, and explainability, and further encompasses the crucial aspects of data, model, and usage in a general, efficient, and integrated manner. Formally, SNL will provide reliable network data and network learning models (reliability) while generating stable and consistent outputs (stability) accompanied by easily understandable usage explanations (explainability).

Specifically, the research comprises four components that engage innovative theories, algorithms, and models. First, design novel network learning algorithms to identify and generate reliable network data that are minimally affected by data and environmental issues. Second, create new network representation learning models and training strategies that promote efficiency and reliability in learning network representations.
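
The abstract does not specify the project's models, so the following is only a generic illustration of what "learning network representations" involves: a single graph-convolution step with symmetric normalization (in the style of Kipf and Welling's GCN), sketched in plain numpy. It is not the project's method, and all names here are illustrative.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One generic graph-convolution step: add self-loops, symmetrically
    normalize the adjacency matrix, aggregate neighbor features, apply a
    linear transform, then a ReLU nonlinearity."""
    a_hat = adj + np.eye(adj.shape[0])            # A + I (self-loops)
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt    # D^{-1/2} (A+I) D^{-1/2}
    return np.maximum(norm_adj @ features @ weight, 0.0)

# Toy 3-node path graph with 2-dimensional node features
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.eye(3, 2)          # one-hot-ish features
w = np.ones((2, 2))       # toy weight matrix
h = gcn_layer(adj, x, w)  # new (3, 2) node representations
```

Each row of `h` mixes a node's own features with its neighbors', which is the basic mechanism any reliability- or efficiency-oriented representation model would build on.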

Third, devise innovative data-to-model optimization theories to ensure the stability of network learning. Finally, develop novel generative learning methods to advance the usage explainability and output receptivity of network learning. The unified framework allows seamless collaboration and mutual reinforcement among different research components.
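
The project's data-to-model optimization theories are not detailed in the abstract, but the notion of stability it names can be illustrated with a crude, generic probe: measure how much a model's outputs shift when a single edge is perturbed. The propagation function and edge-drop probe below are hypothetical stand-ins, not the project's formulation.

```python
import numpy as np

def propagate(adj, features):
    """Row-normalized one-hop feature averaging: a minimal stand-in for
    a network learning model's forward pass."""
    a_hat = adj + np.eye(adj.shape[0])
    return (a_hat / a_hat.sum(axis=1, keepdims=True)) @ features

def stability_gap(adj, features, u, v):
    """Largest output change after dropping edge (u, v): a simple probe
    of how stable predictions are to one structural perturbation."""
    before = propagate(adj, features)
    perturbed = adj.copy()
    perturbed[u, v] = perturbed[v, u] = 0.0
    after = propagate(perturbed, features)
    return float(np.abs(before - after).max())

# Toy star graph: node 0 connected to nodes 1 and 2
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
x = np.arange(6, dtype=float).reshape(3, 2)
gap = stability_gap(adj, x, 0, 1)  # nonzero: the edge mattered
```

A stable model in this sense keeps such gaps small; a stability-oriented optimization theory would bound or minimize them systematically rather than probing edges one at a time.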

Through the convergent research program, the project will not only make significant advancements in network learning and AI safety research but also shed novel insights to tackle various societal challenges, ultimately benefiting society at large.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

Brandeis University
