
Active · Continuing Grant · National Science Foundation (US)

CAREER: Foundations of Scalable and Resilient Distributed Real-Time Decision Making in Open Multi-Agent Systems

$5.18M USD

Funder National Science Foundation (US)
Recipient Organization University of Texas at Austin
Country United States
Start Date Mar 01, 2025
End Date Feb 28, 2029
Duration 1,460 days
Number of Grantees 1
Roles Principal Investigator
Data Source National Science Foundation (US)
Grant ID 2527059
Grant Description

Advances in artificial intelligence and machine learning create opportunities to apply autonomous multi-agent systems to important social and economic problems, such as deploying teams of robots for wildfire monitoring, search and rescue, and manufacturing. In these systems, agents cooperate autonomously, making decisions in real time to perform complex tasks.

Reinforcement learning, a data-driven control method that enables agents to learn desired tasks autonomously by interacting directly with their environment, has emerged as one of the predominant frameworks for this kind of real-time decision making. While reinforcement learning provides a powerful and flexible framework, it faces fundamental challenges to its scalability and resilience.

Specifically, existing methods require vast amounts of data and computational power and can be unstable in the presence of various types of errors and adversaries. These challenges are the main barriers to the wide applicability of reinforcement learning to real-world problems. This CAREER project will develop new foundations for scalable and resilient distributed reinforcement learning to support real-time autonomous cooperation in open multi-agent systems.

The overarching goal is to design new learning and control methods that enable agents to interact effectively in open systems, adapt gracefully in time-varying environments, and be resilient to unexpected failures and adversaries. The project will also contribute to education and workforce development by integrating the research findings with rigorous educational and outreach activities, course development, student training, and public partnerships.

The central idea of this project is to establish new fundamentals of two-time-scale stochastic approximation for non-monotone systems. The key approach is to leverage extrapolation techniques from optimization and singular perturbation theory from control to address the instability of stochastic approximation in non-monotone settings. New theoretical principles will be studied to characterize the finite-time complexity of the proposed methods.
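To make the two-time-scale structure concrete, the sketch below shows the generic scheme: a fast iterate tracks a quasi-stationary solution for the current slow iterate, while the slow iterate drifts using the fast one (actor-critic methods in reinforcement learning are a well-known instance of this pattern). The operators F and G, the step-size schedules, and the toy example are illustrative assumptions, not the project's actual methods.

```python
import numpy as np

def two_time_scale_sa(F, G, x0, y0, n_iters=10_000, noise=0.01, seed=0):
    """Generic two-time-scale stochastic approximation (illustrative):
         y_{k+1} = y_k + b_k * (G(x_k, y_k) + noise)   # fast iterate
         x_{k+1} = x_k + a_k * (F(x_k, y_k) + noise)   # slow iterate
       with b_k / a_k -> infinity, so y equilibrates on a faster
       time scale than x moves."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for k in range(1, n_iters + 1):
        a_k = 1.0 / k        # slow step size
        b_k = 1.0 / k**0.6   # fast step size; b_k >> a_k for large k
        y = y + b_k * (G(x, y) + noise * rng.standard_normal(y.shape))
        x = x + a_k * (F(x, y) + noise * rng.standard_normal(x.shape))
    return x, y

# Toy example: the fast iterate y tracks x (G drives y toward x), and
# the slow iterate x decays using the tracked y (F = -y), so the
# coupled system behaves like x' = -x and both iterates approach 0.
x, y = two_time_scale_sa(F=lambda x, y: -y,
                         G=lambda x, y: x - y,
                         x0=np.ones(1), y0=np.zeros(1))
```

The separation of time scales is what makes the analysis tractable: on the fast scale the slow iterate looks frozen, and on the slow scale the fast iterate looks already converged. The non-monotone settings targeted by this project are precisely those where this clean decoupling can fail and stabilization techniques are needed.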

By leveraging these new results of two-time-scale stochastic approximation, this project will advance several foundational aspects of distributed learning and control in open multi-agent systems. The focus is to develop new scalable and resilient distributed multi-time-scale reinforcement learning methods that allow agents to cooperate efficiently in real-time under diverse practical considerations, including time-varying numbers of agents, unexpected failures, communication constraints, and adversaries.

Over the course of this project, the proposed research activities will be evaluated systematically through a series of simulations and field experiments in multi-robot navigation.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

University of Texas at Austin
