| Funder | National Science Foundation (US) |
|---|---|
| Recipient Organization | University of Maryland, College Park |
| Country | United States |
| Start Date | Feb 15, 2025 |
| End Date | Jan 31, 2030 |
| Duration | 1,811 days |
| Number of Grantees | 1 |
| Roles | Principal Investigator |
| Data Source | National Science Foundation (US) |
| Grant ID | 2443704 |
Recent years have witnessed significant progress in learning in dynamic environments. Many such success stories, e.g., AlphaGo, autonomous driving, and robot learning, naturally involve "multiple agents". These agents usually operate under "Information Constraints": the underlying system state is only partially observable, and each agent has only local information that differs across agents.
Despite the practical relevance, theoretical foundations for such settings are not well developed. The PI proposes three novel research thrusts (RTs) that bridge insights from Control Theory and Machine Learning (ML). In RT 1, the PI will formally introduce "Information Structure", a well-studied notion in Decentralized Stochastic Control, into the theoretical study of dynamic multi-agent learning.
In RT 2, the PI will theoretically ground several new empirical paradigms that address Information Constraints in multi-agent dynamic learning, with non-asymptotic complexity analyses. In RT 3, the PI will introduce the perspective of "Learning-in-Games", which justifies equilibrium as the "emerging" outcome of agents' independent learning, into this information-constrained and dynamic setting.
The principles and algorithms will be validated on new testbeds both in simulations and on robotic platforms. This program is expected to bring significant changes to the foundations of multi-agent learning in dynamic environments.
From an ML perspective, introducing Control principles such as Information Structures will help ground many empirical advances systematically. From a Control perspective, formalizing the new ML regimes will open up new research problems to further understand the "Value of Information" for "Learning" purposes in multi-agent systems, in addition to its well-studied value for "Optimization" purposes.
Instantiating the “Learning-in-Games” perspective necessitates completely new solution concepts and learning dynamics. The proposed research is interdisciplinary, integrating fundamentals from Control Theory, Game Theory, Statistics, Theory of Computation, and Economics.
This program will advance the fundamental science of principled Large-Scale Autonomy, with broader impacts on socio-technical applications, including transportation systems, power networks, robotics, and supply chains. The program will include designing new curricula on multi-agent dynamic learning, mentoring undergraduate students for robotics competitions, conducting outreach to underrepresented K-12 students, and building the multi-agent learning community through new academic events and diverse industry collaborations.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.