Status: Active
Grant Type: Continuing Grant
Funder: National Science Foundation (US)

CAREER: IIS: RI: Improving Multi-Agent Reinforcement Learning for Cooperative, Partially Observable Settings

Amount: $5.58M USD

Recipient Organization: Northeastern University
Country: United States
Start Date: Mar 01, 2021
End Date: Feb 28, 2026
Duration: 1,825 days
Number of Grantees: 1
Roles: Principal Investigator
Data Source: National Science Foundation (US)
Grant ID: 2044993
Grant Description

As intelligent systems become more prevalent, these systems will need to coordinate with each other (e.g., apps, robots, autonomous cars), resulting in multi-agent systems. Allowing multi-agent systems to learn will let them operate in more complex and realistic scenarios by adapting their behavior to fit specific needs. Reinforcement learning is a promising form of trial-and-error learning that has the potential to drastically improve outcomes in many multi-agent domains (e.g., warehouses, delivery), but new methods are required for coordinating the agents in realistic domains with noisy and limited communication and sensing (i.e., partial observability).

This project will develop these new reinforcement learning methods for coordinating teams of agents in various partially observable settings. The results will impact the development of future artificial intelligence (AI) and robotic systems and will be conveyed through outreach and educational activities.

This project will develop a number of novel methods for cooperative multi-agent reinforcement learning (MARL) under partial observability. MARL, the extension of reinforcement learning methods to multi-agent domains, has gained popularity for generating high-quality solutions in some domains, but more work is needed to make the methods more scalable and widely applicable.

Therefore, this project will first provide a better theoretical understanding of centralized-training-for-decentralized-execution methods. Centralized training for decentralized execution is the dominant paradigm in MARL, in which agents are trained offline with access to centralized information but execute online using only their own local information. The project will then develop new centralized training methods that are unbiased, scalable, and perform well in a wide range of domains.
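The information flow in this paradigm can be illustrated with a minimal sketch. This is not the project's actual method; it is a toy example with hypothetical names (random linear actors and a linear critic) showing the key asymmetry: the critic conditions on the joint observation, which is available only during offline training, while each actor conditions only on its own local observation, which is all that is available at execution time.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, N_ACTIONS = 2, 3, 2

# Decentralized actors: each agent's policy sees only its own observation.
actor_weights = [rng.normal(size=(OBS_DIM, N_ACTIONS)) for _ in range(N_AGENTS)]

# Centralized critic: sees the concatenated joint observation of all agents.
# This extra information is used only during offline training.
critic_weights = rng.normal(size=N_AGENTS * OBS_DIM)

def act(agent_id, local_obs):
    """Decentralized execution: pick an action from local information only."""
    logits = local_obs @ actor_weights[agent_id]
    return int(np.argmax(logits))

def centralized_value(joint_obs):
    """Centralized training signal: value estimate from the joint observation."""
    return float(np.concatenate(joint_obs) @ critic_weights)

joint_obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
actions = [act(i, joint_obs[i]) for i in range(N_AGENTS)]
value = centralized_value(joint_obs)
```

At deployment, only `act` is needed; the critic (and with it the joint observation) can be discarded, which is what makes the learned policies executable in a decentralized way.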

Second, the project will develop online decentralized learning methods that allow agents to learn online even in noisy multi-agent settings. Lastly, to allow agents to learn and execute in an asynchronous manner, the project will develop methods for asynchronous MARL as well as asynchronous hierarchical learning, in which learning occurs over multiple layers of a hierarchy.
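What "asynchronous" means here can be made concrete with a small sketch. Again, this is not the project's algorithm, only an illustration under the common macro-action view of asynchronous MARL: each agent commits to a temporally extended action of some duration, so agents reach decision points at different timesteps rather than all selecting actions in lockstep.

```python
import numpy as np

rng = np.random.default_rng(1)
N_AGENTS, HORIZON = 2, 10

# Steps remaining in each agent's current macro-action (0 = needs a new one).
remaining = [0] * N_AGENTS
decision_steps = [[] for _ in range(N_AGENTS)]  # when each agent chose an action

for t in range(HORIZON):
    for i in range(N_AGENTS):
        if remaining[i] == 0:
            # Only agents whose macro-action has terminated make a decision now,
            # so decision points are generally not synchronized across agents.
            decision_steps[i].append(t)
            remaining[i] = int(rng.integers(1, 4))  # hypothetical random duration
        remaining[i] -= 1
```

A learning update tied to these decision points would also fire asynchronously per agent, which is why standard synchronous MARL machinery does not directly apply.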

The resulting methods will significantly improve the performance, stability, and scalability of MARL methods and make them more broadly applicable to large, realistic domains.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

All Grantees

Northeastern University
