Document detail
ID

oai:arXiv.org:2405.00902

Topic
Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Computer Science - Multiagent Systems
Author
Zhang, Zhicheng; Liang, Yancheng; Wu, Yi; Fang, Fei
Category

Computer Science

Year

2024

Listing date

5/8/2024

Keywords
policies, tasks, learning
Abstract

Multi-agent reinforcement learning (MARL) algorithms often struggle to find strategies close to a Pareto-optimal Nash equilibrium, owing largely to a lack of efficient exploration.

The problem is exacerbated in sparse-reward settings, where policy learning exhibits larger variance.

This paper introduces MESA, a novel meta-exploration method for cooperative multi-agent learning.

It learns to explore by first identifying the agents' high-rewarding joint state-action subspace from training tasks and then learning a set of diverse exploration policies to "cover" the subspace.
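A minimal sketch of this covering step, under heavy assumptions not taken from the paper: training-task rollouts are summarized as flat (state, action, reward) arrays, the high-reward joint subspace is partitioned with plain k-means, and each "exploration policy" is a nearest-neighbor lookup rather than a trained RL policy. All names and choices here are illustrative.

```python
import numpy as np

def high_reward_pairs(states, actions, rewards, threshold):
    # Keep only joint state-action pairs whose reward exceeds the threshold.
    mask = rewards > threshold
    return states[mask], actions[mask]

def kmeans(points, k, iters=50, seed=0):
    # Plain k-means, used here to split the high-reward subspace into k regions.
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

class NearestNeighborPolicy:
    # Toy stand-in for an exploration policy: imitate the stored
    # high-reward pair whose state is closest to the current state.
    def __init__(self, states, actions):
        self.states, self.actions = states, actions
    def act(self, state):
        idx = np.argmin(np.linalg.norm(self.states - state, axis=-1))
        return self.actions[idx]

def learn_exploration_policies(states, actions, rewards, k=4, threshold=0.0):
    s, a = high_reward_pairs(states, actions, rewards, threshold)
    labels = kmeans(np.concatenate([s, a], axis=-1), k)
    return [NearestNeighborPolicy(s[labels == j], a[labels == j])
            for j in range(k) if np.any(labels == j)]

# Tiny usage example with random data standing in for training-task rollouts.
rng = np.random.default_rng(1)
S, A, R = rng.normal(size=(500, 4)), rng.normal(size=(500, 2)), rng.normal(size=500)
policies = learn_exploration_policies(S, A, R, k=4, threshold=0.5)
print(len(policies), policies[0].act(S[0]))
```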

These trained exploration policies can be integrated with any off-policy MARL algorithm for test-time tasks.
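Likewise, a hypothetical sketch of how such policies could feed an off-policy learner: with some probability an episode is collected by one of the learned exploration policies instead of the current task policy, and all transitions go into a shared replay buffer. The episode-level mixing rule and the names env, replay_buffer, and explore_prob are assumptions for illustration, not MESA's actual integration.

```python
import random

def collect_episode(env, task_policy, exploration_policies,
                    replay_buffer, explore_prob=0.3, max_steps=200):
    # Pick the behavior policy for this episode: occasionally one of the
    # learned exploration policies, otherwise the current task policy.
    if random.random() < explore_prob:
        behavior = random.choice(exploration_policies)
    else:
        behavior = task_policy
    state = env.reset()  # assumed env interface: reset() -> state
    for _ in range(max_steps):
        action = behavior.act(state)
        next_state, reward, done = env.step(action)  # assumed 3-tuple return
        # An off-policy MARL learner can train on this data regardless of
        # which behavior policy generated it.
        replay_buffer.append((state, action, reward, next_state, done))
        state = next_state
        if done:
            break
```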

We first showcase MESA's advantage in a multi-step matrix game.

Furthermore, experiments show that with learned exploration policies, MESA achieves significantly better performance in sparse-reward tasks in several multi-agent particle environments and multi-agent MuJoCo environments, and exhibits the ability to generalize to more challenging tasks at test time.

Comment: Accepted to AAMAS 2024. 15 pages.

Zhang, Zhicheng; Liang, Yancheng; Wu, Yi; Fang, Fei (2024). MESA: Cooperative Meta-Exploration in Multi-Agent Learning through Exploiting State-Action Space Structure.
