Document detail
ID

oai:arXiv.org:2404.09982

Topic
Computer Science - Computation and...
Author
Gao, Hang; Zhang, Yongfeng
Category

Computer Science

Year

2024

Listing date

7/10/2024

Keywords
enhance memories agents language
Abstract

The adaptation of Large Language Model (LLM)-based agents to execute tasks via natural language prompts represents a significant advancement, notably eliminating the need for explicit retraining or fine-tuning. However, these agents are constrained by the comprehensiveness and diversity of the provided examples, leading to outputs that often diverge significantly from expected results, especially for open-ended questions.

This paper introduces Memory Sharing (MS), a framework that integrates real-time memory filtering, storage, and retrieval to enhance the In-Context Learning process.

This framework allows memories to be shared among multiple agents, where the interactions and memory exchange between agents effectively enhance the diversity of the stored memories.

The collective self-enhancement through interactive learning among multiple agents facilitates the evolution from individual intelligence to collective intelligence.

In addition, the dynamically growing memory pool is used not only to improve the quality of responses but also to train and enhance the retriever.
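The filter-store-retrieve loop described in the abstract can be sketched roughly as follows. This is a minimal illustrative toy, not the paper's implementation: the class name, the quality-score threshold, and the bag-of-words cosine similarity are all assumptions (the actual framework trains a dedicated retriever rather than using fixed lexical similarity).

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

class SharedMemoryPool:
    """Toy shared pool: agents submit (prompt, answer) pairs; a filter
    admits only entries scored above a quality threshold; retrieval
    returns the top-k most similar memories as in-context examples."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.pool = []  # list of (agent_id, prompt, answer)

    def add(self, agent_id: str, prompt: str, answer: str, score: float) -> bool:
        # Memory filter: discard low-quality pairs (score is assumed to
        # come from some external evaluator).
        if score < self.threshold:
            return False
        self.pool.append((agent_id, prompt, answer))
        return True

    def retrieve(self, query: str, k: int = 2):
        # Rank all stored memories by similarity of their prompts to the
        # query; any agent can retrieve memories contributed by others.
        q = Counter(query.lower().split())
        ranked = sorted(
            self.pool,
            key=lambda m: cosine(q, Counter(m[1].lower().split())),
            reverse=True,
        )
        return ranked[:k]

# Demo: two agents contribute; a low-quality entry is filtered out,
# and retrieval crosses agent boundaries.
pool = SharedMemoryPool(threshold=0.5)
pool.add("agent_a", "how to sort a list in python", "use sorted()", 0.9)
pool.add("agent_b", "what is the capital of france", "paris", 0.8)
pool.add("agent_b", "noisy memory", "junk", 0.1)  # rejected by the filter
hits = pool.retrieve("sort a python list", k=1)
```

In this sketch the pool grows as agents interact, mirroring the "dynamically growing memory pool"; in the paper, the accumulated pairs would additionally serve as training data for the retriever.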

We evaluated our framework across three distinct domains involving specialized tasks of agents.

The experimental results demonstrate that the MS framework significantly improves the agents' performance in addressing open-ended questions.

Gao, Hang; Zhang, Yongfeng, 2024, Memory Sharing for Large Language Model based Agents
