Document detail
IDENTIFICATION

oai:arXiv.org:2406.19354

Topic
Computer Science - Computation and Language; Computer Science - Artificial Intelligence
Author
Hase, Peter; Hofweber, Thomas; Zhou, Xiang; Stengel-Eskin, Elias; Bansal, Mohit
Category

Computer Science

Year

2024

Citation date

3/7/2024

Keywords
belief revision; model editing

Abstract

The model editing problem concerns how language models should learn new facts about the world over time.

While empirical research on model editing has drawn widespread attention, the conceptual foundations of model editing remain shaky -- perhaps unsurprisingly, since model editing is essentially belief revision, a storied problem in philosophy that has eluded succinct solutions for decades.

Model editing nonetheless demands a solution, since we need to be able to control the knowledge within language models.

With this goal in mind, this paper critiques the standard formulation of the model editing problem and proposes a formal testbed for model editing research.

We first describe 12 open problems with model editing, based on challenges with (1) defining the problem, (2) developing benchmarks, and (3) assuming LLMs have editable beliefs in the first place.

Many of these challenges are extremely difficult to address, e.g. determining far-reaching consequences of edits, labeling probabilistic entailments between facts, and updating beliefs of agent simulators.

Next, we introduce a semi-synthetic dataset for model editing based on Wikidata, where we can evaluate edits against labels given by an idealized Bayesian agent.

This enables us to say exactly how belief revision in language models falls short of a desirable epistemic standard.
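As a rough illustration of this evaluation idea, the sketch below (not the authors' actual code; function names and numbers are hypothetical) compares an edited model's probability for a dependent fact against the posterior an idealized Bayesian agent would assign after the edit:

```python
# A minimal sketch of scoring a model edit against an idealized Bayesian
# agent: the agent updates its belief in a dependent fact via Bayes' rule,
# and we measure how far the edited model's probability falls from that
# posterior. All names and numbers here are illustrative assumptions.

def bayesian_posterior(prior: float, likelihood: float, marginal: float) -> float:
    """P(fact | edit) = P(edit | fact) * P(fact) / P(edit)."""
    return likelihood * prior / marginal

def epistemic_gap(model_prob: float, prior: float,
                  likelihood: float, marginal: float) -> float:
    """Absolute gap between the edited model's probability for a
    dependent fact and the ideal Bayesian posterior."""
    return abs(model_prob - bayesian_posterior(prior, likelihood, marginal))

# Hypothetical numbers: the agent held the dependent fact with prior 0.2;
# the observed edit is more likely if the fact holds (likelihood 0.8,
# marginal probability of the edit 0.4), giving posterior 0.8*0.2/0.4 = 0.4.
posterior = bayesian_posterior(0.2, 0.8, 0.4)
gap = epistemic_gap(0.9, 0.2, 0.8, 0.4)
print(posterior, gap)
```

A larger gap indicates the edited model over- or under-updates dependent beliefs relative to the Bayesian standard, which is the kind of shortfall the semi-synthetic dataset is designed to expose.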

We encourage further research into settings where model editing can be evaluated against such a gold standard.

Our code is publicly available at: https://github.com/peterbhase/LLM-belief-revision

Comment: 23 pages, 4 figures

Hase, Peter; Hofweber, Thomas; Zhou, Xiang; Stengel-Eskin, Elias; Bansal, Mohit, 2024, Fundamental Problems With Model Editing: How Should Rational Belief Revision Work in LLMs?
