Document detail
ID
oai:arXiv.org:2410.22690

Topic
Computer Science - Machine Learning; Computer Science - Artificial Intelligence

Author
Marklund, Henrik; Van Roy, Benjamin

Category
Computer Science

Year
2024

Listing date
12/25/2024

Keywords
agents, models, aligned, based, bootstrapped, learning, reward function, model, return, choice, choices, partial

Abstract

As AI agents generate increasingly sophisticated behaviors, manually encoding human preferences to guide these agents becomes more challenging. To address this, it has been suggested that agents instead learn preferences from human choice data. This approach requires a model of choice behavior that the agent can use to interpret the data. For choices between partial trajectories of states and actions, previous models assume choice probabilities are determined by the partial return or the cumulative advantage. We consider an alternative model based instead on the bootstrapped return, which adds to the partial return an estimate of the future return.
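
The abstract describes these three models only in words. As a point of reference, one plausible formalization, following common preference-learning notation (the symbols below are assumptions on my part, not taken from the paper), is:

% Sketch of the three choice-model scores; notation assumed, undiscounted.
% A partial trajectory is \sigma = (s_0, a_0, s_1, a_1, \ldots, s_T).
\begin{align*}
  \text{partial return:}       \quad & R(\sigma) = \sum_{t=0}^{T-1} r(s_t, a_t) \\
  \text{bootstrapped return:}  \quad & \widetilde{R}(\sigma) = R(\sigma) + V(s_T) \\
  \text{cumulative advantage:} \quad & A(\sigma) = \sum_{t=0}^{T-1} \bigl( r(s_t, a_t) + V(s_{t+1}) - V(s_t) \bigr)
\end{align*}
% Under a Bradley-Terry (logit) choice model with score G, the probability of
% choosing \sigma^1 over \sigma^2 is
% P(\sigma^1 \succ \sigma^2) = e^{G(\sigma^1)} / (e^{G(\sigma^1)} + e^{G(\sigma^2)}).

Here V is the human's estimate of future return, which is how beliefs about the environment enter the bootstrapped return; note that in this undiscounted form the cumulative advantage telescopes to \widetilde{R}(\sigma) - V(s_0).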

Benefits of the bootstrapped return model stem from its treatment of human beliefs. Unlike partial return, choices based on bootstrapped return reflect human beliefs about the environment. Further, while recovering the reward function from choices based on cumulative advantage requires that those beliefs are correct, doing so from choices based on bootstrapped return does not.

To motivate the bootstrapped return model, we formulate axioms and prove an Alignment Theorem. This result formalizes how, for a general class of preferences, such models are able to disentangle goals from beliefs. This ensures recovery of an aligned reward function when learning from choices based on bootstrapped return.

The bootstrapped return model also affords greater robustness to the human's actual choice behavior. Even when choices are based on partial return, learning via a bootstrapped return model recovers an aligned reward function. The same holds with choices based on the cumulative advantage if the human and the agent both adhere to correct and consistent beliefs about the environment. On the other hand, if choices are based on bootstrapped return, learning via partial return or cumulative advantage models does not generally produce an aligned reward function.
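
To make the learning problem concrete, here is a minimal, hypothetical sketch (Python; not the authors' code) of recovering a reward function from trajectory choices by maximum likelihood under a Bradley-Terry model scored by bootstrapped returns. The tabular environment, the shared value estimate, and all names are illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

# Hedged sketch, not the authors' implementation: a tabular setting with a
# Bradley-Terry choice model scored by bootstrapped returns. The synthetic
# setup and all names are illustrative assumptions.
rng = np.random.default_rng(0)
n_states, n_actions, horizon = 6, 2, 4

true_reward = rng.normal(size=(n_states, n_actions))
# Shared value estimate: the human chooses with it and the agent fits with it,
# i.e. both hold consistent beliefs about future return.
value = rng.normal(size=n_states)

def sample_trajectory():
    """A random partial trajectory: (state, action) pairs plus a final state."""
    return (rng.integers(n_states, size=horizon),
            rng.integers(n_actions, size=horizon),
            rng.integers(n_states))

def bootstrapped_return(traj, reward):
    """Partial return plus the estimated future return from the final state."""
    states, actions, final_state = traj
    return reward[states, actions].sum() + value[final_state]

# Simulate human choices between pairs of partial trajectories.
choices = []
for _ in range(500):
    a, b = sample_trajectory(), sample_trajectory()
    diff = bootstrapped_return(a, true_reward) - bootstrapped_return(b, true_reward)
    chose_a = rng.random() < 1.0 / (1.0 + np.exp(-diff))  # Bradley-Terry
    choices.append((a, b, chose_a))

def neg_log_likelihood(params):
    reward = params.reshape(n_states, n_actions)
    nll = 0.0
    for a, b, chose_a in choices:
        diff = bootstrapped_return(a, reward) - bootstrapped_return(b, reward)
        # -log sigmoid(diff) if a was chosen, else -log sigmoid(-diff)
        nll += np.logaddexp(0.0, -diff if chose_a else diff)
    return nll

fit = minimize(neg_log_likelihood, np.zeros(n_states * n_actions), method="L-BFGS-B")
learned = fit.x.reshape(n_states, n_actions)
# With equal-length trajectories a uniform reward shift cancels in the choice
# probabilities, so check the fit by correlation rather than exact match.
print(np.corrcoef(learned.ravel(), true_reward.ravel())[0, 1])  # close to 1

Per the abstract's robustness claim, regenerating the choices from partial return alone and refitting the same bootstrapped return model should still recover an aligned reward function.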

Marklund, Henrik; Van Roy, Benjamin, 2024, Choice Between Partial Trajectories: Disentangling Goals from Beliefs
