Document detail
ID

oai:arXiv.org:2410.11055

Topic
Computer Science - Computation and Language; Computer Science - Artificial Intelligence
Author
Yao, Jihan; Ding, Wenxuan; Feng, Shangbin; Wang, Lucy Lu; Tsvetkov, Yulia
Category

Computer Science

Year

2024

Listing date

10/23/2024

Keywords
answers, wrong-over-wrong, wrong, preferences

Abstract

In the absence of abundant reliable annotations for challenging tasks and contexts, how can we expand the frontier of LLM capabilities with potentially wrong answers? We focus on two research questions: (1) Can LLMs generate reliable preferences among wrong options? And if so, (2) would alignment with such wrong-over-wrong preferences be helpful? We employ methods based on self-consistency, token probabilities, and LLM-as-a-judge to elicit wrong-over-wrong preferences, and fine-tune language models with preference optimization approaches using these synthesized preferences. Extensive experiments with seven LLMs and eight datasets demonstrate that (1) LLMs do have a preliminary capability to distinguish various shades of wrong, achieving up to 20.9% higher performance than random guessing; (2) alignment with wrong-over-wrong preferences helps LLMs produce less wrong and sometimes even outright correct answers, while improving overall model calibration.
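
A minimal sketch of the wrong-over-wrong pipeline described in the abstract, assuming a generic injected judge callable; the prompt wording, helper names, and record format below are illustrative assumptions, not the authors' code. Two wrong answers to the same question are compared by an LLM judge with self-consistency voting, and the resulting (chosen, rejected) pairs are packaged for a DPO-style preference-optimization step.

```python
# Illustrative sketch (not the authors' implementation): elicit wrong-over-wrong
# preferences with an LLM-as-a-judge plus self-consistency voting, then emit
# records in a prompt/chosen/rejected layout for preference optimization.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class WrongPair:
    question: str
    answer_a: str   # wrong answer A
    answer_b: str   # wrong answer B

JUDGE_PROMPT = (
    "Question: {question}\n"
    "Answer A: {a}\n"
    "Answer B: {b}\n"
    "Both answers are incorrect. Which one is closer to being correct? "
    "Reply with exactly 'A' or 'B'."
)

def elicit_wrong_over_wrong(
    pairs: Iterable[WrongPair],
    judge: Callable[[str], str],   # wraps an LLM call; assumed, not specified by the paper
    n_samples: int = 5,            # self-consistency: sample the judge several times
) -> list[dict]:
    """Return preference records of the form {'prompt', 'chosen', 'rejected'}."""
    records = []
    for pair in pairs:
        prompt = JUDGE_PROMPT.format(
            question=pair.question, a=pair.answer_a, b=pair.answer_b
        )
        # Majority vote over repeated judge samples.
        votes = [judge(prompt).strip().upper() for _ in range(n_samples)]
        a_votes, b_votes = votes.count("A"), votes.count("B")
        if a_votes == b_votes:
            continue  # no reliable preference; skip ambiguous pairs
        chosen, rejected = (
            (pair.answer_a, pair.answer_b)
            if a_votes > b_votes
            else (pair.answer_b, pair.answer_a)
        )
        records.append(
            {"prompt": pair.question, "chosen": chosen, "rejected": rejected}
        )
    return records
```

The resulting records follow the prompt/chosen/rejected layout commonly accepted by preference-optimization trainers (for example, TRL's DPOTrainer), so a model fine-tuned on them is pushed toward the "less wrong" answer in each pair.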

Yao, Jihan; Ding, Wenxuan; Feng, Shangbin; Wang, Lucy Lu; Tsvetkov, Yulia. 2024. Varying Shades of Wrong: Aligning LLMs with Wrong Answers Only.
