Document details
ID

oai:arXiv.org:2406.01306

Topic
Computer Science - Computation and...
Authors
Qu, Fanyi; Sun, Hao; Wu, Yunfang
Category

Computer Science

Year

2024

Listing date

2024-06-05

Keywords
language model; distractor; DG

Abstract

Within the context of reading comprehension, the task of Distractor Generation (DG) aims to generate several incorrect options to confuse readers.

Traditional supervised methods for DG rely heavily on expensive human-annotated distractor labels.

In this paper, we propose an unsupervised DG framework, leveraging Large Language Models (LLMs) as cost-effective annotators to enhance the DG capability of smaller student models.

Specifically, to perform knowledge distillation, we propose a dual-task training strategy that integrates pseudo distractors from LLMs and the original answer information as objective targets within a two-stage training process.
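
The abstract gives no implementation details; purely as an illustration, a two-stage distillation loop for a Bart-base student in Hugging Face Transformers might look like the sketch below. The dataset variables (pseudo_distractor_pairs, answer_pairs), the prompt format, and the hyperparameters are assumptions for illustration, not the authors' code.

import torch
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
student = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = torch.optim.AdamW(student.parameters(), lr=3e-5)

# Hypothetical (source, target) text pairs; in practice the sources come from the
# reading-comprehension corpus and the distractor targets from an LLM annotator.
pseudo_distractor_pairs = [("question: ... context: ...", "a plausible but wrong option")]
answer_pairs = [("question: ... context: ...", "the gold answer")]

def train_epoch(pairs):
    # One teacher-forced pass over (source, target) text pairs.
    student.train()
    for src, tgt in pairs:
        batch = tokenizer(src, return_tensors="pt", truncation=True)
        labels = tokenizer(text_target=tgt, return_tensors="pt", truncation=True).input_ids
        loss = student(**batch, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Stage 1: distill pseudo distractors produced by the LLM annotator.
train_epoch(pseudo_distractor_pairs)
# Stage 2: use the original answer as a second target, so the student also
# models what a correct option looks like.
train_epoch(answer_pairs)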

Moreover, we devise a counterfactual contrastive decoding mechanism to increase the distracting capability of the DG model.
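
As a rough illustration of contrastive decoding in this setting, the sketch below penalizes next tokens that are also probable when the same model is conditioned to produce the correct answer, so the generated option drifts away from the gold answer while staying on topic. The prompt construction, the weight alpha, and the greedy selection are illustrative assumptions, not necessarily the paper's counterfactual formulation.

import torch

def contrastive_decode_step(model, dg_input_ids, ans_input_ids,
                            decoder_input_ids, alpha=0.5):
    # Next-token scores for the distractor generator, discounted by how likely
    # each token is under an answer-generation conditioning of the same model.
    with torch.no_grad():
        logp_dg = torch.log_softmax(
            model(input_ids=dg_input_ids,
                  decoder_input_ids=decoder_input_ids).logits[:, -1, :], dim=-1)
        logp_ans = torch.log_softmax(
            model(input_ids=ans_input_ids,
                  decoder_input_ids=decoder_input_ids).logits[:, -1, :], dim=-1)
    scores = logp_dg - alpha * logp_ans
    return torch.argmax(scores, dim=-1)  # greedy pick, for illustration only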

Experiments show that our unsupervised generation method with Bart-base greatly surpasses GPT-3.5-turbo while using 200 times fewer model parameters.

Our proposed unsupervised DG method offers a cost-effective framework for practical reading comprehension applications, without the need for laborious distractor annotation or costly large models.

Comment: Accepted as a long paper in ACL 2024 Findings.

Qu, Fanyi; Sun, Hao; Wu, Yunfang (2024). Unsupervised Distractor Generation via Large Language Model Distilling and Counterfactual Contrastive Decoding.
