oai:arXiv.org:2407.02397
Computer Science
2024
09/10/2024
Recent work has explored the capability of large language models (LLMs) to identify and correct errors in LLM-generated responses.
These refinement approaches frequently evaluate which model sizes can perform refinement on which problems, but pay less attention to what effective feedback for refinement looks like.
In this work, we propose looking at refinement with feedback as a composition of three distinct LLM competencies: (1) detection of bad generations; (2) fine-grained natural language critique generation; (3) refining with fine-grained feedback.
The first step can be implemented with a high-performing discriminative model, and steps 2 and 3 can be implemented via either prompted or fine-tuned LLMs.
A key property of the proposed Detect, Critique, Refine ("DCR") method is that the step 2 critique model can give fine-grained feedback about errors, made possible by offloading the discrimination to a separate model in step 1.
We show that models of different capabilities benefit from refining with DCR on the task of improving the factual consistency of document-grounded summaries.
Overall, DCR consistently outperforms existing end-to-end refinement approaches and current models that have not been fine-tuned for factuality critiquing.
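As a rough illustration of how the three stages compose, the following minimal Python sketch separates discrimination (step 1) from critiquing and refining (steps 2 and 3). This is not the authors' released implementation (see the linked repository for that); the `scorer` and `llm` interfaces and all prompt strings are assumptions made for the example.

```python
from typing import Callable

# Hypothetical sketch of the Detect, Critique, Refine (DCR) pipeline
# described in the abstract; helper names and prompts are illustrative.

LLM = Callable[[str], str]  # any text-in/text-out model interface


def detect(document: str, summary: str,
           scorer: Callable[[str, str], float],
           threshold: float = 0.5) -> bool:
    """Step 1: a discriminative model flags factually inconsistent summaries."""
    return scorer(document, summary) < threshold


def critique(document: str, summary: str, llm: LLM) -> str:
    """Step 2: an LLM produces fine-grained natural language feedback,
    localizing and explaining errors rather than just labeling the summary."""
    prompt = (
        "Document:\n" + document +
        "\n\nSummary:\n" + summary +
        "\n\nList each factual inconsistency in the summary and explain it."
    )
    return llm(prompt)


def refine(document: str, summary: str, feedback: str, llm: LLM) -> str:
    """Step 3: an LLM rewrites the summary guided by the critique."""
    prompt = (
        "Document:\n" + document +
        "\n\nSummary:\n" + summary +
        "\n\nFeedback:\n" + feedback +
        "\n\nRewrite the summary so it is fully consistent with the document."
    )
    return llm(prompt)


def dcr(document: str, summary: str,
        scorer: Callable[[str, str], float], llm: LLM) -> str:
    """Only summaries flagged in step 1 are critiqued and refined."""
    if not detect(document, summary, scorer):
        return summary  # already consistent; leave untouched
    feedback = critique(document, summary, llm)
    return refine(document, summary, feedback, llm)
```

Offloading detection to a dedicated scorer is what lets the step 2 model focus entirely on describing errors in fine-grained detail rather than also deciding whether an error exists.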
Comment: Code and models available at: https://github.com/ManyaWadhwa/DCR
Wadhwa, Manya; Zhao, Xinyu; Li, Junyi Jessy; Durrett, Greg, 2024, Learning to Refine with Fine-Grained Natural Language Feedback