Document detail
ID

oai:arXiv.org:2409.00352

Topic
Computer Science - Computation and Language; Computer Science - Machine Learning
Author
Oh, Hongseok; Hwang, Wonseok
Category

Computer Science

Year

2024

Listing date

2/12/2025

Keywords
calibration analysis
Metrics

Abstract

Large Language Models (LLMs) have shown remarkable progress, but their real-world application necessitates reliable calibration.

This study conducts a comprehensive analysis of calibration degradation of LLMs across four dimensions: models, calibration metrics, tasks, and confidence extraction methods.

Initial analysis showed that the relationship between alignment and calibration is not always a trade-off; however, under stricter analysis conditions, we found that the alignment process consistently harms calibration.

This highlights the need for (1) a careful approach when measuring model confidences and calibration errors and (2) future research into algorithms that can help LLMs to achieve both instruction-following and calibration without sacrificing either.
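As context for the calibration metrics and calibration errors mentioned above, the sketch below illustrates expected calibration error (ECE), a standard calibration metric; the function name, binning scheme, and example values are illustrative assumptions, not necessarily the exact setup used in the paper.

```python
# Illustrative sketch of expected calibration error (ECE), a common calibration
# metric: predictions are binned by confidence, and the gap between average
# confidence and accuracy is averaged across bins (weighted by bin size).
# Not necessarily the metric or binning scheme used in the paper.
import numpy as np

def expected_calibration_error(confidences, correctness, n_bins=10):
    """Bin predictions by confidence and average the |accuracy - confidence| gap."""
    confidences = np.asarray(confidences, dtype=float)
    correctness = np.asarray(correctness, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        # Right-inclusive bins; the last bin includes confidence == 1.0.
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        avg_conf = confidences[in_bin].mean()
        avg_acc = correctness[in_bin].mean()
        ece += in_bin.mean() * abs(avg_acc - avg_conf)
    return ece

# Example: confidences extracted from an LLM and whether each answer was correct.
print(expected_calibration_error([0.9, 0.8, 0.6, 0.95], [1, 1, 0, 1]))
```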

Comment: Presented at the BlackboxNLP Workshop at EMNLP 2024 (Poster)

Oh, Hongseok; Hwang, Wonseok, 2024, Does Alignment Tuning Really Break LLMs' Internal Confidence?
