Document detail
ID

oai:arXiv.org:2408.13442

Topic
Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Statistics - Machine Learning
Author
He, Hangfeng; Su, Weijie J.
Category

Computer Science

Year

2024

Listing date

8/28/2024

Keywords
models, language, learning
Abstract

Large language models (LLMs) have been widely employed across various application domains, yet their black-box nature poses significant challenges to understanding how these models process input data internally to make predictions.

In this paper, we introduce a precise and quantitative law that governs the learning of contextualized token embeddings through intermediate layers in pre-trained LLMs for next-token prediction.

Our findings reveal that each layer contributes equally to enhancing prediction accuracy, from the lowest to the highest layer -- a universal phenomenon observed across a diverse array of open-source LLMs, built on architectures such as Transformer, RWKV, and Mamba.
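As a rough illustration of how such a layer-wise law can be observed empirically, the sketch below decodes each intermediate layer's hidden states with the model's final norm and unembedding head and reports next-token prediction loss per layer. This "logit-lens"-style probe and the gpt2 checkpoint are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch (assumption: a logit-lens-style per-layer probe, not the
# authors' exact methodology): measure next-token prediction loss when
# decoding each layer's contextualized embeddings of a GPT-2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any open-source causal LM with accessible hidden states
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

text = "Large language models process input data layer by layer to"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

hidden_states = outputs.hidden_states      # tuple of (num_layers + 1) tensors
targets = inputs["input_ids"][0, 1:]       # next-token targets for each position
final_norm = model.transformer.ln_f        # GPT-2-specific final LayerNorm
unembed = model.lm_head                    # unembedding (output) projection

for layer_idx, h in enumerate(hidden_states):
    # Decode this layer's embeddings into next-token logits.
    # (The last entry is already normalized; re-applying ln_f there is a
    # harmless approximation for this sketch.)
    logits = unembed(final_norm(h))
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    nll = -log_probs[torch.arange(targets.numel()), targets].mean()
    print(f"layer {layer_idx:2d}: next-token NLL = {nll.item():.3f}")
```

Plotting the per-layer losses from a probe like this is one way to visualize how prediction quality improves from the lowest to the highest layer.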

We demonstrate that this law offers new perspectives and insights to inform and guide practices in LLM development and applications, including model scaling, pre-training tasks, and information flow.

Overall, our law enables more fine-grained approaches to the design, training, and interpretation of LLMs through scrutinizing their internal data processing mechanisms.

He, Hangfeng; Su, Weijie J., 2024, A Law of Next-Token Prediction in Large Language Models
