Document detail
ID

oai:arXiv.org:2408.13442

Subject
Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Statistics - Machine Learning
Author
He, Hangfeng; Su, Weijie J.
Category

Computer Science

Year

2024

Date listed

28-08-2024

Keywords
models, language, learning

Description

Large language models (LLMs) have been widely employed across various application domains, yet their black-box nature poses significant challenges to understanding how these models process input data internally to make predictions.

In this paper, we introduce a precise and quantitative law that governs the learning of contextualized token embeddings through intermediate layers in pre-trained LLMs for next-token prediction.

Our findings reveal that each layer contributes equally to enhancing prediction accuracy, from the lowest to the highest layer -- a universal phenomenon observed across a diverse array of open-source LLMs, built on architectures such as Transformer, RWKV, and Mamba.

We demonstrate that this law offers new perspectives and insights to inform and guide practices in LLM development and applications, including model scaling, pre-training tasks, and information flow.

Overall, our law enables more fine-grained approaches to the design, training, and interpretation of LLMs through scrutinizing their internal data processing mechanisms.
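The per-layer claim in the abstract can be examined empirically by probing each intermediate layer's contextualized embeddings for next-token prediction. The sketch below is a minimal illustration only, not the paper's exact procedure: it assumes GPT-2 loaded through the Hugging Face transformers library and reuses the model's own final layer norm and LM head as a simple per-layer probe, printing the next-token cross-entropy obtained from each layer.

```python
# Minimal sketch (assumption: GPT-2 via Hugging Face transformers; not the authors' method).
# Probe every intermediate layer with the model's own output head and report
# next-token cross-entropy per layer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative choice of open-source causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "Large language models process input layer by layer."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple with one tensor per layer (plus the embedding layer).
hidden_states = outputs.hidden_states
labels = inputs["input_ids"][:, 1:]  # next-token targets
loss_fn = torch.nn.CrossEntropyLoss()

for layer_idx, h in enumerate(hidden_states):
    # Project this layer's embeddings through the final norm and LM head
    # (attribute names are GPT-2 specific; other architectures differ).
    logits = model.lm_head(model.transformer.ln_f(h))
    layer_loss = loss_fn(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        labels.reshape(-1),
    )
    print(f"layer {layer_idx:2d}: next-token cross-entropy = {layer_loss.item():.3f}")
```

Under the law described in the abstract, one would expect the per-layer prediction quality to improve steadily from the lowest to the highest layer rather than concentrating in a few layers.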

He, Hangfeng; Su, Weijie J., 2024, A Law of Next-Token Prediction in Large Language Models
