Document details
Identifier

oai:arXiv.org:2407.17686

Subject
Computer Science - Machine Learning; Computer Science - Computation and Language; Computer Science - Information Theory; Statistics - Machine Learning
Authors
Rajaraman, Nived; Bondaschi, Marco; Ramchandran, Kannan; Gastpar, Michael; Makkuva, Ashok Vardhan
Category

Computer Science

Year

2024

Date indexed

31/07/2024

Keywords
transformer, k-th, in-context, represent, sources, transformers, empirical

Abstract

Attention-based transformers have been remarkably successful at modeling generative processes across various domains and modalities.

In this paper, we study the behavior of transformers on data drawn from $k$-th order Markov processes, where the conditional distribution of the next symbol in a sequence depends on the previous $k$ symbols observed.

We observe a surprising phenomenon empirically which contradicts previous findings: when trained for sufficiently long, a transformer with a fixed depth and $1$ head per layer is able to achieve low test loss on sequences drawn from $k$-th order Markov sources, even as $k$ grows.

Furthermore, this low test loss is achieved by the transformer's ability to represent and learn the in-context conditional empirical distribution.

On the theoretical side, our main result is that a transformer with a single head and three layers can represent the in-context conditional empirical distribution for $k$-th order Markov sources, concurring with our empirical observations.

Along the way, we prove that attention-only transformers with $O(\log_2(k))$ layers can represent the in-context conditional empirical distribution by composing induction heads to track the previous $k$ symbols in the sequence.
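As an illustration (a minimal sketch, not the paper's construction), the lookup an induction head implements can be written as: find the most recent earlier occurrence of the current length-$k$ context and copy the symbol that followed it. The function name and example sequence below are hypothetical.

```python
def induction_predict(seq, k):
    """Predict the next symbol by scanning backwards for the most recent
    earlier occurrence of the current length-k context, and copying the
    symbol that followed that occurrence."""
    t = len(seq)
    context = tuple(seq[t - k:t])  # the k most recent symbols
    for j in range(t - 1, k - 1, -1):  # scan backwards over earlier positions
        if tuple(seq[j - k:j]) == context:
            return seq[j]  # symbol that followed the matched context
    return None  # context never seen earlier in the sequence

# Current context of [1, 2, 3, 1, 2] is (1, 2); it last occurred at the
# start of the sequence, followed by 3.
print(induction_predict([1, 2, 3, 1, 2], 2))  # → 3
```

This is the single-match version; the paper's $O(\log_2(k))$-layer result concerns composing such heads so that a length-$k$ context can be tracked with logarithmically many layers.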

By characterizing transformers' behavior on Markov sources, these results sharpen our understanding of the mechanisms by which transformers learn to capture context.
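To make the central object concrete, here is a minimal sketch (an assumed illustration, not code from the paper) of the in-context conditional empirical distribution for a $k$-th order context: the distribution of the next symbol conditioned on the previous $k$ symbols, estimated from counts within the sequence itself.

```python
from collections import Counter, defaultdict

def conditional_empirical_distribution(seq, k):
    """Estimate P(next symbol | previous k symbols) from counts observed
    within the sequence itself (the "in-context" empirical estimate)."""
    counts = defaultdict(Counter)
    for i in range(k, len(seq)):
        context = tuple(seq[i - k:i])
        counts[context][seq[i]] += 1
    # Normalize each context's counts into a conditional distribution.
    return {
        ctx: {sym: c / sum(ctr.values()) for sym, c in ctr.items()}
        for ctx, ctr in counts.items()
    }

# Example: binary sequence, order k = 2.
dist = conditional_empirical_distribution([0, 1, 1, 0, 1, 1, 0, 1], 2)
# In this sequence, the context (0, 1) is always followed by 1.
print(dist[(0, 1)])  # → {1: 1.0}
```

The paper's results concern a transformer's ability to represent and learn exactly this kind of estimator from the in-context sequence.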

Comment: 29 pages, 10 figures

Rajaraman, Nived; Bondaschi, Marco; Ramchandran, Kannan; Gastpar, Michael; Makkuva, Ashok Vardhan, 2024, Transformers on Markov Data: Constant Depth Suffices

