Document details
ID

oai:arXiv.org:2408.17324

Subject
Computer Science - Machine Learning; Computer Science - Artificial Intelligence; Computer Science - Computation and Language; 68T07 (Primary), 68Q32, 68T05 (Secondary); I.2.4; I.2.6; I.2.7
Author
Pochinkov, Nicholas; Jones, Thomas; Rahman, Mohammed Rashidur
Category

Computer Science

Year

2024

Listing date

2024-09-04

Keywords
specialization, neurons, transformer

Abstract

Transformer models are increasingly prevalent in various applications, yet our understanding of their internal workings remains limited.

This paper investigates the modularity and task specialization of neurons within transformer architectures, focusing on both vision (ViT) and language (Mistral 7B) models.

Using a combination of selective pruning and MoEfication clustering techniques, we analyze the overlap and specialization of neurons across different tasks and data subsets.
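
The record only names these techniques; as a rough illustration (not the authors' exact procedure), one common heuristic is to score each MLP neuron by its mean absolute activation on a task's data and treat neurons whose score most exceeds that on a reference dataset as task-specific candidates for selective pruning. A minimal PyTorch sketch under these assumptions:

# Illustrative sketch only: the activation-magnitude criterion and the
# model/hook wiring below are assumptions, not taken from the paper.
import torch

def neuron_importance(model, dataloader, layer_mlp, device="cpu"):
    """One importance score per hidden neuron of `layer_mlp` (an nn.Linear that
    produces the MLP hidden activations), averaged over the task dataset."""
    scores = torch.zeros(layer_mlp.out_features, device=device)
    n_batches = 0
    acts = {}

    def hook(_module, _inp, out):
        acts["h"] = out.detach()

    handle = layer_mlp.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        for batch in dataloader:
            model(batch.to(device))
            # mean |activation| per neuron, averaged over batch and sequence dims
            scores += acts["h"].abs().mean(dim=tuple(range(acts["h"].dim() - 1)))
            n_batches += 1
    handle.remove()
    return scores / max(n_batches, 1)

def task_specific_neurons(scores_task, scores_reference, top_k=64):
    """Neurons whose importance on the task most exceeds their importance on a
    reference (e.g. general) dataset -- candidates for selective pruning."""
    return torch.topk(scores_task - scores_reference, k=top_k).indices

Overlap between two tasks could then be estimated as the intersection of the index sets returned for each task, mirroring the overlap analysis the abstract describes.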

Our findings reveal evidence of task-specific neuron clusters, with varying degrees of overlap between related tasks.

We observe that neuron importance patterns persist to some extent even in randomly initialized models, suggesting an inherent structure that training refines.

Additionally, we find that neuron clusters identified through MoEfication correspond more strongly to task-specific neurons in earlier and later layers of the models.

This work contributes to a more nuanced understanding of transformer internals and offers insights into potential avenues for improving model interpretability and efficiency.

Comment: 11 pages, 6 figures

Pochinkov, Nicholas; Jones, Thomas; Rahman, Mohammed Rashidur, 2024, Modularity in Transformers: Investigating Neuron Separability & Specialization
