Document detail
ID

oai:arXiv.org:2409.11250

Topic
Computer Science - Computation and...
Author
Clark, Christian; Oh, Byung-Doh; Schuler, William
Category

Computer Science

Year

2024

Listing date

9/25/2024

Keywords
models, transformers, language, attention, human, ALiBi
Abstract

Recent psycholinguistic research has compared human reading times to surprisal estimates from language models to study the factors shaping human sentence processing difficulty.
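
(For reference, a word's surprisal is standardly defined as its negative log probability given the preceding context, so less predictable words receive higher values: $\mathrm{surprisal}(w_t) = -\log_2 P(w_t \mid w_1, \ldots, w_{t-1})$.)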

Previous studies have shown a strong fit between surprisal values from Transformers and reading times.

However, standard Transformers work with a lossless representation of the entire previous linguistic context, unlike models of human language processing that include memory decay.

To bridge this gap, this paper evaluates a modification of the Transformer model that uses ALiBi (Press et al., 2022), a recency bias added to attention scores.

Surprisal estimates with ALiBi show an improved fit to human reading times compared to a standard Transformer baseline.

A subsequent analysis of attention heads suggests that ALiBi's mixture of slopes -- which determine the rate of memory decay in each attention head -- may play a role in the improvement by helping models with ALiBi to track different kinds of linguistic dependencies.
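
As a rough illustration of the mechanism described above (a minimal sketch, not the authors' implementation; all names are illustrative), the following PyTorch code adds an ALiBi-style linear recency bias to causal attention scores, with one slope per head controlling how quickly attention to distant context decays.

import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    # Geometric sequence of slopes as in Press et al. (2022) for a
    # power-of-two head count: 2^(-8/H), 2^(-16/H), ..., 2^(-8).
    ratio = 2.0 ** (-8.0 / num_heads)
    return torch.tensor([ratio ** (h + 1) for h in range(num_heads)])

def alibi_attention(q, k, v, slopes):
    # q, k, v: (num_heads, seq_len, head_dim); slopes: (num_heads,)
    num_heads, seq_len, head_dim = q.shape
    scores = q @ k.transpose(-2, -1) / head_dim ** 0.5          # (H, T, T)

    # Linear recency bias: each key is penalized in proportion to its
    # distance from the query; a larger slope means faster memory decay.
    pos = torch.arange(seq_len)
    distance = (pos[:, None] - pos[None, :]).clamp(min=0)       # (T, T)
    scores = scores - slopes[:, None, None] * distance

    # Causal mask: each position attends only to the preceding context.
    future = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(future, float("-inf"))

    return torch.softmax(scores, dim=-1) @ v

Because the slopes form a geometric mixture, different attention heads receive different effective memory windows: heads with large slopes decay quickly and focus on recent context, while heads with small slopes retain longer-range information.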

Clark, Christian; Oh, Byung-Doh; Schuler, William (2024). Linear Recency Bias During Training Improves Transformers' Fit to Reading Times.
