
LSG Attention: Extrapolation of pretrained Transformers to long sequences

Abstract: Transformer models achieve state-of-the-art performance on a wide range of NLP tasks. However, they suffer from a prohibitive limitation due to the self-attention mechanism, which induces O(n²) complexity with respect to sequence length. To address this limitation we introduce the LSG architecture, which relies on Local, Sparse and Global attention. We show that LSG attention is fast, efficient and competitive in classification and summarization tasks on long documents. Interestingly, it can also be used to adapt existing pretrained models to efficiently extrapolate to longer sequences with no additional training. Along with the introduction of the LSG attention mechanism, we propose tools to train new models and adapt existing ones based on this mechanism.
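As a rough illustration of how the three attention patterns named in the abstract can be combined, the sketch below builds a boolean attention mask mixing a local window, strided sparse connections and a few global tokens. This is a minimal pedagogical sketch, not the authors' LSG implementation: the function name, the parameters (local_window, sparse_stride, n_global) and the strided choice of sparse positions are assumptions, and an efficient implementation would compute attention block-wise rather than materialize the full n×n mask.

```python
import numpy as np


def lsg_style_attention_mask(seq_len: int, local_window: int = 32,
                             sparse_stride: int = 16, n_global: int = 4) -> np.ndarray:
    """Boolean mask combining local, sparse and global attention patterns.

    Illustrative sketch only; parameter names and the strided selection of
    sparse positions are assumptions, not the LSG implementation.
    """
    idx = np.arange(seq_len)
    mask = np.zeros((seq_len, seq_len), dtype=bool)

    # Local: each token attends to a symmetric window of neighbouring tokens.
    mask |= np.abs(idx[:, None] - idx[None, :]) <= local_window // 2

    # Sparse: every token additionally attends to a strided subset of positions.
    mask |= (idx[None, :] % sparse_stride) == 0

    # Global: a handful of tokens attend to, and are attended by, all positions.
    mask[:n_global, :] = True
    mask[:, :n_global] = True
    return mask


if __name__ == "__main__":
    m = lsg_style_attention_mask(4096)
    # The fraction of attended pairs stays far below the 100% of full
    # O(n^2) self-attention, which is the point of the sparse pattern.
    print(f"attended pairs: {m.sum()} / {m.size} ({100 * m.mean():.1f}%)")
```

With the default toy settings above, only a few percent of the n² token pairs are attended, which is what makes this family of attention patterns tractable on long documents.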
Document type: Preprint (working paper)

https://hal.mines-ales.fr/hal-03835159
Contributor: Administrateur IMT - Mines Alès
Submitted on: Monday, October 31, 2022 - 14:05:18
Last modified on: Wednesday, November 16, 2022 - 15:34:06

Identifiers
HAL Id: hal-03835159

Citation

Charles Condevaux, Sébastien Harispe. LSG Attention: Extrapolation of pretrained Transformers to long sequences. 2022. ⟨hal-03835159⟩
