Publications
Parameter-Efficient Tuning with Special Token Adaptation
Abstract
Parameter-efficient tuning aims to update only a small subset of parameters when adapting a pretrained model to downstream tasks. In this work, we introduce PASTA, in which we only modify the special token representations (e.g., [SEP] and [CLS] in BERT) before the self-attention module at each layer in Transformer-based models. PASTA achieves performance comparable to full finetuning on natural language understanding tasks, including text classification and named entity recognition (NER), while training only up to 0.029% of the total parameters. Our work not only provides a simple yet effective way of parameter-efficient tuning, which has a wide range of practical applications when deploying finetuned models for multiple tasks, but also demonstrates the pivotal role of special tokens in pretrained language models.
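A minimal sketch of the mechanism the abstract describes, assuming a PyTorch setting: a single trainable offset vector per layer is added to the hidden states at special token positions ([CLS]/[SEP]) right before self-attention, while the pretrained model stays frozen. All names and shapes here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SpecialTokenAdapter(nn.Module):
    """Illustrative sketch of special-token adaptation (hypothetical API).

    Holds one trainable offset vector per Transformer layer; these vectors
    are the only parameters updated during tuning.
    """

    def __init__(self, num_layers: int, hidden_size: int):
        super().__init__()
        # One trainable vector per layer, initialized to zero so training
        # starts from the unmodified pretrained representations.
        self.offsets = nn.Parameter(torch.zeros(num_layers, hidden_size))

    def forward(
        self,
        hidden_states: torch.Tensor,      # (batch, seq_len, hidden_size)
        special_token_mask: torch.Tensor, # (batch, seq_len), 1 at [CLS]/[SEP]
        layer: int,
    ) -> torch.Tensor:
        # Add the layer-specific offset only at special token positions,
        # leaving all other token representations untouched.
        offset = self.offsets[layer]                              # (hidden_size,)
        mask = special_token_mask.unsqueeze(-1).to(hidden_states.dtype)
        return hidden_states + mask * offset
```

In use, this module would be called on the hidden states feeding each layer's self-attention block, with the rest of the pretrained model's parameters kept frozen.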
Metadata
- publication
- EACL 2023
- year
- 2022
- publication date
- 2022/10/10
- authors
- Xiaocong Yang, James Y. Huang, Wenxuan Zhou, Muhao Chen
- link
- https://arxiv.org/abs/2210.04382
- resource_link
- https://arxiv.org/pdf/2210.04382
- journal
- EACL 2023