Publications

Eliciting and understanding cross-task skills with task-level mixture-of-experts

Abstract

Recent work suggests that transformer models are capable of multi-tasking across diverse NLP tasks and of adapting to new tasks efficiently. However, the potential of these multi-task models may be limited because they use the same set of parameters for all tasks. In contrast, humans tackle tasks more flexibly, making appropriate presumptions about which skills and knowledge are relevant and executing only the necessary computations. Inspired by this, we propose to use task-level mixture-of-experts models, which have a collection of transformer layers (i.e., experts) and a router component that chooses among these experts dynamically and flexibly. We find that these models improve the average relative gain (ARG) metric by 2.6% when adapting to unseen tasks in the few-shot setting and by 5.6% in the zero-shot generalization setting. Further, we show that the learned routing decisions partly rediscover human categorization of NLP tasks: certain experts are strongly associated with extractive tasks, some with classification tasks, and some with tasks requiring world knowledge.
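The routing idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the class name, parameter shapes, and the hard argmax routing rule below are assumptions made for illustration. The key point is that routing happens per *task* (every example of a task is sent through the same chosen expert), unlike token-level MoE.

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 4  # number of expert transformer layers (illustrative value)
HIDDEN = 8       # task-embedding dimension (illustrative value)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class TaskLevelRouter:
    """Hypothetical router: maps a task embedding to a choice of expert.

    Because the decision depends only on the task embedding, all
    examples of one task are routed to the same expert.
    """

    def __init__(self, num_experts, dim):
        # Stand-in for learned routing weights, randomly initialized here.
        self.w = [[random.gauss(0.0, 0.1) for _ in range(dim)]
                  for _ in range(num_experts)]

    def route(self, task_embedding):
        # One logit per expert: dot product of weights with the embedding.
        logits = [sum(wi * xi for wi, xi in zip(row, task_embedding))
                  for row in self.w]
        probs = softmax(logits)
        # Hard routing: pick the most probable expert for this task.
        expert_id = max(range(len(probs)), key=lambda i: probs[i])
        return expert_id, probs

router = TaskLevelRouter(NUM_EXPERTS, HIDDEN)
task_emb = [random.gauss(0.0, 1.0) for _ in range(HIDDEN)]
expert_id, probs = router.route(task_emb)
```

In the paper's setting the routing decision is learned end-to-end; the sketch only shows the shape of the computation (task embedding in, expert index out).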

Metadata

publication: arXiv preprint arXiv:2205.12701, 2022
year: 2022
publication date: 2022/5/25
authors: Qinyuan Ye, Juan Zha, Xiang Ren
link: https://arxiv.org/abs/2205.12701
resource_link: https://arxiv.org/pdf/2205.12701