Publications

How Predictable Are Large Language Model Capabilities? A Case Study on BIG-bench

Abstract

We investigate the predictability of large language model (LLM) capabilities: given records of past experiments using different model families, numbers of parameters, tasks, and numbers of in-context examples, can we accurately predict LLM performance on new experiment configurations? Answering this question has practical implications for LLM users (e.g., deciding which models to try), developers (e.g., prioritizing evaluation on representative tasks), and the research community (e.g., identifying hard-to-predict capabilities that warrant further investigation). We study the performance prediction problem on experiment records from BIG-bench. On a random train-test split, an MLP-based predictor achieves an R² score greater than 95%, indicating the presence of learnable patterns within the experiment records. We then formulate the problem of searching for "small-bench," an informative subset of BIG-bench tasks from which the performance on the full set can be maximally recovered. We find a subset as informative as BIG-bench Hard for evaluating new model families, while being 3x smaller. Additionally, we find competitive subsets by clustering task representations learned by our MLP-based predictor and selecting tasks close to cluster centroids, highlighting the importance of task diversity in constructing "small-bench."
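To make the two ideas in the abstract concrete, here is a minimal Python sketch, not the authors' implementation: (1) an MLP regressor that predicts performance from experiment features (model family, parameter count, task, number of in-context examples), evaluated with R² on a random train-test split; (2) k-means clustering over task representations, keeping the task nearest each centroid as one way to build a diverse "small-bench." The synthetic data, feature encoding, network size, and subset size below are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical experiment records: (model family id, log10 #parameters,
# task id, #in-context examples) -> performance score in [0, 1].
# A real setup would one-hot encode the categorical ids and use actual
# BIG-bench results; random placeholders are used here so the sketch runs.
n_records, n_families, n_tasks = 5000, 6, 150
X = np.column_stack([
    rng.integers(0, n_families, n_records),   # model family (categorical id)
    rng.uniform(7, 11, n_records),            # log10 parameter count
    rng.integers(0, n_tasks, n_records),      # BIG-bench task id
    rng.integers(0, 5, n_records),            # number of in-context examples
])
y = rng.uniform(0, 1, n_records)              # placeholder performance scores

# (1) Random train-test split + MLP-based predictor, scored with R^2.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print("R^2 on held-out records:", r2_score(y_te, mlp.predict(X_te)))

# (2) Cluster task representations (here: random stand-ins for embeddings
# learned by the predictor) and keep the task nearest each centroid,
# yielding a diverse candidate "small-bench" subset.
task_reprs = rng.normal(size=(n_tasks, 32))
k = 24                                        # target subset size (assumption)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(task_reprs)
dists = np.linalg.norm(
    task_reprs[:, None, :] - km.cluster_centers_[None, :, :], axis=-1
)
small_bench = np.unique(dists.argmin(axis=0))  # nearest task per centroid
print("Selected task ids:", small_bench)
```

The centroid-nearest selection is one simple instantiation of the diversity idea the abstract highlights; the paper's actual search procedure and representations should be taken from the source itself.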

Metadata

publication
arXiv preprint arXiv:2305.14947, 2023
year
2023
publication date
2023/5/24
authors
Qinyuan Ye, Harvey Yiyun Fu, Xiang Ren, Robin Jia
link
https://arxiv.org/abs/2305.14947
resource_link
https://arxiv.org/pdf/2305.14947