Publications

Estimating Large Language Model Capabilities without Labeled Test Data

Abstract

Large Language Models (LLMs) have the impressive ability to perform in-context learning (ICL) from only a few examples, but the success of ICL varies widely from task to task. Thus, it is important to quickly determine whether ICL is applicable to a new task, but directly evaluating ICL accuracy can be expensive in situations where test data is expensive to annotate--the exact situations where ICL is most appealing. In this paper, we propose the task of ICL accuracy estimation, in which we predict the accuracy of an LLM when doing in-context learning on a new task given only unlabeled test data for that task. To perform ICL accuracy estimation, we propose a method that trains a meta-model using LLM confidence scores as features. We compare our method to several strong accuracy estimation baselines on a new benchmark that covers 4 LLMs and 3 task collections. The meta-model improves over all baselines …
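The meta-model idea in the abstract (predicting a task's ICL accuracy from the LLM's confidence scores, using only unlabeled test data for that task) can be illustrated with a minimal sketch. The details below are assumptions for illustration, not the paper's exact recipe: confidence is taken as a per-example probability in [0, 1], each task is summarized by a histogram of those confidences, and a ridge regressor stands in for the meta-model. The confidence values here are synthetic; in practice they would come from running the LLM on each task's test inputs.

import numpy as np
from sklearn.linear_model import Ridge

def confidence_profile(confidences, n_bins=10):
    # Summarize one task's per-example confidence scores as a fixed-length
    # feature vector: a normalized histogram over [0, 1].
    hist, _ = np.histogram(confidences, bins=n_bins, range=(0.0, 1.0))
    return hist / max(len(confidences), 1)

# Meta-training tasks: tasks with labeled data, so the true ICL accuracy is known.
# Each entry pairs per-example confidence scores with the observed accuracy.
rng = np.random.default_rng(0)
train_tasks = [(rng.beta(a, 2.0, size=200), acc)
               for a, acc in [(5.0, 0.82), (2.0, 0.55), (1.0, 0.31), (4.0, 0.70)]]

X_train = np.stack([confidence_profile(c) for c, _ in train_tasks])
y_train = np.array([acc for _, acc in train_tasks])
meta_model = Ridge(alpha=1.0).fit(X_train, y_train)

# A new task with only unlabeled test data: run the LLM, collect its confidence
# on each prediction, and ask the meta-model for an accuracy estimate.
# No gold labels for the new task are needed.
new_task_confidences = rng.beta(3.0, 2.0, size=200)  # stand-in for real LLM scores
estimated_accuracy = meta_model.predict(
    confidence_profile(new_task_confidences).reshape(1, -1))[0]
print(f"Estimated ICL accuracy on the new task: {estimated_accuracy:.2f}")

In practice the accuracy estimation performance of such a meta-model would be compared against label-free baselines (for example, average confidence as a direct accuracy proxy), which is the kind of comparison the abstract's benchmark describes.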

Metadata

publication
arXiv e-prints, arXiv: 2305.14802, 2023
year
2023
publication date
2023/5
authors
Harvey Yiyun Fu, Qinyuan Ye, Albert Xu, Xiang Ren, Robin Jia
link
https://ui.adsabs.harvard.edu/abs/2023arXiv230514802Y/abstract
journal
arXiv e-prints
arXiv ID
2305.14802