Publications

CoKe: Customizable Fine-Grained Story Evaluation via Chain-of-Keyword Rationalization

Abstract

Evaluating creative text such as human-written stories with language models has always been a challenging task, owing to the subjectivity of multi-annotator ratings. To mimic the human thinking process, chain of thought (CoT) generates free-text explanations that help guide a model's predictions, and Self-Consistency (SC) marginalizes predictions over multiple generated explanations. In this study, we discover that widely-used self-consistency reasoning methods yield suboptimal results due to an objective mismatch between generating 'fluent-looking' explanations and actually producing a good rating prediction for an aspect of a story. To overcome this challenge, we propose Chain-of-Keywords (CoKe), which generates a sequence of keywords before generating a free-text rationale; these keywords guide the rating prediction of our evaluation language model. We then generate a diverse set of such keyword sequences and aggregate the scores corresponding to these generations. On the StoryER dataset, CoKe, based on our small fine-tuned evaluation models, not only reaches human-level performance and significantly outperforms GPT-4, with a 2x boost in correlation with human annotators, but also requires drastically fewer parameters.
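A minimal sketch of the pipeline the abstract describes, assuming a generic text-generation callable: the `generate` parameter, the prompt wording, the 1-5 rating scale, and the mean aggregation are all illustrative placeholders, not the paper's actual implementation.

import statistics
from typing import Callable

def coke_score(
    story: str,
    aspect: str,
    generate: Callable[[str], str],
    num_samples: int = 10,
) -> float:
    """CoKe-style sketch: sample diverse keyword chains, let each chain
    guide a rationale and a rating, then aggregate the ratings."""
    ratings = []
    for _ in range(num_samples):
        # 1. Generate a chain of keywords for the target aspect first.
        keywords = generate(
            f"Story:\n{story}\n\nList keywords about its {aspect}:"
        )
        # 2. Condition the free-text rationale on those keywords.
        rationale = generate(
            f"Story:\n{story}\nKeywords: {keywords}\n"
            f"Explain the story's {aspect} using these keywords:"
        )
        # 3. Predict a rating guided by the keywords and rationale.
        rating = generate(
            f"Story:\n{story}\nKeywords: {keywords}\n"
            f"Rationale: {rationale}\nRate the {aspect} from 1 to 5:"
        )
        ratings.append(float(rating.strip()))
    # 4. Aggregate scores across the diverse generations (mean here;
    #    the paper's aggregation scheme may differ).
    return statistics.mean(ratings)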

Metadata

publication
arXiv preprint arXiv:2503.17136, 2025
year
2025
publication date
2025/3/21
authors
Brihi Joshi, Sriram Venkatapathy, Mohit Bansal, Nanyun Peng, Haw-Shiuan Chang
link
https://arxiv.org/abs/2503.17136
resource_link
https://arxiv.org/pdf/2503.17136