Publications

Attributing Culture-Conditioned Generations to Pretraining Corpora

Abstract

In open-ended generative tasks like narrative writing or dialogue, large language models often exhibit cultural biases, showing limited knowledge and generating templated outputs for less prevalent cultures. Recent work shows that these biases may stem from uneven cultural representation in pretraining corpora. This work investigates how pretraining leads to biased culture-conditioned generations by analyzing how models associate entities with cultures based on patterns in the pretraining data. We propose the MEMOed framework (MEMOrization from prEtraining Document) to determine whether a generation for a culture arises from memorization. Applying MEMOed to culture-conditioned generations about food and clothing for 110 cultures, we find that cultures with high frequency in the pretraining data yield more generations with memorized symbols, while some low-frequency cultures produce none. Additionally, the model favors generating entities with extraordinarily high frequency regardless of the conditioned culture, reflecting a bias toward frequent pretraining terms irrespective of their relevance. We hope that the MEMOed framework and our insights will inspire more work on attributing model performance to pretraining data.
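To make the attribution idea concrete, here is a minimal Python sketch of a frequency-based memorization check in the spirit of MEMOed: a generated entity is flagged as memorized for a culture when the pair co-occurs in enough pretraining documents and that culture accounts for a dominant share of the entity's co-occurrences. The helper names, the thresholds (`min_docs`, `dominance`), and the simple substring matching are illustrative assumptions; the paper's actual criteria over the full pretraining corpus are more involved.

```python
from collections import defaultdict

def cooccurrence_counts(documents, cultures, entities):
    """Count, for each (entity, culture) pair, the number of documents in
    which the two terms co-occur. A toy stand-in for scanning the full
    pretraining corpus; matching here is naive lowercase substring search."""
    counts = defaultdict(int)
    for doc in documents:
        text = doc.lower()
        doc_cultures = [c for c in cultures if c.lower() in text]
        doc_entities = [e for e in entities if e.lower() in text]
        for culture in doc_cultures:
            for entity in doc_entities:
                counts[(entity, culture)] += 1
    return counts

def is_memorized(entity, culture, counts, cultures, min_docs=3, dominance=0.5):
    """Flag `entity` as memorized for `culture` if the pair co-occurs in at
    least `min_docs` documents and `culture` accounts for at least a
    `dominance` share of the entity's co-occurrences across all cultures.
    Both thresholds are illustrative assumptions, not the paper's values."""
    pair = counts.get((entity, culture), 0)
    total = sum(counts.get((entity, c), 0) for c in cultures)
    return pair >= min_docs and total > 0 and pair / total >= dominance

if __name__ == "__main__":
    docs = [
        "Japanese cuisine features sushi and miso soup.",
        "Sushi is a staple of Japanese food culture.",
        "In Japan, sushi is served everywhere.",  # "Japan" does not match "Japanese"
        "Pizza is popular worldwide.",
    ]
    cultures = ["Japanese", "Italian"]
    entities = ["sushi", "pizza"]
    counts = cooccurrence_counts(docs, cultures, entities)
    print(is_memorized("sushi", "Japanese", counts, cultures, min_docs=2))  # True
    print(is_memorized("pizza", "Japanese", counts, cultures, min_docs=2))  # False
```

The dominance ratio is what separates genuine culture-entity memorization from the second failure mode the abstract describes: an entity that is merely frequent overall will co-occur with many cultures, so no single culture dominates its counts.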

Metadata

publication
arXiv preprint arXiv:2412.20760, 2024
year
2024
publication date
2024/12/30
authors
Huihan Li, Arnav Goel, Keyu He, Xiang Ren
link
https://arxiv.org/abs/2412.20760
resource link
https://arxiv.org/pdf/2412.20760