Publications

Controlled Text Generation with Hidden Representation Transformations

Abstract

We propose CHRT (Control Hidden Representation Transformation) - a controlled language generation framework that steers large language models to generate text pertaining to certain attributes (such as toxicity). CHRT gains attribute control by modifying the hidden representations of the base model through learned transformations. We employ a contrastive-learning framework to learn these transformations, which can be combined to gain multi-attribute control. The effectiveness of CHRT is experimentally shown by comparing it with seven baselines over three attributes. CHRT outperforms all the baselines in the tasks of detoxification, positive sentiment steering, and text simplification while minimizing the loss in linguistic quality. Further, our approach adds the lowest inference latency, only 0.01 seconds more than the base model, making it the most suitable for high-performance production environments. We open-source our code and release two novel datasets to further propel controlled language generation research.
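To make the core idea concrete, below is a minimal sketch of steering a language model by transforming its hidden states and blending several attribute-specific transformations for multi-attribute control. The module names, layer sizes, residual form, and weighted-average combination are illustrative assumptions, not the authors' released CHRT implementation or training objective.

```python
# Illustrative sketch (assumptions, not the paper's code): a learned
# transformation is applied to the base model's hidden states to steer
# generation toward a target attribute; several such transformations
# are combined for multi-attribute control.
import torch
import torch.nn as nn


class HiddenTransform(nn.Module):
    """A learned transformation over the base model's hidden states.

    In CHRT such transformations are trained with a contrastive objective
    so that the transformed representations favor a target attribute
    (e.g., low toxicity or positive sentiment).
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, hidden_size),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual update keeps transformed states close to the originals,
        # which helps preserve the base model's linguistic quality.
        return hidden_states + self.net(hidden_states)


def combine_transforms(hidden_states, transforms, weights):
    """Blend several attribute transformations (weighted average).

    The weighted average is an assumption made here for illustration,
    not necessarily the combination scheme used in the paper.
    """
    out = torch.zeros_like(hidden_states)
    for transform, weight in zip(transforms, weights):
        out = out + weight * transform(hidden_states)
    return out / sum(weights)


if __name__ == "__main__":
    hidden = torch.randn(1, 8, 768)        # (batch, seq_len, hidden_size)
    detox = HiddenTransform(768)           # hypothetically trained for detoxification
    sentiment = HiddenTransform(768)       # hypothetically trained for positive sentiment
    steered = combine_transforms(hidden, [detox, sentiment], [0.5, 0.5])
    print(steered.shape)                   # torch.Size([1, 8, 768])
```

Because only a lightweight transformation runs on top of the base model's hidden states, this style of steering adds very little inference latency, consistent with the 0.01-second overhead reported in the abstract.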

Metadata

Publication: 2023 ACL Findings
Year: 2023
Publication date: 2023/5/30
Authors: Vaibhav Kumar, Hana Koorehdavoudi, Masud Moshtaghi, Amita Misra, Ankit Chadha, Emilio Ferrara
Link: https://arxiv.org/abs/2305.19230
Resource link: https://arxiv.org/pdf/2305.19230
Conference: 2023 ACL Findings
Preprint: arXiv:2305.19230