Publications

Machine Translation Robustness to Natural Asemantic Variation

Abstract

Current Machine Translation (MT) models still struggle with more challenging input, such as noisy data and tail-end words and phrases. Several works have addressed this robustness issue by identifying specific categories of noise and variation and then tuning models to perform better on them. An important yet under-studied category involves minor variations in nuance (non-typos) that preserve meaning with respect to the target language. We introduce and formalize this category as Natural Asemantic Variation (NAV) and investigate it in the context of MT robustness. We find that existing MT models fail when presented with NAV data, but we demonstrate strategies to improve performance on NAV by fine-tuning them with human-generated variations. We also show that NAV robustness can be transferred across languages and find that synthetic perturbations can achieve some but not all of the benefits of organic NAV data.

Metadata

publication: arXiv preprint arXiv:2205.12514, 2022
year: 2022
publication date: 2022/5/25
authors: Jacob Bremerman, Xiang Ren, Jonathan May
link: https://arxiv.org/abs/2205.12514
resource_link: https://arxiv.org/pdf/2205.12514
journal: arXiv preprint arXiv:2205.12514