Publications
Handling Syntactic Divergence in Low-resource Machine Translation
Abstract
Despite impressive empirical successes of neural machine translation (NMT) on standard benchmarks, limited parallel data impedes the application of NMT models to many language pairs. Data augmentation methods such as back-translation make it possible to use monolingual data to help alleviate these issues, but back-translation itself fails in extreme low-resource scenarios, especially for syntactically divergent languages. In this paper, we propose a simple yet effective solution, whereby target-language sentences are re-ordered to match the order of the source and used as an additional source of training-time supervision. Experiments with simulated low-resource Japanese-to-English and real low-resource Uyghur-to-English scenarios find significant improvements over other semi-supervised alternatives.
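The core idea, reordering target-language sentences to follow source word order, can be illustrated with a minimal sketch. This assumes word alignments between source and target are available (e.g. from an automatic aligner); the function name and the toy Japanese-English alignment below are illustrative and not the paper's actual implementation.

```python
def reorder_target(tgt_tokens, alignments):
    """Reorder target tokens to follow source word order.

    alignments: list of (src_idx, tgt_idx) pairs; target tokens with
    no alignment keep their original relative order at the end.
    """
    aligned = {t for _, t in alignments}
    # Stable sort by source position: aligned target indices now
    # appear in the order their source counterparts do.
    ordered = [t for _, t in sorted(alignments, key=lambda p: p[0])]
    # Append unaligned target tokens in their original order.
    ordered += [i for i in range(len(tgt_tokens)) if i not in aligned]
    seen, result = set(), []
    for i in ordered:
        if i not in seen:
            seen.add(i)
            result.append(tgt_tokens[i])
    return result

# Toy SOV Japanese source aligned to SVO English target:
#   src: watashi-wa ringo-o tabeta   tgt: I ate an apple
tgt = ["I", "ate", "an", "apple"]
align = [(0, 0), (1, 2), (1, 3), (2, 1)]  # ringo-o -> an/apple, tabeta -> ate
print(reorder_target(tgt, align))  # ['I', 'an', 'apple', 'ate']
```

The reordered English ("I an apple ate") mirrors the source's SOV order, giving the model an additional, syntactically aligned training signal.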
Metadata
- publication
- Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP 2019)
- year
- 2019
- publication date
- 2019/8/30
- authors
- Chunting Zhou, Xuezhe Ma, Junjie Hu, Graham Neubig
- link
- https://arxiv.org/abs/1909.00040
- resource_link
- https://arxiv.org/pdf/1909.00040
- conference
- Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP 2019)