Publications

Towards disentangling the contributions of articulation and acoustics in multimodal phoneme recognition

Abstract

Although many previous studies have carried out multimodal learning with real-time MRI data that captures the audio-visual kinematics of the vocal tract during speech, these studies have been limited by their reliance on multi-speaker corpora. This prevents such models from learning a detailed relationship between acoustics and articulation due to considerable cross-speaker variability. In this study, we develop unimodal audio and video models as well as multimodal models for phoneme recognition using a long-form single-speaker MRI corpus, with the goal of disentangling and interpreting the contributions of each modality. Audio and multimodal models show similar performance on different phonetic manner classes but diverge on places of articulation. Interpretation of the models' latent space shows similar encoding of the phonetic space across audio and multimodal models, while the models' attention weights highlight differences in acoustic and articulatory timing for certain phonemes.
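To make the setup concrete, below is a minimal sketch of a late-fusion multimodal phoneme classifier over audio and MRI-video feature sequences with per-modality attention pooling. This is an illustrative assumption, not the paper's architecture: the module names, feature dimensions, and fusion strategy are placeholders, and the attention weights are shown only because the abstract mentions inspecting such weights for timing differences.

```python
# Illustrative late-fusion phoneme classifier over audio and MRI-video
# features. Dimensions, module names, and the fusion strategy are
# assumptions for this sketch, not the paper's architecture.
import torch
import torch.nn as nn


class LateFusionPhonemeClassifier(nn.Module):
    def __init__(self, audio_dim=80, video_dim=256, hidden_dim=256, n_phonemes=40):
        super().__init__()
        # Per-modality encoders: bidirectional GRUs over frame sequences.
        self.audio_enc = nn.GRU(audio_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.video_enc = nn.GRU(video_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Attention pooling per modality, then concatenate and classify.
        self.audio_attn = nn.Linear(2 * hidden_dim, 1)
        self.video_attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(4 * hidden_dim, n_phonemes)

    def _attend(self, states, attn_layer):
        # Softmax attention over time; weights like these are the kind one
        # might inspect to compare acoustic vs. articulatory timing.
        weights = torch.softmax(attn_layer(states), dim=1)
        return (weights * states).sum(dim=1), weights

    def forward(self, audio, video):
        a_states, _ = self.audio_enc(audio)   # (B, T_audio, 2H)
        v_states, _ = self.video_enc(video)   # (B, T_video, 2H)
        a_vec, a_w = self._attend(a_states, self.audio_attn)
        v_vec, v_w = self._attend(v_states, self.video_attn)
        logits = self.classifier(torch.cat([a_vec, v_vec], dim=-1))
        return logits, (a_w, v_w)


# Example: classify a batch of 8 phoneme-aligned segments.
model = LateFusionPhonemeClassifier()
audio = torch.randn(8, 50, 80)    # e.g. 50 frames of 80-dim log-mel features
video = torch.randn(8, 20, 256)   # e.g. 20 frames of flattened MRI features
logits, (audio_weights, video_weights) = model(audio, video)
print(logits.shape)               # torch.Size([8, 40])
```

A unimodal baseline of the sort the abstract describes could reuse either encoder branch on its own, with the classifier applied to that branch's pooled vector.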

Metadata

publication
arXiv preprint arXiv:2505.24059, 2025
year
2025
publication date
2025/5/29
authors
Sean Foley, Hong Nguyen, Jihwan Lee, Sudarsana Reddy Kadiri, Dani Byrd, Louis Goldstein, Shrikanth Narayanan
link
https://arxiv.org/abs/2505.24059
resource_link
https://arxiv.org/pdf/2505.24059