Publications
Audio-visual child-adult speaker classification in dyadic interactions
Abstract
Interactions involving children span a wide range of important domains, from learning to clinical diagnostic and therapeutic contexts. Automated analyses of such interactions are motivated by the need for accurate insights and for scale and robustness across diverse and wide-ranging conditions. Identifying the speech segments belonging to the child is a critical step in such modeling. Conventional child-adult speaker classification typically relies on audio-only modeling, overlooking visual signals that convey speech-articulation information, such as lip motion. Building on an audio-only child-adult speaker classification pipeline, we propose incorporating visual cues through active speaker detection and visual processing models. Our framework involves video preprocessing, utterance-level child-adult speaker detection, and late fusion of modality-specific predictions. We demonstrate …
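The abstract mentions late fusion of modality-specific predictions. A minimal sketch of one common late-fusion scheme, weighted averaging of per-utterance class posteriors; the function name, the probabilities, and the weight `alpha` are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def late_fuse(audio_probs, video_probs, alpha=0.5):
    """Weighted late fusion of per-utterance class probabilities.

    audio_probs, video_probs: shape (n_utterances, 2), columns are
    hypothetical posteriors [P(child), P(adult)] from each modality.
    alpha weights the audio modality; 1 - alpha weights the visual one.
    Returns the fused class index per utterance (0 = child, 1 = adult).
    """
    fused = alpha * np.asarray(audio_probs) + (1 - alpha) * np.asarray(video_probs)
    return fused.argmax(axis=1)

# Toy example: the modalities agree on the first utterance and
# disagree on the second; fusion resolves the second toward "child".
audio = [[0.9, 0.1], [0.4, 0.6]]
video = [[0.8, 0.2], [0.7, 0.3]]
labels = late_fuse(audio, video, alpha=0.5)  # → array([0, 0])
```

In practice the weight can be tuned on a validation set, or replaced by a small learned combiner over the two posteriors; the averaging form above is just the simplest instance.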
Metadata
- publication
- ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- year
- 2024
- publication date
- 2024/4/14
- authors
- Anfeng Xu, Kevin Huang, Tiantian Feng, Helen Tager-Flusberg, Shrikanth Narayanan
- link
- https://ieeexplore.ieee.org/abstract/document/10447515/
- resource_link
- https://arxiv.org/pdf/2310.01867
- conference
- ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- pages
- 8090-8094
- publisher
- IEEE