Publications

ModalityMirror: Improving Audio Classification in Modality Heterogeneity Federated Learning with Multimodal Distillation

Abstract

Multimodal Federated Learning frequently encounters client modality heterogeneity, which leads to degraded performance for the secondary modality in multimodal learning. This is particularly prevalent in audiovisual learning, where audio is often assumed to be the weaker modality in recognition tasks. To address this challenge, we introduce ModalityMirror, which improves audio model performance by leveraging knowledge distillation from an audiovisual federated learning model. ModalityMirror involves two phases: a modality-wise FL stage that aggregates uni-modal encoders, and a federated knowledge distillation stage on multi-modality clients that trains a unimodal student model. Our results demonstrate that ModalityMirror significantly improves audio classification compared to state-of-the-art FL methods such as Harmony, particularly in audiovisual FL with missing video. Our approach unlocks the potential of exploiting the diverse modality spectrum inherent in multimodal FL.
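The second phase described above distills an audiovisual teacher into an audio-only student. As a rough illustration (not the paper's exact objective), the standard temperature-scaled knowledge-distillation loss such a phase typically builds on can be sketched as follows; the function names and the temperature value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = np.asarray(logits, dtype=float) / temperature
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened outputs.

    Illustrative sketch of standard knowledge distillation: the T**2
    factor keeps gradient magnitudes comparable across temperatures.
    This is a generic KD objective, not ModalityMirror's exact loss.
    """
    p_t = softmax(teacher_logits, temperature)  # soft teacher targets
    p_s = softmax(student_logits, temperature)  # student predictions
    return float(temperature ** 2 * np.sum(p_t * (np.log(p_t) - np.log(p_s))))
```

In an audiovisual setting, `teacher_logits` would come from the aggregated audiovisual model and `student_logits` from the audio-only student, so the student learns to mimic the richer multimodal predictions even when video is absent at inference time.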

Metadata

publication
arXiv preprint arXiv:2408.15803, 2024
year
2024
publication date
2024/8/28
authors
Tiantian Feng, Tuo Zhang, Salman Avestimehr, Shrikanth S Narayanan
link
https://arxiv.org/abs/2408.15803
resource_link
https://arxiv.org/pdf/2408.15803