Publications
Multi-modal, Multi-task, Music BERT: A Context-Aware Music Encoder Based on Transformers
Abstract
Computational machine intelligence approaches have enabled a variety of music-centric technologies that support creating, sharing, and interacting with music content. Strong performance on specific downstream tasks, such as music genre detection and music emotion recognition, is paramount to ensuring broad capabilities for computational music understanding and Music Information Retrieval (MIR). Traditional approaches have relied on supervised learning to train models for these music-related tasks. However, such approaches require copious annotated data and may still capture only one view of music, namely the view tied to the specific task at hand. We present a new model for music understanding that leverages self-supervision and cross-domain learning. After pre-training with masked reconstruction using bidirectional self-attention transformers, the model is fine-tuned on several downstream music understanding tasks. The results show that our multi-modal, multi-task music transformer, which we call M3BERT, generates features that improve performance on several music-related tasks, indicating the promise of self-supervised and semi-supervised learning for a more generalized and robust computational approach to modeling music. Our work can serve as a starting point for many music modeling tasks, with potential applications in learning deep representations and enabling robust end-user technologies.
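To make the pre-training objective concrete, below is a minimal PyTorch sketch of masked-reconstruction pre-training: random input frames are hidden and a transformer encoder is trained to reconstruct them. All names, dimensions, and the zero-masking scheme here are illustrative assumptions; this does not reproduce M3BERT's published architecture or hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedReconstructionEncoder(nn.Module):
    """Toy masked-reconstruction pre-training model (illustrative only).

    Feature sizes, layer counts, and the masking scheme are assumptions
    for this sketch, not M3BERT's published configuration.
    """

    def __init__(self, feat_dim=80, d_model=256, nhead=4, num_layers=4):
        super().__init__()
        self.in_proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.out_proj = nn.Linear(d_model, feat_dim)  # reconstruction head

    def forward(self, x, mask_prob=0.15):
        # x: (batch, time, feat_dim) acoustic feature frames, e.g. log-mels.
        mask = torch.rand(x.shape[:2], device=x.device) < mask_prob
        x_masked = x.masked_fill(mask.unsqueeze(-1), 0.0)  # hide masked frames
        recon = self.out_proj(self.encoder(self.in_proj(x_masked)))
        # Reconstruction loss is computed on the masked positions only.
        loss = F.l1_loss(recon[mask], x[mask])
        return loss, recon

# Usage: one self-supervised pre-training step on unlabeled audio features.
model = MaskedReconstructionEncoder()
frames = torch.randn(8, 200, 80)  # batch of 8 clips, 200 frames each
loss, _ = model(frames)
loss.backward()
```

After pre-training in this fashion on unlabeled data, the encoder's hidden states can be fed to small task-specific heads and fine-tuned on labeled downstream tasks, which is the overall recipe the abstract describes.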
Metadata
- Publication: Research Square (preprint)
- Year: 2022
- Publication date: 2022/9/23
- Authors: Timothy Greer, Xuan Shi, Benjamin Ma, Shrikanth Narayanan
- Link: https://www.researchsquare.com/article/rs-2090671/latest
- Resource link: https://www.researchsquare.com/article/rs-2090671/latest.pdf