Detailed info
Enriched Music Representations With Multiple Cross-Modal Contrastive Learning
Authors: Andres Ferraro, Xavier Favory, Konstantinos Drossos, Yuntae Kim, Dmitry Bogdanov
Abstract: Modeling the various aspects that make a music piece unique is a challenging task, requiring the combination of multiple sources of information. Deep learning is commonly used to obtain representations from various sources of information, such as the audio, interactions between users and songs, or associated genre metadata. Recently, contrastive learning has led to representations that generalize better than those from traditional supervised methods. In this paper, we present a novel approach that combines multiple types of information related to music using cross-modal contrastive learning, allowing us to learn audio features from heterogeneous data simultaneously. We align the latent representations obtained from playlist-track interactions, genre metadata, and the tracks’ audio by maximizing the agreement between these modality representations using a contrastive loss. We evaluate our approach on three tasks, namely genre classification, playlist continuation, and automatic tagging. We compare the performance with a baseline audio-based CNN trained to predict these modalities. We also study the importance of including multiple sources of information when training our embedding model. The results suggest that the proposed method outperforms the baseline in all three downstream tasks and achieves comparable performance to the state-of-the-art.
Publication type: Journal
Title of the journal: IEEE Signal Processing Letters
Year of Publication: 2021
Pages: 733-737
Volume: 28
Publisher: IEEE
URL: https://zenodo.org/record/5723442
DOI: 10.1109/LSP.2021.3071082
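The core idea summarized in the abstract, aligning an audio embedding with embeddings derived from playlist-track interactions and genre metadata via a contrastive loss, can be sketched compactly. The snippet below is a hypothetical PyTorch-style illustration, not the authors' released implementation: it assumes an InfoNCE-style contrastive term per modality, with made-up names (`contrastive_alignment_loss`, `audio_emb`, `playlist_emb`, `genre_emb`), and sums one alignment loss between the audio embedding and each of the other two modality embeddings, so a single audio representation is pushed to agree with both heterogeneous views at once.

```python
# Hypothetical sketch of a multiple cross-modal contrastive objective
# (illustrative names; not the paper's released code).

import torch
import torch.nn.functional as F


def contrastive_alignment_loss(anchor, target, temperature=0.1):
    """InfoNCE-style loss: each anchor row should match the target row
    with the same index within the batch; all other rows act as negatives."""
    anchor = F.normalize(anchor, dim=-1)
    target = F.normalize(target, dim=-1)
    logits = anchor @ target.t() / temperature  # (B, B) cosine similarities
    labels = torch.arange(anchor.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, labels)


# Toy usage: a batch of 8 tracks with 128-d embeddings per modality.
batch, dim = 8, 128
audio_emb = torch.randn(batch, dim, requires_grad=True)  # from the audio CNN
playlist_emb = torch.randn(batch, dim)                   # from playlist-track interactions
genre_emb = torch.randn(batch, dim)                      # from genre metadata

# One contrastive term per modality, summed, so the shared audio
# embedding must simultaneously agree with both other views.
loss = (contrastive_alignment_loss(audio_emb, playlist_emb)
        + contrastive_alignment_loss(audio_emb, genre_emb))
loss.backward()
print(float(loss))
```

Summing a separate per-modality term is one natural reading of "multiple cross-modal contrastive learning"; the paper should be consulted for the exact loss formulation and encoder architectures.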