Music Information Retrieval

Music is ubiquitous in today's world; almost everyone enjoys listening to music. With the rise of streaming platforms, the amount of available music has increased substantially. While users may seemingly benefit from this plethora of available music, it has also become harder for them to explore new music and find songs they like. Personalized access to music libraries and music recommender systems aim to help users discover and retrieve music they enjoy.

To this end, the field of Music Information Retrieval (MIR) strives to make music accessible to all by advancing retrieval applications such as music recommender systems, content-based search, personalized playlist generation, and user interfaces for visually exploring music collections. This includes gathering machine-readable musical data, extracting meaningful features, developing data representations based on these features, and devising methodologies to process and understand that data. Retrieval approaches leverage these representations to index music and provide search and retrieval services.

In our research, we develop methods for analyzing user music consumption behavior, investigate deep learning-based feature extraction for music content analysis, predict the potential success and popularity of songs, and distill sets of features that capture user music preferences for retrieval tasks.


Public Datasets

We employ a variety of datasets that we have curated and used in our research and publications. We are happy to share the following datasets:

  • #nowplaying is a diverse and constantly updated dataset describing the music listening behavior of users, built from Twitter. Twitter is frequently used to post which music a user is currently listening to; from such tweets, we extract track and artist information as well as further metadata. You can find the dataset on Zenodo: https://doi.org/10.5281/zenodo.2594482 (CC BY 4.0). Please cite this paper when using the dataset.
  • The #nowplaying-RS dataset features context- and content features of listening events. It contains 11.6 million music listening events of 139K users and 346K tracks collected from Twitter. The dataset comes with a rich set of item content features and user context features, as well as timestamps of the listening events. Moreover, some of the user context features imply the cultural origin of the users, and some others—like hashtags—give clues to the emotional state of a user underlying a listening event. You can find the dataset on Zenodo: https://doi.org/10.5281/zenodo.2594537 (CC BY 4.0). Please cite this paper when using the dataset.
  • The Spotify playlists dataset is based on the subset of users in the #nowplaying dataset who publish their #nowplaying tweets via Spotify. In principle, the dataset holds users, their playlists, and the tracks contained in these playlists. You can find the dataset on Zenodo: https://doi.org/10.5281/zenodo.2594556 (CC BY 4.0). Please cite this paper when using the dataset.
  • The Hit Song Prediction dataset features high- and low-level audio descriptors of the songs contained in the Million Song Dataset (extracted via Essentia) for content-based hit song prediction tasks. You can find the dataset on Zenodo: https://doi.org/10.5281/zenodo.3258042 (CC BY 4.0). Please cite this paper when using the dataset.
  • The HSP-S and HSP-L datasets are based on data from AcousticBrainz, Billboard Hot 100, the Million Song Dataset, and last.fm. Both datasets contain audio features, Mel-spectrograms as well as streaming listener- and play-counts. The larger HSP-L dataset contains 73,482 songs, whereas the smaller HSP-S dataset contains 7,736 songs and additionally features Billboard Hot 100 chart measures. You can find the dataset on Zenodo: https://doi.org/10.5281/zenodo.5383858 (CC BY 4.0). Please cite this paper when using the dataset.
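As a minimal sketch of how listening-event data of this kind can be processed (the column names below are illustrative assumptions, not the actual #nowplaying-RS schema; see the Zenodo record for the real fields), one could aggregate per-user play counts and dominant mood hashtags with pandas:

```python
import pandas as pd

# Toy listening events in the spirit of #nowplaying-RS: one row per
# listening event. Column names are assumptions for illustration only.
events = pd.DataFrame({
    "user_id":  ["u1", "u1", "u2", "u2", "u2"],
    "track_id": ["t1", "t2", "t1", "t3", "t3"],
    "hashtag":  ["#happy", "#happy", "#sad", "#sad", "#chill"],
    "timestamp": pd.to_datetime([
        "2019-01-01 10:00", "2019-01-01 11:00",
        "2019-01-02 09:30", "2019-01-02 10:15", "2019-01-03 08:00",
    ]),
})

# Play counts per user -- a typical first step toward user preference profiles.
play_counts = events.groupby("user_id")["track_id"].count()

# Most frequent hashtag per user, e.g. as a rough affective signal.
top_hashtag = events.groupby("user_id")["hashtag"].agg(
    lambda s: s.mode().iloc[0]
)

print(play_counts.to_dict())  # {'u1': 2, 'u2': 3}
print(top_hashtag.to_dict())  # {'u1': '#happy', 'u2': '#sad'}
```

On the real dataset, the same group-by aggregations would run over the CSV files published on Zenodo instead of this toy frame.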

Publications

2024

Emilia Parada-Cabaleiro, Maximilian Mayerl, Stefan Brandl, Marcin Skowron, Markus Schedl, Elisabeth Lex and Eva Zangerle: Song lyrics have become simpler and more repetitive over the last five decades. In Scientific Reports, vol. 14, no. 1, pages 5531. Nature Publishing Group UK London, 2024

2023

Michael Vötter, Maximilian Mayerl, Eva Zangerle and Günther Specht: Song Popularity Prediction using Ordinal Classification. In Proceedings of the 20th Sound and Music Computing Conference. June 15-17, 2023. Stockholm, Sweden. Royal College of Music and KTH Royal Institute of Technology, 2023

Maximilian Mayerl, Michael Vötter, Günther Specht and Eva Zangerle: Pairwise Learning to Rank for Hit Song Prediction. In BTW 2023. Gesellschaft für Informatik e.V., 2023

2022

Eva Zangerle: Recommender Systems for Music Retrieval Tasks. Habilitation Thesis, University of Innsbruck, 2022

Michael Vötter, Maximilian Mayerl, Günther Specht and Eva Zangerle: HSP Datasets: Insights on Song Popularity Prediction. In International Journal of Semantic Computing, pages 1-23. World Scientific Publishing Co., 2022

2021

Maximilian Mayerl, Michael Vötter, Andreas Peintner, Günther Specht and Eva Zangerle: Recognizing Song Mood and Theme: Clustering-based Ensembles. In Working Notes Proceedings of the MediaEval 2021 Workshop. ceur-ws.org, 2021

Michael Vötter, Maximilian Mayerl, Günther Specht and Eva Zangerle: Novel Datasets for Evaluating Song Popularity Prediction Tasks. In IEEE International Symposium on Multimedia, ISM 2021, Virtual Event, November 29 - December 1, 2021, pages 166-173. IEEE, 2021

Martin Pichl and Eva Zangerle: User models for multi-context-aware music recommendation. In Multimedia Tools and Applications, vol. 80, no. 15, pages 22509-22531. Springer, 2021

Eva Zangerle, Chih-Ming Chen, Ming-Feng Tsai and Yi-Hsuan Yang: Leveraging Affective Hashtags for Ranking Music Recommendations. In IEEE Transactions on Affective Computing, vol. 12, no. 1, pages 78-91. 2021

Dominik Kowald, Peter Muellner, Eva Zangerle, Christine Bauer, Markus Schedl and Elisabeth Lex: Support the underground: characteristics of beyond-mainstream music listeners. In EPJ Data Science, vol. 10, no. 1, pages 1-26. Springer, 2021

2020

Julie Cumming, Jin Ha Lee, Brian McFee, Markus Schedl, Johanna Devaney, Cory McKay, Eva Zangerle and Timothy de Reuse: Proceedings of the 21st International Society for Music Information Retrieval Conference, ISMIR 2020, Montreal, Canada, October 11-16, 2020

Eva Zangerle, Martin Pichl and Markus Schedl: User Models for Culture-Aware Music Recommendation: Fusing Acoustic and Cultural Cues. In Transactions of the International Society for Music Information Retrieval, vol. 3, no. 1. Ubiquity Press, 2020

Meijun Liu, Eva Zangerle, Xiao Hu, Alessandro Melchiorre and Markus Schedl: Pandemics, Music, and Collective Sentiment: Evidence from the Outbreak of COVID-19. In Proceedings of the 21st International Society for Music Information Retrieval Conference 2020 (ISMIR 2020), pages 157-165. 2020

Michael Vötter, Maximilian Mayerl, Günther Specht and Eva Zangerle: Recognizing Song Mood and Theme: Leveraging Ensembles of Tag Groups. In Working Notes Proceedings of the MediaEval 2020 Workshop. ceur-ws.org, 2020

Alessandro B. Melchiorre, Eva Zangerle and Markus Schedl: Personality Bias of Music Recommendation Algorithms. In 14th ACM Conference on Recommender Systems (RecSys 2020), pages 533-538. ACM, 2020

Maximilian Mayerl, Michael Vötter, Manfred Moosleitner and Eva Zangerle: Comparing Lyrics Features for Genre Recognition. In Proceedings of the 1st Workshop on NLP for Music and Audio (NLP4MusA), pages 73-77. 2020

2019

Eva Zangerle, Ramona Huber, Michael Vötter and Yi-Hsuan Yang: Hit Song Prediction: Leveraging Low- and High-Level Audio Features. In Proceedings of the 20th International Society for Music Information Retrieval Conference 2019 (ISMIR 2019), pages 319-326. 2019

Maximilian Mayerl, Michael Vötter, Eva Zangerle and Günther Specht: Language Models for Next-Track Music Recommendation. In Proceedings of the 31st GI-Workshop Grundlagen von Datenbanken, Saarburg, Germany, June 11-14, 2019, pages 15-19. 2019

Michael Vötter, Eva Zangerle, Maximilian Mayerl and Günther Specht: Autoencoders for Next-Track-Recommendation. In Proceedings of the 31st GI-Workshop Grundlagen von Datenbanken, Saarburg, Germany, June 11-14, 2019, pages 20-25. 2019

Maximilian Mayerl, Michael Vötter, Hsiao-Tzu Hung, Boyu Chen, Yi-Hsuan Yang and Eva Zangerle: Recognizing Song Mood and Theme Using Convolutional Recurrent Neural Networks. In Working Notes Proceedings of the MediaEval 2019 Workshop. ceur-ws.org, 2019