Recognizing Emotions in Music
The aim of this master's thesis is to automatically recognize human-perceived emotions in music. We plan to achieve this goal using a neural network. To train the network properly, we compile a sufficiently large dataset from various sources (AllMusic, Last.fm and Twitter) that provide these human-perceived emotions on a per-song basis. The raw dataset is standardized and enriched with metadata from MusicBrainz. Finally, YouTube URLs are added to the dataset. The audio, or rather representations derived from it (spectrograms, MFCCs), serves as input to the neural network to solve this multi-class classification problem. The network's performance is assessed by comparison with classical machine-learning techniques that serve as baselines. The resulting network may further be used in a music recommendation system to improve its accuracy and thus user satisfaction.
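To illustrate the kind of input representation mentioned above, the following sketch computes a magnitude spectrogram with NumPy via a short-time FFT. The frame length, hop size, and Hann windowing are illustrative assumptions, not the thesis's actual parameters; a real pipeline would more likely use a dedicated audio library (e.g. librosa) to obtain mel spectrograms and MFCCs.

```python
import numpy as np

def spectrogram(signal, frame_len=1024, hop=512):
    """Magnitude spectrogram via short-time FFT.

    Returns an array of shape (n_frames, frame_len // 2 + 1),
    one row of FFT-bin magnitudes per analysis frame.
    Frame length and hop size here are illustrative defaults.
    """
    window = np.hanning(frame_len)          # taper each frame to reduce spectral leakage
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# Toy input: one second of a 440 Hz sine at a 22050 Hz sampling rate.
sr = 22050
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440.0 * t))
```

For network input, such a 2-D time-frequency array can be treated like a single-channel image; MFCCs would add a mel filter bank and a discrete cosine transform on top of this spectrogram.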