Stacked convolutional and recurrent neural networks for music emotion recognition

Miroslav Malik, Sharath Adavanne, Konstantinos Drossos, Tuomas Virtanen, Dasa Ticha, Roman Jarina

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review



    This paper studies emotion recognition from musical tracks in the 2-dimensional valence-arousal (V-A) emotional space. We propose a method based on convolutional (CNN) and recurrent neural networks (RNN), with significantly fewer parameters than the state-of-the-art method for the same task. We utilize one CNN layer followed by two branches of RNNs trained separately for arousal and valence. The method was evaluated using the 'MediaEval2015 emotion in music' dataset. We achieved an RMSE of 0.202 for arousal and 0.268 for valence, which is the best result reported on this dataset.
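    The architecture described in the abstract — a single CNN layer whose output feeds two separate recurrent branches, one per emotion dimension — can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the layer sizes, input features (log-mel frames), and the `CRNNEmotion` name are assumptions for demonstration only.

    ```python
    import torch
    import torch.nn as nn

    class CRNNEmotion(nn.Module):
        """Sketch of a stacked CNN + dual-RNN model: one convolutional layer,
        then two recurrent branches predicting arousal and valence separately.
        All hyperparameters here are illustrative, not the paper's values."""

        def __init__(self, n_mels=40, n_filters=32, rnn_units=32):
            super().__init__()
            # Single CNN layer over the time-frequency input
            self.conv = nn.Sequential(
                nn.Conv2d(1, n_filters, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d((1, 2)),  # pool along frequency only, keep all time frames
            )
            feat = n_filters * (n_mels // 2)
            # Two separate recurrent branches, one per emotion dimension
            self.arousal_rnn = nn.GRU(feat, rnn_units, batch_first=True)
            self.valence_rnn = nn.GRU(feat, rnn_units, batch_first=True)
            self.arousal_out = nn.Linear(rnn_units, 1)
            self.valence_out = nn.Linear(rnn_units, 1)

        def forward(self, x):
            # x: (batch, time, n_mels) log-mel spectrogram frames
            h = self.conv(x.unsqueeze(1))          # (batch, filters, time, n_mels/2)
            h = h.permute(0, 2, 1, 3).flatten(2)   # (batch, time, filters * n_mels/2)
            a, _ = self.arousal_rnn(h)
            v, _ = self.valence_rnn(h)
            # Frame-wise V-A predictions, one scalar per dimension
            return self.arousal_out(a), self.valence_out(v)

    # Usage sketch: two 100-frame excerpts with 40 mel bands each
    model = CRNNEmotion()
    arousal, valence = model(torch.randn(2, 100, 40))
    ```

    Training the two branches separately (as the abstract states) would mean optimizing each branch's output against its own RMSE target while sharing the convolutional front end.
    
    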
    Original language: English
    Title of host publication: Proceedings of the 14th Sound and Music Computing Conference 2017
    Publisher: Aalto University
    ISBN (Electronic): 978-952-60-3729-5
    Publication status: Published - 2017
    Publication type: A4 Article in a conference publication
    Event: Sound and Music Computing Conference

    Publication series

    ISSN (Electronic): 2518-3672



    Publication forum classification

    • Publication forum level 1

