MoodyLyrics: A Sentiment Annotated Lyrics Dataset

Published: 25 March 2017
DOI: 10.1145/3059336.3059340

ABSTRACT

Music emotion recognition and recommendation are today changing the way people find and listen to their preferred musical tracks. Emotion recognition of songs is mostly based on feature extraction and learning from available datasets. In this work we take a different approach, relying only on the content words of lyrics and their valence and arousal norms in affect lexicons. We use this method to annotate each song with one of the four emotion categories of Russell's model, and to construct MoodyLyrics, a large dataset of lyrics that will be made publicly available. For evaluation we used another lyrics dataset as ground truth and achieved an accuracy of 74.25%. Our results confirm that valence is a better discriminator of mood than arousal. They also show that music mood recognition and annotation can be achieved with good accuracy even when subjective human feedback or user tags are not available.
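The quadrant assignment described above can be illustrated with a minimal sketch: look up each content word of the lyrics in an ANEW-style lexicon of valence and arousal norms (on a 1-9 scale with 5 as the neutral midpoint), average the norms over the covered words, and map the resulting valence/arousal pair to one of Russell's four quadrants. The toy lexicon values, threshold, and function names below are illustrative assumptions, not the authors' actual implementation.

    # Minimal sketch of quadrant-based lyric annotation (illustrative only;
    # the lexicon values, midpoint threshold, and names are assumptions,
    # not the MoodyLyrics pipeline).
    import re

    # ANEW-style norms: word -> (valence, arousal) on a 1-9 scale (toy values).
    LEXICON = {
        "love":  (8.7, 6.4),
        "happy": (8.2, 6.5),
        "calm":  (6.9, 2.8),
        "alone": (2.4, 4.3),
        "cry":   (2.2, 4.6),
    }

    MIDPOINT = 5.0  # neutral point of the 1-9 valence/arousal scales

    def annotate(lyrics):
        """Map a song's lyrics to one of Russell's four quadrants."""
        words = re.findall(r"[a-z']+", lyrics.lower())
        norms = [LEXICON[w] for w in words if w in LEXICON]
        if not norms:
            return None  # no content words covered by the lexicon
        valence = sum(v for v, _ in norms) / len(norms)
        arousal = sum(a for _, a in norms) / len(norms)
        if valence >= MIDPOINT:
            return "happy" if arousal >= MIDPOINT else "relaxed"
        return "angry" if arousal >= MIDPOINT else "sad"

    print(annotate("I cry alone at night"))          # sad
    print(annotate("Love keeps me calm and happy"))  # happy

Since the abstract refers to affect lexicons in the plural, the actual annotation presumably combines several resources and handles lexicon coverage more carefully; the sketch only captures the core valence/arousal quadrant logic.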

References

  1. I. Bakker, T. van der Voordt, P. Vink, and J. de Boon. Pleasure, arousal, dominance: Mehrabian and Russell revisited. Current Psychology, 33(3):405--421, 2014.
  2. M. M. Bradley and P. J. Lang. Affective norms for English words (ANEW): Stimuli, instruction manual, and affective ratings. Technical report, Center for Research in Psychophysiology, University of Florida, Gainesville, Florida, 1999.
  3. E. Çano and M. Morisio. Characterization of public datasets for recommender systems. In 2015 IEEE 1st International Forum on Research and Technologies for Society and Industry Leveraging a Better Tomorrow (RTSI), pages 249--257, September 2015.
  4. O. Celma. Foafing the music: Bridging the semantic gap in music recommendation. In 5th International Semantic Web Conference (ISWC), Athens, GA, USA, 2006.
  5. P. Tavel. Modeling and Simulation Design. AK Peters Ltd., Natick, MA, 2007.
  6. J. J. Deng, C. H. C. Leung, A. Milani, and L. Chen. Emotional states associated with music: Classification, prediction of changes, and consideration in recommendation. ACM Trans. Interact. Intell. Syst., 5(1):4:1--4:36, March 2015.
  7. P. Ekkekakis. Measurement in Sport and Exercise Psychology, chapter Affect, Mood, and Emotion. Human Kinetics, 2012.
  8. Z. Fu, G. Lu, K. M. Ting, and D. Zhang. A survey of audio-based music classification and annotation. IEEE Transactions on Multimedia, 13(2):303--319, April 2011.
  9. K. Hevner. Experimental studies of the elements of expression in music. The American Journal of Psychology, 48:246--268, 1936.
  10. X. Hu and J. S. Downie. Exploring mood metadata: Relationships with genre, artist and usage metadata. In Proceedings of the 8th International Conference on Music Information Retrieval, pages 67--72, Vienna, Austria, September 23-27, 2007.
  11. X. Hu and J. S. Downie. Improving mood classification in music digital libraries by combining lyrics and audio. In Proceedings of the 10th Annual Joint Conference on Digital Libraries, JCDL '10, pages 159--168, New York, NY, USA, 2010. ACM.
  12. X. Hu and J. S. Downie. When lyrics outperform audio for music mood classification: A feature analysis. In Proceedings of the 11th International Society for Music Information Retrieval Conference, pages 619--624, Utrecht, The Netherlands, August 9-13, 2010.
  13. X. Hu, J. S. Downie, and A. F. Ehmann. Lyric text mining in music mood classification. In Proceedings of the 10th International Society for Music Information Retrieval Conference, pages 411--416, Kobe, Japan, October 26-30, 2009.
  14. Y. Hu, X. Chen, and D. Yang. Lyric-based song emotion detection with affective lexicon and fuzzy clustering method. In ISMIR, pages 123--128, 2009.
  15. Y. Kim, E. Schmidt, and L. Emelle. MoodSwings: A collaborative game for music mood label collection. In Proceedings of the 9th International Conference on Music Information Retrieval, pages 231--236, Philadelphia, USA, September 14-18, 2008.
  16. P. Lamere. Social tagging and music information retrieval. Journal of New Music Research, 37(2):101--114, 2008.
  17. C. Laurier and P. Herrera. Audio music mood classification using support vector machine. In International Society for Music Information Retrieval Conference (ISMIR), 2007.
  18. C. Laurier, M. Sordo, J. Serrà, and P. Herrera. Music mood representations from social tags. In International Society for Music Information Retrieval (ISMIR) Conference, pages 381--386, Kobe, Japan, October 2009.
  19. J. H. Lee and X. Hu. Generating ground truth for music mood classification using Mechanical Turk. In Proceedings of the 12th ACM/IEEE-CS Joint Conference on Digital Libraries, JCDL '12, pages 129--138, New York, NY, USA, 2012. ACM.
  20. Y.-C. Lin, Y.-H. Yang, and H. H. Chen. Exploiting online music tags for music emotion classification. ACM Trans. Multimedia Comput. Commun. Appl., 7S(1):26:1--26:16, November 2011.
  21. R. Malheiro, R. Panda, P. Gomes, and R. Paiva. Music emotion recognition from lyrics: A comparative study. In 6th International Workshop on Machine Learning and Music, 2013.
  22. R. Malheiro, R. Panda, P. Gomes, and R. Paiva. Bi-modal music emotion recognition: Novel lyrical features and dataset. In 9th International Workshop on Music and Machine Learning (MML 2016), in conjunction with ECML/PKDD 2016, 2016.
  23. R. Malheiro, R. Panda, P. Gomes, and R. P. Paiva. Emotionally-relevant features for classification and regression of music lyrics. IEEE Transactions on Affective Computing, PP(99):1--1, 2016.
  24. M. I. Mandel and D. P. W. Ellis. A web-based game for collecting music metadata. In S. Dixon, D. Bainbridge, and R. Typke, editors, Proceedings of the International Society for Music Information Retrieval Conference, pages 365--366, September 2007.
  25. G. A. Miller. WordNet: A lexical database for English. Commun. ACM, 38(11):39--41, November 1995.
  26. S. Oh, M. Hahn, and J. Kim. Music mood classification using intro and refrain parts of lyrics. In 2013 International Conference on Information Science and Applications (ICISA), pages 1--3. IEEE, 2013.
  27. J. Russell. A circumplex model of affect. Journal of Personality and Social Psychology, 39(6):1161--1178, 1980.
  28. M. Soleymani, M. N. Caro, E. M. Schmidt, C.-Y. Sha, and Y.-H. Yang. 1000 songs for emotional analysis of music. In Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia, CrowdMM '13, pages 1--6, New York, NY, USA, 2013. ACM.
  29. J. A. Speck, E. M. Schmidt, B. G. Morton, and Y. E. Kim. A comparative study of collaborative vs. traditional musical mood annotation. In A. Klapuri and C. Leider, editors, ISMIR, pages 549--554. University of Miami, 2011.
  30. J. A. Speck, E. M. Schmidt, B. G. Morton, and Y. E. Kim. A comparative study of collaborative vs. traditional musical mood annotation. In Proceedings of the 12th International Society for Music Information Retrieval Conference, pages 549--554, Miami, Florida, USA, October 24-28, 2011.
  31. P. J. Stone and E. B. Hunt. A computer approach to content analysis: Studies using the General Inquirer system. In Proceedings of the May 21-23, 1963, Spring Joint Computer Conference, AFIPS '63 (Spring), pages 241--256, New York, NY, USA, 1963. ACM.
  32. C. Strapparava and A. Valitutti. WordNet-Affect: An affective extension of WordNet. In Proceedings of the 4th International Conference on Language Resources and Evaluation, pages 1083--1086. ELRA, 2004.
  33. Y.-H. Yang and H. H. Chen. Machine recognition of music emotion: A review. ACM Trans. Intell. Syst. Technol., 3(3):40:1--40:30, May 2012.
  34. M. van Zaanen and P. Kanters. Automatic mood classification using TF*IDF based on lyrics. In Proceedings of the 11th International Society for Music Information Retrieval Conference, pages 75--80, Utrecht, The Netherlands, August 9-13, 2010.

Published in

    ISMSI '17: Proceedings of the 2017 International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence
    March 2017, 171 pages
    ISBN: 9781450347983
    DOI: 10.1145/3059336
    Copyright © 2017 ACM


    Publisher: Association for Computing Machinery, New York, NY, United States

