research-article
DOI: 10.1145/3242969.3243012

Automatic Recognition of Affective Laughter in Spontaneous Dyadic Interactions from Audiovisual Signals

Published: 02 October 2018

ABSTRACT

Laughter is a highly spontaneous behavior that frequently occurs during social interactions. It serves as an expressive-communicative social signal conveying a broad spectrum of affective displays. Even though many studies have addressed the automatic recognition of laughter -- or of emotion -- from audiovisual signals, very little is known about the automatic recognition of the emotion conveyed by laughter. In this contribution, we provide insights into emotional laughter through extensive evaluations carried out on a corpus of spontaneous dyadic interactions annotated with dimensional labels of emotion (arousal and valence). Using automatic recognition experiments and correlation-based analysis, we evaluate how different categories of laughter -- unvoiced laughter, voiced laughter, speech-laughter, and speech (non-laughter) -- can be differentiated from audiovisual features, and to what extent they convey different emotions. Results show that voiced laughter performs best in the automatic recognition of arousal and valence for both audio and visual features. The context of production is further analysed: acted and spontaneous expressions of laughter produced by the same person can be differentiated from audiovisual signals, and multilingual induced expressions can be differentiated from those produced during interactions.
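To make the kind of experiment described above concrete, the sketch below shows how a laughter-category classification and a correlation-based analysis could be set up. This is a minimal illustration under assumed conditions, not the authors' pipeline: the feature vectors, category labels, and arousal ratings are randomly generated stand-ins for real segment-level audiovisual descriptors and annotations, and the choice of a linear SVM evaluated by unweighted average recall is an assumption, not a detail taken from the paper.

    # Minimal sketch (assumed setup, not the paper's actual pipeline):
    # 4-class laughter-type recognition from segment-level feature
    # vectors, plus a correlation check between a feature and arousal.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.svm import LinearSVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(0)

    # Hypothetical stand-in data: one feature vector per segment and a
    # category label (0=unvoiced, 1=voiced, 2=speech-laugh, 3=speech).
    X = rng.normal(size=(400, 88))              # e.g., acoustic descriptors
    y_type = rng.integers(0, 4, size=400)       # laughter-category labels
    arousal = rng.uniform(-1.0, 1.0, size=400)  # per-segment arousal ratings

    # Linear SVM on z-normalised features; unweighted average recall
    # (macro recall) is reported since the categories are imbalanced.
    clf = make_pipeline(StandardScaler(), LinearSVC())
    pred = cross_val_predict(clf, X, y_type, cv=5)
    print("UAR: %.3f" % recall_score(y_type, pred, average="macro"))

    # Correlation-based analysis: how strongly does a single feature
    # (here, the first column) co-vary with the arousal annotations?
    r, p = pearsonr(X[:, 0], arousal)
    print("Pearson r = %.3f (p = %.3f)" % (r, p))

In a real run, X would hold per-segment audio or video functionals instead of random numbers, and the same train-and-evaluate loop would be repeated per modality and per target dimension (arousal, valence).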


Published in

ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction
October 2018, 687 pages
ISBN: 978-1-4503-5692-3
DOI: 10.1145/3242969
Copyright © 2018 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates

ICMI '18 Paper Acceptance Rate: 63 of 149 submissions, 42%
Overall Acceptance Rate: 453 of 1,080 submissions, 42%
