ABSTRACT
Laughter is a highly spontaneous behavior that frequently occurs during social interactions. It serves as an expressive-communicative social signal conveying a broad spectrum of affective displays. Although many studies have addressed the automatic recognition of laughter -- or of emotion -- from audiovisual signals, very little is known about the automatic recognition of the emotion conveyed by laughter. In this contribution, we provide insights into emotional laughter through extensive evaluations carried out on a corpus of dyadic spontaneous interactions annotated with dimensional labels of emotion (arousal and valence). Through automatic recognition experiments and correlation-based analyses, we evaluate how different categories of laughter -- unvoiced laughter, voiced laughter, speech laughter, and (non-laughter) speech -- can be differentiated from audiovisual features, and to what extent they convey different emotions. Results show that voiced laughter performs best in the automatic recognition of arousal and valence, for both audio and visual features. The context of production is further analysed: results show that acted and spontaneous expressions of laughter produced by the same person can be differentiated from audiovisual signals, and that multilingual induced expressions can be differentiated from those produced during interactions.
Index Terms
- Automatic Recognition of Affective Laughter in Spontaneous Dyadic Interactions from Audiovisual Signals