DOI: 10.1145/2531923.2531925
Short paper

Evaluation of vision-based real-time measures for emotions discrimination under uncontrolled conditions

Published: 9 December 2013

ABSTRACT

Several vision-based systems for automatic emotion recognition have been proposed in the literature. However, most of these systems are evaluated only under controlled laboratory conditions, which poorly represent the constraints of real-world ecological situations. This paper describes two studies. In the first study, we evaluate whether two robust vision-based measures (approach-avoidance detection and quantity of motion) can discriminate between different emotions in a dataset of acted facial expressions recorded under uncontrolled conditions. In the second study, we evaluate on the same dataset the accuracy of commercially available software designed for automatic emotion recognition under controlled conditions. Results show that the evaluated measures can discriminate between different emotions under uncontrolled conditions. In addition, the accuracy of the evaluated commercial software is reported.
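The abstract names "quantity of motion" as one of the two measures but does not give its formula. A common definition in the movement-analysis literature is the proportion of pixels that change between consecutive video frames, normalized by frame area. The sketch below assumes that simple frame-differencing formulation; the function name, grayscale input format, and the pixel-difference threshold of 15 are all illustrative choices, not details taken from the paper.

```python
import numpy as np

def quantity_of_motion(frames, threshold=15):
    """Fraction of pixels that change between consecutive frames.

    frames: sequence of 2-D grayscale arrays (uint8), one per video frame.
    threshold: minimum absolute intensity change for a pixel to count
        as "moving" (illustrative value).
    Returns one value per consecutive frame pair, normalized by the
    frame area so that results lie in [0, 1].
    """
    qom = []
    for prev, curr in zip(frames, frames[1:]):
        # Cast to a signed type before subtracting to avoid uint8 wraparound.
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        moving = int((diff > threshold).sum())
        qom.append(moving / diff.size)
    return qom

# Toy example: a 4x4 frame where one quadrant (4 of 16 pixels) changes.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[:2, :2] = 255
print(quantity_of_motion([a, b]))  # [0.25]
```

A per-frame value like this can then be thresholded or averaged over a sliding window to characterize how agitated the subject's movement is, which is the kind of coarse cue that stays usable when lighting and head pose are uncontrolled.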



Published in

EmotiW '13: Proceedings of the 2013 on Emotion recognition in the wild challenge and workshop
December 2013, 28 pages
ISBN: 9781450325646
DOI: 10.1145/2531923

Copyright © 2013 ACM

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

EmotiW '13 paper acceptance rate: 3 of 8 submissions (38%). Overall acceptance rate: 3 of 8 submissions (38%).
