research-article

Recognizing facial expressions of emotion using action unit specific decision thresholds

Published: 12 November 2016

ABSTRACT

Automatic analysis of facial expressions of emotion has been an active research topic in the computer vision and machine learning communities. Building person- and culture-independent models is the main challenge for both, calling for effective yet adaptive methods. In this study, we present a novel method for recognizing facial expressions of emotion that combines a purpose-built facial action unit (AU) detector with rule-based reasoning. Our AU detector, referred to as the Adaptive Decision Thresholding based AU detector (ADT-AU), performs decision-threshold analysis for each AU, using a fitness function to learn the optimum decision threshold of the underlying binary classifier. We choose the Support Vector Machine (SVM) algorithm as the binary classifier and utilize Active Appearance Model (AAM) features. Using the ADT-AU detector, we detect 17 AUs occurring alone or in combination and recognize six facial expressions of emotion (e.g., surprise, fear, and happiness) from the prototypic and major variant AU combinations with our rule-based emotion classifier. Our experiments on the Extended Cohn-Kanade (CK+) dataset show that the proposed method outperforms a baseline that uses the standard decision threshold, with significant improvements on most expressions and an average F1-score gain of 5.59%.
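As a rough illustration of the thresholding idea described above (a minimal sketch, not the authors' implementation), the snippet below sweeps candidate cutoffs over SVM-style decision scores on hypothetical validation data and keeps the one that maximizes F1, rather than using the standard cutoff of 0. All function names, scores, and labels here are illustrative assumptions.

```python
# Sketch of per-AU adaptive decision thresholding: instead of the standard
# SVM cutoff of 0, sweep candidate thresholds over the classifier's decision
# scores and keep the one that maximizes F1 on validation data.

def f1(labels, preds):
    """F1-score for binary labels/predictions (1 = AU present)."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(scores, labels):
    """Fitness function: pick the decision threshold with the highest F1,
    starting from the standard cutoff of 0.0."""
    best_t = 0.0
    best_f = f1(labels, [1 if s >= 0.0 else 0 for s in scores])
    for t in sorted(set(scores)):
        f = f1(labels, [1 if s >= t else 0 for s in scores])
        if f > best_f:
            best_t, best_f = t, f
    return best_t, best_f

# Toy validation scores for one AU: positives skew toward negative scores,
# so the default cutoff of 0 misses most of them and a lower, AU-specific
# threshold scores better.
scores = [-0.6, -0.3, -0.1, 0.2, 0.5, -0.4, -0.8, 0.1]
labels = [1, 1, 1, 1, 1, 0, 0, 0]
t, f = best_threshold(scores, labels)
```

In this toy run the tuned threshold falls below the default 0.0 and raises F1, mirroring the paper's claim that AU-specific thresholds outperform the standard one.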


Published in

ASSP4MI '16: Proceedings of the 2nd Workshop on Advancements in Social Signal Processing for Multimodal Interaction
November 2016, 33 pages
ISBN: 9781450345576
DOI: 10.1145/3005467

Copyright © 2016 ACM


Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

ASSP4MI '16 paper acceptance rate: 4 of 6 submissions, 67%.
