ABSTRACT
Automatic analysis of facial expressions of emotion has been an active research topic in the computer vision and machine learning communities. Building person- and culture-independent models is the main challenge for both, and effective yet adaptive methods are needed to cope with it. In this study, we present a novel method for recognizing facial expressions of emotion based on a facial action unit (AU) detector combined with rule-based reasoning. Our AU detector, referred to as the Adaptive Decision Thresholding based AU detector (ADT-AU), performs a decision-threshold analysis for each AU, using a fitness function to learn the optimum decision threshold of the underlying binary classifier. We choose the Support Vector Machine (SVM) algorithm as the binary classifier and utilize Active Appearance Model (AAM) features. Using the ADT-AU detector, we detect 17 AUs occurring alone or in combination, and our rule-based emotion classifier recognizes six facial expressions of emotion (e.g., surprise, fear, happiness) from the prototypic AU combinations and their major variants. Experiments on the Extended Cohn-Kanade (CK+) dataset show that the proposed method outperforms a baseline that uses the standard decision threshold, yielding significant improvements on most expressions with an average F1-score gain of 5.59%.
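The two ideas in the abstract can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the F1-based fitness function, the threshold grid, and the simplified two-emotion rule table are all assumptions made for the example (the paper covers six emotions, 17 AUs, and major variants of the prototypes).

```python
import numpy as np

def f1(y_true, y_pred):
    """F1 score for boolean arrays; plays the role of the fitness function."""
    tp = np.sum(y_pred & y_true)
    fp = np.sum(y_pred & ~y_true)
    fn = np.sum(~y_pred & y_true)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def fit_adaptive_threshold(scores, labels, grid=np.linspace(-1.0, 1.0, 81)):
    """Sweep candidate thresholds over SVM decision scores for one AU and
    keep the threshold that maximizes F1 on held-out/validation labels."""
    labels = labels.astype(bool)
    best_t, best_f1 = 0.0, -1.0
    for t in grid:
        score = f1(labels, scores >= t)
        if score > best_f1:
            best_t, best_f1 = t, score
    return best_t

# Prototypic AU combinations for two of the six emotions (FACS-style,
# heavily simplified for illustration).
EMOTION_RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
}

def classify_emotion(detected_aus):
    """Pick the emotion whose prototype overlaps most with the detected AUs."""
    return max(EMOTION_RULES,
               key=lambda e: len(EMOTION_RULES[e] & detected_aus)
                             / len(EMOTION_RULES[e]))
```

In this sketch each AU gets its own threshold, learned independently on that AU's decision scores, which is what distinguishes the approach from applying the standard zero threshold of a margin classifier to every AU.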