DOI: 10.1145/3056540.3056546

Towards More Robust Automatic Facial Expression Recognition in Smart Environments

Published: 21 June 2017

Abstract

In this paper, we provide insights toward more robust automatic facial expression recognition in smart environments, based on a benchmark with three labeled facial expression databases. These databases were selected to cover desktop, 3D, and smart-environment application scenarios. This work offers a neutral comparison and guidelines for developers and researchers who want to integrate facial emotion recognition technologies into their applications and understand their limitations as well as adaptation and enhancement strategies. We also introduce and compare three different metrics for finding the primary expression in a time window of a displayed emotion. In addition, we outline limitations of facial emotion recognition in smart environments and non-frontal setups, along with possible enhancements. With this comparison and these enhancements, we hope to build a bridge from affective computing research and solution providers to application developers who want to enhance new applications with emotion-based user modeling.
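The abstract mentions three metrics for finding the primary expression in a time window but does not define them. As a purely illustrative sketch (these are plausible candidates, not necessarily the metrics used in the paper), three common ways to reduce a window of per-frame expression scores to one primary label are: a majority vote over per-frame winners, the label with the highest mean score, and the label reaching the single highest peak score:

```python
from collections import Counter

def primary_by_vote(frames):
    """Majority vote over each frame's top-scoring label."""
    votes = Counter(max(f, key=f.get) for f in frames)
    return votes.most_common(1)[0][0]

def primary_by_mean(frames):
    """Label with the highest mean score across the window."""
    means = {e: sum(f[e] for f in frames) / len(frames) for e in frames[0]}
    return max(means, key=means.get)

def primary_by_peak(frames):
    """Label reaching the single highest score anywhere in the window."""
    return max(((e, s) for f in frames for e, s in f.items()),
               key=lambda t: t[1])[0]

# A toy window of per-frame classifier scores (hypothetical values):
frames = [
    {"happiness": 0.60, "neutral": 0.40},
    {"happiness": 0.30, "neutral": 0.70},
    {"happiness": 0.55, "neutral": 0.45},
]
print(primary_by_vote(frames))  # happiness
print(primary_by_mean(frames))  # neutral
print(primary_by_peak(frames))  # neutral
```

Note that the three metrics can disagree on the same window, as in the toy example above, which is why benchmarking them against labeled data matters.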




Published In

PETRA '17: Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments
June 2017
455 pages
ISBN:9781450352277
DOI:10.1145/3056540

In-Cooperation

  • NSF: National Science Foundation
  • CSE@UTA: Department of Computer Science and Engineering, The University of Texas at Arlington

Publisher

Association for Computing Machinery, New York, NY, United States



Author Tags

  1. Affective Computing
  2. Application specific Clustering of Emotions
  3. Benchmark
  4. Facial Expression Recognition

Qualifiers

  • Research-article
  • Research
  • Refereed limited



Cited By

  • (2023) Agreement and disagreement between major emotion recognition systems. Knowledge-Based Systems, 276:C. DOI: 10.1016/j.knosys.2023.110759. Online publication date: 27-Sep-2023.
  • (2022) Creative problem solving and facial expressions: A stage based comparison. PLOS ONE, 17:6 (e0269504). DOI: 10.1371/journal.pone.0269504. Online publication date: 22-Jun-2022.
  • (2022) A unified valence scale based on diagnosis of facial expressions. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 66:1 (1056-1059). DOI: 10.1177/1071181322661500. Online publication date: 27-Oct-2022.
  • (2021) Age Bias in Emotion Detection: An Analysis of Facial Emotion Recognition Performance on Young, Middle-Aged, and Older Adults. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (638-644). DOI: 10.1145/3461702.3462609. Online publication date: 21-Jul-2021.
  • (2021) Personality, Emotions, and Creativity. Creativity (152-175). DOI: 10.1017/9781108776721.010. Online publication date: 21-Apr-2021.
  • (2021) Migratable AI: Investigating Users' Affect on Identity and Information Migration of a Conversational AI Agent. Social Robotics (257-267). DOI: 10.1007/978-3-030-90525-5_22. Online publication date: 2-Nov-2021.
  • (2021) Affective Guide for Museum: A System to Suggest Museum Paths Based on Visitors' Emotions. Universal Access in Human-Computer Interaction. Design Methods and User Experience (521-532). DOI: 10.1007/978-3-030-78092-0_35. Online publication date: 3-Jul-2021.
  • (2020) Benchmarking commercial emotion detection systems using realistic distortions of facial image datasets. The Visual Computer, 37:6 (1447-1466). DOI: 10.1007/s00371-020-01881-x. Online publication date: 23-Jun-2020.
  • (2019) A Comparative Analysis of Emotion-Detecting AI Systems with Respect to Algorithm Performance and Dataset Diversity. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (377-382). DOI: 10.1145/3306618.3314284. Online publication date: 27-Jan-2019.
  • (2019) Frustratingly Easy Personalization for Real-time Affect Interpretation of Facial Expression. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII) (531-537). DOI: 10.1109/ACII.2019.8925515. Online publication date: Sep-2019.
