DOI: 10.1145/3197768.3201527

Research Article

Automatic Classification and Shift Detection of Facial Expressions in Event-Aware Smart Environments

Published: 26 June 2018

Abstract

Affective application developers often face a challenge when integrating the output of facial expression recognition (FER) software into interactive systems: although many algorithms have been proposed for FER, integrating their results into applications remains difficult. Due to inter- and within-subject variations, further post-processing is needed. Our work addresses this problem by introducing and comparing three post-processing classification algorithms for FER output, applied to an event-based interaction scheme to pinpoint the affective context within a time window. Our comparison is based on earlier published experiments with an interactive cycling simulation in which participants were provoked with game elements and their facial expression responses were analysed by all three algorithms, with a human observer as reference. The three post-processing algorithms we investigate are mean fixed-window, matched filter, and Bayesian changepoint detection. In addition, we introduce a novel method for detecting fast transitions of facial expressions, which we call emotional shift. The proposed detection pattern is suitable for affective applications, especially in smart environments, wherever users' reactions can be tied to events.
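Two of the three post-processing strategies named in the abstract can be illustrated with a small sketch. This is not the paper's implementation: the window length, the response template, and the synthetic FER signal below are assumptions for illustration only, and the Bayesian changepoint detector (which would replace the matched filter with posterior inference over segment boundaries) is omitted.

```python
import numpy as np

def mean_fixed_window(scores, event_idx, window=30):
    """Mean FER score over a fixed window following an event (hypothetical sketch)."""
    segment = scores[event_idx:event_idx + window]
    return float(np.mean(segment))

def matched_filter(scores, template):
    """Correlate FER output with an expected-response template; the peak marks
    the offset where the expected expression response fits best."""
    # Normalize both signals so the correlation is scale-invariant.
    s = (scores - np.mean(scores)) / (np.std(scores) + 1e-9)
    t = (template - np.mean(template)) / (np.std(template) + 1e-9)
    corr = np.correlate(s, t, mode="valid")
    return int(np.argmax(corr))

# Example: an expression response buried in noisy per-frame FER scores.
rng = np.random.default_rng(0)
scores = rng.normal(0.0, 0.05, 200)
scores[120:150] += np.hanning(30)   # injected expression "bump" at frame 120
template = np.hanning(30)           # assumed expected-response shape

print(mean_fixed_window(scores, 120))   # elevated mean inside the event window
print(matched_filter(scores, template)) # offset near the injected bump
```

Tying the detectors to an event timestamp, as the event-based scheme above suggests, restricts the search to a short window after the provocation instead of scanning the whole recording.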


Cited By

  • Context-Aware Emotion Recognition in the Wild Using Spatio-Temporal and Temporal-Pyramid Models. Sensors, 21(7):2344, 27 March 2021. DOI: 10.3390/s21072344
  • The current challenges of automatic recognition of facial expressions. AI Communications, 33(3-6):113-138, 2020. DOI: 10.3233/AIC-200631
  • Facial Expression Recognition Using Computer Vision: A Systematic Review. Applied Sciences, 9(21):4678, 2 November 2019. DOI: 10.3390/app9214678

Published In

PETRA '18: Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference
June 2018
591 pages
ISBN:9781450363907
DOI:10.1145/3197768

In-Cooperation

  • NSF: National Science Foundation

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Affective Computing
  2. Emotion transition
  3. Emotional shift
  4. Facial Expression Recognition
  5. Krippendorff's alpha

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

PETRA '18
