DOI: 10.1145/1282280.1282339
Article

Distributing expressional faces in 2-D emotional space

Published: 09 July 2007

Abstract

Facial expressions are often classified into one of several basic emotion categories. This categorical approach is ill suited to faces that blend several emotions, and it makes the intensity of an emotion hard to measure. In this paper, facial expressions are instead evaluated with the dimensional approach to affect originally introduced in psychophysiological studies: an expressional face is represented as a point in a two-dimensional (2-D) emotional space characterized by arousal and valence factors. To link low-level face features with these emotional factors, we propose a simple method that builds an emotional mapping through coarse labeling of the Cohn-Kanade database and a linear fit to the labeled data. Preliminary experimental results show that the proposed emotional mapping can be used to visualize the distribution of affective content in a large face set and, further, to retrieve expressional face images or relevant video shots by specifying a region in the 2-D emotional space.
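The pipeline the abstract describes can be sketched as a least-squares linear mapping from low-level face features to (valence, arousal) coordinates, followed by retrieval of faces whose projections fall inside a query region of the 2-D space. The sketch below uses synthetic data; the feature dimensionality, labels, and query region are all illustrative assumptions, not the paper's actual features or parameters.

```python
import numpy as np

# Hypothetical illustration, not the paper's implementation: a linear
# (least-squares) mapping from face features to (valence, arousal),
# then retrieval by a rectangular region of the 2-D emotional space.
rng = np.random.default_rng(0)

# Toy "labeled" data: n faces with d-dimensional low-level features and
# coarse (valence, arousal) labels, here generated from a known linear map.
n, d = 200, 16
X = rng.normal(size=(n, d))
true_W = rng.normal(size=(d, 2))
Y = X @ true_W + 0.05 * rng.normal(size=(n, 2))  # labels = linear map + noise

# Least-squares fit of the emotional mapping W: features -> (valence, arousal).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Project the face set into the 2-D emotional space.
points = X @ W  # shape (n, 2): column 0 = valence, column 1 = arousal

# Retrieve faces whose projection falls in a user-specified region,
# e.g. high valence and high arousal (roughly "happy/excited").
v_min, a_min = 0.5, 0.5
mask = (points[:, 0] >= v_min) & (points[:, 1] >= a_min)
retrieved = np.flatnonzero(mask)
print(f"retrieved {retrieved.size} of {n} faces")
```

The same `points` array could also be scatter-plotted to visualize how affective content is distributed over the valence-arousal plane, which is the other use the abstract mentions.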




Published In

CIVR '07: Proceedings of the 6th ACM International Conference on Image and Video Retrieval
July 2007, 655 pages
ISBN: 9781595937339
DOI: 10.1145/1282280
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. emotional space
  2. facial expression analysis
  3. image retrieval
  4. least square fitting

Qualifiers

  • Article

Conference

CIVR '07


Cited By

  • (2018) Optimal prototype selection for speech emotion recognition using fuzzy k-important nearest neighbour. International Journal of Communication Networks and Distributed Systems, 17(2):103-119. DOI: 10.1504/IJCNDS.2016.079096
  • (2018) Facial emotional classification. Pattern Analysis & Applications, 16(1):41-54. DOI: 10.1007/s10044-012-0286-6
  • (2015) From a Discrete Perspective of Emotions to Continuous, Dynamic, and Multimodal Affect Sensing. In Emotion Recognition, pages 461-491. DOI: 10.1002/9781118910566.ch18
  • (2012) References. In Multimedia Information Extraction, pages 425-460. DOI: 10.1002/9781118219546.refs
  • (2011) Emotional facial sensing and multimodal fusion in a continuous 2D affective space. Journal of Ambient Intelligence and Humanized Computing, 3(1):31-46. DOI: 10.1007/s12652-011-0087-6
  • (2010) Sensing facial emotions in a continuous 2D affective space. In 2010 IEEE International Conference on Systems, Man and Cybernetics, pages 2045-2051. DOI: 10.1109/ICSMC.2010.5641717
  • (2010) Facial animation retargeting and control based on a human appearance space. Computer Animation and Virtual Worlds, 21(1):39-54. DOI: 10.1002/cav.331
  • (2009) Automatic design of a control interface for a synthetic face. In Proceedings of the 14th International Conference on Intelligent User Interfaces, pages 207-216. DOI: 10.1145/1502650.1502681
  • (2008) PAD Model Based Facial Expression Analysis. In Proceedings of the 4th International Symposium on Advances in Visual Computing, Part II, pages 450-459. DOI: 10.1007/978-3-540-89646-3_44
