DOI: 10.1145/3013971.3014025
short-paper

Facial expressions recognition based on convolutional neural networks for mobile virtual reality

Published: 03 December 2016

ABSTRACT

We present a new system that enables direct face-to-face interaction for users wearing head-mounted displays (HMDs) in virtual reality environments. Because an HMD occludes the user's face, VR applications and games are mainly designed for a single user; even in multi-player games, players can typically communicate only through audio input devices or controllers. To address this problem, we develop a novel system that allows users to interact with each other through facial expressions in real time. Our system consists of two major components: a face-processing component that automatically tracks and segments the face, and a facial-expression recognition component based on convolutional neural networks (CNNs). First, our system tracks a specific marker on the front surface of the HMD and uses the extracted spatial data to estimate face position and rotation for mouth segmentation. Then, using an adaptive approach for histogram-based mouth segmentation [Panning et al. 2009], the system passes the processed lip-pixel information to the CNN and obtains facial-expression results in real time. Our experiments show that the system can effectively recognize users' basic expressions.
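The paper does not publish its segmentation code, but the histogram-based step it cites [Panning et al. 2009] can be illustrated with a minimal, hypothetical sketch: build a histogram over a "redness" measure of the face region (lips are typically redder than skin) and keep pixels falling in the upper part of the observed range. The function name, the `R - G` redness measure, and all parameter values below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def segment_mouth(region, n_bins=32, keep_fraction=0.2):
    """Illustrative sketch of histogram-based mouth segmentation.

    `region` is an H x W x 3 RGB array. We score each pixel by a simple
    redness measure (R - G), histogram the scores, and keep pixels whose
    score lies in the top `keep_fraction` of the observed range. This is
    a hypothetical stand-in for the adaptive scheme of Panning et al.
    """
    redness = region[..., 0].astype(np.int32) - region[..., 1].astype(np.int32)
    hist, edges = np.histogram(redness, bins=n_bins)
    # Adaptive threshold derived from the histogram's own bin edges:
    # keep only the reddest `keep_fraction` of the observed value range.
    threshold = edges[int((1.0 - keep_fraction) * n_bins)]
    return redness >= threshold

# Toy example: a uniform gray 4x4 patch with one strongly red "lip" pixel.
patch = np.full((4, 4, 3), 120, dtype=np.uint8)
patch[2, 2] = [200, 80, 80]
mask = segment_mouth(patch)  # boolean mask selecting the red pixel
```

In the full pipeline described by the abstract, a mask like this would crop the lip pixels, which are then fed to the CNN classifier; the real system additionally uses the HMD marker pose to locate the mouth region before segmenting.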

References

  1. Burkert, P., Trier, F., Afzal, M., Dengel, A. and Liwicki, M., 2015. DeXpression: Deep convolutional neural network for expression recognition. arXiv preprint, pp. 1--8.
  2. Cootes, T.F. and Taylor, C.J., 2004. Statistical Models of Appearance for Computer Vision.
  3. Ekman, P. and Friesen, W.V., 1975. Pictures of Facial Affect. Consulting Psychologists Press.
  4. Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, pp. 1--9.
  5. Le Cun, Y., Jackel, L.D., Boser, B., Denker, J.S., Graf, H.P., Guyon, I., Henderson, D., Howard, R.E. and Hubbard, W., 1989. Handwritten digit recognition: applications of neural network chips and automatic learning. IEEE Communications Magazine, 27(11), pp. 41--46.
  6. Li, H., Trutoiu, L., Olszewski, K., Wei, L., Trutna, T., Hsieh, P., Nicolls, A. and Ma, C., 2015. Facial performance sensing head-mounted display. ACM Transactions on Graphics, 34(4), pp. 47:1--47:9.
  7. Mamic, G., Fookes, C. and Sridharan, S., 2006. What is the average human face? Lecture Notes in Computer Science, 4319, pp. 692--701.
  8. Marquardt Beauty Analysis, Inc. The Facial Masks. http://www.beautyanalysis.com/research/perfect-face/facial-masks/
  9. Panning, A., Niese, R., Al-Hamadi, A. and Michaelis, B., 2009. A new adaptive approach for histogram based mouth segmentation. Engineering and Technology, 3(8), pp. 779--784.
  10. Saeed, U. and Dugelay, J.L., 2010. Combining edge detection and region segmentation for lip contour extraction. Lecture Notes in Computer Science, 6169, pp. 11--20.
  11. Tian, Y., 2004. Evaluation of face resolution for expression analysis. 2004 Conference on Computer Vision and Pattern Recognition Workshop, pp. 0--6.
  12. Werda, S., Mahdi, W. and Ben Hamadou, A., 2007. Colour and geometric based model for lip localisation: application for lip-reading system. Proceedings of the 14th International Conference on Image Analysis and Processing (ICIAP 2007), pp. 9--14.
  13. Zhai, Y. and Shah, M., 2006. Visual attention detection in video sequences using spatiotemporal cues. Proceedings of the 14th Annual ACM International Conference on Multimedia, pp. 815--824.

Published in

VRCAI '16: Proceedings of the 15th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry - Volume 1
December 2016, 381 pages
ISBN: 9781450346924
DOI: 10.1145/3013971
Conference Chairs: Yiyu Cai, Daniel Thalmann

      Copyright © 2016 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 51 of 107 submissions, 48%
