DOI: 10.1145/2388676.2388792

research-article

LUI: lip in multimodal mobile GUI interaction

Published: 22 October 2012

ABSTRACT

Gesture-based interactions are common in mobile and ubiquitous environments. Multimodal interaction techniques use lip gestures to enhance speech recognition or to control mouse movement on the screen. In this paper we extend prior work to explore LUI: lip gestures as an alternative input technique for controlling user interface elements in a ubiquitous environment. In addition to using lips to control cursor movement, we use lip gestures to control music players and activate menus. A LUI Motion-Action library is also provided to guide future interaction design using lip gestures.
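The paper itself does not publish code, but the lip-as-cursor idea the abstract describes can be illustrated with a minimal sketch: map the frame-to-frame displacement of a tracked lip centroid to a cursor delta, with a dead zone to suppress tracking jitter. The function name, gain, dead-zone value, and screen size below are all hypothetical choices for illustration, not values from the paper.

```python
def lip_to_cursor(prev_center, curr_center, gain=8.0, dead_zone=0.002,
                  screen_w=1920, screen_h=1080):
    """Map normalized lip-center displacement to a cursor delta in pixels.

    prev_center, curr_center: (x, y) lip centroids in [0, 1] image
    coordinates, e.g. from a facial landmark tracker.
    dead_zone: displacements smaller than this are treated as noise.
    """
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    # Suppress small jitter from the tracker.
    if abs(dx) < dead_zone:
        dx = 0.0
    if abs(dy) < dead_zone:
        dy = 0.0
    # Amplify the (small) lip motion and convert to screen pixels.
    return (dx * gain * screen_w, dy * gain * screen_h)
```

A relative mapping with a dead zone is a common choice for head- and face-based pointing, since lip motion spans only a tiny fraction of the camera frame; an absolute mapping would be far too sensitive to tracking noise.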


    • Published in

      ICMI '12: Proceedings of the 14th ACM international conference on Multimodal interaction
      October 2012
      636 pages
      ISBN: 9781450314671
      DOI: 10.1145/2388676

      Copyright © 2012 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States




      Acceptance Rates

      Overall Acceptance Rate: 453 of 1,080 submissions, 42%
