DOI: 10.1145/1647314.1647387
ICMI-MLMI Conference Proceedings · Poster

A framework for continuous multimodal sign language recognition

Published: 02 November 2009

Abstract

We present a multimodal system for the recognition of manual signs and non-manual signals within continuous sign language sentences. In sign language, information is conveyed mainly through hand gestures (manual signs). Non-manual signals, such as facial expressions, head movements, body postures and torso movements, express a large part of the grammar and some aspects of the syntax of sign language. In this paper we propose a multichannel HMM-based system to recognize manual signs and non-manual signals. We choose a single non-manual signal, head movement, to evaluate our framework when recognizing non-manual signals. Manual signs and non-manual signals are processed independently using continuous multidimensional HMMs and an HMM threshold model. Experiments demonstrate that our system achieves a detection ratio of 0.95 and a reliability measure of 0.93.
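The spotting step the abstract describes can be illustrated with a toy sketch of the HMM threshold-model idea (after Lee and Kim): each sign has its own Gaussian-emission HMM, and a candidate segment is accepted only if the best sign model outscores a catch-all "threshold" model, which rejects movements that are not signs. This is a minimal one-dimensional illustration, not the authors' implementation; all model parameters and sign names below are invented.

```python
import numpy as np

def forward_loglik(obs, pi, A, means, var):
    """Log-likelihood of a 1-D observation sequence under a Gaussian-emission
    HMM, computed with the forward algorithm in log space."""
    # Per-frame, per-state Gaussian log-emission probabilities, shape (T, N)
    logB = (-0.5 * np.log(2 * np.pi * var)
            - 0.5 * (obs[:, None] - means) ** 2 / var)
    log_alpha = np.log(pi) + logB[0]            # initialise with state priors
    for t in range(1, len(obs)):
        # alpha_t(j) = B_j(o_t) * sum_i alpha_{t-1}(i) * A[i, j]
        log_alpha = logB[t] + np.logaddexp.reduce(
            log_alpha[:, None] + np.log(A), axis=0)
    return np.logaddexp.reduce(log_alpha)       # log P(obs | model)

def spot(obs, sign_models, threshold_model):
    """Accept the best-scoring sign HMM only if it beats the threshold model;
    otherwise reject the segment as a non-sign movement."""
    scores = {name: forward_loglik(obs, *m) for name, m in sign_models.items()}
    best = max(scores, key=scores.get)
    if scores[best] > forward_loglik(obs, *threshold_model):
        return best
    return None
```

For example, with two invented two-state sign models and a single broad-variance threshold state, a segment lying near one model's means is labelled with that sign, while an outlying segment is rejected as a non-sign movement.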



Published In

ICMI-MLMI '09: Proceedings of the 2009 International Conference on Multimodal Interfaces
November 2009
374 pages
ISBN: 9781605587721
DOI: 10.1145/1647314

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. HMM
  2. non-manual signals
  3. sign language

Qualifiers

  • Poster

Conference

ICMI-MLMI '09
Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%

Cited By

  • (2024) Enhancing Communication Accessibility: A Deep Learning Approach to Gesture Recognition for the Deaf and Mute Community. 2024 4th International Conference on Pervasive Computing and Social Networking (ICPCSN), pages 842--849. DOI: 10.1109/ICPCSN62568.2024.00142. 3 May 2024.
  • (2024) Reviewing 25 years of continuous sign language recognition research. Information Processing and Management, 61(5). DOI: 10.1016/j.ipm.2024.103774. 1 September 2024.
  • (2023) Sign Language Recognizing Using Machine Learning. Advanced Communication and Intelligent Systems, pages 393--400. DOI: 10.1007/978-3-031-25088-0_35. 15 February 2023.
  • (2022) A comprehensive survey and taxonomy of sign language research. Engineering Applications of Artificial Intelligence, 114:105198. DOI: 10.1016/j.engappai.2022.105198. September 2022.
  • (2021) Towards Hybrid Multimodal Manual and Non-Manual Arabic Sign Language Recognition: mArSL Database and Pilot Study. Electronics, 10(14):1739. DOI: 10.3390/electronics10141739. 20 July 2021.
  • (2021) Arabic Sign Language Recognition Using Deep Machine Learning. 2021 4th International Symposium on Advanced Electrical and Communication Technologies (ISAECT), pages 1--4. DOI: 10.1109/ISAECT53699.2021.9668405. 6 December 2021.
  • (2019) Recognising Irish Sign Language Using Electromyography. 2019 International Conference on Content-Based Multimedia Indexing (CBMI), pages 1--4. DOI: 10.1109/CBMI.2019.8877421. September 2019.
  • (2019) Sign Language Recognition Systems: A Decade Systematic Literature Review. Archives of Computational Methods in Engineering. DOI: 10.1007/s11831-019-09384-2. 17 December 2019.
  • (2017) Demo: The Sound of Silence. Proceedings of the 23rd Annual International Conference on Mobile Computing and Networking, pages 462--464. DOI: 10.1145/3117811.3119853. 4 October 2017.
  • (2017) Grammatical facial expression recognition in sign language discourse. Information Systems Frontiers, 19(6):1243--1259. DOI: 10.1007/s10796-017-9765-z. 1 December 2017.
