Article

American sign language recognition in game development for deaf children

Published: 23 October 2006

Abstract

CopyCat is an American Sign Language (ASL) game that uses gesture recognition technology to help young deaf children practice ASL skills. We describe a brief history of the game, an overview of recent user studies, and the results of recent work on continuous, user-independent sign language recognition in classroom settings. Our database of signing samples was collected from user studies of deaf children playing a Wizard of Oz version of the game at the Atlanta Area School for the Deaf (AASD). The data set is characterized by the disfluencies inherent in continuous signing, varied user characteristics including clothing and skin tones, and illumination changes in the classroom. It consists of 541 phrase samples and 1,959 individual sign samples of five children signing game phrases from a 22-word vocabulary. Our recognition approach uses color histogram adaptation for robust hand segmentation and tracking. The children wear small colored gloves with wireless accelerometers mounted on the backs of their wrists. The hand shape information is combined with the accelerometer data and used to train hidden Markov models for recognition. We evaluated our approach with leave-one-out validation: iterating through the children, we trained on data from four children and tested on the remaining child's data. The user-independent models achieved average word accuracies per child ranging from 73.73% to 91.75%.
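The leave-one-out protocol described above can be sketched as follows. This is a minimal illustration, not the paper's code: `train_models` and `word_accuracy` are hypothetical stand-ins for the HMM training and scoring steps, and the toy data below stands in for the real phrase samples.

```python
def leave_one_out(samples_by_child, train_models, word_accuracy):
    """Per-subject leave-one-out validation.

    samples_by_child: dict mapping a child id to that child's phrase samples.
    For each child, train on the other children's data and test on the
    held-out child's data, yielding one accuracy figure per child.
    """
    accuracies = {}
    for held_out in samples_by_child:
        # Pool the training samples from every child except the held-out one.
        train = [sample
                 for child, phrases in samples_by_child.items()
                 if child != held_out
                 for sample in phrases]
        models = train_models(train)
        accuracies[held_out] = word_accuracy(models, samples_by_child[held_out])
    return accuracies

# Toy usage with stub functions standing in for the real HMM pipeline:
data = {f"child{i}": [f"phrase_{i}_{j}" for j in range(3)] for i in range(5)}
result = leave_one_out(
    data,
    train_models=lambda train: len(train),        # stub "model"
    word_accuracy=lambda model, test: len(test),  # stub scorer
)
print(sorted(result))  # one entry per held-out child
```

With five children, each fold trains on four children's samples and tests on the fifth's, matching the per-child accuracy figures reported in the abstract.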



Published In

Assets '06: Proceedings of the 8th international ACM SIGACCESS conference on Computers and accessibility
October 2006
316 pages
ISBN: 1595932909
DOI: 10.1145/1168987

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. ASL
  2. game
  3. recognition
  4. sign language


Conference

ASSETS06

Acceptance Rates

Overall Acceptance Rate 436 of 1,556 submissions, 28%

