DOI: 10.1145/1452392.1452432
ICMI-MLMI Conference Proceedings · Demonstration

IGlasses: an automatic wearable speech supplement in face-to-face communication and classroom situations

Published: 20 October 2008

Abstract

The need for language aids is pervasive in today's world. There are millions of individuals who have language and speech challenges, and these individuals require additional support for communication and language learning. We demonstrate technology to supplement common face-to-face language interaction to enhance intelligibility, understanding, and communication, particularly for those with hearing impairments. Our research is investigating how to automatically supplement talking faces with information that is ordinarily conveyed by auditory means. This research consists of two areas of inquiry: 1) developing a neural network to perform real-time analysis of selected acoustic features for visual display, and 2) determining how quickly participants can learn to use these selected cues and how much they benefit from them when combined with speechreading.
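The first line of inquiry, a neural network mapping acoustic features to a visual display in real time, can be sketched at a very small scale. The feature choices (log energy, zero-crossing rate), the network shape, and the cue labels below are illustrative assumptions, not the authors' actual system.

```python
import numpy as np

def frame_features(frame):
    """Two simple per-frame acoustic features: log energy and
    zero-crossing rate (common voicing/frication correlates).
    Feature choice is a placeholder, not the paper's feature set."""
    energy = np.log(np.sum(frame ** 2) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return np.array([energy, zcr])

class TinyNet:
    """One hidden layer, softmax over hypothetical visual-cue classes."""
    def __init__(self, n_in=2, n_hidden=8, n_out=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    def predict(self, x):
        h = np.tanh(x @ self.W1)
        logits = h @ self.W2
        p = np.exp(logits - logits.max())   # stable softmax
        return p / p.sum()

CUES = ["voicing", "frication", "nasality"]  # placeholder cue labels

net = TinyNet()
t = np.linspace(0.0, 0.02, 320)              # one 20 ms frame at 16 kHz
frame = 0.1 * np.sin(2 * np.pi * 200 * t)    # synthetic voiced-like frame
probs = net.predict(frame_features(frame))
print(CUES[int(np.argmax(probs))], probs.round(3))
```

In a real system the network would be trained on labeled speech and run per frame, with the winning cue driving the wearable display; here the weights are random and the output is only a probability distribution over the placeholder cue classes.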

Cited By

  • (2023) Understanding and Enhancing The Role of Speechreading in Online d/DHH Communication Accessibility. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 10.1145/3544548.3580810, pp. 1-17. Online publication date: 19-Apr-2023.
  • (2018) Bridging the cross-cultural language divide through design. Proceedings of the 30th Australian Conference on Computer-Human Interaction, 10.1145/3292147.3292222, pp. 167-171. Online publication date: 4-Dec-2018.

    Published In

    ICMI '08: Proceedings of the 10th international conference on Multimodal interfaces
    October 2008
    322 pages
    ISBN:9781605581989
    DOI:10.1145/1452392
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. automatic speech supplement
    2. multimodal speech perception

    Qualifiers

    • Demonstration

    Conference

    ICMI '08: International Conference on Multimodal Interfaces
    October 20-22, 2008
    Chania, Crete, Greece

    Acceptance Rates

    Overall Acceptance Rate 453 of 1,080 submissions, 42%
