ABSTRACT
Over 90 percent of Deaf and Hard of Hearing (DHH) children in the United States are born to hearing parents who have little to no command of American Sign Language (ASL), leaving the majority of DHH children at risk of language deprivation in early childhood. This study investigates the design space of Augmented Reality (AR) and wearable technologies for supporting hearing parents in offering sign language environments to young DHH children. We conducted an online survey with 65 participants (hearing and DHH parents, and teachers of DHH children aged 6 months to 5 years) to gather preferences for, and interest in, technologies that help hearing parents deliver ASL on the fly and stay attentive to the DHH child's visual attention during joint toy play. We found that Near-Object Projection was the most preferred modality for real-time ASL delivery, and haptic feedback the most preferred for raising a parent's awareness of the child's attention. Results also show strong interest in using the proposed technologies to interact with, and maintain joint attention with, DHH children on a daily basis. We discuss key design recommendations that inform future technologies supporting just-in-time, context-aware communication in ASL with minimal obtrusion to face-to-face interaction.