Article
DOI: 10.1145/1121241.1121291

The effect of head-nod recognition in human-robot conversation

Published: 02 March 2006

Abstract

This paper reports on a study in which human participants conversed with a robot designed to take part in collaborative conversation. The purpose of the study was to investigate a particular kind of gestural feedback from human to robot in these conversations: head nods. During the conversations, the robot recognized head nods from the human participant. The conversations between human and robot concerned demonstrations of inventions created in a lab. We briefly discuss the robot hardware and architecture, and then focus the paper on a study of the effects of understanding head nods under three different conditions. We conclude that conversation itself triggers head nods from people in human-robot conversations, and that telling participants that the robot recognizes their nods, as well as having the robot provide gestural feedback of its nod recognition, is effective in producing more nods.
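The abstract states that the robot recognized head nods from the participant, but this page gives no detail of the recognizer. Purely as an illustration (not the authors' method), a minimal nod detector over a short window of head-pitch samples could look for an up-down oscillation of sufficient amplitude. The function `detect_nod`, its thresholds, and the synthetic data below are all assumptions for the sketch.

```python
def detect_nod(pitch_angles, min_amplitude_deg=5.0, min_reversals=2):
    """Flag a head nod in a short window of head-pitch samples (degrees).

    A nod is approximated here as an up-down oscillation: the pitch
    trajectory must reverse direction at least `min_reversals` times,
    and its peak-to-peak range must exceed `min_amplitude_deg`.
    Thresholds are illustrative, not taken from the paper.
    """
    if len(pitch_angles) < 3:
        return False
    # Peak-to-peak amplitude of the pitch trajectory.
    amplitude = max(pitch_angles) - min(pitch_angles)
    if amplitude < min_amplitude_deg:
        return False
    # Count direction reversals (sign changes of the first difference).
    reversals = 0
    prev_delta = 0.0
    for a, b in zip(pitch_angles, pitch_angles[1:]):
        delta = b - a
        if delta != 0.0:
            if prev_delta != 0.0 and (delta > 0) != (prev_delta > 0):
                reversals += 1
            prev_delta = delta
    return reversals >= min_reversals

# Synthetic data: a nod (head dips ~8 degrees and returns, twice)
# versus near-stationary jitter.
nod = [0, -4, -8, -4, 0, -4, -8, -4, 0]
still = [0, 0.5, 0.2, 0.4, 0.1, 0.3]
print(detect_nod(nod))    # True
print(detect_nod(still))  # False
```

A real system would run this over a sliding window of pose estimates from a head tracker; the contextual recognizers cited by the paper improve on such purely geometric cues by also using dialog state.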


Cited By

  • (2024) Automatic Context-Aware Inference of Engagement in HMI: A Survey. IEEE Transactions on Affective Computing 15(2): 445-464. DOI: 10.1109/TAFFC.2023.3278707
  • (2023) The Effects of Healthcare Robot Empathy Statements and Head Nodding on Trust and Satisfaction: A Video Study. ACM Transactions on Human-Robot Interaction 12(1): 1-21. DOI: 10.1145/3549534
  • (2023) Large language models for human–robot interaction: A review. Biomimetic Intelligence and Robotics 3(4): 100131. DOI: 10.1016/j.birob.2023.100131
  • (2022) Understanding Is a Process. Frontiers in Systems Neuroscience 16. DOI: 10.3389/fnsys.2022.800280
  • (2022) Modeling Feedback in Interaction With Conversational Agents—A Review. Frontiers in Computer Science 4. DOI: 10.3389/fcomp.2022.744574
  • (2021) Towards an Engagement-Aware Attentive Artificial Listener for Multi-Party Interactions. Frontiers in Robotics and AI 8. DOI: 10.3389/frobt.2021.555913
  • (2021) My Bad! Repairing Intelligent Voice Assistant Errors Improves Interaction. Proceedings of the ACM on Human-Computer Interaction 5(CSCW1): 1-24. DOI: 10.1145/3449101
  • (2021) Development of Duplex Eye Contact Framework for Human-Robot Inter Communication. IEEE Access 9: 54435-54456. DOI: 10.1109/ACCESS.2021.3071129
  • (2021) How does Modality Matter? Investigating the Synthesis and Effects of Multi-modal Robot Behavior on Social Intelligence. International Journal of Social Robotics 14(4): 893-911. DOI: 10.1007/s12369-021-00839-w
  • (2020) Engagement in Human-Agent Interaction: An Overview. Frontiers in Robotics and AI 7. DOI: 10.3389/frobt.2020.00092


Published In

HRI '06: Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction
March 2006, 376 pages
ISBN: 1595932941
DOI: 10.1145/1121241

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. collaborative conversation
  2. conversational feedback
  3. human-robot interaction
  4. nod recognition
  5. nodding

Conference

HRI06: International Conference on Human Robot Interaction
March 2-3, 2006, Salt Lake City, Utah, USA

Acceptance Rates

Overall Acceptance Rate 268 of 1,124 submissions, 24%


Article Metrics

  • Downloads (last 12 months): 27
  • Downloads (last 6 weeks): 3

Reflects downloads up to 17 Feb 2025

