ABSTRACT
Gaze is an important nonverbal feedback signal in multiparty face-to-face conversations. To build a robot that can convey appropriate attentional behavior in multiparty human-robot conversations, this paper analyzes human attentional behaviors in multiparty conversations and establishes gaze-transition models for speakers, addressees, and side participants. The models are then implemented in a humanoid robot that can control its gaze.
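As a rough illustration of what a role-dependent gaze-transition model might look like in code, the sketch below treats the robot's next gaze target as a first-order Markov chain conditioned on its participation role. All state names and transition probabilities are invented placeholders for illustration; the paper's actual models are estimated from its analysis of human multiparty conversations.

```python
import random

GazeDist = dict[str, float]

# Hypothetical transition table P(next_target | role, current_target).
# For each role, the robot itself is excluded from the target set
# (e.g., when the robot is the speaker, it cannot gaze at "speaker").
# Every row sums to 1; the numbers are illustrative, not measured.
TRANSITIONS: dict[str, dict[str, GazeDist]] = {
    "speaker": {
        "addressee":        {"addressee": 0.55, "side_participant": 0.15, "avert": 0.30},
        "side_participant": {"addressee": 0.45, "side_participant": 0.20, "avert": 0.35},
        "avert":            {"addressee": 0.60, "side_participant": 0.15, "avert": 0.25},
    },
    "addressee": {
        "speaker":          {"speaker": 0.70, "side_participant": 0.10, "avert": 0.20},
        "side_participant": {"speaker": 0.65, "side_participant": 0.10, "avert": 0.25},
        "avert":            {"speaker": 0.75, "side_participant": 0.10, "avert": 0.15},
    },
    "side_participant": {
        "speaker":          {"speaker": 0.60, "addressee": 0.25, "avert": 0.15},
        "addressee":        {"speaker": 0.55, "addressee": 0.25, "avert": 0.20},
        "avert":            {"speaker": 0.65, "addressee": 0.20, "avert": 0.15},
    },
}

def next_gaze_target(role: str, current: str) -> str:
    """Sample the robot's next gaze target given its participation role
    and the target it is currently looking at."""
    dist = TRANSITIONS[role][current]
    targets = list(dist)
    weights = [dist[t] for t in targets]
    return random.choices(targets, weights=weights)[0]

# Example: a robot acting as a side participant, currently gazing at the speaker.
if __name__ == "__main__":
    print(next_gaze_target("side_participant", "speaker"))
```

A robot gaze controller would sample from such a chain on each transition event (e.g., at turn boundaries or after a dwell timeout) and drive the head or eyes toward the sampled target.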