ABSTRACT
In this paper, we propose a telepresence robot that exaggerates non-verbal cues for turn-taking in multi-party teleconferences. In multi-party teleconferences, it is more difficult for remote participants to take turns than in face-to-face conversation, so the number of turns taken by remote participants tends to decrease. It is said that the addressee tends to become the next speaker; becoming the addressee is therefore a preliminary step toward speaking and taking a turn. To help a remote participant become the addressee, the proposed system recognizes the remote participant's non-verbal cues, such as attention direction, nodding, and back-channels. The system then exaggerates these cues and expresses them as the telepresence robot's motions.
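The abstract describes a three-stage pipeline: recognize a remote participant's non-verbal cue, exaggerate it, and render it as a robot motion. A minimal sketch of that flow is given below; the cue categories follow the abstract, but all names, gain values, and motion commands are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    kind: str         # "attention", "nod", or "backchannel" (from the abstract)
    magnitude: float  # normalized cue strength in [0, 1] (assumed representation)

# Hypothetical per-cue amplification gains; real values would be tuned empirically.
EXAGGERATION_GAIN = {"attention": 1.5, "nod": 2.0, "backchannel": 1.8}

def exaggerate(cue: Cue) -> Cue:
    """Scale the cue's magnitude by its gain, clamping to the valid range."""
    gain = EXAGGERATION_GAIN.get(cue.kind, 1.0)
    return Cue(cue.kind, min(1.0, cue.magnitude * gain))

def to_motion_command(cue: Cue) -> str:
    """Map an exaggerated cue to a (hypothetical) robot motion command string."""
    if cue.kind == "attention":
        return f"turn_head amplitude={cue.magnitude:.2f}"
    if cue.kind == "nod":
        return f"nod amplitude={cue.magnitude:.2f}"
    return f"lean_forward amplitude={cue.magnitude:.2f}"

# Example: a subtle human nod (0.4) becomes a pronounced robot nod (0.8).
cmd = to_motion_command(exaggerate(Cue("nod", 0.4)))
```

The design point is that amplification happens on a normalized cue magnitude before actuation, so the same recognizer output can drive different robot embodiments.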
Index Terms
- Telepresence robot that exaggerates non-verbal cues for taking turns in multi-party teleconferences