DOI: 10.1145/1027933.1027959

Identifying the addressee in human-human-robot interactions based on head pose and speech

Published: 13 October 2004

Abstract

In this work we investigate the power of acoustic and visual cues, and their combination, to identify the addressee in a human-human-robot interaction. Based on eighteen audio-visual recordings of two humans and a (simulated) robot, we discriminate the interaction between the two humans from the interaction of one human with the robot. The paper compares the results of three approaches. The first approach uses purely acoustic cues to find the addressee; both low-level, feature-based cues and higher-level cues are examined. In the second approach we test whether the human's head pose is a suitable cue; our results show that visually estimated head pose is the more reliable cue for identifying the addressee in the human-human-robot interaction. In the third approach we combine the acoustic and visual cues, which yields significant improvements.
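The combination step described in the abstract can be read as a late fusion of two per-utterance scores, one acoustic and one visual. The sketch below is illustrative only and not the authors' implementation: it assumes each utterance comes with an acoustic robot-directedness score and a visually estimated head-pan angle relative to the robot, and fuses them with a weighted sum. The class, function names, weight, and threshold are invented for the example.

```python
from dataclasses import dataclass
from math import exp

@dataclass
class Utterance:
    # Hypothetical per-utterance features; not the paper's actual feature set.
    acoustic_robot_score: float  # score in [0, 1] that the speech sounds robot-directed
    head_pan_deg: float          # estimated head pan relative to the robot, in degrees

def visual_robot_score(head_pan_deg: float, sigma_deg: float = 15.0) -> float:
    """Map head pose to a [0, 1] score: 1.0 when the speaker faces the robot,
    decaying as the head turns away (the Gaussian falloff is an assumption)."""
    return exp(-0.5 * (head_pan_deg / sigma_deg) ** 2)

def fused_robot_score(utt: Utterance, visual_weight: float = 0.7) -> float:
    """Weighted late fusion of visual and acoustic scores. Weighting the visual
    cue more heavily mirrors the paper's finding that head pose is the more
    reliable cue, but the exact weight here is illustrative."""
    v = visual_robot_score(utt.head_pan_deg)
    return visual_weight * v + (1.0 - visual_weight) * utt.acoustic_robot_score

def is_robot_directed(utt: Utterance, threshold: float = 0.5) -> bool:
    """Decide whether the utterance addresses the robot or the other human."""
    return fused_robot_score(utt) >= threshold

# Example: a speaker facing the robot almost head-on, with a moderately
# robot-like acoustic score, is classified as addressing the robot.
print(is_robot_directed(Utterance(acoustic_robot_score=0.55, head_pan_deg=8.0)))
```

In the paper the acoustic and visual estimates come from separate analyses whose outputs are combined; a weighted score fusion like the one above is only one of several reasonable ways to realize that combination.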




    Published In

    ICMI '04: Proceedings of the 6th international conference on Multimodal interfaces
    October 2004
    368 pages
ISBN: 1581139950
DOI: 10.1145/1027933
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. attentive interfaces
    2. focus of attention
    3. head pose estimation
    4. human-robot interaction
    5. multimodal interfaces
    6. speech recognition

    Qualifiers

    • Article

    Conference

ICMI '04

    Acceptance Rates

    Overall Acceptance Rate 453 of 1,080 submissions, 42%


    Cited By

    • (2023) Multi-modal modeling for device-directed speech detection using acoustic and linguistic cues. Acoustical Science and Technology 44(1):40-43. DOI: 10.1250/ast.44.40. Online: 1 Jan 2023.
    • (2023) To Whom are You Talking? A Deep Learning Model to Endow Social Robots with Addressee Estimation Skills. 2023 International Joint Conference on Neural Networks (IJCNN), 1-10. DOI: 10.1109/IJCNN54540.2023.10191452. Online: 18 Jun 2023.
    • (2022) Cross-Modal Repair: Gaze and Speech Interaction for List Advancement. Proceedings of the 4th Conference on Conversational User Interfaces, 1-11. DOI: 10.1145/3543829.3543833. Online: 26 Jul 2022.
    • (2020) MGpi. Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, 1196-1205. DOI: 10.5555/3398761.3398900. Online: 5 May 2020.
    • (2020) Generation of Head Movements of a Robot Using Multimodal Features of Peer Participants in Group Discussion Conversation. Multimodal Technologies and Interaction 4(2):15. DOI: 10.3390/mti4020015. Online: 29 Apr 2020.
    • (2020) A Novel Intelligent Robot for Epidemic Identification and Prevention. 2020 IEEE 16th International Conference on Control & Automation (ICCA), 183-186. DOI: 10.1109/ICCA51439.2020.9264540. Online: 9 Oct 2020.
    • (2020) The Role of Gaze as a Deictic Cue in Human Robot Interaction. Augmented Cognition. Human Cognition and Behavior, 466-478. DOI: 10.1007/978-3-030-50439-7_32. Online: 10 Jul 2020.
    • (2019) Designing with Gaze. Proceedings of the ACM on Human-Computer Interaction 3(CSCW):1-26. DOI: 10.1145/3359278. Online: 7 Nov 2019.
    • (2019) Patterns of gaze in speech agent interaction. Proceedings of the 1st International Conference on Conversational User Interfaces, 1-10. DOI: 10.1145/3342775.3342791. Online: 22 Aug 2019.
    • (2019) Multimodal conversational interaction with robots. The Handbook of Multimodal-Multisensor Interfaces, 77-104. DOI: 10.1145/3233795.3233799. Online: 1 Jul 2019.
