Detecting user engagement with a robot companion using task and social interaction-based features

Published: 02 November 2009

Abstract

Affect sensitivity is of the utmost importance for a robot companion to be able to display socially intelligent behaviour, a key requirement for sustaining long-term interactions with humans. This paper explores a naturalistic scenario in which children play chess with the iCat, a robot companion. A person-independent, Bayesian approach to detect the user's engagement with the iCat robot is presented. Our framework models both causes and effects of engagement: features related to the user's non-verbal behaviour, the task and the companion's affective reactions are identified to predict the children's level of engagement. An experiment was carried out to train and validate our model. Results show that our approach based on multimodal integration of task and social interaction-based features outperforms those based solely on non-verbal behaviour or contextual information (94.79% vs. 93.75% and 78.13%).
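To make the general recipe concrete, the sketch below shows how a Bayesian classifier could combine task- and social interaction-based features for person-independent engagement detection. It is a minimal illustration under stated assumptions, not the authors' model: the feature names, data, and labels are hypothetical, and a Gaussian naive Bayes classifier with leave-one-child-out cross-validation stands in for whatever Bayesian formulation and validation protocol the paper actually uses.

```python
# Minimal sketch (hypothetical features and data, not the paper's code).
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)

# Hypothetical per-episode feature vectors, e.g.
# [gaze_at_robot_ratio, smile_ratio, game_state_advantage, robot_affective_reaction]
X = rng.random((120, 4))
y = rng.integers(0, 2, size=120)         # engagement label: 0 = low, 1 = high
children = rng.integers(0, 5, size=120)  # which child produced each episode

# Person-independent evaluation: train on all children but one and test on the
# held-out child, so the model never sees the test child's data.
accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=children):
    clf = GaussianNB().fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean person-independent accuracy: {np.mean(accuracies):.2%}")
```

In this sketch, dropping the task-related columns from X would give a purely non-verbal model, which is how a comparison like the one reported in the abstract (multimodal vs. single-source features) could be run.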

Published In

ICMI-MLMI '09: Proceedings of the 2009 International Conference on Multimodal Interfaces
November 2009
374 pages
ISBN:9781605587721
DOI:10.1145/1647314
Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. affect recognition
  2. contextual information
  3. human-robot interaction
  4. non-verbal expressive behaviour

Qualifiers

  • Poster

Conference

ICMI-MLMI '09

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%
