DOI: 10.1145/2070481.2070546

Characterization of coordination in an imitation task: human evaluation and automatically computable cues

Published: 14 November 2011

ABSTRACT

Understanding the ability to coordinate with a partner is a major challenge in social signal processing and social robotics. In this paper, we designed a child-adult imitation task to investigate how automatically computable cues on turn-taking and movement can give insight into the high-level perception of coordination. First, we collected questionnaire ratings from human judges to evaluate the perceived coordination of the dyads. Then, we extracted automatically computable cues and information on dialog acts from the video clips. The automatic cues characterized speech and gestural turn-taking and the coordinated movements of the dyad. Finally, we compared the human scores with the automatic cues to determine which cues are informative about the perception of coordination during the task. We found that the adult adjusted his behavior to the child's needs, and that a disruption of the gestural turn-taking rhythm was rated negatively by the judges. We also found that judges rated negatively the dyads that talked more, since speech intervened when the child had difficulty imitating. Finally, coherence measures between the partners' movement features appeared better suited than correlation to characterize their coordination.
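To make the abstract's last point concrete, here is a minimal sketch of the correlation-versus-coherence contrast on two synthetic movement signals. This is not the authors' pipeline: the features, sampling rate, lag, and frequency band below are invented for illustration, using NumPy and SciPy.

    import numpy as np
    from scipy.signal import coherence

    # Hypothetical 1-D movement features (e.g., frame-wise motion energy) for
    # the two partners, sampled at 25 Hz. A shared 0.5 Hz rhythm stands in for
    # the imitation game; the child reproduces it with a constant 0.4 s delay.
    fs = 25.0
    rng = np.random.default_rng(0)
    t = np.arange(0, 60, 1 / fs)
    rhythm = np.sin(2 * np.pi * 0.5 * t)
    adult = rhythm + 0.5 * rng.standard_normal(t.size)
    child = np.roll(rhythm, 10) + 0.5 * rng.standard_normal(t.size)

    # Pearson correlation compares the signals sample by sample, so a
    # consistent imitation lag drags the score down.
    r = np.corrcoef(adult, child)[0, 1]

    # Magnitude-squared coherence measures coupling per frequency band and is
    # insensitive to a constant phase lag, which suits delayed imitation.
    f, Cxy = coherence(adult, child, fs=fs, nperseg=256)
    band = (f >= 0.2) & (f <= 1.0)  # hypothetical band of interest
    print(f"correlation r = {r:+.2f}")
    print(f"mean coherence, 0.2-1.0 Hz = {Cxy[band].mean():.2f}")

Under these assumptions, the lagged dyad keeps high coherence in the movement band while its instantaneous correlation drops, which illustrates why a phase-tolerant measure can better capture coordination in a turn-based task.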


    • Published in

      ICMI '11: Proceedings of the 13th International Conference on Multimodal Interfaces
      November 2011
      432 pages
      ISBN: 9781450306416
      DOI: 10.1145/2070481

      Copyright © 2011 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 14 November 2011


      Qualifiers

      • research-article

      Acceptance Rates

      Overall Acceptance Rate: 453 of 1,080 submissions, 42%
