Research article
DOI: 10.1145/1349822.1349861

The roles of haptic-ostensive referring expressions in cooperative, task-based human-robot dialogue

Published: 12 March 2008

ABSTRACT

Generating referring expressions is a task that has received a great deal of attention in the natural-language generation community, with much recent effort targeted at the generation of multimodal referring expressions. However, most implemented systems tend to assume very little shared knowledge between the speaker and the hearer, and therefore must generate fully elaborated linguistic references. Some systems do include a representation of the physical context or the dialogue context, but other sources of contextual information are not normally used. Also, the generated references normally consist only of language and, possibly, deictic pointing gestures.

When referring to objects in the context of a task-based interaction involving jointly manipulating objects, a much richer notion of context is available, which permits a wider range of referring options. In particular, when conversational partners cooperate on a mutual task in a shared environment, objects can be made accessible simply by manipulating them as part of the task. We demonstrate that such expressions are common in a corpus of human-human dialogues based on constructing virtual objects, and then describe how this type of reference can be incorporated into the output of a humanoid robot that engages in similar joint construction dialogues with a human partner.
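To make the contrast concrete, here is a minimal sketch of such a context-sensitive reference-selection policy (this code is not from the paper; the class names, the three-way accessibility split, and the attribute preference order are illustrative assumptions). An object the robot is currently manipulating can be referred to haptic-ostensively with minimal language, a recently mentioned object with a pronoun, and anything else with a full distinguishing description:

    from dataclasses import dataclass, field

    @dataclass
    class WorldObject:
        name: str
        attributes: dict

    @dataclass
    class DialogueContext:
        recently_mentioned: set = field(default_factory=set)
        currently_manipulated: set = field(default_factory=set)

    def full_description(obj, domain):
        # Build a distinguishing description, loosely in the spirit of
        # Dale and Reiter's incremental algorithm (Cognitive Science, 1995):
        # keep adding attributes until no same-type distractor matches.
        distractors = [d for d in domain
                       if d is not obj
                       and d.attributes.get("type") == obj.attributes["type"]]
        modifiers = []
        for attr in ("size", "colour"):  # preference order is an assumption
            if not distractors:
                break
            value = obj.attributes.get(attr)
            if value is None:
                continue
            survivors = [d for d in distractors
                         if d.attributes.get(attr) == value]
            if len(survivors) < len(distractors):
                modifiers.append(value)
                distractors = survivors
        return "the " + " ".join(modifiers + [obj.attributes["type"]])

    def choose_reference(obj, ctx, domain):
        # Most accessible option first: if the robot is already holding or
        # moving the object as part of the task, the manipulation itself
        # does the referring work and minimal language suffices.
        if obj.name in ctx.currently_manipulated:
            return ("haptic-ostensive", "this one")
        if obj.name in ctx.recently_mentioned:
            return ("pronoun", "it")
        return ("full description", full_description(obj, domain))

    domain = [
        WorldObject("c1", {"type": "cube", "colour": "red"}),
        WorldObject("c2", {"type": "cube", "colour": "blue"}),
        WorldObject("b1", {"type": "bolt", "colour": "red"}),
    ]
    ctx = DialogueContext(currently_manipulated={"c2"})
    for obj in domain:
        print(obj.name, "->", choose_reference(obj, ctx, domain))
    # c1 -> ('full description', 'the red cube')
    # c2 -> ('haptic-ostensive', 'this one')
    # b1 -> ('full description', 'the bolt')

In a real system each strategy would of course be realised multimodally, with speech timed against the manipulation; the point of the sketch is only that the choice of referring form is driven by the task context rather than by the dialogue history alone.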


Published in

HRI '08: Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction
March 2008, 402 pages
ISBN: 9781605580173
DOI: 10.1145/1349822
Copyright © 2008 ACM


Publisher: Association for Computing Machinery, New York, NY, United States

