Pointing to space: modeling of deictic interaction referring to regions

Published: 02 March 2010

Abstract

In daily conversation, we sometimes observe deictic interaction that refers to a region in space, such as saying "please put it over there" while pointing. How can such an interaction be carried out with a robot? Is it enough to simulate people's behaviors, such as utterances and pointing gestures? We argue instead for the importance of simulating human cognition. In the first part of our study, we empirically demonstrate the importance of simulating human cognition of regions when a robot engages in deictic interaction that refers to a region in space. The experiments indicate that a robot with simulated cognition of regions conducts such deictic interaction more efficiently. In the second part, we present a method that enables a robot to computationally simulate cognition of regions.



Published In

HRI '10: Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction
March 2010, 400 pages
ISBN: 9781424448937
Publisher: IEEE Press

    Author Tags

    1. cognition of regions
    2. communicating about regions
    3. pointing gesture
    4. social robots
    5. spatial deixis

Qualifiers

• Research-article

Conference

HRI '10

Acceptance Rates

HRI '10 paper acceptance rate: 26 of 124 submissions, 21%
Overall acceptance rate: 268 of 1,124 submissions, 24%


Article Metrics

• Downloads (last 12 months): 3
• Downloads (last 6 weeks): 0

Reflects downloads up to 19 Feb 2025

    Cited By

• (2014) Robot deictics. Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, pp. 342--349. DOI: 10.1145/2559636.2559657. Published online: 3 Mar 2014.
• (2014) Robot gestures make difficult tasks easier. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1459--1466. DOI: 10.1145/2556288.2557274. Published online: 26 Apr 2014.
• (2013) It's not polite to point. Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, pp. 267--274. DOI: 10.5555/2447556.2447665. Published online: 3 Mar 2013.
• (2013) Understanding suitable locations for waiting. Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, pp. 57--64. DOI: 10.5555/2447556.2447566. Published online: 3 Mar 2013.
• (2012) Wizard of Oz studies in HRI. Journal of Human-Robot Interaction, 1(1), pp. 119--136. DOI: 10.5898/JHRI.1.1.Riek. Published online: 28 Jul 2012.
• (2011) Modeling environments from a route perspective. Proceedings of the 6th International Conference on Human-Robot Interaction, pp. 441--448. DOI: 10.1145/1957656.1957815. Published online: 6 Mar 2011.
• (2010) Focusing computational visual attention in multi-modal human-robot interaction. International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, pp. 1--8. DOI: 10.1145/1891903.1891912. Published online: 8 Nov 2010.
