research-article
DOI: 10.1145/1349822.1349854

A point-and-click interface for the real world: laser designation of objects for mobile manipulation

Published: 12 March 2008

ABSTRACT

We present a novel interface for human-robot interaction that enables a human to intuitively and unambiguously select a 3D location in the world and communicate it to a mobile robot. The human points at a location of interest and illuminates it ("clicks it") with an unaltered, off-the-shelf, green laser pointer. The robot detects the resulting laser spot with an omnidirectional, catadioptric camera with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and estimates the location's 3D position with respect to the robot's frame of reference.
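The detection-and-ranging pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the fixed brightness threshold, the centroid-of-mask spot detector, and the pinhole stereo relation z = f·b/d are all simplifying assumptions standing in for the filtered catadioptric detection and stereo pan/tilt ranging the paper actually uses.

```python
import numpy as np

def detect_laser_spot(image_green, threshold=200):
    """Return the (row, col) centroid of pixels above `threshold` in a
    green-filtered grayscale image, or None if no pixel qualifies.
    A stand-in for detecting the laser spot behind a narrow-band filter."""
    mask = image_green >= threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic pinhole stereo relation: depth z = f * b / d."""
    return focal_px * baseline_m / disparity_px

# Synthetic example: a 3x3 bright spot centered at pixel (40, 60).
img = np.zeros((100, 100), dtype=np.uint8)
img[39:42, 59:62] = 255
spot = detect_laser_spot(img)          # -> (40.0, 60.0)

# Hypothetical stereo parameters: 500 px focal length, 12 cm baseline,
# 20 px disparity between the left and right views of the spot.
z = depth_from_disparity(disparity_px=20.0, focal_px=500.0, baseline_m=3.0 * 0.04)
```

Once the spot's image coordinates and depth are known, back-projecting through the camera model gives the 3D point in the robot's frame of reference.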

Unlike previous approaches, this interface for gesture-based pointing requires no instrumentation of the environment, makes use of a non-instrumented everyday pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for use in real-world applications.

We demonstrate that this human-robot interface enables a person to designate a wide variety of everyday objects placed throughout a room. In 99.4% of these tests, the robot successfully looked at the designated object and estimated its 3D position with low average error. We also show that this interface can support object acquisition by a mobile manipulator. For this application, the user selects an object to be picked up from the floor by "clicking" on it with the laser pointer interface. In 90% of these trials, the robot successfully moved to the designated object and picked it up off the floor.


Published in

HRI '08: Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction
March 2008, 402 pages
ISBN: 9781605580173
DOI: 10.1145/1349822

          Copyright © 2008 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rate

Overall acceptance rate: 242 of 1,000 submissions, 24%
