A point-and-click interface for the real world: laser designation of objects for mobile manipulation

ABSTRACT
We present a novel interface for human-robot interaction that enables a human to intuitively and unambiguously select a 3D location in the world and communicate it to a mobile robot. The human points at a location of interest and illuminates it ("clicks it") with an unaltered, off-the-shelf, green laser pointer. The robot detects the resulting laser spot using an omnidirectional, catadioptric camera fitted with a narrow-band green filter. After detection, the robot moves its stereo pan/tilt camera to look at this location and estimates the location's 3D position with respect to the robot's frame of reference.
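The narrow-band filter makes the detection step conceptually simple: the laser spot is the region where the green channel is both saturated and far brighter than red and blue. The sketch below is only an illustrative reconstruction of that idea, not the authors' detector; the function name and threshold values are assumptions, and a real system would also reject outliers and track the spot over time.

```python
import numpy as np

def detect_green_spot(image, brightness_thresh=200, dominance_thresh=60):
    """Find the centroid of a candidate laser spot in an RGB image.

    `image` is an (H, W, 3) uint8 array. A pixel counts as a "spot"
    pixel when its green channel is very bright AND clearly brighter
    than its red and blue channels; a narrow-band green optical filter,
    as described in the paper, makes this separation much stronger in
    practice. Thresholds here are illustrative, not the paper's values.
    Returns (row, col) as floats, or None if no pixel qualifies.
    """
    img = image.astype(np.int32)  # avoid uint8 wraparound in subtraction
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mask = (g >= brightness_thresh) & (g - np.maximum(r, b) >= dominance_thresh)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return float(rows.mean()), float(cols.mean())
```

Thresholding in the filtered image, rather than fitting a full color model, keeps the per-frame cost low enough for a robot to run it continuously on the omnidirectional stream.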
Unlike previous approaches, this interface for gesture-based pointing requires no instrumentation of the environment, makes use of a non-instrumented everyday pointing device, has low spatial error out to 3 meters, is fully mobile, and is robust enough for use in real-world applications.
We demonstrate that this human-robot interface enables a person to designate a wide variety of everyday objects placed throughout a room. In 99.4% of these tests, the robot successfully looked at the designated object and estimated its 3D position with low average error. We also show that this interface can support object acquisition by a mobile manipulator. For this application, the user selects an object to be picked up from the floor by "clicking" on it with the laser pointer interface. In 90% of these trials, the robot successfully moved to the designated object and picked it up off the floor.
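The abstract does not give the stereo camera's calibration, but once the spot is located in both images of a rectified stereo pair, its 3D position in the camera frame follows from the standard pinhole relations. The sketch below shows that geometry with hypothetical parameter names and values; it is not the robot's actual calibration or code.

```python
def stereo_depth_m(disparity_px, focal_px, baseline_m):
    """Depth of a point from rectified-stereo disparity: Z = f * B / d.

    disparity_px: horizontal pixel offset of the spot between the left
    and right images; focal_px: focal length in pixels; baseline_m:
    distance between the two camera centers in meters.
    """
    if disparity_px <= 0:
        raise ValueError("non-positive disparity gives no finite depth")
    return focal_px * baseline_m / disparity_px

def backproject(u_px, v_px, depth_m, focal_px, cx_px, cy_px):
    """Back-project a pixel with known depth to a 3D camera-frame point.

    (cx_px, cy_px) is the principal point. Returns (X, Y, Z) in meters.
    """
    x = (u_px - cx_px) * depth_m / focal_px
    y = (v_px - cy_px) * depth_m / focal_px
    return (x, y, depth_m)
```

For example, with an illustrative 500 px focal length and 10 cm baseline, a spot with 50 px of disparity lies 1 m from the camera, consistent with the interface's low-error working range of roughly 3 m.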