ABSTRACT
Interacting with objects from a distance is not only challenging in the real world but also a common problem in virtual reality (VR). One issue concerns the distinction between attention for exploration and attention for selection -- also known as the Midas-touch problem. Researchers have proposed numerous approaches to overcome this challenge, using additional devices, gaze-input cascaded pointing, or eye blinks to select the remote object. However, techniques such as MAGIC pointing still require additional input to confirm a selection made with eye gaze and, thus, force the user to perform unnatural behavior; there is still no solution enabling truly natural, unobtrusive, and device-free interaction for selection. In this paper, we propose EyePointing: a technique that combines MAGIC pointing with the referential mid-air pointing gesture to select objects at a distance. While eye gaze is used to reference the object, the pointing gesture is used as the trigger.
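The division of labor the abstract describes (gaze continuously references a target; a deliberate mid-air pointing gesture confirms it) can be sketched as a per-frame selection loop. The sketch below is illustrative only: `Scene.raycast`, `Obj`, and `eye_pointing_step` are hypothetical stand-ins for whatever eye-tracking, hand-tracking, and scene-query backends an implementation would use.

```python
# Hedged sketch of one EyePointing frame: gaze alone never selects
# (avoiding the Midas-touch problem); the pointing gesture triggers.
from dataclasses import dataclass

@dataclass
class Obj:
    name: str

class Scene:
    """Toy scene: maps a gaze direction to the object it hits."""
    def __init__(self, objects):
        self.objects = objects

    def raycast(self, gaze_dir):
        # Placeholder hit test: quantize the gaze direction to an index.
        # A real system would intersect a gaze ray with scene geometry.
        return self.objects[int(gaze_dir) % len(self.objects)]

def eye_pointing_step(gaze_dir, pointing_detected, scene):
    """One frame of the loop: gaze references, the gesture triggers."""
    target = scene.raycast(gaze_dir)   # gaze continuously references a target
    if pointing_detected:              # mid-air pointing acts as the trigger
        return target
    return None                        # exploration only: no selection

scene = Scene([Obj("lamp"), Obj("door"), Obj("window")])
assert eye_pointing_step(1, False, scene) is None        # gaze only: no selection
assert eye_pointing_step(1, True, scene).name == "door"  # gesture confirms selection
```

The key design point this illustrates is that the confirmation channel (the hand) is separate from the referencing channel (the eyes), so dwelling on an object during visual exploration never causes an accidental selection.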
REFERENCES
- Ferran Argelaguet, Carlos Andujar, and Ramon Trueba. 2008. Overcoming Eye-hand Visibility Mismatch in 3D Pointing Selection. In Proceedings of the 2008 ACM Symposium on Virtual Reality Software and Technology (VRST '08). ACM, New York, NY, USA.
- Richard A. Bolt. 1980. Put-that-there: Voice and Gesture at the Graphics Interface. In Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '80). ACM, New York, NY, USA.
- Dong-Chan Cho and Whoi-Yul Kim. 2013. Long-Range Gaze Tracking System for Large Movements. IEEE Transactions on Biomedical Engineering (Dec 2013).
- Andrea Corradini and Philip R. Cohen. 2002. Multimodal speech-gesture interface for handfree painting on a virtual paper using partial recurrent neural networks as gesture recognizer.
- Connor Dickie, Jamie Hart, Roel Vertegaal, and Alex Eiser. 2006. LookPoint: An Evaluation of Eye Input for Hands-free Switching of Input Devices Between Multiple Computers. In Proceedings of the 18th Australia Conference on Computer-Human Interaction: Design: Activities, Artefacts and Environments (OZCHI '06). ACM, New York, NY, USA.
- Heiko Drewes and Albrecht Schmidt. 2009. The MAGIC Touch: Combining MAGIC-Pointing with a Touch-Sensitive Mouse. In Human-Computer Interaction (INTERACT '09). Springer, Berlin, Heidelberg.
- Andreas Fender, Philipp Herholz, Marc Alexa, and Jörg Müller. 2018. OptiSpace: Automated Placement of Interactive 3D Projection Mapping Content. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 269.
- Juergen Gall, Carsten Stoll, Edilson de Aguiar, Christian Theobalt, Bodo Rosenhahn, and Hans-Peter Seidel. 2009. Motion capture using joint skeleton tracking and surface estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR '09).
- Craig Hennessey and Jacob Fiset. 2012. Long Range Eye Tracking: Bringing Eye Tracking into the Living Room. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '12). ACM, New York, NY, USA.
- Robert J. K. Jacob. 1995. Eye tracking in advanced interface design. Virtual Environments and Advanced Interface Design (1995).
- Mohamed Khamis, Axel Hoesl, Alexander Klimczak, Martin Reiss, Florian Alt, and Andreas Bulling. 2017. EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST '17). ACM, New York, NY, USA.
- Alfred Kranstedt, Andy Lücking, Thies Pfeiffer, Hannes Rieser, and Marc Staudacher. 2006. Measuring and Reconstructing Pointing in Visual Contexts. In Proceedings of the 10th Workshop on the Semantics and Pragmatics of Dialogue (brandial '06). Universitätsverlag Potsdam, Potsdam.
- Mikko Kytö, Barrett Ens, Thammathip Piumsomboon, Gun A. Lee, and Mark Billinghurst. 2018. Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 81.
- Lars Lischke, Valentin Schwind, Kai Friedrich, Albrecht Schmidt, and Niels Henze. 2016. MAGIC-Pointing on Large High-Resolution Displays. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16). ACM, New York, NY, USA.
- Sven Mayer, Valentin Schwind, Robin Schweigert, and Niels Henze. 2018. The Effect of Offset Correction and Cursor on Mid-Air Pointing in Real and Virtual Environments. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18). ACM, New York, NY, USA, Article 653.
- Sven Mayer, Katrin Wolf, Stefan Schneegass, and Niels Henze. 2015. Modeling Distant Pointing for Compensating Systematic Displacements. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA.
- Mark R. Mine. 1995. Virtual Environment Interaction Techniques. Technical Report. University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
- Mark R. Mine, Frederick P. Brooks Jr., and Carlo H. Sequin. 1997. Moving Objects in Space: Exploiting Proprioception in Virtual-environment Interaction. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '97). ACM Press/Addison-Wesley Publishing Co., New York, NY, USA.
- Kai Nickel and Rainer Stiefelhagen. 2003. Pointing Gesture Recognition Based on 3D-tracking of Face, Hands and Head Orientation. In Proceedings of the 5th International Conference on Multimodal Interfaces (ICMI '03). ACM, New York, NY, USA.
- Jeffrey S. Pierce, Andrew S. Forsberg, Matthew J. Conway, Seung Hong, Robert C. Zeleznik, and Mark R. Mine. 1997. Image Plane Interaction Techniques in 3D Immersive Environments. In Proceedings of the 1997 Symposium on Interactive 3D Graphics (I3D '97). ACM, New York, NY, USA.
- A. Poston. 2000. Human Engineering Design Data Digest. Department of Defense Human Factors Engineering Technical Advisory Group, Washington, DC.
- Felix Schüssel, Johannes Bäurle, Simon Kotzka, Michael Weber, Ferdinand Pittino, and Anke Huckauf. 2016. Design and Evaluation of a Gaze Tracking System for Free-space Interaction. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct (UbiComp '16). ACM, New York, NY, USA.
- Loren Arthur Schwarz, Artashes Mkhitaryan, Diana Mateus, and Nassir Navab. 2012. Human Skeleton Tracking from Depth Data using Geodesic Distances and Optical Flow. Image and Vision Computing (2012).
- Jayson Turner, Jason Alexander, Andreas Bulling, and Hans Gellersen. 2015. Gaze+RST: Integrating Gaze and Multitouch for Remote Rotate-Scale-Translate Tasks. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA.
- Roel Vertegaal. 2008. A Fitts Law Comparison of Eye Tracking and Manual Input in the Selection of Visual Targets. In Proceedings of the 10th International Conference on Multimodal Interfaces (ICMI '08). ACM, New York, NY, USA.
- Daniel Vogel and Ravin Balakrishnan. 2005. Distant Freehand Pointing and Clicking on Very Large, High Resolution Displays. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology (UIST '05). ACM, New York, NY, USA.
- Shumin Zhai, Carlos Morimoto, and Steven Ihde. 1999. Manual and Gaze Input Cascaded (MAGIC) Pointing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '99). ACM, New York, NY, USA.
- Xucong Zhang, Yusuke Sugano, and Andreas Bulling. 2019. Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). ACM, New York, NY, USA.
INDEX TERMS
- EyePointing: A Gaze-Based Selection Technique