Improving User Experience of Eye Tracking-Based Interaction: Introspecting and Adapting Interfaces

Published: 02 November 2019

Abstract

Eye tracking systems have greatly improved in recent years, making them a viable and affordable option as a digital communication channel, especially for people lacking fine motor skills. Using eye tracking as an input method is challenging due to accuracy and ambiguity issues, and therefore research in eye gaze interaction is mainly focused on better pointing and typing methods. However, these methods eventually need to be assimilated to enable users to control application interfaces. A common approach to employing eye tracking for controlling application interfaces is to emulate mouse and keyboard functionality. We argue that the emulation approach incurs unnecessary interaction and visual overhead for users, aggravating the entire experience of gaze-based computer access. We discuss how knowledge about the interface semantics can help reduce this interaction and visual overhead and improve the user experience. Thus, we propose the efficient introspection of interfaces to retrieve the interface semantics and adapt the interaction with eye gaze. We have developed a Web browser, GazeTheWeb, that introspects Web page interfaces and adapts both the browser interface and the interaction elements on Web pages for gaze input. In a summative lab study with 20 participants, GazeTheWeb allowed participants to accomplish information search and browsing tasks significantly faster than an emulation approach. Additional feasibility tests of GazeTheWeb in lab and home environments showcase its effectiveness in accomplishing daily Web browsing activities and adapting a large variety of modern Web pages to support interaction for people with motor impairments.
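To make the introspection idea concrete, the sketch below shows one way a script injected into a page could enumerate interactive DOM elements and report their semantics and geometry to a gaze-controlled browser. This is only an illustrative assumption, not GazeTheWeb's actual implementation; the GazeTarget type, the classify helper, and the element selectors are hypothetical.

// Illustrative sketch (assumed, not the paper's implementation): collect the
// interactive elements of a Web page together with their type and geometry,
// so that a gaze-controlled browser can adapt its overlays to them.

interface GazeTarget {
  kind: "link" | "text-input" | "button";
  label: string;
  rect: { x: number; y: number; width: number; height: number };
}

// Hypothetical helper: map a DOM element to the interaction it affords.
function classify(el: Element): GazeTarget["kind"] | null {
  if (el instanceof HTMLAnchorElement) return "link";
  if (el instanceof HTMLInputElement &&
      ["text", "search", "email", "password", "url"].includes(el.type)) return "text-input";
  if (el instanceof HTMLButtonElement) return "button";
  return null;
}

// Walk the document and report visible interactive elements in page coordinates.
export function introspectPage(doc: Document): GazeTarget[] {
  const targets: GazeTarget[] = [];
  for (const el of Array.from(doc.querySelectorAll("a, input, button"))) {
    const kind = classify(el);
    const r = el.getBoundingClientRect();
    if (!kind || r.width === 0 || r.height === 0) continue; // skip unsupported or hidden elements
    targets.push({
      kind,
      label: (el.textContent || el.getAttribute("aria-label") || "").trim(),
      rect: {
        x: r.left + window.scrollX,
        y: r.top + window.scrollY,
        width: r.width,
        height: r.height,
      },
    });
  }
  return targets;
}

A gaze layer could then, for example, enlarge the reported rectangles for more reliable dwell-based selection, or open a gaze keyboard when a target of kind "text-input" is fixated, which is the kind of interface adaptation the abstract describes.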


Published in

ACM Transactions on Computer-Human Interaction, Volume 26, Issue 6
December 2019
230 pages
ISSN: 1073-0516
EISSN: 1557-7325
DOI: 10.1145/3371148

            Copyright © 2019 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            • Published: 2 November 2019
            • Revised: 1 May 2019
            • Accepted: 1 May 2019
            • Received: 1 August 2018


            Qualifiers

            • research-article
            • Research
            • Refereed
