ABSTRACT
Mid-air hand gestural interaction has generally been researched as a pointing modality. However, recent research has shown its potential for text input using word-gesture keyboards (WGK), which require the input system to identify when a gesture starts and when it stops. Previous work succeeded with a unimanual approach, in which the same hand both moves the cursor and performs the activation gesture. In this paper we introduce a bimanual approach to WGK text input, in which one hand moves the cursor while the other performs the activation. In our user studies, the bimanual method significantly outperformed the state-of-the-art single-handed method: we achieved 16 words per minute, about 39% higher than the benchmark, with significantly lower error rates.
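The core mechanism the abstract describes is gesture delimitation: the non-dominant hand's activation state gates whether the dominant hand's cursor movement is recorded as a word-gesture trace. The following is a minimal sketch of that idea, assuming a hypothetical frame format (cursor position plus a boolean "pinch" activation state); the names and structure are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    cursor: tuple   # (x, y) position from the dominant (pointing) hand
    pinch: bool     # activation state of the non-dominant hand

@dataclass
class BimanualDelimiter:
    tracing: bool = False
    trace: list = field(default_factory=list)

    def update(self, frame: Frame):
        """Feed one tracking frame; return a finished trace or None."""
        if frame.pinch:
            if not self.tracing:             # activation down: gesture starts
                self.tracing = True
                self.trace = []
            self.trace.append(frame.cursor)  # record while activated
            return None
        if self.tracing:                     # activation up: gesture ends
            self.tracing = False
            return self.trace
        return None

# Usage: stream tracking frames; each completed trace would then be
# passed to the WGK recognizer for word decoding.
d = BimanualDelimiter()
frames = [Frame((0, 0), False), Frame((1, 1), True),
          Frame((2, 1), True), Frame((3, 2), False)]
words = [t for t in (d.update(f) for f in frames) if t is not None]
# words == [[(1, 1), (2, 1)]]
```

Because activation is delegated to the other hand, the pointing hand's trace is not contaminated by a start/stop posture, which is one plausible reason a bimanual split could reduce error rates.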
Bimanual Word Gesture Keyboards for Mid-air Gestures