Abstract
This article presents Gesture Interaction DEsigner (GIDE), an innovative application for gesture recognition. Rather than recognizing a gesture only after it has been entirely completed, as classic gesture recognition systems do, GIDE exploits the full potential of gestural interaction by tracking gestures continuously and synchronously, allowing users both to control the target application moment to moment and to receive immediate, synchronous feedback about the system's recognition states. In this way, users quickly learn how to interact with the system and improve their performance. Furthermore, rather than learning gestures predefined by others, GIDE allows users to design their own gestures, making interaction more natural and allowing applications to be tailored to users' specific needs. We describe our system, whose qualities combine to provide fluid gesture interaction design, and evaluate it with a range of performers and artists.
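The moment-to-moment recognition the abstract describes is commonly implemented with a left-to-right hidden Markov model whose states track progress through a recorded template, updated by the forward algorithm as each sample arrives. The sketch below illustrates that general idea and is not GIDE's actual implementation: the class name, parameters, and one-dimensional observations are all assumptions made for the example.

```python
import math

class GestureFollower:
    """Minimal continuous gesture follower: a left-to-right HMM with one
    state per sample of a single recorded template (1-D for simplicity).
    Each incoming frame updates the forward probabilities, so the caller
    receives progress feedback while the gesture is still being performed,
    instead of a verdict only after it ends."""

    def __init__(self, template, sigma=0.2, p_stay=0.5, p_next=0.5):
        self.template = template              # the user's recorded example
        self.sigma = sigma                    # observation noise (std dev)
        self.p_stay, self.p_next = p_stay, p_next
        # Gestures are assumed to start at the first state of the template.
        self.alpha = [1.0] + [0.0] * (len(template) - 1)

    def _emit(self, obs, mean):
        # Unnormalized Gaussian likelihood of observing `obs` in a state.
        d = (obs - mean) / self.sigma
        return math.exp(-0.5 * d * d)

    def step(self, obs):
        """Consume one sample; return (progress in [0, 1], frame likelihood)."""
        n = len(self.template)
        new = [0.0] * n
        for i in range(n):
            p = self.alpha[i] * self.p_stay           # stay in state i
            if i > 0:
                p += self.alpha[i - 1] * self.p_next  # advance from i-1
            new[i] = p * self._emit(obs, self.template[i])
        total = sum(new) or 1e-300
        self.alpha = [p / total for p in new]          # normalize
        best = max(range(n), key=lambda i: self.alpha[i])
        return best / (n - 1), total
```

Feeding the template's own samples back in makes the reported progress climb from 0.0 to 1.0 frame by frame; a real system would use multi-dimensional motion features and run one such model per recorded gesture, comparing their likelihoods to decide which gesture is being performed.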
Fluid gesture interaction design: Applications of continuous recognition for the design of modern gestural interfaces