Research article

Fluid gesture interaction design: Applications of continuous recognition for the design of modern gestural interfaces

Published: 01 January 2014

Abstract

This article presents Gesture Interaction DEsigner (GIDE), an innovative application for gesture recognition. Instead of recognizing gestures only after they have been entirely completed, as happens in classic gesture recognition systems, GIDE exploits the full potential of gestural interaction by tracking gestures continuously and synchronously, allowing users both to control the target application moment to moment and to receive immediate, synchronous feedback about the system's recognition states. In this way, users quickly learn how to interact with the system and improve their performance. Furthermore, rather than learning the predefined gestures of others, GIDE allows users to design their own gestures, making interaction more natural and allowing applications to be tailored to users' specific needs. We describe our system, which demonstrates these new qualities—combining to provide fluid gesture interaction design—through evaluations with a range of performers and artists.
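The abstract's central idea—scoring a partially performed gesture against recorded examples frame by frame, so the system can report both the most likely gesture and the performer's progress through it before the gesture ends—can be sketched with a simple incremental dynamic-time-warping follower. This is an illustrative stand-in, not GIDE's actual recognition algorithm; the template format (sequences of 2-D points) and all names below are assumptions for the sketch.

```python
import math

def follow(templates, stream):
    """After every incoming sample, report which recorded template best
    matches the gesture performed so far and the performer's progress
    through it (fraction of the template consumed)."""
    # One running DTW cost row per template; row[j] holds the cheapest
    # alignment of the input seen so far against the first j template frames.
    rows = {name: [0.0] + [math.inf] * len(pts) for name, pts in templates.items()}
    reports = []
    for frame in stream:
        best = (math.inf, None, 0.0)  # (cost, template name, progress)
        for name, pts in templates.items():
            prev, new = rows[name], [math.inf] * (len(pts) + 1)
            for j, p in enumerate(pts, start=1):
                d = math.dist(frame, p)  # local distance to template frame j
                new[j] = d + min(prev[j], prev[j - 1], new[j - 1])
            rows[name] = new
            # Best partial alignment endpoint tells us how far along we are.
            j_best = min(range(1, len(pts) + 1), key=new.__getitem__)
            cost, progress = new[j_best], j_best / len(pts)
            if cost < best[0]:
                best = (cost, name, progress)
        reports.append((best[1], best[2]))
    return reports

# Hypothetical templates: two straight four-frame strokes.
templates = {"swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
             "swipe_up":    [(0, 0), (0, 1), (0, 2), (0, 3)]}
```

A follower like this emits a report on every sample, so `follow(templates, [(0, 0), (1, 0)])` already identifies `swipe_right` halfway through the gesture—the kind of moment-to-moment feedback the abstract describes, as opposed to a classifier that answers only once the gesture is complete.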



Published in

ACM Transactions on Interactive Intelligent Systems, Volume 3, Issue 4 (January 2014), 184 pages
ISSN: 2160-6455
EISSN: 2160-6463
DOI: 10.1145/2567808

Copyright © 2014 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Published: 1 January 2014
• Accepted: 1 June 2013
• Revised: 1 September 2012
• Received: 1 March 2012