
Bi-Level Thresholding: Analyzing the Effect of Repeated Errors in Gesture Input

Published: 02 April 2019

Abstract

In gesture recognition, one challenge that researchers and developers face is the need for recognition strategies that mediate between false positives and false negatives. In this article, we examine bi-level thresholding, a recognition strategy that uses two thresholds: a tighter threshold limits false positives and recognition errors, and a looser threshold prevents repeated errors (false negatives) by analyzing movements in sequence. We first describe early observations that led to the development of the bi-level thresholding algorithm. Next, using a Wizard-of-Oz recognizer, we hold recognition rates constant and compare fixed versus bi-level thresholding; we show that systems using bi-level thresholding result in significantly lower workload scores on the NASA-TLX and significantly lower accelerometer variance when performing gesture input. Finally, we examine the effect that bi-level thresholding has on a real-world dataset of wrist and finger gestures, showing an ability to significantly improve measures of precision and recall. Overall, these results argue for the viability of bi-level thresholding as an effective technique for balancing false positives, recognition errors, and false negatives.
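
The two-threshold mechanism the abstract describes can be summarized in a few lines of code. Below is a minimal sketch, assuming a recognizer that emits a confidence score in [0, 1]; the class name, threshold values, and reset logic are illustrative assumptions rather than the authors' implementation, and the sketch omits details of the published algorithm such as the time window and the similarity comparison between consecutive movements.

    from dataclasses import dataclass

    @dataclass
    class BiLevelThresholder:
        # Threshold values are illustrative, not taken from the paper.
        tight: float = 0.90       # strict threshold: limits false positives
        loose: float = 0.75       # relaxed threshold: applied after a near miss
        _near_miss: bool = False  # did the previous attempt land between the two?

        def accept(self, score: float) -> bool:
            """Accept or reject one gesture attempt with the given confidence."""
            threshold = self.loose if self._near_miss else self.tight
            accepted = score >= threshold
            # A rejected attempt that still clears the loose bound is a near
            # miss, so the looser threshold applies to the next attempt.
            self._near_miss = (not accepted) and (score >= self.loose)
            return accepted

    # A repeated near-miss gesture is rejected once, then accepted:
    r = BiLevelThresholder()
    print(r.accept(0.82))  # False: below the tight bound, flagged as a near miss
    print(r.accept(0.82))  # True: accepted under the looser threshold

The point of keeping two thresholds rather than simply lowering one is that the tight threshold continues to govern first attempts, so everyday movement is still filtered aggressively; only a deliberate repetition benefits from the relaxed bound.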



          Reviews

          Dominik Strzalka

          Ensuring a proper, easy, and convenient interface between human and machine has been one of the most challenging issues since the first machines were developed. In the beginning, when simple machines responded to direct human hand interaction through levers, things were rather easy. Consider pincers, for example: human error, like using too little force or gripping the wrong handle, did not usually lead to serious consequences. As machines became more and more complicated, and pressing one button could trigger a series of actions, this was no longer so obvious. The first simple human-computer interfaces, based on keyboards and monitors, made it possible to correct what was wrongly typed at the prompt or, once "enter" was pressed, to see the system's response. A wrong command led to a syntax error or a question presented on the screen: are you sure? When graphical user interfaces (GUIs) became more popular and the mouse came into use, our senses had to learn a new kind of interface; for many, it took some time to coordinate the hand and the mouse pointer on the screen without looking at the hand controlling the mouse. How many wrong movements and wrong clicks were the result? Next came touch interfaces, especially on mobile devices, where many people are afraid of touching a "wrong" button.

          Now we are expected to use voice and gesture-based interfaces, but such sophisticated solutions always raise issues of reliability and gesture recognition. Several challenges remain open, but the most important is: "How can one discriminate everyday movement from intentional movement?" This leads to problems of false positives ("systems respond without the user intending to invoke a command"), false negatives (a system does not respond despite a correct gesture command), and "recognizer accuracy ... expressed in terms of recall and precision." Such cases can lead to very costly actions. On the other hand, if a system is too sensitive, every movement can fire actions and thus lead to the "Midas touch effect."

          As explained in the paper, "the typical approach to balancing precision and recall [is based on] thresholds, which ensure that only input that is sufficiently close to an individual category is recognized." The authors instead propose a modified recognition strategy called bi-level thresholding, which treats interaction as a sequence of user actions: if a gesture is slightly incorrect, the system examines the next input, and if it is similar (a near miss), the gesture is recognized. In their experiments with five gestures, around two-thirds of the attempts were recognized under the bi-level threshold model. The paper presents experimental data based on a bi-level thresholding state diagram (figure 8), with background on the bi-level threshold recognizer (figure 1). Finally, the authors note that even when a user's gesture is a near miss, such a situation also provides "valuable information which can be used to enhance the perceived reliability of recognition-based interactions."
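
          For reference, the precision and recall measures the review cites are the standard information-retrieval definitions (this formulation is added here for clarity, not quoted from the paper). With TP, FP, and FN denoting true positives, false positives, and false negatives:

              \mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN}

          Lowering a single fixed threshold raises recall only by sacrificing precision; the bi-level strategy instead keeps the tight threshold for first attempts and relaxes it only for near-miss repetitions, which is how the authors report improving both measures on their real-world dataset.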


          • Published in

            ACM Transactions on Interactive Intelligent Systems, Volume 9, Issue 2-3
            Special Issue on Highlights of ACM IUI 2017
            September 2019, 324 pages
            ISSN: 2160-6455
            EISSN: 2160-6463
            DOI: 10.1145/3320251

            Copyright © 2019 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            • Published: 2 April 2019
            • Accepted: 1 January 2018
            • Revised: 1 October 2017
            • Received: 1 June 2017
            Published in TiiS Volume 9, Issue 2-3


            Qualifiers

            • research-article
            • Research
            • Refereed
