Flexi-Modal and Multi-Machine User Interfaces

ABSTRACT
We describe a system that facilitates collaboration using multiple modalities: speech, handwriting, gestures, gaze tracking, direct manipulation, large projected touch-sensitive displays, laser pointer tracking, conventional monitors with mouse and keyboard, and wirelessly networked handhelds. The system allows multiple, geographically dispersed participants to simultaneously and flexibly mix these modalities, using the right interface at the right time on one or more machines. This paper discusses each of the modalities provided, how they are integrated in the system architecture, and how the user interface enables one or more people to flexibly use one or more devices.
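The abstract does not spell out the integration mechanism, but one common way to let heterogeneous devices and modalities drive a shared workspace is to normalize every input into a common event record and route it through a dispatcher. The sketch below illustrates that pattern only; all names (`InputEvent`, `ModalityDispatcher`, the modality strings) are hypothetical and not taken from the paper's actual architecture.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical event record: each modality (speech, pen, laser pointer,
# handheld tap, ...) is normalized into this shape before dispatch.
@dataclass
class InputEvent:
    modality: str    # e.g. "speech", "laser", "handheld"
    user: str        # which participant produced the event
    payload: object  # modality-specific data (recognized text, coordinates, ...)

class ModalityDispatcher:
    """Routes normalized input events to handlers, regardless of which
    device or machine the event originated on."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[InputEvent], None]]] = {}

    def subscribe(self, modality: str, handler: Callable[[InputEvent], None]) -> None:
        # Register a handler for one modality; several handlers may coexist.
        self._handlers.setdefault(modality, []).append(handler)

    def publish(self, event: InputEvent) -> None:
        # Fan the event out to every handler registered for its modality.
        for handler in self._handlers.get(event.modality, []):
            handler(event)

# Usage: two participants on different devices drive the same shared log.
log: List[str] = []
bus = ModalityDispatcher()
bus.subscribe("speech", lambda e: log.append(f"{e.user} said {e.payload!r}"))
bus.subscribe("laser", lambda e: log.append(f"{e.user} pointed at {e.payload}"))

bus.publish(InputEvent("speech", "alice", "open the map"))
bus.publish(InputEvent("laser", "bob", (120, 45)))
```

This decouples input capture (which may run on a handheld, a wall display, or a desktop) from the application logic that consumes it, which is one plausible reading of how mixed modalities can share a single interface.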