DOI: 10.1109/ICMI.2002.1167019

Flexi-Modal and Multi-Machine User Interfaces

Published: 14 October 2002

ABSTRACT

We describe a system that facilitates collaboration using multiple modalities, including speech, handwriting, gestures, gaze tracking, direct manipulation, large projected touch-sensitive displays, laser pointer tracking, regular monitors with a mouse and keyboard, and wirelessly networked handhelds. The system allows multiple, geographically dispersed participants to flexibly mix different modalities at the same time, using the right interface at the right moment on one or more machines. This paper discusses each of the modalities provided, how they are integrated in the system architecture, and how the user interface enables one or more people to flexibly use one or more devices.
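As a rough, hedged illustration only (not the architecture described in the paper), the sketch below shows one common way such flexi-modal input can be funneled into a shared application: each modality on each machine translates its raw input into a normalized event, and a single dispatcher routes those events to shared command handlers. All names here (ModalEvent, Dispatcher, the example modalities and hosts) are hypothetical.

```python
# Hypothetical sketch: normalizing input from several modalities and machines
# into one event stream that a single dispatcher routes to shared commands.
# The names and structure are illustrative assumptions, not the paper's design.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class ModalEvent:
    """One user action, regardless of which modality or machine produced it."""
    modality: str          # e.g. "speech", "laser", "handheld", "mouse"
    machine: str           # which device/host generated the event
    command: str           # normalized command name, e.g. "select", "move"
    args: dict = field(default_factory=dict)


class Dispatcher:
    """Routes normalized events from any modality to shared command handlers."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[ModalEvent], None]]] = {}

    def register(self, command: str, handler: Callable[[ModalEvent], None]) -> None:
        self._handlers.setdefault(command, []).append(handler)

    def dispatch(self, event: ModalEvent) -> None:
        for handler in self._handlers.get(event.command, []):
            handler(event)


if __name__ == "__main__":
    dispatcher = Dispatcher()
    dispatcher.register(
        "select",
        lambda e: print(f"select {e.args['item']} via {e.modality} on {e.machine}"),
    )

    # The same shared command can arrive from different modalities and machines.
    dispatcher.dispatch(ModalEvent("speech", "meeting-room-pc", "select", {"item": "slide 3"}))
    dispatcher.dispatch(ModalEvent("laser", "projector-host", "select", {"item": "slide 3"}))
    dispatcher.dispatch(ModalEvent("handheld", "pda-17", "select", {"item": "slide 3"}))
```

The point of this kind of arrangement is that each modality only has to map its raw input onto a shared command vocabulary; everything downstream of the dispatcher can remain modality- and machine-agnostic.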
