DOI: 10.1145/1111449.1111481
Article

A conceptual framework for developing adaptive multimodal applications

Published: 29 January 2006

Abstract

This article presents FAME, a model-based Framework for Adaptive Multimodal Environments. FAME comprises an architecture for adaptive multimodal applications, a new way to represent adaptation rules (the behavioral matrix), and a set of guidelines to assist the design of such applications. To demonstrate FAME's validity, the development process of an adaptive Digital Talking Book player is summarized.
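The abstract does not detail the behavioral matrix, but its stated role (a representation for adaptation rules) can be illustrated with a minimal sketch: contexts as rows, modality weights as columns. All names and weights below are hypothetical, chosen only to make the idea concrete; they are not FAME's actual rules or API.

```python
# Hypothetical behavioral-matrix sketch: each interaction context
# (environment noise, whether the user is visually impaired) maps to
# weights over output modalities. Values are illustrative only.
ADAPTATION_RULES = {
    ("quiet", False): {"visual": 0.7, "audio": 0.3},
    ("noisy", False): {"visual": 0.9, "audio": 0.1},
    ("quiet", True):  {"visual": 0.1, "audio": 0.9},
    ("noisy", True):  {"visual": 0.3, "audio": 0.7},
}

def select_modality(noise_level: str, visually_impaired: bool) -> str:
    """Return the output modality with the highest weight for this context."""
    weights = ADAPTATION_RULES[(noise_level, visually_impaired)]
    return max(weights, key=weights.get)

# A Digital Talking Book player could consult such a matrix to decide
# whether to render content visually or via synthesized speech.
print(select_modality("quiet", True))   # favors audio for this context
```

In an actual adaptive system the weights would be updated from observed user behavior rather than fixed, which is presumably where the "behavioral" in behavioral matrix comes from.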





Published In

IUI '06: Proceedings of the 11th international conference on Intelligent user interfaces
January 2006
392 pages
ISBN:1595932879
DOI:10.1145/1111449
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. IUI design
  2. adaptive multimodal interfaces
  3. behavioral matrix
  4. digital talking books

Qualifiers

  • Article

Conference

IUI06: 11th International Conference on Intelligent User Interfaces
January 29 - February 1, 2006
Sydney, Australia

Acceptance Rates

Overall Acceptance Rate 746 of 2,811 submissions, 27%


Article Metrics

  • Downloads (last 12 months): 16
  • Downloads (last 6 weeks): 2
Reflects downloads up to 20 Feb 2025

Cited By

  • (2024) Gaze-dependent response activation in dialogue agent for cognitive-behavioral therapy. Procedia Computer Science, 246, 2322–2331. DOI: 10.1016/j.procs.2024.09.554
  • (2024) A Multimodal Approach for Improving a Dialogue Agent for Therapeutic Sessions in Psychiatry. Transforming Media Accessibility in Europe, 397–414. DOI: 10.1007/978-3-031-60049-4_22
  • (2022) Does Using Voice Authentication in Multimodal Systems Correlate With Increased Speech Interaction During Non-critical Routine Tasks? Proceedings of the 27th International Conference on Intelligent User Interfaces, 868–877. DOI: 10.1145/3490099.3511129
  • (2021) Adaptive user interfaces and universal usability through plasticity of user interface design. Computer Science Review, 40(C). DOI: 10.1016/j.cosrev.2021.100363
  • (2021) A mechanism for blind-friendly user interface adaptation of mobile apps: a case study for improving the user experience of the blind people. Journal of Ambient Intelligence and Humanized Computing, 13(5), 2841–2871. DOI: 10.1007/s12652-021-03393-5
  • (2019) Development process for intelligent user interfaces. Proceedings of the XVIII Brazilian Symposium on Software Quality, 210–215. DOI: 10.1145/3364641.3364665
  • (2018) The HMI digital ecosystem. Proceedings of the 10th International Conference on Management of Digital EcoSystems, 157–164. DOI: 10.1145/3281375.3281397
  • (2017) Self-adaptive unobtrusive interactions of mobile computing systems. Journal of Ambient Intelligence and Smart Environments, 9(6), 659–688. DOI: 10.3233/AIS-170463
  • (2017) Adapt-UI. Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems, 99–104. DOI: 10.1145/3102113.3102144
  • (2017) Adaptive multimodal interaction in mobile augmented reality: A conceptual framework. DOI: 10.1063/1.5005483
