DOI: 10.1145/1088463.1088499
Article

Synthetic characters as multichannel interfaces

Published: 04 October 2005

Abstract

Synthetic characters are an effective modality for conveying messages to the user, providing visual feedback about the system's internal understanding of the communication, and engaging the user in the dialogue through emotional involvement. In this paper we argue for a fine-grained distinction of the expressive capabilities of synthetic agents: avatars should not be considered an indivisible modality but the synergistic contribution of different communication channels that, properly synchronized, generate an overall communication performance. In this view, we propose SMIL-AGENT as a representation and scripting language for synthetic characters, which abstracts away from the specific implementation and context of use of the character. SMIL-AGENT has been defined starting from the SMIL 1.0 standard specification and aims at providing a high-level standardized language for presentations by different synthetic agents within diverse communication and application contexts.
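To make the channel-based view concrete, the following is a hypothetical sketch of what a SMIL-AGENT-style script might look like, borrowing the `par` (parallel) grouping idiom from SMIL. The element and attribute names used here (`speech`, `face`, `gesture`, `channel`, `emotion`) are illustrative assumptions, not the actual vocabulary of the SMIL-AGENT 0.1 specification.

```xml
<!-- Hypothetical SMIL-AGENT-style script; element and attribute
     names are illustrative, not the SMIL-AGENT 0.1 vocabulary. -->
<smil-agent>
  <body>
    <!-- Independent communication channels, synchronized in parallel
         to form a single overall communication performance. -->
    <par>
      <speech channel="voice">Welcome to the exhibition.</speech>
      <face channel="expression" emotion="happy" begin="0s" dur="2s"/>
      <gesture channel="body" name="greeting" begin="0.5s"/>
    </par>
  </body>
</smil-agent>
```

The point of the sketch is the abstraction the paper argues for: each channel is scripted separately, and the synchronization markup, rather than any one channel, defines the performance, independently of how a particular character implementation renders it.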



Published In

ICMI '05: Proceedings of the 7th international conference on Multimodal interfaces
October 2005
344 pages
ISBN:1595930280
DOI:10.1145/1088463

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. SMIL
  2. multimodal presentations
  3. synthetic characters


Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%


Cited By

  • (2024) Exploratory Visual Digital Character and Visual Digital Scene Design Using Artmaking Generative AI. Making Art With Generative AI Tools, 10.4018/979-8-3693-1950-5.ch006, 115-129. Online publication date: 19-Apr-2024.
  • (2010) An extensible framework for interactive facial animation with facial expressions, lip synchronization and eye behavior. Computers in Entertainment, 7(4), 10.1145/1658866.1658877, 1-20. Online publication date: 1-Jan-2010.
  • (2007) Modeling of layered fuzzy facial expression generation. Proceedings of the 1st international conference on Digital human modeling, 10.5555/1784074.1784104, 243-252. Online publication date: 22-Jul-2007.
  • (2007) ECAF. Proceedings of the 10th international conference on Text, speech and dialogue, 10.5555/1776334.1776364, 206-213. Online publication date: 3-Sep-2007.
  • (2007) Xface open source project and smil-agent scripting language for creating and animating embodied conversational agents. Proceedings of the 15th ACM international conference on Multimedia, 10.1145/1291233.1291453, 1013-1016. Online publication date: 29-Sep-2007.
  • (2007) A Multimodal 3D Healthcare Communication System. 2007 3DTV Conference, 10.1109/3DTV.2007.4379488, 1-4. Online publication date: May-2007.
  • (2007) ECAF: Authoring Language for Embodied Conversational Agents. Text, Speech and Dialogue, 10.1007/978-3-540-74628-7_28, 206-213. Online publication date: 2007.
  • (2007) Modeling of Layered Fuzzy Facial Expression Generation. Digital Human Modeling, 10.1007/978-3-540-73321-8_29, 243-252. Online publication date: 2007.
  • (2006) XfaceEd: MPEG-4 Based 3D Facial Animation Tool. 2006 IEEE 14th Signal Processing and Communications Applications, 10.1109/SIU.2006.1659913, 1-4. Online publication date: 2006.
  • (2005) XfaceEd. Proceedings of the 7th international conference on Multimodal interfaces, 10.1145/1088463.1088500, 208-213. Online publication date: 4-Oct-2005.
