DOI: 10.5555/1402383.1402409

SmartBody: behavior realization for embodied conversational agents

Published: 12 May 2008

Abstract

Researchers demand much from their embodied conversational agents (ECAs), requiring them to be both life-like and responsive to events in an interactive setting. We find that a flexible combination of animation approaches may be needed to satisfy these needs. In this paper we present SmartBody, an open source modular framework for animating ECAs in real time, based on the notion of hierarchically connected animation controllers. Controllers in SmartBody can employ arbitrary animation algorithms such as keyframe interpolation, motion capture or procedural animation. Controllers can also schedule or combine other controllers. We discuss our architecture in detail, including how we incorporate traditional approaches, and develop the notion of a controller as a reactive module within a generic framework for realizing modular animation control. To illustrate the versatility of the architecture, we also discuss a range of applications that have used SmartBody successfully.
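To make the controller idea concrete, the following is a minimal Python sketch of the composition pattern the abstract describes. It is an assumption-laden illustration, not the SmartBody API (SmartBody itself is a C++ framework, and every class and method name below is hypothetical): leaf controllers produce poses from animation data, while meta-controllers schedule or blend the output of other controllers.

    # Illustrative sketch only: hypothetical names, not the SmartBody C++ API.
    from abc import ABC, abstractmethod

    class Controller(ABC):
        """A reactive animation module: given the current time and the incoming
        joint pose, return a (possibly modified) output pose."""
        @abstractmethod
        def evaluate(self, t: float, pose: dict) -> dict: ...

    class KeyframeController(Controller):
        """Leaf controller: plays back keyframed joint values (placeholder data)."""
        def __init__(self, keyframes):
            self.keyframes = keyframes  # {time: {joint_name: value}}
        def evaluate(self, t, pose):
            past = [k for k in sorted(self.keyframes) if k <= t]
            return {**pose, **self.keyframes[past[-1]]} if past else pose

    class BlendController(Controller):
        """Meta-controller: combines two child controllers by weighted blending."""
        def __init__(self, a, b, weight):
            self.a, self.b, self.weight = a, b, weight
        def evaluate(self, t, pose):
            pa, pb = self.a.evaluate(t, pose), self.b.evaluate(t, pose)
            return {j: (1 - self.weight) * pa.get(j, 0.0) + self.weight * pb.get(j, 0.0)
                    for j in set(pa) | set(pb)}

    class ScheduleController(Controller):
        """Meta-controller: runs child controllers inside their time windows."""
        def __init__(self):
            self.tracks = []  # list of (start, end, controller)
        def schedule(self, start, end, controller):
            self.tracks.append((start, end, controller))
        def evaluate(self, t, pose):
            for start, end, child in self.tracks:
                if start <= t <= end:
                    pose = child.evaluate(t - start, pose)
            return pose

    # Usage: schedule a blended gesture, then query the hierarchy at t = 1.2 s.
    clip = KeyframeController({0.0: {"r_shoulder": 0.0}, 1.0: {"r_shoulder": 45.0}})
    nod = KeyframeController({0.0: {"head": 10.0}})
    root = ScheduleController()
    root.schedule(0.0, 2.0, BlendController(clip, nod, weight=0.5))
    print(root.evaluate(1.2, {"r_shoulder": 0.0, "head": 0.0}))

An actual realizer would of course need smooth transitions, motion-capture playback, and procedural controllers as described above; the sketch only illustrates how controllers compose hierarchically.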

Published In

AAMAS '08: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1
May 2008
565 pages
ISBN: 9780981738109

Publisher

International Foundation for Autonomous Agents and Multiagent Systems

Richland, SC

Author Tags

  1. character animation
  2. conversational characters
  3. virtual humans

Qualifiers

  • Research-article

Conference

AAMAS08

Acceptance Rates

Overall Acceptance Rate: 1,155 of 5,036 submissions, 23%

Article Metrics

  • Downloads (Last 12 months): 3
  • Downloads (Last 6 weeks): 0
Reflects downloads up to 13 Feb 2025

Cited By

  • (2025) Toward Understanding the Effects of Intelligence of a Virtual Character during an Immersive Jigsaw Puzzle Co-Solving Task. ACM Transactions on Applied Perception 22(2), 1-28. DOI: 10.1145/3700822. Online publication date: 24-Jan-2025.
  • (2023) CUBE: Conversational User-Interface-Based Embodiment. Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents, 1-8. DOI: 10.1145/3570945.3607331. Online publication date: 19-Sep-2023.
  • (2021) Rapport Between Humans and Socially Interactive Agents. The Handbook on Socially Interactive Agents, 433-462. DOI: 10.1145/3477322.3477335. Online publication date: 10-Sep-2021.
  • (2020) Understanding the Predictability of Gesture Parameters from Speech and their Perceptual Importance. Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, 1-8. DOI: 10.1145/3383652.3423882. Online publication date: 20-Oct-2020.
  • (2020) ProSign+ A Cloud-Based Platform Supporting Inclusiveness in Public Communication. 2020 IEEE 17th Annual Consumer Communications & Networking Conference (CCNC), 1-5. DOI: 10.1109/CCNC46108.2020.9045191. Online publication date: 10-Jan-2020.
  • (2019) Multi-objective adversarial gesture generation. Proceedings of the 12th ACM SIGGRAPH Conference on Motion, Interaction and Games, 1-10. DOI: 10.1145/3359566.3360053. Online publication date: 28-Oct-2019.
  • (2019) EVA: Generating Emotional Behavior of Virtual Agents using Expressive Features of Gait and Gaze. ACM Symposium on Applied Perception 2019, 1-10. DOI: 10.1145/3343036.3343129. Online publication date: 19-Sep-2019.
  • (2019) Nonverbal behavior in multimodal performances. The Handbook of Multimodal-Multisensor Interfaces, 219-262. DOI: 10.1145/3233795.3233803. Online publication date: 1-Jul-2019.
  • (2019) FANTASIA. Multimedia Tools and Applications 78(10), 13613-13648. DOI: 10.1007/s11042-019-7362-5. Online publication date: 1-May-2019.
  • (2018) Virtual Agent Interaction Framework (VAIF). Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, 2230-2232. DOI: 10.5555/3237383.3238131. Online publication date: 9-Jul-2018.
