DOI: 10.1145/1322192.1322217

Real-time expression cloning using appearance models

Published: 12 November 2007

Abstract

Active Appearance Models (AAMs) are generative parametric models commonly used to track, recognise and synthesise faces in images and video sequences. In this paper we describe a method for transferring dynamic facial gestures between subjects in real time. The main advantages of our approach are that: (1) the mapping is computed automatically and does not require high-level semantic information describing facial expressions or visual speech gestures; (2) the mapping is simple and intuitive, allowing expressions to be transferred and rendered in real time; (3) the mapped expression can be constrained to have the appearance of the target producing the expression, rather than the source expression imposed onto the target face; (4) near-videorealistic talking faces for new subjects can be created without the cost of recording and processing a complete training corpus for each. Our system enables real-time face-to-face interaction with an avatar driven by an AAM of an actual person, and we show examples of arbitrary expressive speech frames cloned across different subjects.
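The abstract describes a "simple and intuitive" mapping between subjects' AAM parameter spaces. The paper's exact formulation is not reproduced on this page, but one minimal sketch consistent with the description is a linear map between source and target AAM parameter vectors, fit by least squares over paired frames and applied per frame at runtime. All names and dimensions below are hypothetical placeholders, and the synthetic data stands in for real AAM trackers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired AAM parameter vectors: each row holds one frame's
# combined shape-and-appearance parameters for the source and target subject.
n_frames, n_params = 200, 12
src = rng.normal(size=(n_frames, n_params))        # source AAM parameters
true_map = rng.normal(size=(n_params, n_params))   # unknown ground-truth relation
tgt = src @ true_map + 0.01 * rng.normal(size=(n_frames, n_params))

# Fit the mapping by linear least squares: tgt ~= src @ M.
M, *_ = np.linalg.lstsq(src, tgt, rcond=None)

# Real-time cloning: a newly tracked source frame is mapped into the
# target's parameter space with a single matrix multiply, then rendered
# by the target subject's AAM.
new_frame = rng.normal(size=n_params)
cloned = new_frame @ M
```

A single matrix multiply per frame is cheap enough for the real-time rendering the abstract claims, and because the output lives in the target's own parameter space, rendering through the target AAM naturally keeps the target's appearance rather than pasting the source expression onto the target face.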




Published In

ICMI '07: Proceedings of the 9th international conference on Multimodal interfaces
November 2007
402 pages
ISBN:9781595938176
DOI:10.1145/1322192

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. active appearance models
  2. expression cloning
  3. facial animation

Qualifiers

  • Poster

Conference

ICMI '07: International Conference on Multimodal Interfaces
November 12-15, 2007
Nagoya, Aichi, Japan

Acceptance Rates

Overall Acceptance Rate 453 of 1,080 submissions, 42%

