Chair of Visual Computing


Real-time Expression Transfer for Facial Reenactment


Dr.-Ing. Justus Thies

  • Email: justus.thies@tum.de
  • Website: http://niessnerlab.org/members/justus_thies/profile.html
Short description: Justus Thies is now a postdoc at TUM.

We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photo-realistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.
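The transfer step described above operates purely in the coefficient space of the fitted parametric face model: the source actor's deviation from a neutral expression is computed and applied to the target actor's parameters. The following is a minimal sketch of that idea, assuming a blendshape-style model with a vector of expression coefficients per frame; the function name, array shapes, and clipping range are illustrative, not the paper's actual implementation.

```python
import numpy as np

def transfer_expression(expr_src, expr_src_neutral, expr_tgt_neutral,
                        lo=0.0, hi=1.0):
    """Map a source actor's expression change onto a target actor.

    All inputs are 1-D arrays of expression (blendshape) coefficients
    of the same length.
    """
    # Deviation of the source's current expression from their neutral pose.
    delta = expr_src - expr_src_neutral
    # Apply the same deviation on top of the target's neutral pose.
    expr_tgt = expr_tgt_neutral + delta
    # Keep the coefficients inside the model's valid range.
    return np.clip(expr_tgt, lo, hi)
```

For example, a source raising a "smile" coefficient from 0.2 to 0.8 would raise the target's corresponding coefficient by the same 0.6, starting from the target's own neutral value. In the paper this parameter-space transfer is what makes the method independent of the two actors' identities, since identity and expression are factored into separate model parameters.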

[Embedded video: YouTube]
