
Real-time Facial Performance Capture and Manipulation

  • Additional Information
    • Contributors:
      Deng, Zhigang; Mayerich, David; Chen, Guoning; Shah, Shishir Kirit
    • Subject:
      2020
    • Collection:
      University of Houston Institutional Repository (UHIR)
    • Abstract:
      Acquisition and editing of facial performance is an essential and challenging task in computer graphics, with broad applications in films, cartoons, VR systems, and electronic games. The creation of high-resolution, realistic facial animations often involves controlled lighting setups, multiple cameras, active markers, depth sensors, and substantial post-editing by experienced artists. This dissertation focuses on the capture and manipulation of facial performance from regular RGB video. First, a novel method is proposed to reconstruct high-resolution facial geometry and appearance in real time, capturing an individual-specific face model with fine-scale details from monocular RGB video input. Specifically, after a coarse facial model is reconstructed from the input video, it is refined using shape-from-shading techniques, in which illumination, albedo texture, and displacements are recovered by minimizing the difference between the synthesized face and the input RGB video (an illustrative sketch of such a photometric energy follows this record). To recover wrinkle-level details, a hierarchical face pyramid is built through adaptive subdivision and progressive refinement of the mesh from a coarse level to a fine level. The proposed approach produces results close to those of off-line methods and better than those of previous real-time methods. On top of the reconstruction method, two manipulation approaches are proposed, one for facial expressions and one for facial appearance: facial expression transformation and face swapping. In facial expression transformation, desired, photo-realistic facial expressions are generated directly on the input monocular RGB video without the need for any driving source actor. An unpaired learning framework is developed to learn the mapping between any two facial expressions in the facial blendshape space. The proposed method automatically transforms the source expression in an input video clip to a specified target expression through the combination of the 3D face reconstruction, the learned bi-directional expression mapping, and automatic lip ...
    • File Description:
      application/pdf; born digital
    • Relation:
      Portions of this document appear in: Ma, Luming, and Zhigang Deng. "Real-time hierarchical facial performance capture." In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, pp. 1-10. 2019. And in: Ma, Luming, and Zhigang Deng. "Real-Time Facial Expression Transformation for Monocular RGB Video." In Computer Graphics Forum, vol. 38, no. 1, pp. 470-481. 2019. And in: Ma, Luming, and Zhigang Deng. "Real-time Face Video Swapping From A Single Portrait." In Symposium on Interactive 3D Graphics and Games, pp. 1-10. 2020.; https://hdl.handle.net/10657/6693
    • Rights:
      The author of this work is the copyright owner. UH Libraries and the Texas Digital Library have their permission to store and provide access to this work. UH Libraries has secured permission to reproduce any and all previously published materials contained in the work. Further transmission, reproduction, or presentation of this work is prohibited except with permission of the author(s).
    • Accession Number:
      edsbas.3CF38F93
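
The shape-from-shading refinement described in the abstract recovers illumination, albedo, and displacements by minimizing a photometric difference between the rendered face and the input frame. The energy below is a minimal illustrative sketch, not the dissertation's own formulation: it assumes Lambertian reflectance under second-order spherical harmonics illumination, a common setup in monocular face reconstruction, and all symbols here are hypothetical.

    E(\gamma, \rho, d) = \sum_{p \in \mathcal{F}} \left\| I(p) - \rho(p) \sum_{k=1}^{9} \gamma_k \, H_k\big(n_p(d)\big) \right\|_2^2 + \lambda \, \| L d \|_2^2

Here I(p) is the observed color at a visible face pixel p, \rho(p) the per-pixel albedo, \gamma_k the nine spherical harmonics illumination coefficients, H_k the SH basis evaluated at the surface normal n_p, which depends on the per-vertex displacements d, and L a Laplacian operator that regularizes the displacements toward smoothness. In a hierarchical pyramid scheme like the one the abstract describes, such an energy would be minimized level by level, with the solution at each coarse level initializing the next, finer subdivision level.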