3D Human Face Reconstruction and 2D Appearance Synthesis
3D human face reconstruction has been an active research area for decades due to its wide range of applications, such as animation, recognition, and 3D-driven appearance synthesis. Although commodity depth sensors have become widely available in recent years, image-based face reconstruction remains significantly valuable because images are much easier to access and store.
In this dissertation, we first propose three image-based face reconstruction approaches, each based on different assumptions about the input.
In the first approach, face geometry is extracted from multiple key frames of a video sequence with different head poses. This approach assumes a calibrated camera.
As the first approach is limited to videos, our second approach focuses on a single image. It also refines the geometry, adding fine-scale detail from shading cues; for this we propose a novel albedo estimation and linear optimization algorithm.
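The albedo estimation and linear optimization are only named here, not spelled out. As one illustrative reading, here is a minimal sketch under the common assumption of Lambertian reflectance lit by second-order spherical harmonics (SH), alternating two linear solves; all names are ours, not the dissertation's:

```python
# A minimal sketch of shading-based refinement: alternate a linear solve for
# SH lighting with a closed-form per-pixel albedo update. Illustrative only.
import numpy as np

def sh_basis(normals):
    """First 9 SH basis functions evaluated at unit normals (N, 3) -> (N, 9)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.ones_like(x),        # Y00
        y, z, x,                # Y1-1, Y10, Y11
        x * y, y * z,           # Y2-2, Y2-1
        3 * z**2 - 1, x * z,    # Y20, Y21
        x**2 - y**2,            # Y22
    ], axis=1)

def estimate_lighting_and_albedo(intensity, normals):
    """intensity: (N,) pixel values; normals: (N, 3) from the coarse mesh."""
    B = sh_basis(normals)                    # (N, 9)
    albedo = np.ones(len(intensity))         # init: uniform albedo
    for _ in range(3):
        # lighting step: least squares on intensity = albedo * (B @ light)
        A = B * albedo[:, None]
        light, *_ = np.linalg.lstsq(A, intensity, rcond=None)
        # albedo step: closed form per pixel, guarding against zero shading
        shading = np.clip(B @ light, 1e-4, None)
        albedo = intensity / shading
    return light, albedo
```

With lighting and albedo fixed, the residual shading can then drive a per-pixel normal or depth refinement, which is where the fine-scale geometric detail would come from.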
In the third approach, we further relax the constraints on the input to arbitrary in-the-wild images. Our proposed approach robustly reconstructs high-quality models even under extreme expressions and large poses.
We then explore the applicability of our face reconstructions in four applications: video face beautification, generating personalized facial blendshapes from image sequences, face video stylization, and video face replacement, and we demonstrate the great potential of our reconstruction approaches in these real-world settings. In particular, with the recent surge of interest in VR/AR, it is increasingly common to see people wearing head-mounted displays (HMDs). However, the large occlusion of the face is a major obstacle to communicating in a face-to-face manner. In a further application, we therefore explore hardware/software solutions for synthesizing the face image in the presence of an HMD. We design two setups (experimental and mobile) that integrate two near-IR cameras and one color camera to solve this problem. With our algorithm and prototype, we achieve photo-realistic results.
We further propose a deep neural network that solves the HMD removal problem by treating it as a face inpainting problem. This approach doesn't need special hardware and runs in real time with satisfying results.
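Treating HMD removal as face inpainting suggests a simple interface: mask the occluded region and let a generator fill it in. A minimal sketch, assuming a hypothetical U-Net-style generator G that takes the masked frame plus the mask as input; the dissertation's actual network may differ:

```python
# A minimal sketch of masked face inpainting for HMD removal.
# G is a hypothetical generator (e.g. U-Net-like), not the dissertation's model.
import torch

def inpaint(G, frame, hmd_mask):
    """frame: (1, 3, H, W) in [0, 1]; hmd_mask: (1, 1, H, W), 1 where occluded."""
    masked = frame * (1.0 - hmd_mask)                    # zero out the HMD region
    completed = G(torch.cat([masked, hmd_mask], dim=1))  # condition on the mask
    # keep original pixels outside the mask, network output inside it
    return frame * (1.0 - hmd_mask) + completed * hmd_mask
```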
HeadOn: Real-time Reenactment of Human Portrait Videos
We propose HeadOn, the first real-time source-to-target reenactment approach
for complete human portrait videos that enables transfer of torso and head
motion, face expression, and eye gaze. Given a short RGB-D video of the target
actor, we automatically construct a personalized geometry proxy that embeds a
parametric head, eye, and kinematic torso model. A novel real-time reenactment
algorithm employs this proxy to photo-realistically map the captured motion
from the source actor to the target actor. On top of the coarse geometric
proxy, we propose a video-based rendering technique that composites the
modified target portrait video via view- and pose-dependent texturing, and
creates photo-realistic imagery of the target actor under novel torso and head
poses, facial expressions, and gaze directions. To this end, we propose a
robust tracking of the face and torso of the source actor. We extensively
evaluate our approach and show significant improvements in enabling much
greater flexibility in creating realistic reenacted output videos.
Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'18
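The view- and pose-dependent texturing can be pictured as blending texture maps captured under nearby head poses, weighted by pose similarity. A minimal sketch, assuming quaternion head poses and textures resampled into a shared UV space; the Gaussian weighting is our assumption, not necessarily HeadOn's published scheme:

```python
# A minimal sketch of pose-dependent texture blending: weight captured
# textures by angular distance between their head pose and the target pose.
import numpy as np

def pose_weights(captured_poses, target_pose, sigma=0.2):
    """captured_poses: (K, 4) unit quaternions; target_pose: (4,) unit quaternion."""
    dots = np.clip(np.abs(captured_poses @ target_pose), 0.0, 1.0)
    angles = 2.0 * np.arccos(dots)          # angular distance, sign-invariant
    w = np.exp(-(angles / sigma) ** 2)      # assumed Gaussian falloff
    return w / w.sum()

def blend_textures(textures, captured_poses, target_pose):
    """textures: (K, H, W, 3) texel stacks in a shared UV space."""
    w = pose_weights(captured_poses, target_pose)
    return np.tensordot(w, textures, axes=1)  # weighted per-texel average
```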
The Evolution of Stop-motion Animation Technique Through 120 Years of Technological Innovations
Stop-motion animation history has been put on paper by several scholars and practitioners who have tried to organize 120 years of technological innovations and material experiments, dealing with a huge literature. Bruce Holman (1975), Neil Pettigrew (1999), Ken Priebe (2010), Stefano Bessoni (2014), and more recently Adrián Encinas Salamanca (2017) provided the most detailed, even though partial, attempts at systematization, and designed historical reconstructions organized by specific periods of time, film lengths, or the use of stop-motion as a special effect rather than an animation technique. This article provides another partial historical reconstruction of the evolution of stop-motion and outlines the main events in the development of this technique, following criteria based on the innovations in the technology of materials and manufacturing processes that have influenced the fabrication of puppets up to the present day. The systematization follows a chronological order and takes into account events that changed the puppet manufacturing process as a consequence of the use of either new fabrication processes or new materials. Starting from the accident that led the French film pioneer Georges Méliès to discover the trick of the replacement technique at the end of the nineteenth century, the reconstruction goes through 120 years of experiments and films. Among the main events considered are the "build-up" puppets fabricated by the Russian puppet animator Ladislaw Starevicz from insect exoskeletons, the use of clay puppets, the innovations introduced by LAIKA entertainment in the last decade, such as stereoscopic photography and 3D-printed replacement pieces, and the increasing influence of digital technologies on the process of puppet fabrication. Technology transfers, the features of new materials, and innovations in the way puppets are animated are the main aspects through which this historical analysis approaches these events. This short analysis is intended to demonstrate that stop-motion animation is an interdisciplinary occasion for both artistic expression and technological experimentation, and that its evolution and aesthetics are related to cultural, geographical, and technological issues. Lastly, if the technology of materials and processes is a constantly evolving field, what future can be expected for this cinematographic technique? The article ends with this open question and, without providing an answer, implicitly affirms the role of stop-motion as a driving force for innovations that come from other fields and are incentivized by the needs of this specific sector.
On Face Segmentation, Face Swapping, and Face Perception
We show that even when face images are unconstrained and arbitrarily paired,
face swapping between them is actually quite simple. To this end, we make the
following contributions. (a) Instead of tailoring systems for face
segmentation, as others previously proposed, we show that a standard fully
convolutional network (FCN) can achieve remarkably fast and accurate
segmentations, provided that it is trained on a rich enough example set. For
this purpose, we describe novel data collection and generation routines which
provide challenging segmented face examples. (b) We use our segmentations to
enable robust face swapping under unprecedented conditions. (c) Unlike previous
work, our swapping is robust enough to allow for extensive quantitative tests.
To this end, we use the Labeled Faces in the Wild (LFW) benchmark and measure
the effect of intra- and inter-subject face swapping on recognition. We show
that our intra-subject swapped faces remain as recognizable as their sources,
testifying to the effectiveness of our method. In line with well-known
perceptual studies, we show that better face swapping produces less
recognizable inter-subject results. This is the first time this effect has
been quantitatively demonstrated for machine vision systems.
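The "standard FCN" in contribution (a) can be as simple as fine-tuning an off-the-shelf fully convolutional backbone for two classes. A minimal PyTorch sketch, using torchvision's FCN-ResNet50 as a stand-in for the paper's network; the data loading and the authors' collection/generation routines are omitted:

```python
# A hedged sketch of binary face/background segmentation with a standard FCN.
import torch
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(num_classes=2)        # class 0 = background, 1 = face
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, masks):
    """images: (B, 3, H, W) floats; masks: (B, H, W) int64 labels."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]          # (B, 2, H, W) per-pixel logits
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def segment(image):
    """Return a boolean face mask for a single (3, H, W) image."""
    model.eval()
    logits = model(image.unsqueeze(0))["out"]
    return logits.argmax(dim=1)[0] == 1
```

As the abstract emphasizes, the richness of the segmented training examples matters more here than the particular backbone.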
Text-based Editing of Talking-head Video
Editing talking-head video to change the speech content or to remove filler words is challenging. We propose a novel method to edit talking-head video based on its transcript to produce a realistic output video in which the dialogue of the speaker has been modified, while maintaining a seamless audio-visual flow (i.e., no jump cuts). Our method automatically annotates an input talking-head video with phonemes, visemes, 3D face pose and geometry, reflectance, expression, and scene illumination per frame. To edit a video, the user only has to edit the transcript; an optimization strategy then chooses segments of the input corpus as base material. The annotated parameters corresponding to the selected segments are seamlessly stitched together and used to produce an intermediate video representation in which the lower half of the face is rendered with a parametric face model. Finally, a recurrent video generation network transforms this representation into a photorealistic video that matches the edited transcript. We demonstrate a large variety of edits, such as the addition, removal, and alteration of words, as well as convincing language translation and full-sentence synthesis.
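The segment-selection step ("an optimization strategy then chooses segments of the input corpus") can be illustrated with a much simpler stand-in: scan the corpus's per-frame viseme labels for the span most similar to the target viseme sequence of the edited transcript. The real system optimizes a richer objective; this sketch only minimizes sequence dissimilarity:

```python
# An illustrative stand-in for corpus segment selection: pick the contiguous
# span of corpus visemes most similar to the target viseme sequence.
from difflib import SequenceMatcher

def best_corpus_segment(corpus_visemes, target_visemes):
    """Return (start, end) of the corpus span most similar to the target."""
    n = len(target_visemes)
    best_span, best_score = (0, n), -1.0
    for start in range(len(corpus_visemes) - n + 1):
        window = corpus_visemes[start:start + n]
        score = SequenceMatcher(None, window, target_visemes).ratio()
        if score > best_score:
            best_span, best_score = (start, start + n), score
    return best_span

# usage: the returned indices select per-frame annotations whose parameters
# are then stitched and rendered, as described in the abstract
corpus = list("AAEEBBMMOOAA")
span = best_corpus_segment(corpus, list("BBMO"))
```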