Fast Face-swap Using Convolutional Neural Networks
We consider the problem of face swapping in images, where an input identity
is transformed into a target identity while preserving pose, facial expression,
and lighting. To perform this mapping, we use convolutional neural networks
trained to capture the appearance of the target identity from an unstructured
collection of his/her photographs. This approach is enabled by framing the face
swapping problem in terms of style transfer, where the goal is to render an
image in the style of another one. Building on recent advances in this area, we
devise a new loss function that enables the network to produce highly
photorealistic results. By combining neural networks with simple pre- and
post-processing steps, we aim to make face swapping work in real time with no
input from the user.
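To make the style-transfer framing concrete, below is a minimal sketch of a content-plus-style objective over VGG-19 features, in the spirit of Gatys-style losses. The layer indices, weights, and the swap_loss helper are illustrative assumptions, not the paper's actual loss, which is tailored to photorealism and to the target identity's photo collection.

```python
# A minimal sketch, assuming PyTorch/torchvision; layer choices and
# weights are illustrative, not the paper's actual formulation.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {22}          # relu4_2 (assumed choice)
STYLE_LAYERS = {1, 6, 11, 20}  # relu1_1 .. relu4_1 (assumed choice)

def features(x):
    """Collect content and style activations in one forward pass."""
    content, style = [], []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content.append(x)
        if i in STYLE_LAYERS:
            style.append(x)
    return content, style

def gram(f):
    """Normalized Gram matrix of a (B, C, H, W) feature map."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def swap_loss(generated, source, target_photos, style_weight=1e4):
    """Content is taken from the source face; style (the target
    identity's appearance) is averaged over an unstructured photo
    collection, mirroring the setup described in the abstract."""
    gen_c, gen_s = features(generated)
    src_c, _ = features(source)
    content = sum(F.mse_loss(g, s) for g, s in zip(gen_c, src_c))
    style = 0.0
    for photo in target_photos:
        _, tgt_s = features(photo)
        style += sum(F.mse_loss(gram(g), gram(t))
                     for g, t in zip(gen_s, tgt_s))
    return content + style_weight * style / len(target_photos)
```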
Demystifying Neural Style Transfer
Neural Style Transfer has recently demonstrated very exciting results that
have attracted wide attention in both academia and industry. Despite the
amazing results, the principle of neural style transfer, especially why Gram
matrices can represent style, remains unclear. In this paper, we propose a novel
interpretation of neural style transfer by treating it as a domain adaptation
problem. Specifically, we theoretically show that matching the Gram matrices of
feature maps is equivalent to minimizing the Maximum Mean Discrepancy (MMD)
with a second-order polynomial kernel. Thus, we argue that the essence of neural
style transfer is to match the feature distributions between the style images
and the generated images. To further support our standpoint, we experiment with
several other distribution alignment methods, and achieve appealing results. We
believe this novel interpretation connects these two important research fields
and could inspire future research.
Comment: Accepted by IJCAI 2017
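The stated equivalence is easy to check numerically. For feature maps F, S with C channels and N spatial positions, the Gram-based style loss ||F F^T - S S^T||_F^2 equals N^2 times the biased squared-MMD estimate with the second-order polynomial kernel k(a, b) = (a^T b)^2. A minimal NumPy check (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
C, N = 8, 50                     # channels, spatial positions
F = rng.normal(size=(C, N))      # features of the generated image
S = rng.normal(size=(C, N))      # features of the style image

# Gram-matrix style loss.
gram_loss = np.sum((F @ F.T - S @ S.T) ** 2)

# Biased squared-MMD estimate with k(a, b) = (a^T b)^2, where the
# "samples" are the N per-position feature vectors (columns).
def k(A, B):
    return (A.T @ B) ** 2

mmd2 = (k(F, F).sum() + k(S, S).sum() - 2 * k(F, S).sum()) / N**2

assert np.isclose(gram_loss, N**2 * mmd2)
```

The identity holds exactly (up to floating-point error), which is the precise sense in which Gram matching is feature-distribution alignment.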
FaceShop: Deep Sketch-based Face Image Editing
We present a novel system for sketch-based face image editing, enabling users
to edit images intuitively by sketching a few strokes on a region of interest.
Our interface features tools to express a desired image manipulation by
providing both geometry and color constraints as user-drawn strokes. As an
alternative to direct user input, our proposed system naturally supports a
copy-paste mode, which allows users to edit a given image region using parts
of another exemplar image without any hand-drawn sketching. The
proposed interface runs in real-time and facilitates an interactive and
iterative workflow to quickly express the intended edits. Our system is based
on a novel sketch domain and a convolutional neural network trained end-to-end
to automatically learn to render image regions corresponding to the input
strokes. To achieve high quality and semantically consistent results we train
our neural network on two simultaneous tasks, namely image completion and image
translation. To the best of our knowledge, we are the first to combine these
two tasks in a unified framework for interactive image editing. Our results
show that the proposed sketch domain, network architecture, and training
procedure generalize well to real user input and enable high quality synthesis
results without additional post-processing.
Comment: 13 pages, 20 figures
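To make the two-task setup more concrete, here is a hedged sketch of how such a conditional network could be fed and supervised; the channel layout, the build_input helper, and the plain L1 objective are assumptions for illustration, not the paper's exact architecture or losses.

```python
# A sketch under assumed conventions: the network sees the masked image,
# the mask, a sketch channel, and sparse color strokes, concatenated
# along the channel dimension.
import torch
import torch.nn.functional as F

def build_input(image, mask, sketch, color_strokes):
    """image: (B,3,H,W) in [0,1]; mask: (B,1,H,W), 1 = region to edit;
    sketch: (B,1,H,W) geometry strokes; color_strokes: (B,3,H,W)."""
    masked = image * (1 - mask)  # erase the region of interest
    return torch.cat([masked, mask, sketch, color_strokes], dim=1)  # (B,8,H,W)

def training_step(net, image, mask, sketch, color_strokes):
    x = build_input(image, mask, sketch, color_strokes)
    pred = net(x)                          # render the edited region
    # Composite: keep original pixels outside the mask (completion),
    # translate strokes to image content inside it (translation).
    out = pred * mask + image * (1 - mask)
    return F.l1_loss(out, image)
```

At training time, sketch and color strokes can be synthesized from the ground-truth image itself (e.g., edges and blurred color inside the mask), which is why the reconstruction target above can be the original image.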
NARRATE: A Normal Assisted Free-View Portrait Stylizer
In this work, we propose NARRATE, a novel pipeline that enables
simultaneously editing portrait lighting and perspective in a photorealistic
manner. As a hybrid neural-physical face model, NARRATE leverages complementary
benefits of geometry-aware generative approaches and normal-assisted physical
face models. In a nutshell, NARRATE first inverts the input portrait to a
coarse geometry and employs neural rendering to generate images resembling the
input, as well as producing convincing pose changes. However, the inversion
step introduces a mismatch, yielding low-quality images with fewer facial
details. We therefore further estimate portrait normals to enhance the coarse
geometry,
creating a high-fidelity physical face model. In particular, we fuse the neural
and physical renderings to compensate for the imperfect inversion, resulting in
both realistic and view-consistent novel-perspective images. In the
relighting stage, previous works focus on single-view portrait relighting but
ignore consistency between different perspectives, leading to unstable and
inconsistent lighting effects under view changes. We extend Total Relighting
to
fix this problem by unifying its multi-view input normal maps with the physical
face model. NARRATE conducts relighting with consistent normal maps, imposing
cross-view constraints and exhibiting stable and coherent illumination effects.
We experimentally demonstrate that NARRATE achieves more photorealistic and
reliable results than prior works. We further bridge NARRATE with animation and
style transfer tools, supporting pose change, light change, facial animation,
and style transfer, either separately or in combination, all at a photographic
quality. We showcase vivid free-view facial animations as well as 3D-aware
relightable stylization, which help facilitate various AR/VR applications like
virtual cinematography, 3D video conferencing, and post-production.
Comment: 14 pages, 13 figures. https://youtu.be/mP4FV3evmy
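The cross-view consistency argument can be illustrated with a simple Lambertian model: if every view's normal map comes from one shared physical face model, applying a single lighting estimate to all views yields consistent shading. The sketch below uses second-order spherical harmonics (SH) lighting with constant basis factors omitted; NARRATE's actual relighting builds on Total Relighting, so this is only a schematic analogue.

```python
# Illustrative only: Lambertian shading under shared SH lighting.
import numpy as np

def sh_basis(n):
    """First 9 SH basis functions at unit normals n: (..., 3) -> (..., 9).
    Constant normalization factors are omitted for brevity."""
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    return np.stack([
        np.ones_like(x), y, z, x,
        x * y, y * z, 3 * z**2 - 1, x * z, x**2 - y**2,
    ], axis=-1)

def shade(normal_map, sh_coeffs, albedo):
    """Diffuse shading: SH irradiance at each normal, times albedo.
    normal_map: (H,W,3); sh_coeffs: (9,); albedo: (H,W,3)."""
    irradiance = sh_basis(normal_map) @ sh_coeffs   # (H,W)
    return albedo * irradiance[..., None]

# Because sh_coeffs is shared, shading the normal map of every view with
# the same coefficients produces cross-view-consistent illumination:
#   relit_view_k = shade(normals_k, sh_coeffs, albedo_k)
```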