Age Progression and Regression with Spatial Attention Modules
Age progression and regression refer to aesthetically rendering a given
face image to present the effects of face aging and rejuvenation, respectively.
Although numerous studies have been conducted on this topic, there are two
major problems: 1) multiple models are usually trained to simulate different
age mappings, and 2) the photo-realism of generated face images is heavily
influenced by the variation of training images in terms of pose, illumination,
and background. To address these issues, in this paper, we propose a framework
based on conditional Generative Adversarial Networks (cGANs) to achieve age
progression and regression simultaneously. Particularly, since face aging and
rejuvenation are largely different in terms of image translation patterns, we
model these two processes using two separate generators, each dedicated to one
age changing process. In addition, we exploit spatial attention mechanisms to
limit image modifications to regions closely related to age changes, so that
images with high visual fidelity could be synthesized for in-the-wild cases.
Experiments on multiple datasets demonstrate the ability of our model in
synthesizing lifelike face images at desired ages with personalized features
well preserved, and keeping age-irrelevant regions unchanged.
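The abstract does not give the exact formulation, but spatial attention in image-to-image GANs is commonly realized by blending the generator's output with the input under a predicted soft mask, so that only mask-selected regions are modified. A minimal sketch of that blending step (all names and the toy data are hypothetical, not from the paper):

```python
import numpy as np

def attention_blend(x, g_out, mask):
    """Blend a generator's output with the input image using a spatial
    attention mask: pixels with mask ~ 1 come from the generated image,
    pixels with mask ~ 0 are copied from the input unchanged."""
    return mask * g_out + (1.0 - mask) * x

# Toy 4x4 grayscale "image": the generator brightens everything,
# but the mask restricts the edit to the top-left quadrant.
x = np.zeros((4, 4))       # input image
g_out = np.ones((4, 4))    # generator output
mask = np.zeros((4, 4))    # attention mask
mask[:2, :2] = 1.0

y = attention_blend(x, g_out, mask)
```

With one such masked generator per direction (aging, rejuvenation), age-irrelevant regions pass through untouched by construction.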
Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation
One-shot face re-enactment is a challenging task due to the identity mismatch
between source and driving faces. Specifically, the suboptimally disentangled
identity information of driving subjects would inevitably interfere with the
re-enactment results and lead to face shape distortion. To solve this problem,
this paper proposes to use 3D Morphable Model (3DMM) for explicit facial
semantic decomposition and identity disentanglement. Instead of using 3D
coefficients alone for re-enactment control, we take the advantage of the
generative ability of 3DMM to render textured face proxies. These proxies
contain abundant yet compact geometric and semantic information of human faces,
which enable us to compute the face motion field between source and driving
images by estimating the dense correspondence. In this way, we could
approximate re-enactment results by warping source images according to the
motion field, and a Generative Adversarial Network (GAN) is adopted to further
improve the visual quality of warping results. Extensive experiments on various
datasets demonstrate the advantages of the proposed method over existing
state-of-the-art methods in both identity preservation and re-enactment
fulfillment.
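Once a dense motion field between source and driving proxies is estimated, the re-enactment can be approximated by backward-warping the source image along that field. A minimal sketch of such a warp, assuming a per-pixel displacement field and nearest-neighbour sampling (the paper's actual sampling scheme is not specified in the abstract):

```python
import numpy as np

def warp_by_motion_field(src, flow):
    """Backward-warp a source image with a dense motion field.
    flow[i, j] = (di, dj) means output pixel (i, j) samples source
    pixel (i + di, j + dj); nearest-neighbour keeps the sketch simple."""
    h, w = src.shape
    out = np.zeros_like(src)
    for i in range(h):
        for j in range(w):
            si = int(round(i + flow[i, j, 0]))
            sj = int(round(j + flow[i, j, 1]))
            si = min(max(si, 0), h - 1)  # clamp to the image border
            sj = min(max(sj, 0), w - 1)
            out[i, j] = src[si, sj]
    return out

# A uniform flow of (0, -1) shifts the image one pixel to the right.
src = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 1] = -1.0
warped = warp_by_motion_field(src, flow)
```

The warped image is then a coarse re-enactment result that the GAN refines.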
Dynamic resistance measurement in a four-tape YBCO stack with various applied field orientations
The dynamic resistance which occurs when a superconductor carrying DC current is exposed to an alternating magnetic field plays an important role in HTS applications such as flux pumps and rotating machines. We report experimental results on dynamic resistance in a four-tape coated conductor stack exposed to AC magnetic fields at different field angles (the angle between the magnetic field and the normal vector of the tape surface, θ) at 77 K. The conductors for the stack are 4-mm-wide SuperPower SC4050 wires. The field angle was varied from 0° to 120° at a resolution of 15° to study the dependence of dynamic resistance on field angle as well as on the wire Ic(B, θ). We also varied the field frequency, the magnetic field amplitude, and the DC current level to study the dependence of dynamic resistance on these parameters. Finally, we compared the measured dynamic resistance at perpendicular magnetic field with the analytical models for single wires. Our results show that the dynamic resistance of the stack was mainly, but not solely, determined by the perpendicular magnetic field component. Ic(B, θ) influences dynamic resistance in the stack due to tilting of the crystal lattice of the superconductor layer with regard to the buffer layers.
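For context on the single-wire comparison mentioned above: a widely used closed-form expression from critical-state analysis of a thin strip (the abstract does not state which analytical model was used, so this is an assumption) gives the per-unit-length dynamic resistance above a threshold field as:

```latex
% Per-unit-length dynamic resistance of a single thin strip in a
% perpendicular AC field, valid for amplitudes B_a > B_th:
R_{\mathrm{dyn}} = \frac{4 a f}{I_{\mathrm{dc}}}\left(B_a - B_{\mathrm{th}}\right)
% 2a: strip width, f: AC field frequency, I_dc: transport current,
% B_th: threshold field below which no dynamic resistance appears.
```

The linear dependence on both frequency and field amplitude above threshold is the behaviour such measurements are typically checked against.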
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Learning Explicit Contact for Implicit Reconstruction of Hand-held Objects from Monocular Images
Reconstructing hand-held objects from monocular RGB images is an appealing
yet challenging task. In this task, contacts between hands and objects provide
important cues for recovering the 3D geometry of the hand-held objects. Though
recent works have employed implicit functions to achieve impressive progress,
they ignore formulating contacts in their frameworks, which results in
producing less realistic object meshes. In this work, we explore how to model
contacts in an explicit way to benefit the implicit reconstruction of hand-held
objects. Our method consists of two components: explicit contact prediction and
implicit shape reconstruction. In the first part, we propose a new subtask of
directly estimating 3D hand-object contacts from a single image. The part-level
and vertex-level graph-based transformers are cascaded and jointly learned in a
coarse-to-fine manner for more accurate contact probabilities. In the second
part, we introduce a novel method to diffuse estimated contact states from the
hand mesh surface to nearby 3D space and leverage diffused contact
probabilities to construct the implicit neural representation for the
manipulated object. Benefiting from estimating the interaction patterns between
the hand and the object, our method can reconstruct more realistic object
meshes, especially for object parts that are in contact with hands. Extensive
experiments on challenging benchmarks show that the proposed method outperforms
the current state of the art by a large margin.

Comment: 17 pages, 8 figures
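The diffusion step described above, spreading estimated contact states from the hand-mesh surface into nearby 3D space, can be sketched as distance-weighted interpolation of per-vertex contact probabilities. The Gaussian kernel and all names here are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def diffuse_contact(query_pts, hand_verts, vert_contact, sigma=0.05):
    """Diffuse per-vertex contact probabilities from a hand-mesh surface
    into nearby 3D space: each query point averages the vertices' contact
    values, weighted by a Gaussian of the point-to-vertex distance."""
    # Pairwise distances between query points (Q, 3) and vertices (V, 3).
    d = np.linalg.norm(query_pts[:, None, :] - hand_verts[None, :, :], axis=-1)
    w = np.exp(-0.5 * (d / sigma) ** 2)        # (Q, V) Gaussian weights
    w /= w.sum(axis=1, keepdims=True) + 1e-12  # normalize per query point
    return w @ vert_contact                    # (Q,) diffused probabilities

# Two hand "vertices": one in contact (p = 1), one not (p = 0).
hand_verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vert_contact = np.array([1.0, 0.0])
queries = np.array([[0.01, 0.0, 0.0],   # near the contact vertex
                    [0.99, 0.0, 0.0]])  # near the free vertex
p = diffuse_contact(queries, hand_verts, vert_contact)
```

Points in 3D space near contacting surface regions inherit high contact probability, which the implicit shape network can then condition on.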