31,826 research outputs found
FaceShop: Deep Sketch-based Face Image Editing
We present a novel system for sketch-based face image editing, enabling users
to edit images intuitively by sketching a few strokes on a region of interest.
Our interface features tools to express a desired image manipulation by
providing both geometry and color constraints as user-drawn strokes. As an
alternative to the direct user input, our proposed system naturally supports a
copy-paste mode, which allows users to edit a given image region by using parts
of another exemplar image without the need of hand-drawn sketching at all. The
proposed interface runs in real-time and facilitates an interactive and
iterative workflow to quickly express the intended edits. Our system is based
on a novel sketch domain and a convolutional neural network trained end-to-end
to automatically learn to render image regions corresponding to the input
strokes. To achieve high quality and semantically consistent results we train
our neural network on two simultaneous tasks, namely image completion and image
translation. To the best of our knowledge, we are the first to combine these
two tasks in a unified framework for interactive image editing. Our results
show that the proposed sketch domain, network architecture, and training
procedure generalize well to real user input and enable high quality synthesis
results without additional post-processing.
Comment: 13 pages, 20 figures
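Training one network on image completion and image translation at the same time amounts to optimizing a weighted sum of the two task losses. The sketch below is an illustration of that idea only, not the paper's actual objective: the weight `alpha`, the L1 penalties, and the function names are all assumptions.

```python
import numpy as np

def multitask_loss(pred, target, hole_mask, alpha=0.5):
    """Illustrative two-task objective: an image-completion term restricted
    to the masked (hole) region, plus an image-translation term over the
    whole image, combined with an assumed weight alpha."""
    # completion: per-pixel L1 error averaged over the masked region only
    completion = np.abs((pred - target) * hole_mask).sum() / max(hole_mask.sum(), 1.0)
    # translation: per-pixel L1 error over the full image
    translation = np.abs(pred - target).mean()
    return alpha * completion + (1.0 - alpha) * translation
```

A perfect prediction drives both terms to zero, so the combined loss rewards edits that are plausible both inside the edited region and globally.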
A unified framework for subspace based face recognition.
Wang Xiaogang. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 88-91). Abstracts in English and Chinese.
Table of Contents:
Chapter 1, Introduction: face recognition; subspace based face recognition techniques; a unified framework for subspace based face recognition; discriminant analysis in dual intrapersonal subspaces; face sketch recognition and hallucination; organization of this thesis.
Chapter 2, Review of Subspace Methods: PCA; LDA; the Bayesian algorithm.
Chapter 3, A Unified Framework: PCA eigenspace; intrapersonal and extrapersonal subspaces; LDA subspace; comparison of the three subspaces; L-ary versus binary classification; unified subspace analysis; discussion.
Chapter 4, Experiments on Unified Subspace Analysis: experiments on the FERET database (PCA; Bayesian; Bayesian analysis in a reduced PCA subspace; extracting discriminant features from the intrapersonal subspace; subspace analysis using different training sets); experiments on the AR face database (PCA, LDA, and Bayes; evaluating the Bayesian algorithm under different transformations).
Chapter 5, Discriminant Analysis in Dual Subspaces: review of LDA in the null space of the within-class scatter matrix and of direct LDA; discriminant analysis in dual intrapersonal subspaces; experiments on the FERET and XM2VTS databases.
Chapter 6, Eigentransformation: Subspace Transform: face sketch recognition (eigentransformation; sketch synthesis; recognition experiments); face hallucination (multiresolution analysis; eigentransformation for hallucination; discussion; experiments).
Chapter 7, Conclusion. Publication list of this thesis. Bibliography.
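All of the subspace methods the thesis unifies (PCA, LDA, the Bayesian algorithm) share a PCA "eigenface" projection as their first step. A minimal sketch of that shared step, assuming face images are flattened into rows of a matrix; the function and variable names are illustrative:

```python
import numpy as np

def pca_subspace(faces, k):
    """Compute a k-dimensional PCA (eigenface) subspace from a
    (n_samples, n_pixels) matrix of flattened face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are the principal directions (the eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]
    # Project a face into the subspace; recognition then compares
    # these low-dimensional coefficient vectors.
    project = lambda x: (x - mean) @ basis.T
    return mean, basis, project
```

Intrapersonal/extrapersonal and LDA subspaces are then built from such projected coefficients rather than from raw pixels.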
Deep Sketch-Photo Face Recognition Assisted by Facial Attributes
In this paper, we present a deep coupled framework to address the problem of
matching a sketch image against a gallery of mugshots. Face sketches carry the
essential information about the spatial topology and geometric details of
faces while missing some important facial attributes such as ethnicity, hair,
eye, and skin color. We propose a coupled deep neural network architecture
which utilizes facial attributes in order to improve sketch-photo
recognition performance. The proposed Attribute-Assisted Deep Convolutional
Neural Network (AADCNN) method exploits the facial attributes and leverages the
loss functions from the facial attribute identification and face verification
tasks in order to learn rich discriminative features in a common embedding
subspace. The facial attribute identification task increases inter-personal
variations by pushing apart the embedded features extracted from individuals
with different facial attributes, while the verification task reduces
intra-personal variations by pulling together all the features that are
related to one person. The learned discriminative features generalize well
to new identities not seen in the training data. Compared to conventional
sketch-photo recognition methods, the proposed architecture makes full use
of the sketch and complementary facial attribute information to train a deep
model. Extensive experiments are performed on composite (E-PRIP) and
semi-forensic (IIIT-D semi-forensic) datasets. The results show the
superiority of our method compared to state-of-the-art sketch-photo
recognition models.
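The two training signals described above can be pictured as a contrastive verification term plus an attribute-classification cross-entropy term. This is a hedged illustration, not the paper's exact formulation: the margin, the weight `lam`, and all names are assumptions.

```python
import numpy as np

def joint_loss(emb_a, emb_b, same_person, attr_logits, attr_label,
               margin=1.0, lam=0.5):
    """Illustrative joint objective: a contrastive verification loss that
    pulls embeddings of the same person together (and pushes embeddings of
    different people at least `margin` apart), plus a softmax cross-entropy
    loss for facial-attribute identification, weighted by lam."""
    d = np.linalg.norm(emb_a - emb_b)
    verification = d ** 2 if same_person else max(margin - d, 0.0) ** 2
    z = attr_logits - attr_logits.max()          # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    attribute = -log_probs[attr_label]           # cross-entropy on attributes
    return verification + lam * attribute
```

The verification term shapes intra-personal variation, the attribute term shapes inter-personal variation, and the shared embedding is trained on both at once.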
Recovering Faces from Portraits with Auxiliary Facial Attributes
Recovering a photorealistic face from an artistic portrait is a challenging
task since crucial facial details are often distorted or completely lost in
artistic compositions. To handle this loss, we propose an Attribute-guided Face
Recovery from Portraits (AFRP) that utilizes a Face Recovery Network (FRN) and
a Discriminative Network (DN). FRN consists of an autoencoder with residual
block-embedded skip-connections and incorporates facial attribute vectors into
the feature maps of input portraits at the bottleneck of the autoencoder. DN
has multiple convolutional and fully-connected layers, and its role is to
enforce FRN to generate authentic face images with corresponding facial
attributes dictated by the input attribute vectors. Leveraging spatial
transformer networks, FRN automatically compensates for misalignments of
portraits and generates aligned face images. For the preservation of
identities, we constrain the recovered and ground-truth faces to share similar
visual features. Specifically, DN determines whether the recovered image looks
like a real face and checks whether the facial attributes extracted from the
recovered image are consistent with the given attributes. Our method can recover photorealistic
identity-preserving faces with desired attributes from unseen stylized
portraits, artistic paintings, and hand-drawn sketches. On large-scale
synthesized and sketch datasets, we demonstrate that our face recovery method
achieves state-of-the-art results.
Comment: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
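Incorporating the attribute vector at the autoencoder bottleneck can be pictured as tiling the vector over the spatial grid and concatenating it channel-wise to the feature map. The sketch below is a minimal illustration under assumed shapes; the function name and layout are not the paper's code.

```python
import numpy as np

def inject_attributes(bottleneck, attrs):
    """Tile an attribute vector over the bottleneck's spatial grid and
    concatenate it along the channel axis.
    bottleneck: (C, H, W) feature map; attrs: (A,) attribute vector;
    returns a (C + A, H, W) feature map."""
    c, h, w = bottleneck.shape
    # Repeat each attribute value at every spatial location.
    tiled = np.broadcast_to(attrs[:, None, None], (attrs.shape[0], h, w))
    return np.concatenate([bottleneck, tiled], axis=0)
```

Every decoder layer downstream then sees the desired attributes at every spatial position, which is what lets the attribute vector steer the recovered face.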