26,734 research outputs found
An improved Siamese network for face sketch recognition
Face sketch recognition identifies a face photo in a large dataset from a sketch query. Traditional methods typically reduce the modality gap between face photos and sketches and achieve good recognition rates by matching against a pseudo-image synthesized from the corresponding face photo. However, these methods do not achieve high recognition rates on all face sketch datasets, because the extracted features cannot fully eliminate the effect of the modality difference between the images. Feature representations from deep convolutional neural networks are a feasible approach to identification with wider applicability than other methods, and they can be adapted to extract features that suppress the difference between face photos and sketches. Networks that learn optimal local features achieve high recognition rates even when the input image is geometrically distorted. However, overfitting leads to unsatisfactory performance of deep learning methods on face sketch recognition tasks, and sketch images are too simple to yield effective features on their own. This paper aims to increase the matching rate using a Siamese convolutional network architecture. The framework extracts useful features from each image pair to reduce the modality gap, and data augmentation is used to avoid overfitting. We explore the performance of three loss functions and compare the similarity between each image pair. Experimental results show that our framework is effective on a composite sketch dataset and reduces the influence of overfitting through data augmentation and modifications to the network structure.
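The pair-matching objective behind a Siamese network of this kind can be sketched with a contrastive loss. The abstract does not specify which loss functions the paper compares, so the function below, its `margin` value, and the toy batch are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same_identity, margin=1.0):
    """Contrastive loss over a batch of embedding pairs (illustrative).

    Pulls embeddings of matching photo/sketch pairs together and pushes
    non-matching pairs at least `margin` apart.
    """
    d = np.linalg.norm(emb_a - emb_b, axis=1)                  # distance per pair
    pos = same_identity * d ** 2                               # matching pairs
    neg = (1 - same_identity) * np.maximum(margin - d, 0.0) ** 2
    return np.mean(pos + neg)

# Toy batch: one matching pair (close) and one non-matching pair (far).
a = np.array([[0.0, 0.0], [0.0, 0.0]])
b = np.array([[0.1, 0.0], [2.0, 0.0]])
y = np.array([1, 0])  # 1 = same identity, 0 = different
loss = contrastive_loss(a, b, y)
```

Minimizing this over photo/sketch embedding pairs is what forces the two branches into a modality-invariant feature space.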
Deep Sketch-Photo Face Recognition Assisted by Facial Attributes
In this paper, we present a deep coupled framework to address the problem of
matching a sketch image against a gallery of mugshots. Face sketches carry the
essential information about the spatial topology and geometric details of
faces while missing important facial attributes such as ethnicity, hair,
eye, and skin color. We propose a coupled deep neural network architecture
which utilizes facial attributes in order to improve sketch-photo
recognition performance. The proposed Attribute-Assisted Deep Convolutional
Neural Network (AADCNN) method exploits the facial attributes and leverages the
loss functions from the facial attribute identification and face verification
tasks in order to learn rich discriminative features in a common embedding
subspace. The facial attribute identification task increases inter-personal
variations by pushing apart the embedded features extracted from individuals
with different facial attributes, while the verification task reduces
intra-personal variations by pulling together all the features that belong
to one person. The learned discriminative features generalize well to new
identities not seen in the training data. Compared to conventional
sketch-photo recognition methods, the proposed architecture makes full use
of the sketch and the complementary facial attribute information to train a
deep model. Extensive experiments are performed on composite (E-PRIP) and
semi-forensic (IIIT-D semi-forensic) datasets. The results show the
superiority of our method compared to state-of-the-art sketch-photo
recognition models.
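The two-task objective described above (attribute identification pushing identities apart, verification pulling same-identity features together) can be sketched as a weighted sum of a classification term and a verification term. The function names, the `margin`, and the weighting `lam` below are illustrative assumptions; the paper's actual losses and weighting may differ:

```python
import numpy as np

def softmax_xent(logits, label):
    """Cross-entropy for one sample: the attribute-identification term."""
    z = logits - logits.max()                  # stabilise before exponentiating
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def joint_loss(emb_a, emb_b, same_id, attr_logits, attr_label,
               margin=1.0, lam=0.5):
    """Verification term (pull same-identity pairs together, push different
    ones at least `margin` apart) plus a weighted attribute term."""
    d = np.linalg.norm(emb_a - emb_b)
    verif = same_id * d ** 2 + (1 - same_id) * max(margin - d, 0.0) ** 2
    return verif + lam * softmax_xent(attr_logits, attr_label)

# Toy example: a matching sketch/photo pair at distance 1, with a
# two-way attribute head that is maximally uncertain.
loss = joint_loss(np.array([0.0, 0.0]), np.array([1.0, 0.0]), 1,
                  np.array([0.0, 0.0]), 0)
```

Training both terms over a shared embedding is what makes the learned subspace both identity-discriminative and attribute-aware.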
High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks
Synthesizing face sketches from real photos and its inverse have many
applications. However, photo/sketch synthesis remains a challenging problem due
to the fact that photo and sketch have different characteristics. In this work,
we consider this task as an image-to-image translation problem and explore the
recently popular generative models (GANs) to generate high-quality realistic
photos from sketches and sketches from photos. Recent GAN-based methods have
shown promising results on image-to-image translation problems and
photo-to-sketch synthesis in particular, however, they are known to have
limited abilities in generating high-resolution realistic images. To this end,
we propose a novel synthesis framework, called Photo-Sketch Synthesis using
Multi-Adversarial Networks (PS2-MAN), that iteratively generates low-resolution
to high-resolution images in an adversarial way. The hidden layers of the
generator are supervised to first generate lower-resolution images, followed by
implicit refinement in the network to generate higher-resolution images.
Furthermore, since photo-sketch synthesis is a coupled/paired translation
problem, we leverage the pair information using CycleGAN framework. Both Image
Quality Assessment (IQA) and Photo-Sketch Matching experiments are conducted to
demonstrate the superior performance of our framework in comparison to existing
state-of-the-art solutions. Code available at:
https://github.com/lidan1/PhotoSketchMAN
Comment: Accepted by 2018 13th IEEE International Conference on Automatic Face
& Gesture Recognition (FG 2018) (Oral)
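The coarse-to-fine supervision of the generator's hidden layers can be sketched as a multi-scale reconstruction loss: each intermediate output is compared against the target downsampled to the matching resolution. This shows only the reconstruction side; the per-scale adversarial discriminators and the CycleGAN consistency terms are omitted, and the pooling and L1 choices here are illustrative assumptions:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a square image down by an integer factor."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def multiscale_l1(outputs, target):
    """Sum of per-scale L1 losses: each intermediate generator output is
    compared against the target at the matching resolution."""
    total = 0.0
    for out in outputs:
        factor = target.shape[0] // out.shape[0]
        total += np.abs(out - downsample(target, factor)).mean()
    return total

# Toy 8x8 target with generator outputs at 2x2, 4x4 and 8x8: the two
# coarse outputs are wrong everywhere, the full-resolution one is perfect.
target = np.ones((8, 8))
outputs = [np.zeros((2, 2)), np.zeros((4, 4)), np.ones((8, 8))]
loss = multiscale_l1(outputs, target)
```

Supervising the coarse scales first is what lets the final layers focus on refining high-frequency detail instead of learning the full mapping at once.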
Face Identification and Clustering
In this thesis, we study two problems based on clustering algorithms. In the
first problem, we study the role of visual attributes in an agglomerative
clustering algorithm, using them to whittle down the search space when the
number of classes is high and thereby improve clustering performance. We
observe that overall clustering performance increases as more attributes are
added. In the second problem, we study the role of clustering in aggregating
templates in a 1:N open-set protocol with multi-shot video as the probe. We
observe that increasing the number of clusters improves performance over the
baseline up to a peak, after which further increasing the number of clusters
degrades performance. Experiments are conducted on the recently introduced
unconstrained IARPA Janus IJB-A, CS2, and CS3 face recognition datasets.
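The template-aggregation step can be sketched as clustering per-frame face embeddings and averaging each cluster into a template. The thesis does not specify its linkage or distance; the naive centroid-linkage routine, the function names, and the toy embeddings below are illustrative assumptions:

```python
import numpy as np

def agglomerative(points, n_clusters):
    """Naive centroid-linkage agglomerative clustering (illustrative).

    Starts with one cluster per point and repeatedly merges the pair of
    clusters whose centroids are closest, until n_clusters remain.
    """
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                ca = points[clusters[a]].mean(axis=0)
                cb = points[clusters[b]].mean(axis=0)
                dist = np.linalg.norm(ca - cb)
                if best is None or dist < best[0]:
                    best = (dist, a, b)
        _, a, b = best
        clusters[a] += clusters[b]     # merge b into a
        del clusters[b]
    return clusters

# Toy face embeddings: two tight groups in 2-D, e.g. two distinct poses.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
groups = sorted(sorted(g) for g in agglomerative(feats, 2))
# Each cluster is averaged into one template for 1:N open-set search.
templates = [feats[g].mean(axis=0) for g in groups]
```

Varying `n_clusters` is the knob the second study sweeps: too few clusters blur distinct conditions into one template, too many fragment the probe into noisy singletons.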