Matching Image Sets via Adaptive Multi Convex Hull
Traditional nearest-points methods use all the samples in an image set to
construct a single convex or affine hull model for classification. However,
strong artificial features and noisy data may be generated from combinations of
training samples when significant intra-class variations and/or noise occur in
the image set. Existing multi-model approaches extract local models by
clustering each image set individually only once, with fixed clusters used for
matching against various image sets. This may not be optimal for discrimination,
as undesirable environmental conditions (e.g., illumination and pose variations)
may result in the two closest clusters representing different characteristics
of an object (e.g., a frontal face being compared to a non-frontal face). To
address this problem, we propose a novel approach that enhances nearest-points-based
methods by integrating affine/convex hull classification with an adapted
multi-model approach. We first extract multiple local convex hulls from a query
image set via maximum margin clustering to diminish the artificial variations
and constrain the noise in the local convex hulls. We then propose adaptive
reference clustering (ARC) to constrain the clustering of each gallery image
set by forcing its clusters to resemble the clusters in the query image set. By
applying ARC, noisy clusters in the query set can be discarded. Experiments on
the Honda, MoBo and ETH-80 datasets show that the proposed method outperforms
single-model approaches and other recent techniques such as Sparse Approximated
Nearest Points, Mutual Subspace Method and Manifold Discriminant Analysis.
Comment: IEEE Winter Conference on Applications of Computer Vision (WACV),
201
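For context, the single-model baseline that this abstract argues against can be written down compactly. The sketch below (a NumPy illustration, not the paper's code; the function name and array shapes are assumptions) computes the nearest-points distance between the affine hulls of two image sets as a linear least-squares problem. The convex-hull variant additionally constrains the combination coefficients to be non-negative and sum to one, which requires a quadratic-programming solver instead of plain least squares.

```python
import numpy as np

def affine_hull_distance(X, Y):
    """Nearest-points distance between the affine hulls of two image sets.

    X: (d, m) array, columns are vectorised images of the first set.
    Y: (d, n) array, columns are vectorised images of the second set.
    """
    mu_x, mu_y = X.mean(axis=1), Y.mean(axis=1)
    # Orthonormal bases spanning the directions of each (centred) hull.
    Ux, _ = np.linalg.qr(X - mu_x[:, None])
    Uy, _ = np.linalg.qr(Y - mu_y[:, None])
    # Nearest points: minimise ||(mu_x + Ux v) - (mu_y + Uy w)|| over v, w,
    # i.e. a least-squares problem in the stacked coefficients [v; w].
    A = np.hstack([Ux, -Uy])
    coeffs, *_ = np.linalg.lstsq(A, mu_y - mu_x, rcond=None)
    residual = (mu_x - mu_y) + A @ coeffs
    return float(np.linalg.norm(residual))
```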
Multi-View Face Recognition From Single RGBD Models of the Faces
This work takes important steps towards solving the following problem of current interest: assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created, and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
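Step (3) above, integrating evidence across viewpoints, can be illustrated with a short weighted-voting sketch. The abstract does not specify the weighting rule, so the interface below is a hypothetical Python illustration rather than the paper's algorithm: each viewpoint contributes its similarity scores scaled by a trust weight, and the identity with the highest total wins.

```python
import numpy as np

def weighted_vote(view_scores):
    """Fuse per-view recognition scores into one identity decision.

    view_scores: list of (scores, weight) pairs, one pair per captured
    viewpoint. `scores` is a length-K array of similarity scores over the
    K gallery identities; `weight` reflects how much that viewpoint is
    trusted (a hypothetical choice: e.g. fit to its view-partitioned
    subspace).
    """
    first_scores, _ = view_scores[0]
    tally = np.zeros(len(first_scores))
    for scores, weight in view_scores:
        tally += weight * np.asarray(scores, dtype=float)
    return int(np.argmax(tally))  # index of the winning gallery identity
```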
End-to-end 3D face reconstruction with deep neural networks
Monocular 3D facial shape reconstruction from a single 2D facial image has
been an active research area due to its wide applications. Inspired by the
success of deep neural networks (DNN), we propose a DNN-based approach for
End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different
from recent works that reconstruct and refine the 3D face iteratively using
both an RGB image and an initial 3D facial shape rendering, our DNN model is
end-to-end, so the complicated 3D rendering process can be avoided. Moreover,
we integrate two components into the DNN architecture, namely a multi-task
loss function and a fusion convolutional neural network (CNN), to improve
facial expression reconstruction. With the multi-task loss function, 3D face
reconstruction is divided into neutral 3D facial shape reconstruction and
expressive 3D facial shape reconstruction. The neutral 3D facial shape is
class-specific, so higher-layer features are useful for it; in comparison, the
expressive 3D facial shape favors lower or intermediate layer features. With
the fusion-CNN, features from different intermediate layers are fused and
transformed to predict the expressive 3D facial shape. Through extensive
experiments, we demonstrate the superiority of our end-to-end framework in
improving the accuracy of 3D face reconstruction.
Comment: Accepted to CVPR1
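The multi-task split described above can be made concrete with a short sketch. The PyTorch snippet below is an assumed illustration (the branch outputs, targets and weighting factor `lam` are not from the paper): one loss term for the neutral 3D shape and one for the expressive shape, trained jointly.

```python
import torch
import torch.nn.functional as F

def multi_task_loss(pred_neutral, pred_expressive,
                    gt_neutral, gt_expressive, lam=1.0):
    """Combined reconstruction loss: a neutral-shape term plus an
    expressive-shape term, weighted by the (assumed) factor `lam`."""
    loss_neutral = F.mse_loss(pred_neutral, gt_neutral)
    loss_expressive = F.mse_loss(pred_expressive, gt_expressive)
    return loss_neutral + lam * loss_expressive
```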