Pose Invariant 3D Face Authentication based on Gaussian Fields Approach
This thesis presents a novel illumination-invariant approach to recognizing the identity of an individual from a 3D facial scan in any pose, by matching it against a set of frontal models stored in the gallery. In view of today's security concerns, 3D face reconstruction and recognition has gained a significant position in computer vision research. The non-intrusive nature of facial data acquisition makes face recognition one of the most popular approaches to biometrics-based identity recognition. Depth information from a 3D face can be used to solve the illumination and pose-variation problems associated with face recognition.
The proposed method uses 3D geometric (point-set) face representations for recognizing faces. Representing human faces by 3D point sets instead of 2D texture makes the method robust to changes in illumination and pose. The method first automatically registers the facial point set of the probe with the gallery models using a criterion based on Gaussian force fields. The registration method defines a simple energy function that is always differentiable and convex in a large neighborhood of the alignment parameters, allowing powerful standard optimization techniques to be used. The new method removes the need for close initialization and converges in far fewer iterations than the Iterative Closest Point algorithm. Using the Fast Gauss Transform to accelerate the Gaussian-sum evaluations considerably reduces the computational complexity of the registration algorithm. Recognition is then performed using the robust similarity score generated by registering the 3D point sets of faces. Our approach has been tested on a large database of 85 individuals with 521 scans at different poses, where the gallery and probe images were acquired at significantly different times. The results show the potential of our approach toward a fully pose- and illumination-invariant system. Our method can serve as a biometric component in applications such as mug-shot matching, user verification and access control, and enhanced human-computer interaction.
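The registration criterion above can be sketched as a Gaussian-sum energy between two point sets. A minimal illustration, assuming a translation-only transform and synthetic 2D points (the actual system registers 3D poses and accelerates the sums with the Fast Gauss Transform):

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_field_energy(params, probe, gallery, sigma=1.0):
    """Gaussian-fields alignment criterion: sum of Gaussian affinities
    between every transformed probe point and every gallery point.
    `params` = (tx, ty): a translation-only transform, for illustration."""
    shifted = probe + params  # apply candidate translation
    # pairwise squared distances, shape (len(probe), len(gallery))
    d2 = ((shifted[:, None, :] - gallery[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma**2).sum()

rng = np.random.default_rng(0)
gallery = rng.normal(size=(50, 2))
true_shift = np.array([0.7, -0.4])
probe = gallery - true_shift            # probe is a shifted copy

# Maximise the energy (minimise its negative); the smooth, everywhere-
# differentiable criterion lets a standard quasi-Newton method converge
# without close initialisation.
res = minimize(lambda p: -gaussian_field_energy(p, probe, gallery),
               x0=np.zeros(2), method="BFGS")
print(res.x)  # close to true_shift
```

Because the energy is smooth everywhere, a standard optimizer recovers the alignment from a generic starting point, which is the property the abstract contrasts with ICP.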
Camera Independent Face Recognition Algorithm In Visual Surveillance
Face recognition in visual surveillance can help reduce crime rates in public areas, because a suspect's identity can be identified automatically in real time from the face images captured by surveillance cameras as circumstantial evidence. Several image preprocessing techniques, classifiers, and approaches have been proposed and tested to mitigate the effects of illumination variation, pose variation, and intensity-quality differences caused by hardware differences in such systems. The face recognition system should integrate seamlessly into the existing infrastructure. In our experiments, Histogram Equalization (HE) preprocessed face images scaled to 30×30 proved well suited for preprocessing surveillance images. The combination of Linear Discriminant Analysis (LDA) and HE-preprocessed images achieved an average recognition rate of 81.48% for the single-camera training set. The flandmark facial-landmark detector is used to locate the eyes, and new face images are obtained by cropping the HE-preprocessed images. The combination of flandmark images at 20×30 with a multi-class Support Vector Machine (SVM) forms a multimodal classification system together with the LDA-and-HE combination. Score-level fusion is applied to the normalized output scores of both classifiers, with an appropriate weight w assigned to each score. Finally, the watch-list principle lists several possible subjects ranked by score, rather than deciding on a single subject based on the maximum score, which further improves the performance of the proposed system. The experimental results demonstrate the performance of the proposed algorithm on the Surveillance Camera Face database (SCface), with a 97.45% average recognition rate.
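The fusion step can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the weight value, the score values, and min-max normalisation as the normalisation scheme are all assumptions.

```python
import numpy as np

def fuse_and_rank(lda_scores, svm_scores, w=0.6, top_k=3):
    """Weighted score-level fusion of two classifiers followed by a
    watch-list ranking.  Min-max normalisation puts both score vectors
    on [0, 1] before mixing; `w` weights the LDA branch.  The weight
    and top_k values here are illustrative."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min())
    fused = w * minmax(lda_scores) + (1 - w) * minmax(svm_scores)
    # Watch list: top-k subject indices by fused score, best first,
    # instead of committing to the single maximum-score subject.
    return list(np.argsort(fused)[::-1][:top_k])

lda = [0.2, 0.9, 0.4, 0.1]   # per-subject match scores (LDA branch)
svm = [0.3, 0.7, 0.8, 0.2]   # per-subject match scores (SVM branch)
print(fuse_and_rank(lda, svm))  # → [1, 2, 0]
```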
Reference face graph for face recognition
Face recognition has been studied extensively; however, real-world face recognition remains a challenging task. The demand for unconstrained practical face recognition is rising with the explosion of online multimedia, such as social networks and video surveillance footage, where face analysis is of significant importance. In this paper, we approach face recognition in the context of graph theory. We recognize an unknown face using an external reference face graph (RFG): an RFG is generated, and recognition of a given face is achieved by comparing it to the faces in the constructed RFG. Centrality measures are used to identify distinctive faces in the reference face graph. The proposed RFG-based face recognition algorithm is robust to changes in pose and is also alignment-free. RFG recognition is used in conjunction with DCT locality-sensitive hashing for efficient retrieval to ensure scalability. Experiments on several publicly available databases show that the proposed approach outperforms state-of-the-art methods without any preprocessing requirements such as face alignment. Owing to the richness of the reference-set construction, the proposed method can also handle illumination and expression variation.
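As a rough illustration of the centrality idea, one can build a similarity graph over reference descriptors and rank nodes by a centrality measure. The descriptors, cosine similarity as the edge weight, and weighted degree as the centrality measure are assumptions for this sketch (the paper's exact measures may differ):

```python
import numpy as np

def centrality_ranking(features):
    """Rank 'distinctive' reference faces by graph centrality.
    Edges of the reference graph are cosine similarities between face
    descriptors; weighted degree centrality stands in for the paper's
    centrality measures."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, 0.0)           # no self-loops
    centrality = sim.sum(axis=1)         # weighted degree of each node
    return np.argsort(centrality)[::-1]  # most central reference first

rng = np.random.default_rng(1)
refs = rng.normal(size=(6, 16))          # 6 synthetic face descriptors
order = centrality_ranking(refs)
print(order)                             # permutation of 0..5
```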
Face Identification and Clustering
In this thesis, we study two problems based on clustering algorithms. In the first problem, we study the role of visual attributes, using an agglomerative clustering algorithm to whittle down the search space when the number of classes is high and thereby improve clustering performance. We observe that as we add more attributes, overall clustering performance increases. In the second problem, we study the role of clustering in aggregating templates in a 1:N open-set protocol using multi-shot video as a probe. We observe that increasing the number of clusters improves performance over the baseline up to a peak, after which further increases cause performance to degrade. Experiments are conducted on the recently introduced unconstrained IARPA Janus IJB-A, CS2, and CS3 face recognition datasets.
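The first study's pipeline (face descriptors plus appended attributes, grouped agglomeratively) can be sketched as follows. The synthetic data, the treatment of attributes as extra feature dimensions, and the average-linkage criterion are all illustrative assumptions:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Sketch: agglomerative clustering of face descriptors with a visual
# attribute appended as an extra feature dimension.  Two well-separated
# synthetic classes stand in for real identities.
rng = np.random.default_rng(2)
faces = np.vstack([rng.normal(0, 0.1, (5, 8)),    # class A descriptors
                   rng.normal(3, 0.1, (5, 8))])   # class B descriptors
attrs = np.array([[0]] * 5 + [[1]] * 5)           # one binary attribute
feats = np.hstack([faces, attrs])                 # append the attribute

Z = linkage(feats, method="average")              # agglomerative merge tree
labels = fcluster(Z, t=2, criterion="maxclust")   # cut into 2 clusters
print(labels)                                     # two pure clusters
```

Appending informative attributes stretches the distance between classes, which is the mechanism by which extra attributes improve the clustering in the thesis.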
Automatic Face Recognition System Based on Local Fourier-Bessel Features
We present an automatic face verification system inspired by known properties of biological vision systems. In the proposed algorithm, the whole image is converted from the spatial to the polar frequency domain by a Fourier-Bessel Transform (FBT). Using the whole image is compared with considering only selected face regions (local analysis). The resulting representations are embedded in a dissimilarity space, where each image is represented by its distance to all the other images, and a Pseudo-Fisher discriminator is built. Verification tests on the FERET database showed that the local-based algorithm outperforms the global-FBT version. The local-FBT algorithm performed on par with state-of-the-art methods under different testing conditions, indicating that the proposed system is highly robust to expression, age, and illumination variations. We also evaluated the proposed system under strong occlusion and found that it remains highly robust for up to 50% face occlusion. Finally, we fully automated the verification system by implementing face and eye detection algorithms. Under this condition, the local approach was only slightly superior to the global approach.
Comment: 2005, Brazilian Symposium on Computer Graphics and Image Processing, 18 (SIBGRAPI)
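The dissimilarity-space embedding mentioned above is straightforward to sketch: each image becomes the vector of its distances to every image in the set. Euclidean distance on generic feature vectors stands in here for the paper's Fourier-Bessel representations:

```python
import numpy as np

def dissimilarity_embedding(images):
    """Embed each image in a dissimilarity space: its representation is
    the vector of distances to every image in the reference set.
    Euclidean distance on raw feature vectors is an assumption standing
    in for the Fourier-Bessel features of the paper."""
    d = np.linalg.norm(images[:, None, :] - images[None, :, :], axis=-1)
    return d   # row i = distances from image i to all images

rng = np.random.default_rng(3)
feats = rng.normal(size=(4, 10))          # 4 synthetic image features
emb = dissimilarity_embedding(feats)
print(emb.shape)                          # (4, 4)
```

A classifier (a Pseudo-Fisher discriminator in the paper) is then trained on the rows of this matrix rather than on the raw features.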
Efficient illumination independent appearance-based face tracking
One of the major challenges that visual tracking algorithms face today is coping with changes in the appearance of the target during tracking. Linear subspace models have been studied extensively and are possibly the most popular way of modelling target appearance. We introduce a linear subspace representation in which the appearance of a face is represented by the addition of two approximately independent linear subspaces modelling facial expressions and illumination respectively. This model is more compact than previous bilinear or multilinear approaches. The independence assumption notably simplifies system training: we only require two image sequences, one in which a single facial expression is subject to all possible illuminations, and one in which the face adopts all facial expressions under one particular illumination. This simple model enables us to train the system with no manual intervention. We also revisit the problem of efficiently fitting a linear subspace-based model to a target image and introduce an additive procedure for solving it. We prove that Matthews and Baker's Inverse Compositional Approach makes a smoothness assumption on the subspace basis that is equivalent to Hager and Belhumeur's, which worsens convergence. Our approach differs from Hager and Belhumeur's additive and Matthews and Baker's compositional approaches in that we make no smoothness assumptions on the subspace basis. The experiments show that the model accurately represents the appearance variations caused by illumination changes and facial expressions. We also verify experimentally that our fitting procedure is more accurate and has a better convergence rate than the related approaches, albeit at the expense of a slight increase in computational cost. Our approach can track a human face at standard video frame rates on an average personal computer.
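The additive two-subspace model can be written as I ≈ μ + B_e c_e + B_i c_i, and, because the model is additive, both coefficient sets can be recovered jointly by ordinary least squares on the concatenated basis. A synthetic sketch (the bases, dimensions, and target image are all made up):

```python
import numpy as np

# Sketch of the additive appearance model: an image is the mean face
# plus independent contributions from an expression subspace B_e and
# an illumination subspace B_i.
rng = np.random.default_rng(4)
n_pix = 100
mean_face = rng.normal(size=n_pix)
B_e = np.linalg.qr(rng.normal(size=(n_pix, 3)))[0]   # expression basis
B_i = np.linalg.qr(rng.normal(size=(n_pix, 2)))[0]   # illumination basis

c_e_true, c_i_true = rng.normal(size=3), rng.normal(size=2)
image = mean_face + B_e @ c_e_true + B_i @ c_i_true  # synthetic target

# Because the model is additive, fitting both coefficient sets is a
# single linear least-squares problem on the concatenated basis.
B = np.hstack([B_e, B_i])
coeffs, *_ = np.linalg.lstsq(B, image - mean_face, rcond=None)
c_e, c_i = coeffs[:3], coeffs[3:]
print(np.allclose(c_e, c_e_true), np.allclose(c_i, c_i_true))
```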
Creating invariance to "nuisance parameters" in face recognition
A major goal of face recognition is to identify faces when the pose of the probe differs from that of the stored face. Typical feature vectors vary more with pose than with identity, leading to very poor recognition performance. We propose a non-linear many-to-one mapping from a conventional feature space to a new space constructed so that each individual has a unique feature vector regardless of pose. Training data are used to implicitly parameterize the position of the multi-dimensional face manifold by pose. We introduce a coordinate transform that depends on the position on the manifold and is chosen so that different poses of the same face map to the same feature vector. The same approach is applied to illumination changes. We investigate different methods for creating features invariant to both pose and illumination, and provide a metric to assess the discriminability of the resulting features. Our technique increases the discriminability of faces under unknown pose and lighting compared with contemporary methods.
Empirical mode decomposition-based facial pose estimation inside video sequences
We describe a new pose-estimation algorithm that integrates the strengths of empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images in order to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, these negative effects are minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance, with robustness to noise corruption, illumination variation, and facial expressions.
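The mutual-information similarity used for pose matching can be computed from a joint intensity histogram. A minimal sketch, with the bin count and test images as assumptions (the paper applies the measure to selected IMF components rather than raw images):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information between two images' intensity distributions,
    estimated from their joint histogram.  `a` and `b` are intensity
    arrays of the same size; `bins` is an illustrative choice."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                     # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginal of a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of b
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(5)
img = rng.random((32, 32))
noisy = img + 0.05 * rng.normal(size=img.shape)   # same "pose", noisy
other = rng.random((32, 32))                      # unrelated image
# Related images share far more information than unrelated ones.
print(mutual_information(img, noisy) > mutual_information(img, other))
```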