Maximized Posteriori Attributes Selection from Facial Salient Landmarks for Face Recognition
This paper presents a robust and dynamic face recognition technique based on
the extraction and matching of devised probabilistic graphs drawn on SIFT
features related to independent face areas. The face matching strategy is based
on matching individual salient facial graphs, each characterized by SIFT features
connected to facial landmarks such as the eyes and the mouth. In order to
reduce the face matching errors, the Dempster-Shafer decision theory is applied
to fuse the individual matching scores obtained from each pair of salient
facial features. The proposed algorithm is evaluated with the ORL and the IITK
face databases. The experimental results demonstrate the effectiveness and
potential of the proposed face recognition technique, also in the case of partially
occluded faces.
Comment: 8 pages, 2 figures
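The fusion step above uses Dempster-Shafer theory. As a rough illustration, here is a minimal sketch of Dempster's rule of combination over a two-hypothesis frame {match, non-match}; the mass assignments, and the reduction of each region's matching score to such masses, are illustrative assumptions, not the paper's actual formulation:

```python
def combine_dempster(m1, m2):
    """Dempster's rule of combination over the frame {match, nonmatch}.

    m1, m2: dicts with masses for 'match', 'nonmatch', and 'theta'
    (the whole frame, i.e. ignorance). Masses in each dict sum to 1.
    """
    # Conflict: total mass assigned to contradictory singletons.
    k = m1['match'] * m2['nonmatch'] + m1['nonmatch'] * m2['match']
    norm = 1.0 - k  # normalization factor discards the conflicting mass
    combined = {
        'match': (m1['match'] * m2['match']
                  + m1['match'] * m2['theta']
                  + m1['theta'] * m2['match']) / norm,
        'nonmatch': (m1['nonmatch'] * m2['nonmatch']
                     + m1['nonmatch'] * m2['theta']
                     + m1['theta'] * m2['nonmatch']) / norm,
    }
    combined['theta'] = 1.0 - combined['match'] - combined['nonmatch']
    return combined

# Hypothetical masses for two salient regions (eye and mouth); fusing two
# weak "match" beliefs yields a stronger combined belief in a match.
eye = {'match': 0.6, 'nonmatch': 0.1, 'theta': 0.3}
mouth = {'match': 0.5, 'nonmatch': 0.2, 'theta': 0.3}
fused = combine_dempster(eye, mouth)
```

Note how the combined 'match' mass (about 0.76) exceeds either input's, which is the point of fusing independent evidence sources.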
Project SEMACODE : a scale-invariant object recognition system for content-based queries in image databases
For the efficient management of large image databases, the automated characterization of images and the usage of that characterization for searching and ordering tasks is highly desirable. The purpose of the project SEMACODE is to combine the still unsolved problem of content-oriented characterization of images with scale-invariant object recognition and model-based compression methods. To achieve this goal, existing techniques as well as new concepts related to pattern matching, image encoding, and image compression are examined. The resulting methods are integrated into a common framework with the aid of a content-oriented conception. The required operations are developed for a concrete application: an image database at the library of the University of Frankfurt/Main (StUB; about 60,000 images). The search and query interfaces are defined in close cooperation with the StUB project "Digitized Colonial Picture Library". This report describes the fundamentals and first results of the image encoding and object recognition algorithms developed within the scope of the project.
View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation
The primate brain contains a hierarchy of visual areas, dubbed the ventral
stream, which rapidly computes object representations that are both specific
for object identity and relatively robust against identity-preserving
transformations like depth-rotations. Current computational models of object
recognition, including recent deep learning networks, generate these properties
through a hierarchy of alternating selectivity-increasing filtering and
tolerance-increasing pooling operations, similar to simple-complex cells
operations. While simulations of these models recapitulate the ventral stream's
progression from early view-specific to late view-tolerant representations,
they fail to generate the most salient property of the intermediate
representation for faces found in the brain: mirror-symmetric tuning of the
neural population to head orientation. Here we prove that a class of
hierarchical architectures and a broad set of biologically plausible learning
rules can provide approximate invariance at the top level of the network. While
most of the learning rules do not yield mirror-symmetry in the mid-level
representations, we characterize a specific biologically plausible Hebb-type
learning rule that is guaranteed to generate mirror-symmetric tuning to faces
at intermediate levels of the architecture.
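As a generic illustration of a normalized Hebb-type rule, the sketch below runs Oja's rule on synthetic data; this is only one member of the broad class of learning rules such work considers, and the paper's specific mirror-symmetry-inducing rule is not reproduced here. The covariance matrix is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D inputs with a dominant direction of variance.
cov = np.array([[3.0, 1.0], [1.0, 2.0]])
data = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

w = rng.standard_normal(2)  # random initial synaptic weight vector
eta = 0.01                  # learning rate
for x in data:
    y = w @ x
    w += eta * y * (x - y * w)  # Hebbian growth with built-in normalization

# Oja's rule drives w (up to sign) toward the leading eigenvector of the
# input covariance, with ||w|| approaching 1.
leading = np.linalg.eigh(cov)[1][:, -1]
alignment = abs(w @ leading) / np.linalg.norm(w)
```

The `(x - y * w)` correction is what keeps the plain Hebbian term `y * x` from growing without bound, so no explicit weight renormalization step is needed.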
Learning midlevel image features for natural scene and texture classification
This paper deals with coding of natural scenes in order to extract semantic information. We present a new scheme to project natural scenes onto a basis in which each dimension encodes statistically independent information. Basis extraction is performed by independent component analysis (ICA) applied to image patches culled from natural scenes. The study of the resulting coding units (coding filters) extracted from well-chosen categories of images shows that they adapt and respond selectively to discriminant features in natural scenes. Given this basis, we define global and local image signatures relying on the maximal activity of filters on the input image. Locally, the construction of the signature takes into account the spatial distribution of the maximal responses within the image. We propose a criterion to reduce the size of the space of representation for faster computation. The proposed approach is tested in the context of texture classification (111 classes), as well as natural scene classification (11 categories, 2037 images). Using a common protocol, the other commonly used descriptors achieve at most 47.7% accuracy on average, while our method obtains performance of up to 63.8%. We show that this advantage does not depend on the size of the signature, and demonstrate the efficiency of the proposed criterion to select ICA filters and reduce the dimension.
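The basic pipeline of learning ICA filters from patches and taking maximal filter activations as a global signature can be sketched as follows. This is a simplified reading under stated assumptions: the "patches" are synthetic Laplacian noise rather than patches from real natural scenes, scikit-learn's `FastICA` stands in for whatever ICA implementation the authors used, and `global_signature` is a hypothetical helper name:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Stand-in for 8x8 patches culled from natural scenes. Laplacian noise is
# used only so ICA has non-Gaussian structure to work with; the paper
# extracts patches from real images.
patches = rng.laplace(size=(500, 8 * 8))
patches -= patches.mean(axis=0)  # center the data before ICA

# Learn statistically independent coding filters from the patches.
ica = FastICA(n_components=16, random_state=0, max_iter=1000)
ica.fit(patches)
filters = ica.components_  # shape (16, 64): one coding filter per row

def global_signature(image_patches, filters):
    """Global signature: the maximal absolute activation of each coding
    filter over all patches drawn from one image."""
    responses = image_patches @ filters.T  # (n_patches, n_filters)
    return np.abs(responses).max(axis=0)

sig = global_signature(patches[:100], filters)  # one value per filter
```

A local signature, as described in the abstract, would additionally record *where* in the image each maximal response occurred, rather than only its magnitude.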
A Survey on Ear Biometrics
Recognizing people by their ears has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This paper provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature, revealing the current state of the art not only for those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available to researchers.
Multi-View Face Recognition From Single RGBD Models of the Faces
This work takes important steps towards solving the following problem of current interest: Assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work relate to: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which include a total of 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
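The weighted-voting integration in step (3) can be sketched generically as follows; the labels, weights, and the idea that each view contributes a confidence-weighted vote are illustrative assumptions, and the paper's actual weighting scheme is not reproduced here:

```python
from collections import defaultdict

def weighted_vote(view_predictions):
    """Integrate identity evidence from multiple viewpoints by weighted
    voting: sum the weights cast for each candidate identity and return
    the identity with the largest total.

    view_predictions: list of (identity_label, weight) pairs, one pair per
    captured image; the weight might reflect that view's match confidence.
    """
    tally = defaultdict(float)
    for label, weight in view_predictions:
        tally[label] += weight
    return max(tally, key=tally.get)

# Three lower-confidence views agree on 'alice'; one high-confidence view
# says 'bob'. Accumulated weight (1.6 vs 0.8) decides in favor of 'alice'.
views = [('alice', 0.9), ('bob', 0.8), ('alice', 0.4), ('alice', 0.3)]
winner = weighted_vote(views)
```

The example shows why voting across viewpoints helps: several weak agreements can outweigh one confident but isolated disagreement.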
A Comparison Study of Saliency Models for Fixation Prediction on Infants and Adults
Various saliency models have been developed over the years. The performance of saliency models is typically evaluated on databases of experimentally recorded adult eye fixations. Although studies on infant gaze patterns have attracted much attention recently, saliency-based models have not been widely applied to the prediction of infant gaze patterns. In this study, we conduct a comprehensive comparison of eight state-of-the-art saliency models on predictions of experimentally captured fixations from infants and adults. Seven evaluation metrics are used to evaluate and compare the performance of the saliency models. The results show that the saliency models consistently predict adult fixations better than infant fixations in terms of overlap, center fitting, intersection, information loss of approximation, and spatial distance between the distributions of the saliency map and the fixation map. In the performance ranking of saliency and baseline models, the GBVS and Itti models are among the top three contenders; both infants and adults show a bias toward the centers of images; and all saliency models, as well as the center baseline model, outperform the chance baseline model.
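One standard fixation-prediction metric used in comparisons of this kind is normalized scanpath saliency (NSS); whether it is among this study's seven metrics is not specified above, and the map and fixation points below are synthetic toy data:

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: z-score the saliency map, then average
    it at the fixated pixels; higher values mean better fixation prediction.

    saliency_map: 2-D array. fixations: iterable of (row, col) coordinates.
    """
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([s[r, c] for r, c in fixations]))

# Toy example: a center-peaked Gaussian saliency map, so centrally located
# fixations should score higher than fixations near the corners.
yy, xx = np.mgrid[0:32, 0:32]
sal = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 50.0)
center_score = nss(sal, [(16, 16), (15, 17)])
corner_score = nss(sal, [(2, 2), (30, 30)])
```

Because the map is z-scored, NSS is comparable across saliency models with different output ranges, which is why such normalization appears throughout saliency benchmarking.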