
    Automatic landmark detection and face recognition for side-view face images

    Abstract: In real-life scenarios where pose variation extends to full side-view positions, face recognition becomes a challenging task. In this paper we propose an automatic side-view face recognition system designed for home-safety applications. Our goal is to recognize people as they pass through doors in order to determine their location in the house. We introduce a recognition method in which facial landmarks are detected automatically for registration and faces are then identified. We test our system on side-view face images from the CMU Multi-PIE database, achieving 95.95% accuracy for landmark detection and 89.04% accuracy for identification.
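
    As a rough illustration of the landmark-based registration step described above, the sketch below aligns a side-view face to a canonical template from a handful of detected landmark coordinates and then runs a toy nearest-neighbour identification. The landmark detector, template coordinates and matcher are placeholder assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: register a side-view face to a canonical template using
# detected landmark coordinates, then identify it. The template positions
# and the pixel-based matcher are illustrative assumptions only.
import cv2
import numpy as np

# Hypothetical canonical landmark positions (eye corner, nose tip, mouth
# corner, ear point) in a 128x128 registered face, in pixels.
TEMPLATE_LANDMARKS = np.float32([[40, 48], [64, 72], [52, 96], [110, 60]])

def register_side_view(image, landmarks):
    """Align a side-view face image given its detected landmarks.

    landmarks: np.float32 array of shape (4, 2), same ordering as the
    template points above.
    """
    # Estimate a similarity transform (rotation, scale, translation) that
    # maps the detected landmarks onto the canonical template.
    M, _ = cv2.estimateAffinePartial2D(landmarks, TEMPLATE_LANDMARKS)
    return cv2.warpAffine(image, M, (128, 128))

def identify(registered_face, gallery):
    """Toy nearest-neighbour identification on raw pixels.

    gallery: {name: 128x128 registered gallery face image}.
    """
    probe = registered_face.astype(np.float32).ravel()
    scores = {name: -np.linalg.norm(probe - g.astype(np.float32).ravel())
              for name, g in gallery.items()}
    return max(scores, key=scores.get)
```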

    Real-time acquisition of multi-view face images to support robust face recognition using a wireless camera network

    Recent terror attacks, intrusion attempts and criminal activities have necessitated a transition to modern biometric systems that are capable of identifying suspects in real time. But real-time biometrics is challenging given the computationally intensive nature of video processing and the potential occlusions and variations in pose of a subject in an unconstrained environment. The objective of this dissertation is to utilize the robustness and parallel computational abilities of a distributed camera network for fast and robust face recognition.

    To support face recognition using a camera network, a collaborative middleware service is designed that enables the rapid extraction of multi-view face images of multiple subjects moving through a region. This service exploits the epipolar geometry between cameras to speed up multi-view face detection. By quickly detecting face images within the network, labeling the pose of each face image, filtering them based on their suitability for recognition, and transmitting only the resultant images to a base station for recognition, both the required network bandwidth and the centralized processing overhead are reduced. The performance of the face image acquisition system is evaluated using an embedded camera network deployed in indoor environments that mimic walkways in public places. The relevance of the acquired images for recognition is evaluated using commercial software to match the acquired probe images. The experimental results demonstrate a significant improvement in face recognition system performance over traditional systems, as well as an increase in multi-view face detection rate over purely image-processing-based approaches.
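
    To make the epipolar-geometry idea mentioned above concrete, the following sketch shows how a face detection in one camera constrains the search region in a second camera to a narrow band around the corresponding epipolar line. It assumes the fundamental matrix F between the two views is known from offline calibration; it illustrates the geometric constraint only, not the dissertation's middleware.

```python
# Sketch: use the epipolar constraint to restrict where a face detected in
# camera A can appear in camera B. Assumes the fundamental matrix F mapping
# points in A to epipolar lines in B is known from calibration.
import numpy as np

def epipolar_line(F, point_a):
    """Return line coefficients (a, b, c) in image B for a pixel in image A."""
    x = np.array([point_a[0], point_a[1], 1.0])
    l = F @ x                          # a*u + b*v + c = 0 in image B
    return l / np.linalg.norm(l[:2])   # normalise so distances are in pixels

def search_band(F, face_center_a, image_b_shape, band_px=40):
    """Boolean mask of the strip around the epipolar line to run detection in."""
    h, w = image_b_shape[:2]
    a, b, c = epipolar_line(F, face_center_a)
    v, u = np.mgrid[0:h, 0:w]
    dist = np.abs(a * u + b * v + c)   # point-to-line distance in pixels
    return dist < band_px
```

    Restricting the detector to this band is what allows the network to raise multi-view detection rates without scanning whole frames in every camera.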

    Pose Invariant Face Recognition and Tracking for Human Identification

    Real-time tracking and recognition of people in complex environments has been a widely researched area in computer vision, as it has huge potential in efficient security automation and surveillance. We propose a real-time system for detection and recognition of individuals in a scene by detecting, recognizing and tracking faces. The system integrates a multi-view face detection algorithm, a multi-pose face recognition algorithm and an extended multi-pose Kalman face tracker. The multi-view face detection algorithm contains frontal-face and profile-face detectors which extract Haar-like features and detect faces at any pose using a cascade of boosted classifiers. The pose of the face is inherently determined by the face detection algorithm and is used in the multi-pose face recognition module, where the detected face is compared with the set of trained faces having the same pose range. The pose range of the trained faces is divided into bins into which the faces are sorted, and each bin is trained separately to have its own eigenspace. Faces are recognized by projecting them onto the eigenspace corresponding to the determined pose using the Weighted Modular Principal Component Analysis (WMPCA) technique, and are then tracked using the proposed multiple-face tracker. The tracker is implemented by extracting suitable face features, representing them with a variant of WMPCA, and tracking these features across the scene using a Kalman filter. The system is trained on a face database of twenty unrelated people using WMPCA, and classification is performed using a feature correlation metric. It has the advantage of recognizing and tracking an individual in a cluttered environment under varying pose.
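
    The pose-binned eigenspace idea above can be sketched as follows: each pose bin gets its own PCA subspace trained on faces in that pose range, and a probe is projected into the subspace matching its detected pose. Plain per-bin PCA stands in here for the weighted modular WMPCA used in the work; bin definitions and training data are assumed.

```python
# Sketch of pose-binned eigenface matching. Plain PCA is used as a simple
# stand-in for the Weighted Modular PCA (WMPCA) described in the abstract.
import numpy as np
from sklearn.decomposition import PCA

class PoseBinnedEigenspaces:
    def __init__(self, n_components=30):
        self.n_components = n_components
        self.models = {}   # pose bin -> (PCA model, projected gallery, labels)

    def fit(self, faces_by_bin):
        """faces_by_bin: {bin_id: (X, labels)} with X of shape (n_faces, n_pixels)."""
        for bin_id, (X, labels) in faces_by_bin.items():
            pca = PCA(n_components=min(self.n_components, len(X)))
            coeffs = pca.fit_transform(X)          # per-bin eigenspace
            self.models[bin_id] = (pca, coeffs, labels)

    def identify(self, face_vec, pose_bin):
        """Nearest neighbour in the eigenspace of the detected pose bin."""
        pca, gallery, labels = self.models[pose_bin]
        probe = pca.transform(face_vec.reshape(1, -1))
        dists = np.linalg.norm(gallery - probe, axis=1)
        return labels[int(np.argmin(dists))]
```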

    Fast, collaborative acquisition of multi-view face images using a camera network and its impact on real-time human identification

    Biometric systems have typically been designed to operate under controlled environments, based on previously acquired photographs and videos. But recent terror attacks, security threats and intrusion attempts have necessitated a transition to modern biometric systems that can identify humans in real time under unconstrained environments. Distributed camera networks are appropriate for unconstrained scenarios because they can provide multiple views of a scene, thus offering tolerance against variable pose of a human subject and possible occlusions. In dynamic environments, face images continually arrive at the base station with differing quality, pose and resolution, which makes designing a fusion strategy a significant challenge. Such a scenario demands that only the relevant information is processed and that the verdict (match / no match) regarding a particular subject is released quickly (yet accurately), so that more subjects in the scene can be evaluated.

    To address these challenges, we designed a wireless data acquisition system that is capable of acquiring multi-view faces accurately and at a rapid rate. Epipolar geometry is exploited to achieve high multi-view face detection rates. Face images are labeled with their corresponding poses and transmitted to the base station. To evaluate the impact of face images acquired using our real-time acquisition system on overall recognition accuracy, we interface it with a face matching subsystem and thus create a prototype real-time multi-view face recognition system. For frontal face matching, we use the commercial PittPatt software. For non-frontal matching, we use a Local Binary Pattern (LBP) based classifier. Matching scores obtained from frontal and non-frontal face images are fused for final classification. Our results show a significant improvement in recognition accuracy, especially when the frontal face images are of low resolution.
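
    A minimal sketch of the score-level fusion step described above is given below: scores from a frontal matcher and a non-frontal (LBP-based) matcher are min-max normalised and combined with a weighted sum. Both matchers are assumed to output similarity scores (higher is better); the weights are illustrative, not the values used in the dissertation.

```python
# Sketch of score-level fusion of frontal and non-frontal matcher outputs.
# Normalisation scheme and weights are illustrative assumptions.
import numpy as np

def min_max(scores):
    """Map a score vector to [0, 1]; returns zeros for a constant vector."""
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def fuse(frontal_scores, profile_scores, w_frontal=0.6):
    """Per-gallery-identity fused similarity scores (higher = better match)."""
    f = min_max(frontal_scores)
    p = min_max(profile_scores)
    return w_frontal * f + (1.0 - w_frontal) * p

# Example: similarity scores against a 3-identity gallery from each matcher.
fused = fuse([0.82, 0.40, 0.55], [0.30, 0.75, 0.45])
best_match = int(np.argmax(fused))
```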

    Effectiveness of Multi-View Face Images and Anthropometric Data In Real-Time Networked Biometrics

    Over the years, biometric systems have evolved into a reliable mechanism for establishing the identity of individuals in the context of applications such as access control, personnel screening and criminal identification. However, recent terror attacks, security threats and intrusion attempts have necessitated a transition to modern biometric systems that can identify humans under unconstrained environments, in real time. Specifically, the following three critical transitions are needed and form the focus of this thesis: (1) In contrast to operation in an offline mode using previously acquired photographs and videos obtained under controlled environments, identification must be performed in a real-time, dynamic mode using images that are continuously streaming in, each from a potentially different view (frontal, profile, partial profile) and with different quality (pose and resolution). (2) While different multi-modal fusion techniques have been developed to improve system accuracy, these techniques have mainly focused on combining face biometrics with modalities such as iris and fingerprints, which are more reliable but require user cooperation for acquisition. In contrast, the challenge in a real-time networked biometric system is that of combining opportunistically captured multi-view facial images with soft biometric traits such as height, gait, attire and color that do not require user cooperation. (3) Typical operation is expected to be in an open-set mode where the number of subjects enrolled in the system is much smaller than the number of probe subjects; yet the system is required to achieve high accuracy.

    To address these challenges and to make a successful transition to real-time human identification systems, this thesis makes the following contributions: (1) A score-based multi-modal, multi-sample fusion technique is designed to combine face images acquired by a multi-camera network, and the effectiveness of opportunistically acquired multi-view face images in improving identification performance is characterized. (2) The multi-view face acquisition system is complemented by a network of Microsoft Kinects for extracting human anthropometric features (specifically height, shoulder width and arm length); the score-fusion technique is augmented to utilize this anthropometric data and its effectiveness is characterized. (3) The performance of the system is demonstrated using a database of 51 subjects collected with the networked biometric data acquisition system.

    Our results show improved recognition accuracy when face information from multiple views is utilized for recognition, and also indicate that a given level of accuracy can be attained with fewer probe images (less time) than with a uni-modal biometric system.
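
    To illustrate how anthropometric data might be folded into the score fusion described above, the sketch below converts differences in height, shoulder width and arm length into a similarity term and combines it with a pre-normalised face score via a weighted sum. The tolerances and weights are assumptions for illustration, not the thesis parameters.

```python
# Sketch: augment face score fusion with anthropometric measurements
# (height, shoulder width, arm length, e.g. from a Kinect network).
# Tolerances and weights below are illustrative assumptions.
import numpy as np

# Rough per-feature tolerances in metres, used to scale differences:
# height, shoulder width, arm length.
TOLERANCES = np.array([0.08, 0.05, 0.05])

def anthropometric_score(probe_feats, gallery_feats):
    """Similarity in [0, 1]: 1 when measurements match, decaying with distance."""
    diff = (np.asarray(probe_feats, float) - np.asarray(gallery_feats, float)) / TOLERANCES
    return float(np.exp(-0.5 * np.dot(diff, diff)))

def fuse_with_anthropometrics(face_score, probe_feats, gallery_feats, w_face=0.8):
    """Weighted sum of a pre-normalised face score and the anthropometric score."""
    return w_face * face_score + (1.0 - w_face) * anthropometric_score(probe_feats, gallery_feats)

# Example: a probe measured at (1.76 m, 0.44 m, 0.61 m) scored against an
# enrolled subject with similar body measurements.
fused = fuse_with_anthropometrics(0.72, [1.76, 0.44, 0.61], [1.78, 0.45, 0.60])
```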

    A PCA approach to the object constancy for faces using view-based models of the face

    The analysis of object and face recognition by humans attracts a great deal of interest, mainly because of its many applications in various fields, including psychology, security, computer technology, medicine and computer graphics. The aim of this work is to investigate whether a PCA-based mapping approach can offer a new perspective on models of object constancy for faces in human vision. An existing system for facial motion capture and animation, developed for performance-driven animation of avatars, is adapted, improved and repurposed to study face representation in the context of viewpoint and lighting invariance. The main goal of the thesis is to develop and evaluate a new approach to viewpoint invariance that is view-based and allows mapping of facial variation between different views to construct a multi-view representation of the face. The thesis describes a computer implementation of a model that uses PCA to generate example-based models of the face. The work explores the joint encoding of expression and viewpoint using PCA and the mapping between view-specific PCA spaces. Simultaneous, synchronised video recording of six views of the face was used to construct multi-view representations, which helped to investigate how well multiple views could be recovered from a single view via the content-addressable memory property of PCA. A similar approach was taken to lighting invariance. Finally, the possibility of constructing a multi-view representation from asynchronous view-based data was explored. The results of this thesis have implications for a continuing research problem in computer vision: recognising faces and objects from different perspectives and in different lighting. The work also provides a new approach to understanding viewpoint invariance and lighting invariance in human observers.
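
    One simple way to realise the mapping between view-specific PCA spaces mentioned above is sketched below: fit a PCA per view on synchronised frames and learn a least-squares linear map between the coefficient spaces, so a face seen in one view can predict its representation (and a reconstruction) in another. This is a minimal linear sketch under those assumptions, not the thesis's full model.

```python
# Sketch: learn a linear mapping between two view-specific PCA coefficient
# spaces from synchronised training frames of the same face.
import numpy as np
from sklearn.decomposition import PCA

def fit_view_mapping(X_a, X_b, n_components=20):
    """X_a, X_b: (n_frames, n_pixels) arrays of the same frames in views A and B."""
    pca_a, pca_b = PCA(n_components), PCA(n_components)
    C_a = pca_a.fit_transform(X_a)     # per-frame coefficients in view A space
    C_b = pca_b.fit_transform(X_b)     # corresponding coefficients in view B space
    M, *_ = np.linalg.lstsq(C_a, C_b, rcond=None)   # least-squares map A -> B
    return pca_a, pca_b, M

def predict_view_b(face_a, pca_a, pca_b, M):
    """Reconstruct the view-B image predicted from a single view-A image."""
    coeff_b = pca_a.transform(face_a.reshape(1, -1)) @ M
    return pca_b.inverse_transform(coeff_b).ravel()
```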

    Neighborhood Defined Feature Selection Strategy for Improved Face Recognition in Different Sensor Modalities

    A novel feature selection strategy for improved face recognition in images with variations due to illumination conditions, facial expressions, and partial occlusions is presented in this dissertation. A hybrid face recognition system that uses phase congruency feature maps and modular kernel spaces is developed. Phase congruency provides a measure that is independent of the overall magnitude of a signal, making it invariant to variations in image illumination and contrast. A novel modular kernel spaces approach is developed and applied to the phase congruency feature maps. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features, which are then projected into higher-dimensional spaces using kernel methods. The modularization procedure developed in this research takes into consideration that facial variations in a real-world scenario are confined to local regions. The additional pixel dependencies, considered based on their importance, provide extra information for classification, and the procedure also helps in robust localization of the variations, further improving classification accuracy. The effectiveness of the new feature selection strategy is demonstrated in two specific applications: face authentication with low-resolution cameras and face recognition using multiple sensors (visible and infrared). The face authentication system uses low-quality images captured by a web camera, whose optical sensor is very sensitive to environmental illumination variations; the feature selection policy is observed to overcome both facial and environmental variations. A methodology based on multiple training images and clustering is also incorporated to overcome the additional challenges of computational efficiency and the subject's non-involvement. A multi-sensor image fusion based face recognition methodology that uses the proposed feature selection technique is also presented. Research studies have indicated that complementary information from different sensors helps improve recognition accuracy compared to individual modalities. A decision-level fusion methodology is developed which performs better than both the individual modalities and data-level fusion, and is also robust to registration discrepancies, a very important factor in operational scenarios. Research work is progressing to apply the new face recognition technique to multi-view images by employing independent systems for separate views and integrating the results with an appropriate voting procedure.
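
    As a rough illustration of the decision-level fusion and voting ideas mentioned above, the sketch below combines per-sensor classifier decisions (visible, infrared) with a weighted vote. The weights are illustrative assumptions; the sketch does not reproduce the phase-congruency or modular kernel feature pipeline.

```python
# Sketch of decision-level fusion: each sensor-specific classifier casts a
# vote for an identity; votes are combined with per-modality weights.
from collections import defaultdict

def decision_level_fusion(decisions, weights=None):
    """decisions: {"visible": "subject_12", "infrared": "subject_12", ...}.

    Returns the identity with the highest weighted vote.
    """
    weights = weights or {modality: 1.0 for modality in decisions}
    tally = defaultdict(float)
    for modality, identity in decisions.items():
        tally[identity] += weights.get(modality, 1.0)
    return max(tally, key=tally.get)

# Example: the infrared channel is trusted slightly less than visible.
label = decision_level_fusion({"visible": "subject_07", "infrared": "subject_07"},
                              weights={"visible": 1.0, "infrared": 0.8})
```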