1,489 research outputs found

    Biometric Authentication System on Mobile Personal Devices

    We propose a secure, robust, and low-cost biometric authentication system on the mobile personal device for the personal network. The system consists of five key modules: 1) face detection; 2) face registration; 3) illumination normalization; 4) face verification; and 5) information fusion. Because face authentication is a complicated task on devices with limited resources, the emphasis is largely on the reliability and applicability of the system, and both theoretical and practical considerations are taken into account. The final system achieves an equal error rate of 2% under challenging testing protocols. Its low hardware and software cost makes the system well suited to a wide range of security applications.
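    The 2% equal error rate quoted above is the operating point at which the false acceptance rate equals the false rejection rate. A minimal sketch of how an EER can be estimated from verification scores (the function name and threshold sweep are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def equal_error_rate(genuine, impostor):
        """Approximate the equal error rate (EER) from verification scores.

        genuine:  similarity scores for matching (same-person) pairs
        impostor: similarity scores for non-matching pairs
        Higher scores are assumed to mean "more likely the same person".
        """
        # Sweep candidate thresholds over the observed score range.
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        gap, eer = np.inf, 1.0
        for thr in thresholds:
            frr = np.mean(genuine < thr)    # false rejection rate
            far = np.mean(impostor >= thr)  # false acceptance rate
            # The EER is where FAR and FRR cross; keep the closest point.
            if abs(far - frr) < gap:
                gap = abs(far - frr)
                eer = (far + frr) / 2.0
        return eer
    ```

    With perfectly separated score distributions the function returns 0.0; overlapping distributions push the EER up toward 0.5.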

    Fair comparison of skin detection approaches on publicly available datasets

    Skin detection is the process of discriminating skin and non-skin regions in a digital image; it is widely used in applications ranging from hand-gesture analysis and body-part tracking to face detection. Skin detection is a challenging problem that has drawn extensive attention from the research community; nevertheless, a fair comparison among approaches is very difficult due to the lack of a common benchmark and a unified testing protocol. In this work, we survey the most recent research in this field and propose a fair comparison among approaches using several different datasets. The major contributions of this work are an exhaustive literature review of skin color detection approaches; a framework to evaluate and combine different skin detection approaches, whose source code is made freely available for future research; and an extensive experimental comparison among several recent methods, which are also used to define an ensemble that works well across many different problems. Experiments are carried out on 10 different datasets including more than 10,000 labelled images: the results confirm that the best method proposed here performs very well with respect to other stand-alone approaches, without requiring ad hoc parameter tuning. A MATLAB version of the testing framework and of the methods proposed in this paper will be freely available from https://github.com/LorisNann
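    As a simple baseline for the kind of detectors compared in this survey, a classic explicitly defined RGB thresholding rule (the widely cited Kovac et al. uniform-daylight thresholds; this sketch is illustrative and is not the paper's ensemble method) can be written in a few lines:

    ```python
    import numpy as np

    def skin_mask_rgb(image):
        """Classify pixels as skin with a classic explicit RGB rule
        (Kovac et al. uniform-daylight thresholds).

        image: H x W x 3 uint8 array in RGB order.
        Returns a boolean H x W mask (True = skin).
        """
        img = image.astype(np.int16)  # avoid uint8 overflow in differences
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        return ((r > 95) & (g > 40) & (b > 20)
                & (img.max(axis=-1) - img.min(axis=-1) > 15)
                & (np.abs(r - g) > 15) & (r > g) & (r > b))
    ```

    Rules like this are fast and parameter-free at test time, which is exactly why the lack of a common benchmark made it hard to tell how much better the learned and ensemble detectors really are.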

    3D Face Recognition Using Local Feature-Based Methods

    Face recognition has attracted more attention than other biometrics due to its non-intrusive and user-friendly nature. Although several methods for 2D face recognition have been proposed, challenges remain with 2D faces, including illumination, pose variation, and facial expression. In recent decades, the 3D face research area has attracted growing interest, since shape and geometry information can be used to handle these 2D challenges. Existing face recognition algorithms fall into three categories: holistic feature-based, local feature-based, and hybrid methods. According to the literature, local features perform better than holistic features under expression and occlusion challenges. In this dissertation, local feature-based methods for 3D face recognition are studied and surveyed. In the survey, local methods are classified into three broad categories: keypoint-based, curve-based, and local surface-based methods. Inspired by keypoint-based methods, which are effective at handling partial occlusion, a structural context descriptor on pyramidal shape maps and texture images is proposed in a multimodal scheme. Score-level fusion is used to combine keypoint matching scores from the texture and shape modalities. The survey shows that local surface-based methods handle facial expression efficiently. Accordingly, a local derivative pattern is introduced in this work to extract distinct features from the depth map. In addition, the local derivative pattern is applied to surface normals. Most 3D face recognition algorithms focus on the depth information to detect and extract features. Compared to depth maps, the surface normal at each point determines the facial surface orientation, which provides an efficient surface representation for extracting distinct features for the recognition task.
An Extreme Learning Machine (ELM)-based auto-encoder is used to make the feature space more discriminative. Robustness to expression and occlusion using information from the normal maps is investigated by dividing the facial region into patches. A novel hybrid classifier is proposed that combines a Sparse Representation Classifier (SRC) and an ELM classifier in a weighted scheme. The proposed algorithms have been evaluated on four widely used 3D face databases: FRGC, Bosphorus, BU-3DFE, and 3D-TEC. The experimental results illustrate the effectiveness of the proposed approaches. The main contribution of this work lies in the identification and analysis of effective local features and a classification method for improving 3D face recognition performance.
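    Score-level fusion, as used above to combine the texture and shape modalities, typically normalizes each modality's match scores to a common range and then combines them with a weighted sum. A minimal sketch under that assumption (the min-max normalization, the weight, and all names are illustrative, not the dissertation's exact scheme):

    ```python
    import numpy as np

    def min_max_normalize(scores):
        """Map raw match scores to [0, 1] so modalities are comparable."""
        s = np.asarray(scores, dtype=float)
        lo, hi = s.min(), s.max()
        return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

    def fuse_scores(shape_scores, texture_scores, w_shape=0.5):
        """Weighted-sum score-level fusion of two modalities.

        Each array holds one match score per gallery identity for the
        same probe; the fused score keeps that per-identity alignment.
        """
        s = min_max_normalize(shape_scores)
        t = min_max_normalize(texture_scores)
        return w_shape * s + (1.0 - w_shape) * t
    ```

    Identification then simply picks the gallery entry with the highest fused score, e.g. `int(np.argmax(fuse_scores(shape, texture)))`.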

    People identification and tracking through fusion of facial and gait features

    This paper reviews contemporary computational approaches (face, gait, and fusion) for automatic human identification at a distance. For remote identification, large intra-class variations may exist that can substantially affect the performance of face and gait systems. First, we review face recognition algorithms in light of factors such as illumination, resolution, blur, occlusion, and pose. Then we introduce several popular gait feature templates, and algorithms robust to factors such as shoe type, carrying condition, camera view, walking surface, elapsed time, and clothing. The motivation for fusing face and gait is that gait is less sensitive to the factors that may affect face (e.g., low resolution, illumination, facial occlusion), while face is robust to the factors that may affect gait (walking surface, clothing, etc.). We review several of the most recent face and gait fusion methods with different strategies, and the significant performance gains suggest that these two modalities are complementary for human identification at a distance.
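    One of the most popular gait feature templates of the kind surveyed here is the Gait Energy Image (GEI): the pixel-wise average of aligned binary silhouettes over one gait cycle. A minimal sketch (the cosine-similarity matcher is an illustrative choice, not a method from the paper):

    ```python
    import numpy as np

    def gait_energy_image(silhouettes):
        """Compute a Gait Energy Image: the pixel-wise mean of aligned,
        size-normalized binary silhouettes over one gait cycle.

        silhouettes: T x H x W array of {0, 1} masks (T frames).
        Returns an H x W float array with values in [0, 1].
        """
        return np.asarray(silhouettes, dtype=float).mean(axis=0)

    def gei_similarity(gei_a, gei_b):
        """Cosine similarity between two flattened GEI templates."""
        a, b = gei_a.ravel(), gei_b.ravel()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0
    ```

    Averaging over the cycle is what makes the GEI tolerant of per-frame silhouette noise, while still encoding body shape and dynamics in the gray-level pattern.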

    Human face recognition under degraded conditions

    Comparative studies of state-of-the-art feature extraction and classification techniques for human face recognition under the low-resolution problem are presented in this work, and the effect of applying resolution enhancement using interpolation techniques is evaluated. A gradient-based, illumination-insensitive preprocessing technique is proposed that uses the ratio between the gradient magnitude and the current intensity level of the image, which is insensitive to severe lighting effects. A combination of multi-scale Weber analysis and enhanced DD-DT-CWT is also demonstrated to have noticeable stability under illumination variation. Moreover, applying illumination-insensitive image descriptors to the preprocessed image leads to further robustness against lighting effects. The proposed block-based face analysis decreases the effect of occlusion by assigning different weights to the image sub-blocks, according to their discriminative power, in score- or decision-level fusion. In addition, a hierarchical structure of global and block-based techniques is proposed to improve recognition accuracy when different image degradation conditions occur. The complementary performance of global and local techniques leads to considerable improvement in face recognition accuracy. The effectiveness of the proposed algorithms is evaluated on the Extended Yale B, AR, CMU Multi-PIE, LFW, FERET, and FRGC databases with a large number of images under different degradation conditions. The experimental results show improved performance on poorly illuminated, expressive, and occluded images.
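    The intuition behind a gradient-over-intensity descriptor of the kind described above: under a multiplicative lighting model I(x, y) = R(x, y) * L(x, y) with slowly varying illumination L, a local scaling of intensity also scales the gradient magnitude, so it largely cancels in the ratio. A minimal sketch of that idea (the epsilon guard and the function name are illustrative, not the paper's exact formulation):

    ```python
    import numpy as np

    def gradient_over_intensity(image, eps=1e-3):
        """Illumination-insensitive descriptor: |grad I| / I per pixel.

        image: 2D grayscale array. eps avoids division by zero in
        dark regions. Scaling the image by a constant leaves the
        result (almost) unchanged.
        """
        img = np.asarray(image, dtype=float)
        gy, gx = np.gradient(img)              # per-axis finite differences
        return np.sqrt(gx**2 + gy**2) / (img + eps)
    ```

    Doubling the image intensities doubles both numerator and denominator, so the descriptor is stable across global lighting changes; only shadow boundaries, where L varies sharply, still leak through.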

    Emotion Recognition based on Multimodal Information
