
    Low resolution face recognition using a two-branch deep convolutional neural network architecture

    We propose a novel coupled-mappings method for low-resolution face recognition using deep convolutional neural networks (DCNNs). The proposed architecture consists of two branches of DCNNs that map the high- and low-resolution face images into a common space with nonlinear transformations. The branch corresponding to the transformation of high-resolution images consists of 14 layers, and the other branch, which maps the low-resolution face images to the common space, comprises a 5-layer super-resolution network connected to a 14-layer network. The distance between the features of corresponding high- and low-resolution images is backpropagated to train the networks. Our proposed method is evaluated on the FERET, LFW, and MBGC datasets and compared with state-of-the-art competing methods. Our extensive experimental evaluations show that the proposed method significantly improves recognition performance, especially for very low-resolution probe face images (a 5% improvement in recognition accuracy). Furthermore, it can reconstruct a high-resolution image from its corresponding low-resolution probe image that is comparable with state-of-the-art super-resolution methods in terms of visual quality.
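    The backpropagated distance objective described in this abstract can be sketched as follows. This is a minimal numpy illustration of the loss only; the two branch networks, the feature dimensionality, and all values are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def coupled_mapping_loss(feat_hr, feat_lr):
    """Mean squared Euclidean distance between the common-space
    features of corresponding HR and LR images (a simplified
    stand-in for the backpropagated distance loss)."""
    diff = feat_hr - feat_lr
    return float(np.mean(np.sum(diff * diff, axis=1)))

# Toy example: 4 image pairs with 128-D common-space features.
rng = np.random.default_rng(0)
f_hr = rng.normal(size=(4, 128))
f_lr = f_hr + 0.01 * rng.normal(size=(4, 128))  # nearly aligned pairs
loss = coupled_mapping_loss(f_hr, f_lr)
```

    In a real training loop this scalar would be minimised by gradient descent over both branches, pulling each LR feature toward its HR counterpart in the common space.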

    An evaluation of super-resolution for face recognition

    We evaluate the performance of face recognition algorithms on images at various resolutions. Then we show to what extent super-resolution (SR) methods can improve recognition performance when comparing low-resolution (LR) to high-resolution (HR) facial images. Our experiments use both synthetic data (from the FRGC v1.0 database) and surveillance images (from the SCface database). Three face recognition methods are used, namely Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Patterns (LBP). Two SR methods are evaluated. The first method learns the mapping between LR images and the corresponding HR images using a regression model. As a result, the reconstructed SR images are close to the HR images that belong to the same subject and far away from others. The second method compares LR and HR facial images without explicitly constructing SR images. It finds a coherent feature space where the correlation between LR and HR is maximal, and then computes the mapping from LR to HR in this feature space. The performance of the two SR methods is compared to that delivered by standard face recognition without SR. The results show that LDA is largely robust to resolution changes, while LBP is not suitable for the recognition of LR images. SR methods improve recognition accuracy when downsampled images are used, and the first method provides better results than the second one. However, the improvement for realistic LR surveillance images remains limited.
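    The first SR method above learns a regression mapping from LR to HR. A minimal numpy sketch of that idea, using plain ridge regression on feature vectors, is shown below; the feature dimensions, regulariser, and data are hypothetical, and the paper's actual model is richer than a single linear map.

```python
import numpy as np

def learn_lr_to_hr_mapping(X_lr, X_hr, lam=1e-3):
    """Ridge-regression mapping from LR feature vectors (rows of
    X_lr) to HR feature vectors (rows of X_hr): a simplified
    stand-in for the learned regression model."""
    d = X_lr.shape[1]
    # Closed-form ridge solution: (X'X + lam*I)^-1 X'Y
    W = np.linalg.solve(X_lr.T @ X_lr + lam * np.eye(d), X_lr.T @ X_hr)
    return W

# Toy data: 50 samples, 16-D LR features mapped to 64-D HR features.
rng = np.random.default_rng(1)
X_lr = rng.normal(size=(50, 16))
W_true = rng.normal(size=(16, 64))
X_hr = X_lr @ W_true
W = learn_lr_to_hr_mapping(X_lr, X_hr)
X_sr = X_lr @ W  # reconstructed "super-resolved" features
```

    The second method would instead learn two projections (one for LR, one for HR) that maximise cross-resolution correlation, in the spirit of canonical correlation analysis, and match in that shared space.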

    Low-resolution face alignment and recognition using mixed-resolution classifiers

    A very common case for law enforcement is the recognition of suspects from a long distance or in a crowd. This is an important application for low-resolution face recognition (in the authors' case, a face region below 40 × 40 pixels in size). Normally, high-resolution images of the suspects are used as references, which leads to a resolution mismatch between the target and reference images, since the target images are usually taken at a long distance and are of low resolution. Most existing methods, which are designed to match high-resolution images, cannot handle low-resolution probes well. In this study, the authors propose a novel method especially designed to compare low-resolution images with high-resolution ones, based on the log-likelihood ratio (LLR). In addition, they demonstrate the difference in recognition performance between real low-resolution images and images down-sampled from high-resolution ones. Misalignment is one of the most important issues in low-resolution face recognition. Two approaches - matching-score-based registration and extended training with images at various alignments - are introduced to handle the alignment problem. Their experiments on real low-resolution face databases show that their methods outperform the state of the art.
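    The log-likelihood ratio idea behind this method can be illustrated in one dimension. The sketch below assumes zero-mean Gaussian models for genuine (same-subject) and impostor feature differences; the variance values are purely illustrative, not taken from the paper.

```python
from math import log, pi

def llr_score(diff, var_genuine, var_impostor):
    """Log-likelihood ratio of a scalar feature difference under
    two zero-mean Gaussian models: genuine (same subject) versus
    impostor (different subjects). In practice the variances would
    be estimated from training pairs."""
    def log_gauss(x, var):
        return -0.5 * (log(2 * pi * var) + x * x / var)
    return log_gauss(diff, var_genuine) - log_gauss(diff, var_impostor)

# Small differences look genuine (positive LLR); large ones look
# like impostor comparisons (negative LLR).
genuine = llr_score(0.1, var_genuine=0.05, var_impostor=1.0)
impostor = llr_score(2.0, var_genuine=0.05, var_impostor=1.0)
```

    The same score can drive matching-score-based registration: evaluate the LLR over a small grid of candidate alignments and keep the alignment that maximises it.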

    UG^2: a Video Benchmark for Assessing the Impact of Image Restoration and Enhancement on Automatic Visual Recognition

    Advances in image restoration and enhancement techniques have led to discussion about how such algorithms can be applied as a pre-processing step to improve automatic visual recognition. In principle, techniques like deblurring and super-resolution should yield improvements by de-emphasizing noise and increasing signal in an input image. But the historically divergent goals of the computational photography and visual recognition communities have created a significant need for more work in this direction. To facilitate new research, we introduce a new benchmark dataset called UG^2, which contains three difficult real-world scenarios: uncontrolled videos taken by UAVs and manned gliders, as well as controlled videos taken on the ground. Over 160,000 annotated frames for hundreds of ImageNet classes are available, which are used for baseline experiments that assess the impact of known and unknown image artifacts and other conditions on common deep learning-based object classification approaches. Further, current image restoration and enhancement techniques are evaluated by determining whether or not they improve baseline classification performance. Results show that there is plenty of room for algorithmic innovation, making this dataset a useful tool going forward.
    Comment: Supplemental material: https://goo.gl/vVM1xe, Dataset: https://goo.gl/AjA6En, CVPR 2018 Prize Challenge: ug2challenge.or
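    The benchmark's core evaluation, at a very high level, is whether a restoration step raises classification accuracy over the unrestored baseline. The sketch below is a toy harness with hypothetical callables (`classifier`, `restore`) and scalar "frames"; it only mirrors the accuracy-before-versus-after protocol, not the actual UG^2 pipeline.

```python
def restoration_gain(classifier, frames, labels, restore):
    """Classification accuracy before and after applying a
    restoration step to every input frame."""
    def accuracy(xs):
        return sum(classifier(x) == y for x, y in zip(xs, labels)) / len(labels)
    base = accuracy(frames)
    restored = accuracy([restore(x) for x in frames])
    return base, restored

# Toy example: a threshold "classifier" on scalar "frames"; the
# restoration step brightens under-exposed values so more frames
# cross the decision threshold correctly.
classifier = lambda x: int(x >= 0.5)
frames, labels = [0.2, 0.9, 0.6], [1, 1, 1]
base, restored = restoration_gain(classifier, frames, labels,
                                  lambda x: x + 0.35)
```

    A restoration method "helps" under this protocol exactly when the second accuracy exceeds the first; the benchmark runs this comparison across many restoration algorithms and degradation conditions.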

    People identification and tracking through fusion of facial and gait features

    This paper reviews contemporary (face, gait, and fusion) computational approaches to automatic human identification at a distance. For remote identification, there may exist large intra-class variations that can substantially affect the performance of face/gait systems. First, we review face recognition algorithms in light of factors such as illumination, resolution, blur, occlusion, and pose. Then we introduce several popular gait feature templates, and the algorithms against factors such as shoes, carrying condition, camera view, walking surface, elapsed time, and clothing. The motivation for fusing face and gait is that gait is less sensitive to the factors that may affect face (e.g., low resolution, illumination, facial occlusion), while face is robust to the factors that may affect gait (walking surface, clothing, etc.). We review several of the most recent face and gait fusion methods with different strategies, and the significant performance gains suggest that these two modalities are complementary for human identification at a distance.
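    One common strategy among the fusion methods surveyed is score-level fusion: normalise the per-identity match scores of each modality, then combine them with a weighted sum. The numpy sketch below uses min-max normalisation and an equal weight; both choices, and the toy scores, are illustrative rather than taken from any one surveyed method.

```python
import numpy as np

def fuse_scores(face_scores, gait_scores, w_face=0.5):
    """Weighted-sum score-level fusion of two modalities after
    min-max normalisation of each score list."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min())
    return w_face * minmax(face_scores) + (1 - w_face) * minmax(gait_scores)

# Toy gallery of 3 identities: face slightly favours identity 0,
# gait favours identity 1; fusion ranks first the identity with
# strong support from both modalities.
face = [0.9, 0.8, 0.1]
gait = [0.7, 0.9, 0.2]
fused = fuse_scores(face, gait)
```

    Because the two modalities fail under different conditions (low resolution hurts face, clothing and surface changes hurt gait), even this simple weighted sum can outperform either modality alone.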
