
    Feature-domain super-resolution framework for Gabor-based face and iris recognition

    The low resolution of images has been one of the major limitations in recognising humans from a distance using their biometric traits, such as face and iris. Super-resolution has been employed to improve the resolution and the recognition performance simultaneously; however, the majority of existing techniques operate in the pixel domain, such that the biometric feature vectors are extracted from a super-resolved input image. Feature-domain super-resolution has been proposed for face and iris, and has been shown to further improve recognition performance by directly super-resolving the features used for recognition. However, current feature-domain super-resolution approaches are limited to simple linear features such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which are not the most discriminant features for biometrics. Gabor-based features have been shown to be among the most discriminant features for biometrics, including face and iris. This paper proposes a framework to conduct super-resolution in the non-linear Gabor feature domain to further improve the recognition performance of biometric systems. Experiments have confirmed the validity of the proposed approach, demonstrating superior performance to existing linear approaches for both face and iris biometrics.
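    To make the kind of feature the abstract refers to concrete, the sketch below computes a Gabor-filter-bank feature vector for a small grayscale image. All parameters (filter size, wavelength, orientation count, sigma) are hypothetical illustrative defaults, not the settings used in the paper, and the filter bank is deliberately minimal.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # carrier axis rotated by theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_features(image, orientations=4, wavelength=4.0, sigma=2.0, size=9):
    """Concatenate valid-mode filter responses over several orientations
    into a single (non-linear, redundant) feature vector."""
    feats = []
    for k in range(orientations):
        theta = k * np.pi / orientations
        kern = gabor_kernel(size, wavelength, theta, sigma)
        # All size x size windows of the image, correlated with the kernel.
        windows = sliding_window_view(image, kern.shape)
        resp = np.tensordot(windows, kern, axes=([2, 3], [0, 1]))
        feats.append(resp.ravel())
    return np.concatenate(feats)
```

A 16 × 16 image with a 9 × 9 kernel yields an 8 × 8 response per orientation, so four orientations give a 256-dimensional vector; it is vectors of this kind that a feature-domain approach would super-resolve directly instead of the pixels.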

    A Survey of Super-Resolution in Iris Biometrics With Evaluation of Dictionary-Learning

    The lack of resolution has a negative impact on the performance of image-based biometrics. While many generic super-resolution methods have been proposed to restore low-resolution images, they usually aim to enhance their visual appearance. However, an overall visual enhancement of biometric images does not necessarily correlate with better recognition performance. Reconstruction approaches thus need to incorporate the specific information from the target biometric modality to effectively improve recognition performance. This paper presents a comprehensive survey of iris super-resolution approaches proposed in the literature. We have also adapted an eigen-patches reconstruction method based on the principal component analysis (PCA) eigen-transformation of local image patches. The structure of the iris is exploited by building a patch-position-dependent dictionary. In addition, image patches are restored separately, each having its own reconstruction weights. This allows the solution to be locally optimized, helping to preserve local information. To evaluate the algorithm, we degraded the high-resolution images from the CASIA Interval V3 database. Different restorations were considered, with 15 × 15 pixels being the smallest resolution evaluated. To the best of our knowledge, this is the smallest resolution employed in the literature. The experimental framework is complemented with six publicly available iris comparators that were used to carry out biometric verification and identification experiments.
    The experimental results show that the proposed method significantly outperforms both the bilinear and bicubic interpolations at very low resolution. The performance of a number of comparators attains an impressive equal error rate as low as 5% and a Top-1 accuracy of 77%–84% when considering iris images of only 15 × 15 pixels. These results clearly demonstrate the benefit of using trained super-resolution techniques to improve the quality of iris images prior to matching.
    This work was supported by the EU COST Action under Grant IC1106. The work of F. Alonso-Fernandez and J. Bigun was supported in part by the Swedish Research Council, in part by the Swedish Innovation Agency, and in part by the Swedish Knowledge Foundation through the CAISR/SIDUS-AIR projects. The work of J. Fierrez was supported by the Spanish MINECO/FEDER through the CogniMetrics Project under Grant TEC2015-70627-R. The authors acknowledge the Halmstad University Library for its support with the open access fee.
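    The core of an eigen-patches reconstruction as described above is a position-specific PCA dictionary: for each patch position, a basis is learned from aligned training patches, and a degraded patch is restored by projecting it onto that subspace. The sketch below shows this mechanism in its simplest form; the function names, component count, and patch dimensionality are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def learn_patch_pca(train_patches, n_components):
    """Learn a PCA basis for one patch position.
    train_patches: (n_samples, dim) array of flattened aligned patches."""
    mean = train_patches.mean(axis=0)
    centered = train_patches - mean
    # Rows of vt are the orthonormal principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def reconstruct_patch(patch, mean, basis):
    """Restore a patch by projecting onto the position-specific subspace;
    the projection coefficients act as the patch's own reconstruction weights."""
    coeffs = basis @ (patch - mean)
    return mean + basis.T @ coeffs
```

Because each position keeps its own mean and basis, the restoration is locally optimized in the sense the abstract describes: neighbouring patches can be reconstructed with entirely different weights.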

    A Computer Vision Story on Video Sequences: From Face Detection to Face Super-Resolution using Face Quality Assessment


    Super-resolution: A comprehensive survey


    A manifold approach to face recognition from low quality video across illumination and pose using implicit super-resolution

    We consider the problem of matching a face in a low-resolution query video sequence against a set of higher-quality gallery sequences. This problem is of interest in many applications, such as law enforcement. Our main contribution is an extension of the recently proposed Generic Shape-Illumination Manifold (gSIM) framework. Specifically, (i) we show how super-resolution across pose and scale can be achieved implicitly, by off-line learning of subsampling artefacts; (ii) we use this result to propose an extension to the statistical model of the gSIM by compounding it with a hierarchy of subsampling models at multiple scales; and (iii) we describe an extensive empirical evaluation of the method on over 1300 video sequences: we first measure the degradation in performance of the original gSIM algorithm as the query sequence resolution is decreased, and then show that the proposed extension produces an error reduction in the mean recognition rate of over 50%.
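    Off-line learning of subsampling artefacts, as in point (i), starts from simulated low/high-resolution training pairs. The sketch below generates such pairs under a simple box-average-and-decimate degradation model; this model and the function names are assumptions for illustration, and the paper's actual degradation model may differ.

```python
import numpy as np

def degrade(image, factor=2):
    """Simulate subsampling: average each factor x factor block, then decimate.
    A deliberately simple stand-in for the true camera degradation."""
    h, w = image.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = image[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

def make_training_pairs(frames, factor=2):
    """(LR, HR) pairs from which subsampling artefacts can be learned off-line."""
    return [(degrade(f, factor), f) for f in frames]
```

A statistical model fitted to such pairs lets matching account for resolution loss implicitly, rather than explicitly reconstructing a high-resolution face first.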

    Generative Adversarial Network and Its Application in Aerial Vehicle Detection and Biometric Identification System

    In recent years, generative adversarial networks (GANs) have shown great potential in advancing the state of the art in many areas of computer vision, most notably in image synthesis and manipulation tasks. A GAN is a generative model that simultaneously trains a generator and a discriminator in an adversarial manner to produce realistic-looking synthetic data by capturing the underlying data distribution. Owing to its powerful ability to generate high-quality and visually pleasing results, we apply it to super-resolution and image-to-image translation techniques to address vehicle detection in low-resolution aerial images and cross-spectral cross-resolution iris recognition. First, we develop a Multi-scale GAN (MsGAN) with multiple intermediate outputs, which progressively learns the details and features of high-resolution aerial images at different scales. The upscaled super-resolved aerial images are then fed to a You Only Look Once version 3 (YOLO-v3) object detector, and the detection loss is jointly optimized along with a super-resolution loss to emphasize target vehicles sensitive to the super-resolution process. A further problem remains unsolved when detection takes place at night or in a dark environment, which requires an infrared (IR) detector; training such a detector needs a large number of IR images. To address these challenges, we develop a GAN-based joint cross-modal super-resolution framework in which low-resolution (LR) IR images are translated and super-resolved to high-resolution (HR) visible (VIS) images before applying detection. This approach significantly improves the accuracy of aerial vehicle detection by leveraging the benefits of super-resolution techniques in a cross-modal domain. Second, to increase the performance and reliability of deep-learning-based biometric identification systems, we focus on developing conditional GAN (cGAN) based cross-spectral cross-resolution iris recognition and offer two different frameworks.
    The first approach trains a cGAN to jointly translate and super-resolve LR near-infrared (NIR) iris images to HR VIS iris images, so that cross-spectral cross-resolution iris matching is performed at the same resolution and within the same spectrum. In the second approach, we design a coupled GAN (cpGAN) architecture to project both VIS and NIR iris images into a low-dimensional embedding domain. The goal of this architecture is to ensure maximum pairwise similarity between the feature vectors from the two iris modalities of the same subject. We have also proposed a pose-attention-guided coupled profile-to-frontal face recognition network to learn discriminative and pose-invariant features in an embedding subspace. To show that the feature vectors learned by this deep subspace can be used for tasks beyond recognition, we implement a GAN architecture that is able to reconstruct a frontal face from its corresponding profile face. This capability can be used in various face analysis tasks, such as emotion detection and expression tracking, where having a frontal face image can improve accuracy and reliability. Overall, our research has demonstrated its efficacy by achieving new state-of-the-art results through extensive experiments on publicly available datasets reported in the literature.
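    Once a coupled architecture like the cpGAN has projected both modalities into a shared embedding domain, verification reduces to comparing two vectors. The sketch below shows cosine-similarity matching with a decision threshold; the threshold value and function names are hypothetical, and how the embeddings themselves are produced is outside this sketch.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb_vis, emb_nir, threshold=0.5):
    """Accept a VIS/NIR pair as the same subject if the embeddings are close enough.
    The threshold would in practice be tuned on a validation set."""
    return cosine_similarity(emb_vis, emb_nir) >= threshold
```

Training the coupled generators to maximise this pairwise similarity for genuine pairs is what lets matching work across spectrum and resolution without explicitly converting one image into the other.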