
    A Benchmark for Iris Location and a Deep Learning Detector Evaluation

    The iris is considered the biometric trait with the highest degree of uniqueness. Iris location is an important task for biometric systems, directly affecting the results obtained in specific applications such as iris recognition, spoofing detection, and contact lens detection, among others. This work defines the iris location problem as the delimitation of the smallest squared window that encompasses the iris region. To build a benchmark for iris location, we annotate four databases from different biometric applications with squared iris bounding boxes and make them publicly available to the community. Besides these four annotated databases, we include two others from the literature. We perform experiments on these six databases, five obtained with near-infrared sensors and one with a visible-light sensor. We compare the classical and well-established Daugman iris location approach with two window-based detectors: 1) a sliding-window detector based on Histogram of Oriented Gradients (HOG) features and a linear Support Vector Machine (SVM) classifier; 2) a deep-learning-based detector fine-tuned from the YOLO object detector. Experimental results show that the deep-learning-based detector outperforms the others in terms of accuracy and runtime (GPU version) and should be chosen whenever possible.
    Comment: Accepted for presentation at the International Joint Conference on Neural Networks (IJCNN) 201
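A benchmark of squared bounding boxes is typically scored by intersection-over-union (IoU) between each predicted window and its annotation. The snippet below is a minimal illustrative sketch, not the paper's evaluation code; the `(x, y, side)` square format and the function name are assumptions:

```python
# Hypothetical scoring sketch for a squared-window iris location benchmark.
# Squares are assumed to be given as (x, y, side) with (x, y) the top-left
# corner; these conventions are illustrative, not taken from the paper.

def square_iou(pred, gt):
    """Intersection-over-union of two axis-aligned squares (x, y, side)."""
    px, py, ps = pred
    gx, gy, gs = gt
    # Overlap extent along each axis (zero when the squares are disjoint)
    iw = max(0.0, min(px + ps, gx + gs) - max(px, gx))
    ih = max(0.0, min(py + ps, gy + gs) - max(py, gy))
    inter = iw * ih
    union = ps * ps + gs * gs - inter
    return inter / union if union > 0 else 0.0

# A detection is commonly counted correct when IoU exceeds a fixed threshold.
print(square_iou((10, 10, 20), (10, 10, 20)))  # identical squares -> 1.0
print(square_iou((0, 0, 10), (5, 5, 10)))      # partial overlap
```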

    Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings

    Conventional feature-based and model-based gaze estimation methods have proven to perform well in settings with controlled illumination and specialized cameras. In unconstrained real-world settings, however, such methods are surpassed by recent appearance-based methods due to difficulties in modeling factors such as illumination changes and other visual artifacts. We present a novel learning-based method for eye region landmark localization that enables conventional methods to be competitive with the latest appearance-based methods. Despite having been trained exclusively on synthetic data, our method exceeds the state of the art for iris localization and eye shape registration on real-world imagery. We then use the detected landmarks as input to iterative model-fitting and lightweight learning-based gaze estimation methods. Our approach outperforms existing model-fitting and appearance-based methods in the context of person-independent and personalized gaze estimation.
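To illustrate how detected eye region landmarks can feed a model-fitting stage, the sketch below fits a circle to iris-edge landmarks with the algebraic (Kåsa) least-squares method. This is a generic circle-fitting technique, not the paper's actual fitting procedure, and all names are assumptions:

```python
import math

def fit_circle(points):
    """Kasa algebraic least-squares circle fit.

    Solves x^2 + y^2 = a*x + b*y + c in the least-squares sense;
    center = (a/2, b/2), radius = sqrt(c + cx^2 + cy^2).
    """
    # Normal equations M @ [a, b, c] = v for rows [x, y, 1], target x^2 + y^2
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        z = x * x + y * y
        for i in range(3):
            v[i] += row[i] * z
            for j in range(3):
                M[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 3):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    # Back-substitution
    sol = [0.0] * 3
    for r in (2, 1, 0):
        s = v[r] - sum(M[r][c] * sol[c] for c in range(r + 1, 3))
        sol[r] = s / M[r][r]
    a, b, c = sol
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, math.sqrt(c + cx * cx + cy * cy)

# Landmarks sampled exactly on a circle of center (2, -1), radius 3
pts = [(2 + 3 * math.cos(t), -1 + 3 * math.sin(t))
       for t in (0.0, 0.9, 1.8, 2.7, 4.0)]
cx, cy, r = fit_circle(pts)
print(round(cx, 3), round(cy, 3), round(r, 3))  # -> 2.0 -1.0 3.0
```

An iterative model-fitting stage would typically refine such an initial estimate against further constraints (eyelid landmarks, eyeball geometry) rather than stop at the closed-form fit.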

    On Detecting Faces And Classifying Facial Races With Partial Occlusions And Pose Variations

    In this dissertation, we present our contributions in face detection and facial race classification. Face detection in unconstrained images is a long-standing problem in the computer vision community, and challenges remain; in particular, the detection of partially occluded faces with pose variations has not been well addressed. In the first part of this dissertation, our contributions are three-fold. First, we introduce four image datasets, a large-scale labeled face dataset, a noisy large-scale labeled non-face dataset, a CrowdFaces dataset, and a CrowdNonFaces dataset, intended for face detection training. Second, we improve Viola-Jones (VJ) face detection results by first training a Convolutional Neural Network (CNN) model on our noisy datasets, and we show our improvement over the VJ face detector on the AFW face detection benchmark. However, existing methods for detecting partially occluded faces require training several models, computing hand-crafted features, or both. Third, we therefore propose Large-Scale Deep Learning (LSDL), a method that requires neither training several CNN models nor computing hand-crafted features to detect faces. Our LSDL face detector is trained as a single CNN model to detect unconstrained multi-view, partially occluded and non-occluded faces. The model is trained with a large number of face examples that cover most partially occluded and non-occluded facial appearances. The LSDL face detection method works by selecting detection windows whose confidence scores exceed a threshold. Our evaluation shows that LSDL achieves the best performance on the AFW dataset and comparable performance on the FDDB dataset among state-of-the-art face detection methods, without manually extending or adjusting the square detection bounding boxes. Many biometric and security systems use facial information for individual identification and recognition. Classifying race from a face image can provide a strong hint for searching facial identity and for criminal identification. Current facial race classification methods are confined to constrained, non-occluded frontal faces; challenges remain in unconstrained environments with partial occlusions, pose variations, low illumination, and small scales. In the second part of the dissertation, we propose a CNN model to classify facial races under partial occlusions and pose variations. The proposed model is trained on a broad and racially balanced face image dataset covering four major human races: Caucasian, Indian, Mongolian, and Negroid. Our model is evaluated against state-of-the-art methods on a constrained face test dataset. We also conduct and compare an evaluation of the proposed model and human performance on our new unconstrained facial race benchmark (CIMN) dataset. Our results show that the model achieves 95.1% race classification accuracy in the constrained environment and accuracy comparable to human performance on the current challenges in the unconstrained environment.
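The detection-window selection described above (keep the windows whose confidence scores clear a threshold) is commonly paired with greedy non-maximum suppression to drop duplicate detections of the same face. The sketch below is a generic illustration of that pattern, not the dissertation's code; the `(x1, y1, x2, y2)` box format and all names are assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def select_detections(boxes, scores, score_thresh=0.5, iou_thresh=0.3):
    """Confidence thresholding followed by greedy non-maximum suppression."""
    # Keep only windows whose confidence clears the threshold,
    # then visit them from highest to lowest score.
    order = sorted((i for i, s in enumerate(scores) if s >= score_thresh),
                   key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        # Discard any window overlapping an already-kept, higher-scored one
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
    return kept

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(select_detections(boxes, scores))  # -> [0, 2]: box 1 suppressed by box 0
```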

    Generative Adversarial Network and Its Application in Aerial Vehicle Detection and Biometric Identification System

    In recent years, generative adversarial networks (GANs) have shown great potential in advancing the state of the art in many areas of computer vision, most notably in image synthesis and manipulation tasks. A GAN is a generative model that simultaneously trains a generator and a discriminator in an adversarial manner to produce realistic synthetic data by capturing the underlying data distribution. Because of its powerful ability to generate high-quality and visually pleasing results, we apply it to super-resolution and image-to-image translation to address vehicle detection in low-resolution aerial images and cross-spectral, cross-resolution iris recognition. First, we develop a Multi-scale GAN (MsGAN) with multiple intermediate outputs, which progressively learns the details and features of high-resolution aerial images at different scales. The upscaled super-resolved aerial images are then fed to a You Only Look Once-version 3 (YOLO-v3) object detector, and the detection loss is jointly optimized along with a super-resolution loss to emphasize target vehicles sensitive to the super-resolution process. A further problem arises when detection takes place at night or in a dark environment, which requires an infrared (IR) detector, and training such a detector needs a large number of IR images. To address these challenges, we develop a GAN-based joint cross-modal super-resolution framework in which low-resolution (LR) IR images are translated and super-resolved to high-resolution (HR) visible (VIS) images before applying detection. This approach significantly improves the accuracy of aerial vehicle detection by leveraging the benefits of super-resolution techniques in a cross-modal domain. Second, to increase the performance and reliability of deep-learning-based biometric identification systems, we focus on conditional GAN (cGAN) based cross-spectral, cross-resolution iris recognition and offer two frameworks. The first trains a cGAN to jointly translate and super-resolve LR near-infrared (NIR) iris images to HR VIS iris images, so that cross-spectral, cross-resolution iris matching is performed at the same resolution and within the same spectrum. In the second, we design a coupled GAN (cpGAN) architecture to project both VIS and NIR iris images into a low-dimensional embedding domain; its goal is to ensure maximum pairwise similarity between the feature vectors from the two iris modalities of the same subject. We have also proposed a pose attention-guided coupled profile-to-frontal face recognition network that learns discriminative, pose-invariant features in an embedding subspace. To show that the feature vectors learned by this deep subspace can be used for tasks beyond recognition, we implement a GAN architecture that reconstructs a frontal face from its corresponding profile face. This capability can be used in various face analysis tasks, such as emotion detection and expression tracking, where a frontal face image improves accuracy and reliability. Overall, our research has demonstrated its efficacy by achieving new state-of-the-art results in extensive experiments on publicly available datasets reported in the literature.
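The pairwise-similarity objective described for the coupled GAN can be illustrated with a simple loss that rewards aligned embeddings of the same subject. This is a hedged sketch of the general idea (mean cosine distance over genuine VIS–NIR pairs), not the actual cpGAN loss, and all names are assumptions:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two plain-tuple embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def genuine_pair_loss(vis_embs, nir_embs):
    """Mean cosine distance (1 - similarity) over genuine VIS/NIR pairs.

    vis_embs[i] and nir_embs[i] are assumed to come from the same subject;
    driving this loss to zero aligns the two modalities in the embedding space.
    """
    dists = [1.0 - cosine_similarity(u, v) for u, v in zip(vis_embs, nir_embs)]
    return sum(dists) / len(dists)

# Perfectly aligned pairs (same direction, any scale) give zero loss
vis = [(1.0, 0.0), (0.0, 2.0)]
nir = [(2.0, 0.0), (0.0, 1.0)]
print(genuine_pair_loss(vis, nir))  # -> 0.0
```

A full training objective would also push impostor pairs apart (a contrastive or triplet term); only the genuine-pair attraction is sketched here.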