    Deep learning for 3D ear detection: A complete pipeline from data generation to segmentation

    The human ear has distinguishing features that can be used for identification. Automated ear detection from 3D profile face images plays a vital role in ear-based human recognition. This work proposes a complete pipeline, including synthetic data generation and ground-truth data labeling, for ear detection in 3D point clouds. The ear detection problem is formulated as a semantic part segmentation problem that detects the ear directly in 3D point clouds of profile face data. We introduce EarNet, a modified version of the PointNet++ architecture, and apply rotation augmentation to handle different pose variations in the real data. We demonstrate that PointNet and PointNet++ cannot manage the rotation of a given object without such augmentation. The synthetic 3D profile face data is generated using statistical shape models. In addition, an automatic tool has been developed and made publicly available to create ground-truth labels for any public 3D data set that includes co-registered 2D images. The experimental results on the real data demonstrate higher localization accuracy than existing state-of-the-art approaches.
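    The rotation augmentation the abstract describes can be sketched generically: rotate each training point cloud by a random angle so the segmentation network sees many poses. The axis choice, angle range, and function name below are illustrative assumptions, not details taken from the paper.

    ```python
    import numpy as np

    def random_yaw_rotation(points, max_deg=180.0, rng=None):
        """Rotate an (N, 3) point cloud about the vertical (y) axis
        by a uniformly random angle in [-max_deg, max_deg] degrees.

        A minimal augmentation sketch; EarNet's actual scheme (axes,
        angle ranges, per-batch vs per-cloud sampling) may differ.
        """
        rng = np.random.default_rng() if rng is None else rng
        theta = np.deg2rad(rng.uniform(-max_deg, max_deg))
        c, s = np.cos(theta), np.sin(theta)
        # Standard rotation matrix about the y axis
        R = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])
        return points @ R.T
    ```

    Because the transform is a pure rotation, point-to-origin distances are preserved, so the shape itself is unchanged while its orientation varies across training samples.
    
    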

    The effect of time on ear biometrics

    We present an experimental study to demonstrate the effect of the time difference between gallery and probe image acquisition on the performance of ear recognition. This is the first experimental study of the time effect on ear biometrics. For recognition, we convolve banana wavelets with an ear image and then apply a local binary pattern to the convolved image. The histograms of the resulting image are used as features to describe an ear. A histogram intersection technique is then applied to the histograms of two ears to measure their similarity for recognition purposes. We also use analysis of variance (ANOVA) for feature selection, to identify the best banana wavelets for the recognition process. The experimental results show that the recognition rate is only slightly reduced by time. An average recognition rate of 98.5% is achieved for an eleven-month difference between gallery and probe on an un-occluded ear dataset of 1491 ear images selected from the Southampton University ear database.
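    The histogram intersection step the abstract describes is a standard similarity measure: normalize both feature histograms and sum the bin-wise minima, giving 1 for identical distributions and 0 for disjoint ones. A minimal sketch (the wavelet and LBP stages that produce the histograms are omitted):

    ```python
    import numpy as np

    def histogram_intersection(h1, h2):
        """Similarity of two histograms: sum of bin-wise minima
        after normalizing each histogram to sum to 1."""
        h1 = np.asarray(h1, dtype=float)
        h2 = np.asarray(h2, dtype=float)
        h1 = h1 / h1.sum()
        h2 = h2 / h2.sum()
        return float(np.minimum(h1, h2).sum())
    ```

    For example, `histogram_intersection([2, 2], [1, 3])` normalizes to `[0.5, 0.5]` and `[0.25, 0.75]`, and sums the minima to 0.75.
    
    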

    The ear as a biometric

    It is more than 10 years since the first tentative experiments in ear biometrics were conducted, and the field has now reached the "adolescence" of its development towards a mature biometric. Here we present a timely retrospective of the ensuing research since those early days. Whilst its detailed structure may not be as complex as the iris, we show that the ear has unique security advantages over other biometrics. It is most unusual, even unique, in that it supports not only visual and forensic recognition but also acoustic recognition at the same time. This, together with its deep three-dimensional structure and its robust resistance to change with age, will make it very difficult to counterfeit, thus ensuring that the ear will occupy a special place in situations requiring a high degree of protection.

    UV-GAN: Adversarial Facial UV Map Completion for Pose-invariant Face Recognition

    Recently proposed robust 3D face alignment methods establish either dense or sparse correspondence between a 3D face model and a 2D facial image. The use of these methods presents new challenges as well as opportunities for facial texture analysis. In particular, by sampling the image using the fitted model, a facial UV map can be created. Unfortunately, due to self-occlusion, such a UV map is always incomplete. In this paper, we propose a framework for training a Deep Convolutional Neural Network (DCNN) to complete the facial UV map extracted from in-the-wild images. To this end, we first gather complete UV maps by fitting a 3D Morphable Model (3DMM) to various multiview image and video datasets, as well as leveraging a new 3D dataset with over 3,000 identities. Second, we devise a meticulously designed architecture that combines local and global adversarial DCNNs to learn an identity-preserving facial UV completion model. We demonstrate that by attaching the completed UV map to the fitted mesh and generating instances of arbitrary poses, we can increase pose variations for training deep face recognition/verification models and minimise pose discrepancy during testing, which leads to better performance. Experiments on both controlled and in-the-wild UV datasets prove the effectiveness of our adversarial UV completion model. We achieve state-of-the-art verification accuracy of 94.05% under the CFP frontal-profile protocol, simply by combining pose augmentation during training with pose discrepancy reduction during testing. We will release the first in-the-wild UV dataset (which we refer to as WildUV), comprising complete facial UV maps from 1,892 identities, for research purposes.
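    One way to picture the combined objective implied by the abstract is a weighted sum of a reconstruction term, adversarial terms from both the local and global discriminators, and an identity-preserving feature distance. The sketch below is a generic illustration under that assumption; the weights, term names, and exact loss forms are hypothetical and not the paper's.

    ```python
    import numpy as np

    def uv_completion_loss(d_global_fake, d_local_fake, completed, target,
                           id_fake, id_real, w_adv=0.01, w_id=0.1):
        """Illustrative combined objective for UV map completion:
        L1 reconstruction + global/local adversarial terms + an
        identity-embedding distance. All weights are assumptions."""
        rec = np.abs(completed - target).mean()          # pixel reconstruction
        # Generator wants both discriminators to output ~1 on completed maps
        adv = -np.log(d_global_fake + 1e-8) - np.log(d_local_fake + 1e-8)
        ident = np.square(id_fake - id_real).mean()      # identity preservation
        return rec + w_adv * adv + w_id * ident
    ```

    The local discriminator scores only the in-filled region while the global one scores the whole UV map; penalizing both pushes the generator toward completions that are locally sharp and globally coherent.
    
    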

    'Who are you?' - Learning person specific classifiers from video

    We investigate the problem of automatically labelling faces of characters in TV or movie material with their names, using only weak supervision from automatically aligned subtitle and script text. Our previous work (Everingham et al. [8]) demonstrated promising results on the task, but the coverage of the method (the proportion of video labelled) and its generalization were limited by a restriction to frontal faces and nearest-neighbour classification. In this paper we build on that method, greatly extending the coverage through the detection and recognition of characters in profile views. In addition, we make the following contributions: (i) seamless tracking, integration and recognition of profile and frontal detections, and (ii) a character-specific multiple kernel classifier which is able to learn the features best able to discriminate between the characters. We report results on seven episodes of the TV series "Buffy the Vampire Slayer", demonstrating significantly increased coverage and performance with respect to previous methods on this material.
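    In multiple kernel learning of the kind the abstract mentions, a classifier operates on a convex combination of base kernel matrices, K = Σ_m β_m K_m with β_m ≥ 0 and Σ β_m = 1, so the learned weights indicate which features discriminate best. The sketch below shows only the combination step; the per-character weight learning itself, and the function name, are illustrative assumptions.

    ```python
    import numpy as np

    def combine_kernels(kernels, weights):
        """Convex combination of base kernel matrices:
        K = sum_m beta_m * K_m, with the betas normalized to sum to 1.
        (How the betas are learned, e.g. per character, is omitted.)"""
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()
        return sum(w * K for w, K in zip(weights, kernels))
    ```

    Each base kernel K_m would come from one feature channel (e.g. one facial-feature descriptor), and a character whose identity hinges on a particular channel receives a larger weight for that channel's kernel.
    
    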