14,214 research outputs found

    A Survey on Ear Biometrics

    No full text
    Recognizing people by their ear has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Although current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This paper provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature, revealing the current state of the art not only for those working in this area but also for those who might adopt this approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available for researchers.

    An Evaluation of Score Level Fusion Approaches for Fingerprint and Finger-vein Biometrics

    Get PDF
    Biometric systems have to address many requirements, such as large population coverage, demographic diversity, and varied deployment environments, as well as practical aspects like performance and spoofing attacks. Traditional unimodal biometric systems do not fully meet these requirements, leaving them vulnerable and susceptible to different types of attacks. In response, modern biometric systems combine multiple biometric modalities at different fusion levels. The fused score is then used to classify an unknown user as genuine or an impostor. In this paper, we evaluate combinations of score normalization and fusion techniques using two modalities (fingerprint and finger-vein), with the goal of identifying which combination achieves the greatest improvement over traditional unimodal biometric systems. The individual scores obtained from finger-veins and fingerprints are combined at score level using three score normalization techniques (min-max, z-score, hyperbolic tangent) and four score fusion approaches (minimum score, maximum score, simple sum, user weighting). The experimental results show that the combination of hyperbolic tangent score normalization with the simple sum fusion approach achieves the best improvement rate of 99.98%.
    Comment: 10 pages, 5 figures, 3 tables, conference, NISK 201
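    The following is a minimal sketch of the normalization and fusion steps described in the abstract above, assuming match scores are given as plain arrays of floats. The constant inside the tanh normalizer, the weighting values, and the example scores are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch: score normalization and score-level fusion for two
# modalities (fingerprint, finger-vein). Parameters are illustrative.
import numpy as np

def min_max(scores):
    """Map scores linearly to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def z_score(scores):
    """Zero-mean, unit-variance normalization."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def tanh_norm(scores):
    """Hyperbolic tangent normalization (one common form)."""
    s = np.asarray(scores, dtype=float)
    return 0.5 * (np.tanh(0.01 * (s - s.mean()) / s.std()) + 1.0)

def fuse(fp, fv, method="simple_sum", w_fp=0.5, w_fv=0.5):
    """Combine two normalized score vectors into one fused score."""
    fp, fv = np.asarray(fp, dtype=float), np.asarray(fv, dtype=float)
    if method == "min":
        return np.minimum(fp, fv)
    if method == "max":
        return np.maximum(fp, fv)
    if method == "simple_sum":
        return fp + fv
    if method == "weighted":          # analogue of user weighting
        return w_fp * fp + w_fv * fv
    raise ValueError(method)

# Example: tanh normalization followed by simple-sum fusion,
# the combination the abstract reports as best (dummy scores).
fp_scores = tanh_norm([42.0, 61.3, 13.7])
fv_scores = tanh_norm([0.71, 0.88, 0.15])
fused = fuse(fp_scores, fv_scores, method="simple_sum")
```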

    The ear as a biometric

    No full text
    It is more than 10 years since the first tentative experiments in ear biometrics were conducted, and the field has now reached the "adolescence" of its development towards a mature biometric. Here we present a timely retrospective of the ensuing research since those early days. Whilst its detailed structure may not be as complex as that of the iris, we show that the ear has unique security advantages over other biometrics. It is most unusual, even unique, in that it supports not only visual and forensic recognition but also acoustic recognition at the same time. This, together with its deep three-dimensional structure and its robust resistance to change with age, will make it very difficult to counterfeit, thus ensuring that the ear will occupy a special place in situations requiring a high degree of protection.

    Fusion of facial regions using color information in a forensic scenario

    Full text link
    Paper presented at: 18th Iberoamerican Congress on Pattern Recognition, CIARP 2013; Havana, Cuba; 20-23 November 2013. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-41827-3_50
    This paper reports an analysis of the benefits of using color information in a region-based face recognition system. Three different color spaces are analysed (RGB, YCbCr, lαβ) in a very challenging scenario matching good-quality mugshot images against video surveillance images. This scenario is of special interest in forensics, where examiners compare two face images using the global information of the faces, but pay special attention to each individual facial region (eyes, nose, mouth, etc.). This work analyses the discriminative power of 15 facial regions, comparing grayscale and color information. Results show a significant improvement in performance when fusing several regions of the face compared to using only the whole face image. A further improvement in performance is achieved when color information is considered.
    This work has been partially supported by a contract with the Spanish Guardia Civil and by the projects BBfor2 (FP7-ITN-238803), bio-Challenge (TEC2009-11186), Bio Shield (TEC2012-34881), Contexts (S2009/TIC-1485), TeraSense (CSD2008-00068) and Cátedra UAM-Telefónica.
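    As a rough illustration of region-based matching with color information, the sketch below converts RGB images to YCbCr (standard BT.601 coefficients), crops a few facial regions, and sums per-region, per-channel scores. The region boxes and the similarity measure are hypothetical placeholders, not the regions or matcher used in the paper.

```python
# Hedged sketch of region-based face matching in a color space.
# Region coordinates and the similarity function are placeholders.
import numpy as np

def rgb_to_ycbcr(img):
    """8-bit RGB image (H, W, 3) -> YCbCr using ITU-R BT.601 coefficients."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

# Hypothetical region boxes (top, bottom, left, right) on aligned face crops.
REGIONS = {"eyes": (30, 60, 20, 100), "nose": (55, 90, 45, 75), "mouth": (85, 110, 35, 85)}

def region_score(probe, gallery, box):
    """Toy similarity: negative mean absolute difference over the crop."""
    t, b, l, r = box
    return -np.mean(np.abs(probe[t:b, l:r] - gallery[t:b, l:r]))

def fused_score(probe_rgb, gallery_rgb):
    """Sum per-region, per-channel scores computed in YCbCr space."""
    p, g = rgb_to_ycbcr(probe_rgb), rgb_to_ycbcr(gallery_rgb)
    return sum(region_score(p[..., c], g[..., c], box)
               for box in REGIONS.values() for c in range(3))
```

    The same structure applies to any other color space: only rgb_to_ycbcr would be swapped for the corresponding conversion, and the per-region scores can be fused with a weighted sum rather than a plain sum.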

    Speech Processing in Computer Vision Applications

    Get PDF
    Deep learning has recently proven to be a viable asset in determining features in the field of speech analysis. Deep learning methods like convolutional neural networks facilitate the extraction of specific feature information from waveforms, allowing networks to create more feature-dense representations of data. Our work addresses the problems of re-creating a face from a speaker's voice and of speaker identification using deep learning methods. In this work, we first review the fundamental background in speech processing and its related applications. Then we introduce novel deep learning-based methods for speech feature analysis. Finally, we present our deep learning approaches to speaker identification and speech-to-face synthesis. The presented method can convert a speaker's audio sample into an image of their predicted face. This framework is composed of several chained networks, each performing an essential step in the conversion process: audio embedding, encoding, and face generation networks, respectively. Our experiments show that certain audio features map to the face, that DNNs can create a predicted face from a speaker's voice, and that a GUI can be used alongside the system to display the speaker recognition network's output.
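    The sketch below shows one plausible way to chain audio embedding, encoding, and face generation networks in PyTorch, following the pipeline outlined in the abstract above. All layer sizes, architectures, and the dummy input are illustrative assumptions, not the networks used in this work.

```python
# Hedged sketch of a chained speech-to-face pipeline:
# waveform -> audio embedding -> latent encoding -> generated face image.
import torch
import torch.nn as nn

class AudioEmbedding(nn.Module):
    """Map a raw waveform batch (B, 1, T) to a fixed-size embedding."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(128, dim))
    def forward(self, wav):
        return self.net(wav)

class FaceGenerator(nn.Module):
    """Decode a latent vector into a small RGB face image."""
    def __init__(self, latent=128):
        super().__init__()
        self.fc = nn.Linear(latent, 256 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())
    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 256, 4, 4))

# Chain the three stages on a dummy one-second waveform at 16 kHz.
embed, encode, generate = AudioEmbedding(), nn.Linear(256, 128), FaceGenerator()
wav = torch.randn(1, 1, 16000)
face = generate(encode(embed(wav)))   # shape (1, 3, 32, 32)
```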