42,611 research outputs found

    A Survey on Ear Biometrics

    No full text
    Recognizing people by their ears has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Even though current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This paper provides a detailed survey of research conducted in ear detection and recognition. It offers an up-to-date review of the existing literature, revealing the current state of the art not only for those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available to researchers.

    Probit models for capture-recapture data subject to imperfect detection, individual heterogeneity and misidentification

    Get PDF
    As noninvasive sampling techniques for animal populations have become more popular, there has been increasing interest in the development of capture-recapture models that can accommodate both imperfect detection and misidentification of individuals (e.g., due to genotyping error). However, current methods do not allow for individual variation in parameters, such as detection or survival probability. Here we develop misidentification models for capture-recapture data that can simultaneously account for temporal variation, behavioral effects and individual heterogeneity in parameters. To facilitate Bayesian inference using our approach, we extend standard probit regression techniques to latent multinomial models where the dimension and zeros of the response cannot be observed. We also present a novel Metropolis-Hastings within Gibbs algorithm for fitting these models using Markov chain Monte Carlo. Using closed population abundance models for illustration, we re-visit a DNA capture-recapture population study of black bears in Michigan, USA and find evidence of misidentification due to genotyping error, as well as temporal, behavioral and individual variation in detection probability. We also estimate a salamander population of known size from laboratory experiments evaluating the effectiveness of a marking technique commonly used for amphibians and fish. Our model was able to reliably estimate the size of this population and provided evidence of individual heterogeneity in misidentification probability that is attributable to variable mark quality. Our approach is more computationally demanding than previously proposed methods, but it provides the flexibility necessary for a much broader suite of models to be explored while properly accounting for uncertainty introduced by misidentification and imperfect detection. In the absence of misidentification, our probit formulation also provides a convenient and efficient Gibbs sampler for Bayesian analysis of traditional closed population capture-recapture data. Comment: Published at http://dx.doi.org/10.1214/14-AOAS783 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
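    As an illustration of the probit machinery this paper builds on (not the authors' latent-multinomial sampler, which additionally handles unobserved response dimensions and misidentification), a minimal Albert-and-Chib-style data-augmentation Gibbs sampler for ordinary probit regression might look like the following sketch; `X` and `y` are hypothetical inputs.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=2000, seed=0):
    """Albert-Chib data-augmentation Gibbs sampler for probit regression.

    Minimal sketch of the standard probit step only; the paper's
    latent-multinomial extension and MH-within-Gibbs moves are not shown.
    X : (n, p) design matrix, y : (n,) binary responses in {0, 1}.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    # Flat prior on beta (tiny ridge added only for numerical stability).
    V = np.linalg.inv(X.T @ X + 1e-8 * np.eye(p))
    samples = np.empty((n_iter, p))
    for t in range(n_iter):
        # Step 1: latent z_i ~ N(x_i'beta, 1), truncated to (0, inf)
        # when y_i = 1 and to (-inf, 0) when y_i = 0.
        mu = X @ beta
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        # Step 2: beta | z ~ N((X'X)^-1 X'z, (X'X)^-1).
        beta = rng.multivariate_normal(V @ (X.T @ z), V)
        samples[t] = beta
    return samples
```

    Posterior summaries of the returned samples (after discarding burn-in draws) then estimate the regression coefficients.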

    On Shape-Mediated Enrolment in Ear Biometrics

    No full text
    Ears are a relatively new biometric whose major advantage is that they appear to maintain their shape with increasing age. Any automatic biometric system needs enrolment to extract the target area from the background. In ear biometrics, the inputs are often human head-profile images. Furthermore, ear biometrics must contend with partial occlusion, mostly caused by hair and earrings. We propose an ear enrolment algorithm based on finding the elliptical shape of the ear using a Hough Transform (HT), which provides tolerance to noise and occlusion. Robustness is improved further by enforcing prior knowledge. We assess our enrolment on two face-profile datasets as well as under synthetic occlusion.
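    For illustration only, and not the authors' implementation, the ellipse-based enrolment idea could be sketched with scikit-image's elliptical Hough transform; the size bounds below are hypothetical stand-ins for the prior knowledge mentioned above.

```python
import numpy as np
from skimage import io, color
from skimage.feature import canny
from skimage.transform import hough_ellipse

def locate_ear_ellipse(profile_image_path, min_size=40, max_size=120):
    """Find the strongest elliptical contour in a head-profile image.

    Generic illustration of ellipse-based enrolment; the size bounds act
    as a crude stand-in for prior knowledge about plausible ear scale.
    Note: hough_ellipse is slow on full images, so in practice one would
    first crop a coarse region of interest.
    """
    img = io.imread(profile_image_path)
    gray = color.rgb2gray(img) if img.ndim == 3 else img
    edges = canny(gray, sigma=2.0)
    # Vote for ellipses; the accumulator tolerates partial occlusion
    # (hair, earrings) because only part of the boundary must be visible.
    candidates = hough_ellipse(edges, accuracy=20, threshold=100,
                               min_size=min_size, max_size=max_size)
    if len(candidates) == 0:
        return None
    candidates.sort(order='accumulator')          # best candidate last
    _, yc, xc, a, b, orientation = list(candidates[-1])
    return {'center': (yc, xc), 'axes': (a, b), 'angle': orientation}
```

    In practice, the returned ellipse parameters would define the region cropped for the subsequent recognition stage.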

    ėŒģ—°ė³€ģ“ ģ“ˆķŒŒė¦¬ ķ„ø ģ˜ģƒģ„ ģ“ģš©ķ•œ ģœ ģ „ė…ģ„±ģ˜ ģžė™ķ™”ėœ ķ‰ź°€

    Get PDF
    Ph.D. thesis -- Seoul National University Graduate School: Interdisciplinary Program in Computational Science, 2015. 2. ź°•ėŖ…ģ£¼. In this work, the SMART assay, a genotoxicity test using mutant Drosophila hairs, was automated. The SMART assay assesses the genotoxicity of a chemical compound by counting mutant hairs on the Drosophila's wings. Even though Drosophila has many advantages in terms of cost and ethical concerns, the speed and accuracy of the assay are limited because the counting is done manually. So far, no research has addressed the automation of the SMART assay. For the first time, an automated image-analysis system that counts mutant hairs is developed in this work. The automation consists of four parts: image acquisition, image preprocessing, hair detection, and mutant classification. New automation methods are proposed in each part. For image acquisition, a wing detection method based on ellipse detection is proposed, together with a method that uses the detected wing to optimize acquisition. For image preprocessing, separation of the hair image into upper and lower surfaces using a wing surface reconstruction is proposed, along with hair area segmentation. For hair detection, a line fitting method in 3D is proposed. For mutant classification, an upper/lower classification and a mutant classification using the wing surface are proposed. The proposed system is validated using the proposed automatic matching system, and the genotoxicity results of the automated SMART assay coincide with those of the original manual SMART assay. Contents: Abstract; 1 Introduction; 2 Wing Slide Preparation (2.1 Compounds, 2.2 Phenotypes, 2.3 Culturing Conditions); 3 Image Acquisition (3.1 Multi-focussed Image Stack, 3.2 Multi-position Image Slide); 4 Image Preprocessing (4.1 Wing Surface Reconstruction, 4.2 Hair Region Segmentation); 5 Hair Detection (5.1 Line Detection Methods, 5.2 Ellipse Detection Method, 5.3 Hemi-ellipsoid Fitting Method, 5.4 Line Fitting Method in 2D, 5.5 Line Fitting Method in 3D); 6 Classification (6.1 Upper and Lower Classification, 6.2 Mutant Hair Classification, 6.3 Genotoxicity Decision); 7 Verification; 8 Conclusion; References; Abstract (in Korean).
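    As a toy illustration of the 3D line-fitting step named above (hypothetical helper functions, not the thesis code), a total-least-squares fit by SVD could look like this:

```python
import numpy as np

def fit_line_3d(points):
    """Total-least-squares fit of a 3D line to a point cloud via SVD.

    points : (N, 3) array, e.g. (x, y) pixel coordinates plus focal depth
    from a multi-focus image stack.  Returns (centroid, unit direction).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The first right singular vector is the direction minimizing the sum
    # of squared orthogonal distances from the points to the line.
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt[0]

def line_residuals(points, centroid, direction):
    """Orthogonal distances from each point to the fitted line; large
    residuals suggest the cluster is not a single straight hair."""
    d = np.asarray(points, dtype=float) - centroid
    proj = np.outer(d @ direction, direction)
    return np.linalg.norm(d - proj, axis=1)
```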

    Hand2Face: Automatic Synthesis and Recognition of Hand Over Face Occlusions

    Full text link
    A person's face discloses important information about their affective state. Although there has been extensive research on recognition of facial expressions, the performance of existing approaches is challenged by facial occlusions. Facial occlusions are often treated as noise and discarded in recognition of affective states. However, hand over face occlusions can provide additional information for recognition of some affective states such as curiosity, frustration and boredom. One of the reasons that this problem has not gained attention is the lack of naturalistic occluded faces that contain hand over face occlusions as well as other types of occlusions. Traditional approaches for obtaining affective data are time-consuming and expensive, which limits researchers in affective computing to working on small datasets. This limitation affects the generalizability of models and prevents researchers from taking advantage of recent advances in deep learning that have shown great success in many fields but require large volumes of data. In this paper, we first introduce a novel framework for synthesizing naturalistic facial occlusions from an initial dataset of non-occluded faces and separate images of hands, reducing the costly process of data collection and annotation. We then propose a model for facial occlusion type recognition to differentiate between hand over face occlusions and other types of occlusions such as scarves, hair, glasses and objects. Finally, we present a model to localize hand over face occlusions and identify the occluded regions of the face. Comment: Accepted to International Conference on Affective Computing and Intelligent Interaction (ACII), 201
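    A minimal compositing sketch of the synthesis idea, assuming pre-segmented hand images with binary masks; file names and parameters are hypothetical, and the paper's full framework (realistic placement, skin tone and lighting consistency) is not reproduced here.

```python
from PIL import Image

def composite_hand_over_face(face_path, hand_path, hand_mask_path,
                             position=(0, 0), scale=1.0):
    """Paste a segmented hand onto a face image using its binary mask.

    Toy illustration only: places the hand at a given position and scale
    and uses the mask as a per-pixel alpha channel.
    """
    face = Image.open(face_path).convert("RGB")
    hand = Image.open(hand_path).convert("RGB")
    mask = Image.open(hand_mask_path).convert("L")   # white = hand pixels
    if scale != 1.0:
        size = (int(hand.width * scale), int(hand.height * scale))
        hand = hand.resize(size, Image.BILINEAR)
        mask = mask.resize(size, Image.BILINEAR)
    occluded = face.copy()
    occluded.paste(hand, box=position, mask=mask)    # mask acts as alpha
    return occluded
```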

    The ear as a biometric

    No full text
    It is more than 10 years since the first tentative experiments in ear biometrics were conducted, and the field has now reached the "adolescence" of its development towards a mature biometric. Here we present a timely retrospective of the ensuing research since those early days. Whilst its detailed structure may not be as complex as that of the iris, we show that the ear has unique security advantages over other biometrics. It is most unusual, even unique, in that it supports not only visual and forensic recognition but also acoustic recognition at the same time. This, together with its deep three-dimensional structure and its robust resistance to change with age, will make it very difficult to counterfeit, thus ensuring that the ear will occupy a special place in situations requiring a high degree of protection.

    Reconstructive Sparse Code Transfer for Contour Detection and Semantic Labeling

    Get PDF
    We frame the task of predicting a semantic labeling as a sparse reconstruction procedure that applies a target-specific learned transfer function to a generic deep sparse code representation of an image. This strategy partitions training into two distinct stages. First, in an unsupervised manner, we learn a set of generic dictionaries optimized for sparse coding of image patches. We train a multilayer representation via recursive sparse dictionary learning on pooled codes output by earlier layers. Second, we encode all training images with the generic dictionaries and learn a transfer function that optimizes reconstruction of patches extracted from annotated ground-truth given the sparse codes of their corresponding image patches. At test time, we encode a novel image using the generic dictionaries and then reconstruct using the transfer function. The output reconstruction is a semantic labeling of the test image. Applying this strategy to the task of contour detection, we demonstrate performance competitive with state-of-the-art systems. Unlike almost all prior work, our approach obviates the need for any form of hand-designed features or filters. To illustrate general applicability, we also show initial results on semantic part labeling of human faces. The effectiveness of our approach opens new avenues for research on deep sparse representations. Our classifiers utilize this representation in a novel manner. Rather than acting on nodes in the deepest layer, they attach to nodes along a slice through multiple layers of the network in order to make predictions about local patches. Our flexible combination of a generatively learned sparse representation with discriminatively trained transfer classifiers extends the notion of sparse reconstruction to encompass arbitrary semantic labeling tasks. Comment: To appear in Asian Conference on Computer Vision (ACCV), 201
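    A toy, single-layer sketch of the dictionary-then-transfer idea using scikit-learn; the actual system learns a multilayer dictionary with pooling and reconstructs full label patches rather than applying a simple ridge transfer, so this only illustrates the two-stage structure.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder
from sklearn.linear_model import Ridge

def train_sparse_transfer(patches, label_targets, n_atoms=128, seed=0):
    """Two-stage sketch: (1) learn a generic dictionary by unsupervised
    sparse coding of image patches; (2) fit a linear transfer function
    from the resulting sparse codes to per-patch label targets."""
    # Stage 1: generic dictionary, learned without any labels.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                       random_state=seed)
    codes = dico.fit_transform(patches)       # sparse codes of the patches
    # Stage 2: supervised transfer function on top of the fixed codes.
    transfer = Ridge(alpha=1.0).fit(codes, label_targets)
    return dico, transfer

def predict_labels(dico, transfer, patches):
    """Encode new patches with the generic dictionary, then apply the
    learned transfer function to reconstruct their labels."""
    coder = SparseCoder(dictionary=dico.components_,
                        transform_algorithm='lasso_lars')
    return transfer.predict(coder.transform(patches))
```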