2,006 research outputs found

    Postmortem iris recognition and its application in human identification

    Iris recognition is a validated and non-invasive human identification technology currently implemented for surveillance and security purposes (e.g., border control, schools, military). Like deoxyribonucleic acid (DNA), irises are a highly individualizing component of the human body. Because iris texture has low genetic penetrance, irises differ between an individual's left and right eyes and between identical twins, making them arguably more individualizing than DNA. At this time, little to no research has been conducted on the use of postmortem iris scanning as a biometric measurement of identification. The purpose of this pilot study is to explore the use of iris recognition as a tool for postmortem identification. Objectives of the study include determining whether current iris recognition technology can locate and detect iris codes in postmortem globes, and whether iris scans collected at different postmortem intervals can be identified as the same iris initially enrolled. Data from 43 decedents involving 148 subsequent iris scans demonstrated a subsequent match rate of approximately 80%, supporting the theory that iris recognition technology is capable of detecting and identifying an individual's iris code in a postmortem setting. A chi-square test of independence showed no significant relationship between match outcome and the globe scanned (left vs. right), and gender had no bearing on the match outcome. There was a significant relationship between iris color and match outcome, with blue/gray eyes yielding a lower match rate (59%) than brown (82%) or green/hazel (88%) eyes; however, the sample of blue/gray eyes in this study was too small to draw a meaningful conclusion. An isolated case involving an antemortem initial scan collected from an individual on life support yielded an accurate identification (match) with a subsequent scan captured at approximately 10 hours postmortem.
Falsely rejected subsequent iris scans, or "no match" results, occurred in about 20% of scans; they were observed at every postmortem interval (PMI) range and varied from 19% to 30%. The false reject rate is too high to reliably establish non-identity when used alone, and ideally would be significantly lower prior to implementation in a forensic setting; however, a "no match" could be confirmed using another method. Importantly, the data showed a false match rate, or false accept rate (FAR), of zero, a result consistent with previous iris recognition studies in living individuals. The preliminary results of this pilot study demonstrate a plausible role for iris recognition in postmortem human identification. Implementation of a universal iris recognition database would benefit the medicolegal death investigation and forensic pathology communities, and has potential applications to other situations such as missing persons and human trafficking cases.
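The chi-square test of independence reported above can be illustrated with a small sketch. The counts below are invented for illustration (the abstract does not give the study's raw left/right contingency table); the test compares observed counts against the counts expected if globe scanned and match outcome were independent.

```python
# Chi-square test of independence: is match outcome independent of the
# globe scanned? Counts are hypothetical, chosen to sum to 148 scans
# with roughly an 80% match rate, as in the abstract.
observed = {
    ("left", "match"): 60, ("left", "no_match"): 14,
    ("right", "match"): 58, ("right", "no_match"): 16,
}

rows = ("left", "right")
cols = ("match", "no_match")
total = sum(observed.values())
row_sum = {r: sum(observed[(r, c)] for c in cols) for r in rows}
col_sum = {c: sum(observed[(r, c)] for r in rows) for c in cols}

# Sum of (observed - expected)^2 / expected over all four cells.
chi2 = sum(
    (observed[(r, c)] - row_sum[r] * col_sum[c] / total) ** 2
    / (row_sum[r] * col_sum[c] / total)
    for r in rows for c in cols
)
df = (len(rows) - 1) * (len(cols) - 1)  # 1 degree of freedom for a 2x2 table
print(round(chi2, 3), df)  # → 0.167 1
```

With these counts the statistic falls well below the 3.841 critical value (alpha = 0.05, df = 1), so independence is not rejected, matching the study's "no significant difference" finding for left vs. right globes.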

    EMPATH: A Neural Network that Categorizes Facial Expressions

    There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of "categorical perception." In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, "surprise" expressions lie between "happiness" and "fear" expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the task's implementation in the brain.
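The graded-versus-categorical distinction falls out naturally from a softmax read-out over six emotion units. The sketch below is a toy illustration, not the EMPATH network itself; the activation values are invented, but it shows how a single output layer supports both a discrete category (the argmax) and graded category membership (the probabilities).

```python
import math

EMOTIONS = ["happiness", "surprise", "fear", "sadness", "disgust", "anger"]

def softmax(logits):
    """Convert raw output-unit activations into a probability distribution."""
    m = max(logits)                             # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical activations for a face morphed between happiness and
# surprise: the argmax gives the sharp "categorical" read-out, while the
# near-tied probabilities capture the graded, continuous-space view.
logits = [2.1, 1.9, 0.3, -1.0, -1.5, -0.8]
probs = softmax(logits)
label = EMOTIONS[probs.index(max(probs))]
print(label)  # → happiness
```

Note how "surprise" receives almost as much probability mass as "happiness": a discrimination boundary (sharp argmax flip) coexists with graded similarity in the same representation.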

    A Survey on Ear Biometrics

    Recognizing people by their ear has recently received significant attention in the literature. Several reasons account for this trend: first, ear recognition does not suffer from some problems associated with other non-contact biometrics, such as face recognition; second, it is the most promising candidate for combination with the face in the context of multi-pose face recognition; and third, the ear can be used for human recognition in surveillance videos where the face may be occluded completely or in part. Further, the ear appears to degrade little with age. Although current ear detection and recognition systems have reached a certain level of maturity, their success is limited to controlled indoor conditions. In addition to variation in illumination, other open research problems include hair occlusion, earprint forensics, ear symmetry, ear classification, and ear individuality. This paper provides a detailed survey of research conducted in ear detection and recognition. It provides an up-to-date review of the existing literature revealing the current state of the art, not only for those who are working in this area but also for those who might exploit this new approach. Furthermore, it offers insights into some unsolved ear recognition problems as well as ear databases available for researchers.

    ORGAN LOCALIZATION AND DETECTION IN SOWS USING MACHINE LEARNING AND DEEP LEARNING IN COMPUTER VISION

    The objective of computer vision research is to endow computers with human-like perception to enable the capability to detect their surroundings, interpret the data they sense, take appropriate actions, and learn from their experiences to improve future performance. The area has progressed from using traditional pattern recognition and image processing technologies to advanced techniques in image understanding such as model-based and knowledge-based vision. In the past few years there has been a surge of interest in machine learning algorithms for computer vision-based applications. Machine learning technology has the potential to significantly contribute to the development of flexible and robust vision algorithms that will improve the performance of practical vision systems with a higher level of competence and greater generality. Additionally, the development of machine learning-based architectures has the potential to reduce system development time while simultaneously achieving the above-stated performance improvements. This work proposes the utilization of a computer vision-based approach that leverages machine and deep learning systems to aid the detection and identification of sow reproduction cycles by segmentation and object detection techniques. A lightweight machine learning system is proposed for object detection to address dataset collection issues in one of the most crucial and potentially lucrative farming applications. This technique was designed to detect the vulvae region in pre-estrous sows using a single thermal image. In the first experiment, the support vector machine (SVM) classifier was used after extracting features determined by 12 Gabor filters. The features are then concatenated with the features obtained from the Histogram of oriented gradients (HOG) to produce the results of the first experiment. In the second experiment, the number of distinct Gabor filters used was increased from 12 to 96. 
The system is trained on cropped image windows and uses the Gaussian pyramid technique to look for the vulva in the input image. The resulting process is shown to be lightweight, simple, and robust when applied to and evaluated on a large number of images. The results from extensive qualitative and quantitative testing experiments are included. The experimental results include false detection, missing detection, and favorable detection rates. The results indicate state-of-the-art accuracy. Additionally, the project was expanded by utilizing the You Only Look Once (YOLO) deep learning object detection models for fast object detection. The results from object detection have been used to label images for segmentation. The bounding box from the detected area was systematically colored to achieve the segmented and labeled images. These segmented images are then used as custom data to train U-Net segmentation. The first step involves building a machine learning model using Gabor filters and HOG for feature extraction and SVM for classification. The results revealed the deficiencies of this model, so a second stage was proposed in which the dataset was trained using YOLOv3-based deep learning object detection. The resulting segmentation model is found to be the best choice to aid the process of vulva localization. Since the model depends on the original gray-scale image and the mask of the region of interest (ROI), a custom dataset containing these features was obtained, augmented, and used to train a U-Net segmentation model. The results of the final approach show that the proposed system can segment the sow's vulva region even in low-rank images and has excellent performance efficiency. Furthermore, the resulting algorithm can be used to improve the automation of estrous detection by providing reliable ROI identification and segmentation and enabling beneficial temporal change detection and tracking in future efforts.
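The Gabor filter bank used for feature extraction in the first experiment can be sketched as follows. The kernel size, sigma, orientations, and wavelengths below are assumptions for illustration; the abstract only states that 12 distinct filters were used.

```python
import math

def gabor_kernel(size, sigma, theta, lambd, psi=0.0, gamma=0.5):
    """Real part of a 2-D Gabor filter: a Gaussian envelope modulated by
    a cosine carrier at orientation theta and wavelength lambd."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(g * math.cos(2 * math.pi * xr / lambd + psi))
        kernel.append(row)
    return kernel

# A bank of 12 filters: 4 orientations x 3 wavelengths (parameter values
# are hypothetical; the paper's exact bank is not given in the abstract).
bank = [
    gabor_kernel(9, sigma=2.0, theta=t * math.pi / 4, lambd=l)
    for t in range(4) for l in (4.0, 6.0, 8.0)
]
print(len(bank))  # → 12
```

Each kernel would be convolved with the thermal image, and the filter responses concatenated with HOG features before SVM classification, mirroring the pipeline described above.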

    Empirical mode decomposition-based facial pose estimation inside video sequences

    We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all of these negative effects can be minimized. Extensive experiments were carried out in comparison with existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
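Mutual information as an image-similarity measure can be sketched with a simple histogram-based estimator. This toy version operates on flattened intensity lists and is only a stand-in for the measure used in the paper; the bin count and test images are illustrative.

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """Histogram-based mutual information between two equal-length lists
    of 8-bit intensities. Higher values mean the intensity patterns are
    more statistically dependent (better aligned)."""
    assert len(a) == len(b)

    def bucket(v):  # quantize [0, 256) intensities into `bins` levels
        return min(int(v * bins / 256), bins - 1)

    joint = Counter((bucket(x), bucket(y)) for x, y in zip(a, b))
    n = len(a)
    pa = Counter(k[0] for k in joint.elements())  # marginal counts for a
    pb = Counter(k[1] for k in joint.elements())  # marginal counts for b
    mi = 0.0
    for (i, j), c in joint.items():
        pxy = c / n
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) )
        mi += pxy * math.log(pxy * n * n / (pa[i] * pb[j]), 2)
    return mi

img = [10, 80, 200, 10, 80, 200, 10, 80]
# A perfectly aligned pair carries more information than a flat image.
print(mutual_information(img, img) > mutual_information(img, [50] * 8))  # → True
```

In a pose-estimation setting, the candidate pose whose reference image maximizes this measure against the (IMF-reconstructed) input would be selected as the estimate.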

    A Self-Organizing Neural System for Learning to Recognize Textured Scenes

    A self-organizing ARTEX model is developed to categorize and classify textured image regions. ARTEX specializes the FACADE model of how the visual cortex sees, and the ART model of how temporal and prefrontal cortices interact with the hippocampal system to learn visual recognition categories and their names. FACADE processing generates a vector of boundary and surface properties, notably texture and brightness properties, by utilizing multi-scale filtering, competition, and diffusive filling-in. Its context-sensitive local measures of textured scenes can be used to recognize scenic properties that gradually change across space, as well as abrupt texture boundaries. ART incrementally learns recognition categories that classify FACADE output vectors, class names of these categories, and their probabilities. Top-down expectations within ART encode learned prototypes that pay attention to expected visual features. When novel visual information creates a poor match with the best existing category prototype, a memory search selects a new category with which to classify the novel data. ARTEX is compared with psychophysical data, and is benchmarked on classification of natural textures and synthetic aperture radar images. It outperforms state-of-the-art systems that use rule-based, backpropagation, and K-nearest neighbor classifiers. Defense Advanced Research Projects Agency; Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657).
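The ART match-and-search step described above (accept the best-matching category if it passes a vigilance test, otherwise recruit a new one) can be sketched in a few lines. This is a toy fuzzy-ART-style sketch, not ARTEX itself; the vigilance value and inputs are invented for illustration.

```python
def art_classify(x, prototypes, vigilance=0.75):
    """Minimal ART-style category choice. x and each prototype are lists
    of feature values in [0, 1]; prototypes is mutated in place when a
    new category is recruited."""
    def match(p, v):
        # Fraction of the input accounted for by the prototype
        # (fuzzy AND, normalized by the input's magnitude).
        return sum(min(a, b) for a, b in zip(p, v)) / (sum(v) or 1.0)

    best = (max(range(len(prototypes)), key=lambda i: match(prototypes[i], x))
            if prototypes else None)
    if best is None or match(prototypes[best], x) < vigilance:
        prototypes.append(list(x))   # poor match: recruit a new category
        return len(prototypes) - 1
    return best                      # good match: reuse existing category

protos = []
a = art_classify([0.9, 0.1, 0.8], protos)    # first input creates category 0
b = art_classify([0.85, 0.15, 0.8], protos)  # close to category 0, reuses it
c = art_classify([0.1, 0.9, 0.1], protos)    # poor match, new category 1
print(a, b, c)  # → 0 0 1
```

Raising the vigilance parameter makes categories finer-grained (more searches fail, so more categories are recruited), which is how ART trades generalization against discrimination.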