258 research outputs found

    A Longitudinal Analysis on the Feasibility of Iris Recognition Performance for Infants 0-2 Years Old

    The focus of this study was to longitudinally evaluate iris recognition for infants between 0 and 2 years of age. Image quality metrics of infant and adult irises acquired on the same iris camera were compared. Matching performance was evaluated for four groups: infants 0 to 6 months old, 7 to 12 months old, 13 to 24 months old, and adults. A mixed linear regression model was used to determine whether infants' genuine similarity scores changed over time. The study found that image quality metrics differed between infants and adults, but that scores in the oldest infant group (13 to 24 months old) were more likely to be similar to those of adults. Infants 0 to 6 months old had worse matching performance at an FMR of 0.01% than infants 7 to 12 months old, infants 13 to 24 months old, and adults.
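The operating point used in the study (an FMR of 0.01%) can be illustrated with a small generic sketch: given impostor similarity scores, choose the threshold that admits at most the target fraction of false matches, then measure the false non-match rate of genuine scores at that threshold. This is an illustration of the evaluation concept, not the study's own code; the function names are hypothetical.

```python
def threshold_at_fmr(impostor, target_fmr):
    """Smallest similarity threshold whose false-match rate is <= target_fmr."""
    s = sorted(impostor, reverse=True)
    allowed = int(target_fmr * len(s))  # impostor scores allowed above threshold
    if allowed >= len(s):
        return min(s)
    # place the threshold just above the (allowed+1)-th highest impostor score,
    # so exactly `allowed` impostor comparisons are (falsely) accepted
    return s[allowed] + 1e-12

def fnmr_at_threshold(genuine, thr):
    """Fraction of genuine comparisons rejected at threshold thr."""
    return sum(g < thr for g in genuine) / len(genuine)
```

With the threshold fixed on impostor data, comparing FNMR across the four age groups reproduces the kind of "worse performance at an FMR of 0.01%" statement made in the abstract.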

    A Computer Vision Story on Video Sequences: From Face Detection to Face Super-Resolution using Face Quality Assessment


    A Multimodal Biometric Authentication for Smartphones

    Title from PDF of title page, viewed on October 18, 2016. Dissertation advisor: Reza Derakhshani. Vita. Includes bibliographical references (pages 119-127). Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2015.
    Biometrics is seen as a viable alternative to aging password-based authentication on smartphones. Fingerprint is the leading biometric technology for smartphones; however, owing to its high cost, major players in the mobile industry introduce fingerprint sensors only on their flagship devices, leaving most of their other devices without one. Cameras, on the other hand, have seen constant upgrades in sensors and supporting hardware, courtesy of 'selfies' on all smartphones. Face, iris, and visible vasculature are three biometric traits that can be captured in the visible spectrum using existing smartphone cameras. Current biometric recognition systems on smartphones rely on a single biometric trait for faster authentication, which increases the probability of failure to enroll and limits the usability of the biometric system for practical purposes. While a multibiometric system mitigates this problem, computational models for multimodal biometric recognition on smartphones have scarcely been studied. This dissertation provides a practical multimodal biometric solution for existing smartphones using iris, periocular, and eye vasculature biometrics. In this work, computational methods for quality analysis and feature detection of biometric data that are suitable for deployment on smartphones are introduced. A fast, efficient feature detection algorithm (Vascular Point Detector, VPD) for identifying interest points in images from both rear- and front-facing cameras has been developed. The retention ratio of VPD points in the final similarity score calculation was at least 10% higher than that of state-of-the-art interest point detectors such as FAST across various datasets.
    An interest point suppression algorithm based on local histograms was introduced, reducing the computational footprint of the matching algorithm by at least 30%. Further, experiments are presented that successfully combine multiple samples of eye vasculature, iris, and periocular biometrics obtained from a single smartphone camera sensor. Several methods are explored to test the effectiveness of multimodal and multi-algorithm fusion at various levels of the biometric recognition process, with the best algorithms running in under 2 seconds on an iPhone 5s. The multimodal biometric system outperforms the unimodal biometric systems in terms of both matching performance and failure-to-enroll rate.
    Introduction -- Biometric systems -- Database -- Eye vasculature recognition -- Iris recognition in visible wavelength on smartphones -- Periocular recognition on smartphones -- Conclusions and future work
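Score-level fusion of the kind explored in the dissertation can be sketched generically: each matcher's raw scores are first normalized to a common range, then combined with a weighted sum. The normalization bounds and weights below are illustrative assumptions, not the dissertation's tuned values.

```python
def minmax_normalize(scores, lo, hi):
    """Map raw matcher scores onto [0, 1] using matcher-specific bounds."""
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(score_lists, weights):
    """Weighted-sum fusion of normalized scores from several matchers.

    score_lists: one list of normalized scores per matcher, aligned by probe.
    weights: one weight per matcher, summing to 1.
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    return [sum(w * s for w, s in zip(weights, probe))
            for probe in zip(*score_lists)]
```

For example, fusing an eye-vasculature matcher with an iris matcher at equal weight gives a single fused score per probe, which is then compared against one decision threshold instead of two.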

    Plant Seed Identification

    Plant seed identification is routinely performed for seed certification in seed trade, phytosanitary certification for the import and export of agricultural commodities, and regulatory monitoring, surveillance, and enforcement. Identification is currently performed manually by seed analysts with limited aiding tools. Extensive expertise and time are required, especially for small, morphologically similar seeds. Computers, however, are especially good at recognizing subtle differences that humans find difficult to perceive. In this thesis, a 2D, image-based, computer-assisted approach is proposed. Plant seeds are extremely small compared with everyday objects, and their microscopic images are usually degraded by defocus blur due to the high magnification of the imaging equipment. It is necessary and beneficial to differentiate the in-focus and blurred regions, given that only sharp regions carry the distinctive information needed for identification. If the object of interest, the plant seed in this case, is in focus in a single image frame, the amount of defocus blur can be employed as a cue to separate the object from the cluttered background. If the defocus blur is too strong and obscures the object itself, sharp regions of multiple image frames acquired at different focal distances can be merged into an all-in-focus image. This thesis describes a novel no-reference sharpness metric that exploits the difference in the distribution of uniform LBP patterns between blurred and non-blurred image regions. It runs in real time on a single CPU core and responds much better to low-contrast sharp regions than competing metrics. Its benefits are shown in both defocus segmentation and focal stacking. With the obtained all-in-focus seed image, a scale-wise pooling method is proposed to construct its feature representation.
    Since the imaging settings in lab testing are well constrained, the seed objects in the acquired image can be assumed to have measurable scale and controllable scale variance. The proposed method utilizes real pixel-scale information and allows for accurate comparison of seeds across scales. By cross-validation on our high-quality seed image dataset, a better identification rate (95%) was achieved compared with pre-trained convolutional-neural-network-based models (93.6%). It offers an alternative method for image-based identification with all-in-focus object images of limited scale variance. The very first digital seed identification tool of its kind was built and deployed for testing in the seed laboratory of the Canadian Food Inspection Agency (CFIA). The proposed focal stacking algorithm was employed to create all-in-focus images, while the scale-wise pooling feature representation was used as the image signature. Throughput, workload, and identification rate were evaluated, and seed analysts reported significantly lower mental demand (p = 0.00245) when using the provided tool compared with manual identification. Although the identification rate in the practical test is only around 50%, I have demonstrated common mistakes made in the imaging process and possible ways to deploy the tool to improve the recognition rate.
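The sharpness metric rests on uniform LBP patterns; a minimal sketch of the two underlying ingredients follows: the 8-neighbor LBP code of a 3x3 patch, and the test for whether a pattern is "uniform" (at most two 0/1 transitions around the circle). The thesis's actual metric compares the distribution of these patterns between blurred and sharp regions; that comparison step is omitted here, and the function names are my own.

```python
def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch given as a row-major list of 9 values.

    Each of the 8 neighbors contributes a 1-bit if it is >= the center pixel.
    """
    center = patch[4]
    # walk the neighbors clockwise starting at the top-left corner
    neighbors = [patch[i] for i in (0, 1, 2, 5, 8, 7, 6, 3)]
    return sum(1 << k for k, v in enumerate(neighbors) if v >= center)

def is_uniform(code):
    """True if the circular bit string of `code` has at most 2 transitions."""
    bits = [(code >> k) & 1 for k in range(8)]
    transitions = sum(bits[k] != bits[(k + 1) % 8] for k in range(8))
    return transitions <= 2
```

Intuitively, sharp regions (with real edges) and blurred regions produce measurably different histograms over the uniform-pattern bins, which is what the no-reference metric exploits.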

    Integrating IoT and Novel Approaches to Enhance Electromagnetic Image Quality using Modern Anisotropic Diffusion and Speckle Noise Reduction Techniques

    Electromagnetic imaging is becoming more important in many sectors, which requires high-quality images for reliable analysis. This study makes use of the complementary relationship between IoT and current image processing methods to improve the quality of electromagnetic images. The research presents a new framework for connecting Internet of Things sensors to imaging equipment, allowing for instantaneous input and adjustment. At the same time, the proposed system uses sophisticated anisotropic diffusion algorithms to enhance key details and suppress noise in electromagnetic images. In addition, a cutting-edge technique for reducing speckle noise is applied to combat this persistent issue in electromagnetic imaging. The effectiveness of the proposed system was determined via comparison to standard imaging techniques. The results showed a noticeable improvement in visual sharpness, contrast, and overall clarity without any loss of information. Incorporating IoT sensors also facilitated faster calibration and real-time modifications, opening up new possibilities for use in highly variable contexts. In fields where electromagnetic imaging plays a crucial role, such as medicine, remote sensing, and aerospace, the ramifications of this study are far-reaching. Our research demonstrates how the Internet of Things (IoT) and cutting-edge image processing can dramatically improve the functionality and versatility of electromagnetic imaging systems.
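Anisotropic diffusion in the classic Perona-Malik style smooths homogeneous regions while leaving strong gradients (edges) nearly untouched, which is what makes it attractive for speckle-prone imagery. A one-dimensional toy version illustrates the idea; the paper's framework operates on 2-D electromagnetic images with IoT-driven parameter adjustment, whereas this sketch assumes fixed, hand-picked parameters.

```python
import math

def perona_malik_1d(signal, iters=10, kappa=0.5, dt=0.2):
    """1-D Perona-Malik diffusion.

    Each sample moves toward its neighbors, but the flux is damped by the
    edge-stopping function g(d) = exp(-(d/kappa)^2), so small (noise-like)
    differences diffuse away while large (edge-like) jumps are preserved.
    Endpoints are held fixed.
    """
    u = list(signal)
    for _ in range(iters):
        nxt = u[:]
        for i in range(1, len(u) - 1):
            east = u[i + 1] - u[i]
            west = u[i - 1] - u[i]
            ge = math.exp(-(east / kappa) ** 2)
            gw = math.exp(-(west / kappa) ** 2)
            nxt[i] = u[i] + dt * (ge * east + gw * west)
        u = nxt
    return u
```

Running it on a noisy step signal flattens the small wiggles on either side while the step itself stays sharp, in contrast to plain Gaussian smoothing, which would blur the step as well.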

    Face recognition by means of advanced contributions in machine learning

    Face recognition (FR) has been extensively studied, due to both fundamental scientific challenges and current and potential applications where human identification is needed. Among their most important benefits, FR systems are non-intrusive, use low-cost equipment, and require no user cooperation during acquisition. Nevertheless, despite the progress made in recent years and the different solutions proposed, FR performance is not yet satisfactory under more demanding conditions (different viewpoints, occlusions, illumination changes, extreme lighting conditions, etc.). In particular, the effect of such uncontrolled lighting conditions on face images leads to one of the strongest distortions in facial appearance. This dissertation addresses the problem of FR under less constrained illumination. To approach the problem, a new multi-session, multispectral face database has been acquired in the visible, near-infrared (NIR), and thermal-infrared (TIR) spectra under different lighting conditions. A theoretical analysis using information theory was first carried out to demonstrate the complementarity between the different spectral bands. The optimal exploitation of the information provided by the set of multispectral images was subsequently addressed using multimodal matching score fusion techniques that efficiently synthesize complementary, meaningful information across spectra. Owing to peculiarities of thermal images, a specific face segmentation algorithm was required and developed. In the final proposed system, the Discrete Cosine Transform was used as a dimensionality reduction tool together with a fractional distance for matching, significantly reducing the cost in processing time and memory.
    Prior to this classification task, a selection of the relevant frequency bands is proposed in order to optimize the overall system, based on identifying and maximizing independence relations by means of discriminability criteria. The system has been extensively evaluated on the multispectral face database acquired specifically for this purpose. In this regard, a new visualization procedure has been suggested to combine different bands, establish valid comparisons, and give statistical information about the significance of the results. This experimental framework has enabled improved robustness against mismatches between training and testing illumination. Additionally, the focusing problem in the thermal spectrum has also been addressed, first for the general case of thermal images (or thermograms) and then for the case of facial thermograms, from both theoretical and practical points of view. To analyze the quality of facial thermograms degraded by blurring, an appropriate algorithm has been successfully developed. Experimental results strongly support the proposed multispectral facial image fusion, achieving very high performance in several conditions. These results represent a new advance in providing robust matching across changes in illumination, further inspiring highly accurate FR approaches in practical scenarios.
    Postprint (published version)
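The two matching ingredients named in the abstract, DCT-based dimensionality reduction and a fractional distance, can be sketched generically: keep only the lowest-frequency block of the 2-D DCT-II as a compact feature vector, then compare vectors with a Minkowski distance of order p < 1. The block size k and exponent p below are illustrative assumptions, not the dissertation's selected values.

```python
import math

def dct2_features(img, k=4):
    """Top-left k x k block of the (unnormalized) 2-D DCT-II of a grayscale
    image, i.e. its lowest spatial frequencies, flattened into a vector."""
    n, m = len(img), len(img[0])
    feats = []
    for u in range(k):
        for v in range(k):
            s = sum(img[x][y]
                    * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                    * math.cos(math.pi * (2 * y + 1) * v / (2 * m))
                    for x in range(n) for y in range(m))
            feats.append(s)
    return feats

def fractional_distance(a, b, p=0.5):
    """Minkowski distance with 0 < p < 1; in high dimensions it is less
    dominated by a single large coordinate difference than Euclidean."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)
```

Truncating to the low-frequency DCT block is what keeps the processing time and memory cost low: matching compares short feature vectors instead of full images.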