
    On Generative Adversarial Network Based Synthetic Iris Presentation Attack And Its Detection

    Human iris is considered a reliable and accurate modality for biometric recognition due to its unique texture information. The reliability and accuracy of the iris biometric modality have prompted its large-scale deployment for critical applications such as border control and national identification projects. The extensive growth of iris recognition systems has raised apprehensions about the susceptibility of these systems to various presentation attacks. In this thesis, a novel iris presentation attack using deep learning based synthetically generated iris images is presented. Utilizing the generative capability of deep convolutional generative adversarial networks and iris quality metrics, a new framework, named iDCGAN, is proposed for creating realistic-appearing synthetic iris images. In-depth analysis is performed using quality score distributions of real and synthetically generated iris images to understand the effectiveness of the proposed approach. We also demonstrate that synthetically generated iris images can be used to attack existing iris recognition systems. As synthetically generated iris images can be effectively deployed in iris presentation attacks, it is important to develop accurate iris presentation attack detection algorithms that can distinguish such synthetic iris images from real iris images. For this purpose, a novel structural and textural feature-based iris presentation attack detection framework (DESIST) is proposed. The key emphasis of DESIST is on developing a unified framework for detecting a medley of iris presentation attacks, including synthetic iris. Experimental evaluations showcase the efficacy of the proposed DESIST framework in detecting synthetic iris presentation attacks.
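    The abstract describes pairing a generator with iris quality metrics to keep only realistic-looking synthetic samples. A minimal sketch of that filtering step, assuming a hypothetical Laplacian-variance sharpness score (the thesis's actual quality metrics are not specified in this abstract):

```python
import numpy as np

def focus_score(img):
    # Hypothetical sharpness proxy: variance of a simple Laplacian response.
    # The actual iDCGAN quality metrics are not given in the abstract.
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def filter_synthetic(images, threshold):
    """Keep only generated iris images whose quality score passes a threshold."""
    return [img for img in images if focus_score(img) >= threshold]

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))        # high-frequency content -> higher score
blurry = np.ones((64, 64)) * 0.5    # flat image -> near-zero score
kept = filter_synthetic([sharp, blurry], threshold=0.1)
```

    In a full pipeline the generator would propose candidate images and only those passing the quality gate would be used for the presentation attack; the threshold and score are illustrative placeholders.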

    IRDO: Iris Recognition by Fusion of DTCWT and OLBP

    Iris biometrics is a physiological trait of human beings. In this paper, we propose an iris recognition method using fusion of Dual-Tree Complex Wavelet Transform (DTCWT) and Overlapping Local Binary Pattern (OLBP) features. An eye image is preprocessed to extract the iris and obtain the Region of Interest (ROI). Complex wavelet features are extracted from the iris ROI using DTCWT. OLBP is further applied to the ROI to generate features from the magnitude coefficients. The resultant features are generated by fusing the DTCWT and OLBP features using arithmetic addition. Euclidean Distance (ED) is used to compare test iris features with database iris features to identify a person. It is observed that the Total Success Rate (TSR) and Equal Error Rate (EER) values are better for the proposed IRDO than for the state-of-the-art technique.
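    The fusion and matching stages described above can be sketched as follows. This assumes both feature vectors have the same length (implied by the arithmetic-addition fusion) and uses toy numbers; the real DTCWT and OLBP feature extraction is not reproduced here:

```python
import numpy as np

def fuse(dtcwt_feat, olbp_feat):
    # Fusion by arithmetic addition, as described in the abstract.
    # Equal-length feature vectors are an assumption of this sketch.
    return dtcwt_feat + olbp_feat

def identify(test_feat, database):
    """Return the enrolled identity whose fused features are closest in Euclidean distance."""
    best_id, best_d = None, np.inf
    for person_id, feat in database.items():
        d = np.linalg.norm(test_feat - feat)
        if d < best_d:
            best_id, best_d = person_id, d
    return best_id

# Toy fused features for two enrolled identities (illustrative values only).
db = {
    "alice": fuse(np.array([1.0, 2.0]), np.array([0.5, 0.5])),
    "bob":   fuse(np.array([4.0, 1.0]), np.array([0.5, 0.5])),
}
probe = fuse(np.array([1.1, 1.9]), np.array([0.5, 0.5]))
```

    Here `identify(probe, db)` returns the nearest enrolled identity; in practice a distance threshold would also be applied so that impostor probes can be rejected.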

    Generation Of An Accurate, Metric Spatial Database Of A Large Multi Storied Building

    This thesis presents the development of a novel method to generate an accurate, metric spatial database of a large multi-storied building during construction. The algorithm uses the 3D CAD model of the building and video of the structure captured by an Unmanned Aircraft System (UAS). The spatial database is then used to perform several inspection procedures, such as metric data analysis, spatial queries for images, and visualization through a 3D textured model. The video is processed using a simultaneous localization and mapping (SLAM) system, which generates a sparse 3D map of the environment. Our algorithm registers the 3D map with the 3D CAD model to generate the accurate metric spatial database. The user can click on the desired part of the CAD model for inspection, and the image of that part is shown using the spatial indexing between the CAD model and the spatially distributed images. The image returned by the spatial query can be used to extract metric information. The spatial database is also used to generate a 3D textured model which provides visual as-built documentation. The metric data calculation and textured model reconstruction methods have been compared to the state-of-the-art Pix4D software (latest release, Version 3.1). The proposed method has a mean squared error (MSE) of 31.9 cm2 and a standard deviation of 4.28 cm, whereas Pix4D has a higher MSE of 45.6 cm2 and a standard deviation of 4.91 cm. Using a t-test and ANOVA, we have shown with 99% statistical confidence that the proposed algorithm performs better than Pix4D.
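    The key step above is registering the sparse, scale-ambiguous SLAM map to the metric CAD model. A standard way to do this from matched 3D point pairs is a least-squares similarity alignment (Umeyama-style scale + rotation + translation); the thesis's actual registration procedure may differ, so this is only an illustrative sketch:

```python
import numpy as np

def register_similarity(src, dst):
    """Estimate s, R, t minimizing ||dst - (s * src @ R.T + t)|| over point pairs.

    src: (n, 3) SLAM map points; dst: (n, 3) corresponding CAD model points.
    """
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / n                  # cross-covariance (dst outer src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                           # guard against reflections
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / n
    s = np.trace(np.diag(D) @ S) / var_src     # recovers the metric scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: a known similarity transform should be recovered exactly.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
rng = np.random.default_rng(1)
src = rng.random((20, 3))
dst = 2.0 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = register_similarity(src, dst)
```

    The recovered scale is what turns the SLAM map's arbitrary units into metric units, after which distances measured in the database correspond to real-world lengths.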

    Robot guidance using machine vision techniques in industrial environments: A comparative review

    In the factory of the future, most operations will be done by autonomous robots that need visual feedback to move around the workspace avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complement the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight, and laser triangulation, among others, are widely used for inspection and quality control processes in industry and now for robot guidance. Choosing which type of vision system to use depends strongly on the parts that need to be located or measured. Thus, this paper presents a comparative review of different machine vision techniques for robot guidance. This work analyzes accuracy, range and weight of the sensors, safety, processing time, and environmental influences. Researchers and developers can use it as background information for their future work.