14 research outputs found

    A first step to accelerating fingerprint matching based on deformable minutiae clustering

    Fingerprint recognition is one of the most widely used biometric methods for authentication. Identifying a query fingerprint requires matching its minutiae against those of every fingerprint in the database. State-of-the-art matching algorithms are computationally costly and inefficient on large databases. In this work, we introduce faster methods to accelerate DMC (the most accurate fingerprint matching algorithm based only on minutiae). In particular, we translate into C++ the functions of the algorithm that represent the most costly tasks of the code; we create a library with the new code and link it to the original C# code through a CLR Class Library project by means of a C++/CLI wrapper. Our solution re-implements critical functions, e.g., the bit population count (using a fast C++ PopCount library) and the computation of the minutiae neighborhood (using the squared Euclidean distance). The experimental results show a significant reduction of the execution time in the optimized functions of the matching algorithm. Finally, a novel approach to further improve the matching algorithm, considering cache memory blocking and parallel data processing, is presented as future work.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
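
    The two re-implemented operations named above are easy to illustrate. Below is a minimal, self-contained C++ sketch, not the authors' library code: the Minutia record and function names are illustrative, __builtin_popcountll stands in for the fast PopCount routine, and the squared Euclidean distance avoids a square root when selecting neighbours.

```cpp
#include <cstdint>
#include <vector>

// Illustrative minutia record: position plus a 64-bit feature bitmask.
struct Minutia {
    float x, y;
    std::uint64_t mask;
};

// Descriptor similarity via bit population count; __builtin_popcountll
// compiles to a single POPCNT instruction on x86-64 (C++20 offers std::popcount).
inline int maskSimilarity(const Minutia& a, const Minutia& b) {
    return __builtin_popcountll(a.mask & b.mask);
}

// Squared Euclidean distance: comparing against r*r needs no sqrt.
inline float sqDist(const Minutia& a, const Minutia& b) {
    const float dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// Collect the indices of all minutiae within radius r of m.
std::vector<std::size_t> neighborhood(const std::vector<Minutia>& all,
                                      const Minutia& m, float r) {
    std::vector<std::size_t> idx;
    const float r2 = r * r;
    for (std::size_t i = 0; i < all.size(); ++i)
        if (sqDist(all[i], m) <= r2) idx.push_back(i);
    return idx;
}
```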

    Asynchronous processing for latent fingerprint identification on heterogeneous CPU-GPU systems

    Latent fingerprint identification is one of the most essential identification procedures in criminal investigations. Addressing this task is challenging as (i) it requires analyzing massive databases in reasonable periods and (ii) it is commonly solved by combining different methods with very complex data dependencies, which makes it very difficult to fully exploit heterogeneous CPU-GPU systems. Most efforts in this context focus on improving the accuracy of the approaches and neglect reducing the processing time. Indeed, the most accurate approach was designed for a single thread. This work introduces the fastest methodology for latent fingerprint identification that maintains high accuracy, called Asynchronous processing for Latent Fingerprint Identification (ALFI). ALFI fully exploits all the resources of CPU-GPU systems, using asynchronous processing and fine-coarse parallelism to analyze massive databases. Our approach reduces idle times in processing and exploits the inherent parallelism of comparing latent fingerprints to fingerprint impressions. We analyzed the performance of ALFI on Linux and Windows operating systems using the well-known NIST/FVC databases. Experimental results reveal that ALFI is on average 22x faster than the state-of-the-art algorithm, reaching a value of 44.7x for the best-studied case.
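
    ALFI's central idea, keeping the CPU busy while the GPU scores fingerprint batches so that neither side idles, can be sketched in portable C++ with std::async. The stage names and the work split below are assumptions for illustration, not ALFI's actual interface:

```cpp
#include <future>
#include <numeric>
#include <vector>

// Hypothetical stand-ins for ALFI's real stages; names and bodies are placeholders.
struct Batch { std::vector<float> data; };

Batch prepareOnCpu(int i) {                    // host side: load + preprocess batch i
    return Batch{std::vector<float>(1024, float(i))};
}

std::vector<float> compareOnGpu(Batch b) {     // device side: latent-vs-impression scoring
    return {std::accumulate(b.data.begin(), b.data.end(), 0.0f)};
}

// Software pipeline: while batch i is being scored, the CPU prepares batch i+1,
// hiding preprocessing and transfer time behind comparison time.
void identify(int nBatches) {
    Batch next = prepareOnCpu(0);
    for (int i = 0; i < nBatches; ++i) {
        auto scores = std::async(std::launch::async, compareOnGpu, std::move(next));
        if (i + 1 < nBatches) next = prepareOnCpu(i + 1);  // overlaps with scoring
        scores.get();  // consume batch i's scores (fusion/ranking would go here)
    }
}
```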

    Novel parallel approaches to efficiently solve spatial problems on heterogeneous CPU-GPU systems

    In recent years, approaches that seek to extract valuable information from large datasets have become particularly relevant in today's society. In this category, we can highlight those problems that comprise data analysis distributed across two-dimensional scenarios, called spatial problems. These usually involve processing (i) a series of features distributed across a given plane or (ii) a matrix of values where each cell corresponds to a point on the plane. Spatial problems are therefore open-ended and complex, which leaves room for imagination in the search for new solutions. One of the main complications we encounter when dealing with spatial problems is that they are very computationally intensive, typically taking a long time to produce the desired result. This drawback is also an opportunity to use heterogeneous systems to address spatial problems more efficiently. Heterogeneous systems give the developer greater freedom to speed up suitable algorithms by increasing the parallel programming options available, making it possible for different parts of a program to run on the dedicated hardware that suits them best. Several of the spatial problems that have not been optimised for heterogeneous systems cover very diverse areas that seem vastly different at first sight. However, they are closely related due to common data processing requirements, making them suitable for using dedicated hardware. In particular, this thesis provides new parallel approaches to tackle the following three crucial spatial problems: latent fingerprint identification, total viewshed computation, and path planning based on maximising visibility in large regions.
    Latent fingerprint identification is one of the essential identification procedures in criminal investigations. Addressing this task is difficult as (i) it requires analysing large databases in a short time, and (ii) it is commonly addressed by combining different methods with complex data dependencies, making it challenging to exploit parallelism on heterogeneous CPU-GPU systems. Moreover, most efforts in this context focus on improving the accuracy of the approaches and neglect reducing the processing time; the most accurate algorithm was designed to process the fingerprints using a single thread. We developed a new methodology to address the latent fingerprint identification problem, called "Asynchronous processing for Latent Fingerprint Identification" (ALFI), that speeds up processing while maintaining high accuracy. ALFI exploits all the resources of CPU-GPU systems using asynchronous processing and fine-coarse parallelism to analyse massive fingerprint databases. We assessed the performance of ALFI on Linux and Windows operating systems using the well-known NIST/FVC databases. Experimental results revealed that ALFI is on average 22x faster than the state-of-the-art identification algorithm, reaching a speed-up of 44.7x for the best-studied case.
    In terrain analysis, Digital Elevation Models (DEMs) are relevant datasets used as input to algorithms that sweep the terrain to analyse its main topological features, such as visibility, elevation, and slope. The most challenging computation in this field is the total viewshed problem, which involves computing the viewshed (the visible area of the terrain) for each of the points in the DEM. The algorithms intended to solve this problem require many memory accesses to 2D arrays which, despite being regular, lead to poor data locality in memory. We proposed a methodology called "skewed Digital Elevation Model" (sDEM) that substantially improves the locality of memory accesses and exploits the inherent parallelism of rotational sweep-based algorithms. In particular, sDEM applies a data relocation technique before accessing the memory and computing the viewshed, thus significantly reducing the execution time. Different implementations are provided for single-core, multi-core, single-GPU, and multi-GPU platforms. We carried out two experiments to compare sDEM with (i) the most widely used geographic information system (GIS) software and (ii) the state-of-the-art algorithm for solving the total viewshed problem. In the first experiment, sDEM is on average 8.8x faster than current GIS software, despite considering only a few points because of the limitations of the GIS software. In the second experiment, sDEM is 827.3x faster than the state-of-the-art algorithm in the best case.
    The use of Unmanned Aerial Vehicles (UAVs) with multiple onboard sensors has grown enormously in tasks involving terrain coverage, such as environmental and civil monitoring, disaster management, and forest firefighting. Many of these tasks require a quick and early response, which makes maximising the land covered from the flight path an essential goal, especially when the area to be monitored is irregular, large, and includes many blind spots. In this regard, state-of-the-art total viewshed algorithms can help analyse large areas and find new paths providing all-round visibility. We designed a new heuristic called "Visibility-based Path Planning" (VPP) to solve the path planning problem in large areas based on a thorough visibility analysis. VPP generates flyable paths that provide high visual coverage to monitor forest regions using the onboard camera of a single UAV. For this purpose, the hidden areas of the target territory are identified and considered when generating the path. Simulation results showed that VPP covers up to 98.7% of the Montes de Malaga Natural Park and 94.5% of the Sierra de las Nieves National Park, both located in the province of Malaga (Spain). In addition, a real flight test confirmed the high visibility achieved using VPP. Our methodology and analysis can be easily applied to enhance monitoring in other large outdoor areas.
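
    The core of sDEM is the data-relocation step: before a rotational sweep reads the elevation grid along a given direction, the 2D array is re-laid-out ("skewed") so that the points visited consecutively by the sweep sit contiguously in memory. The shift-per-row shear below is a generic illustration of that idea; the published sDEM layout may differ in detail.

```cpp
#include <cmath>
#include <vector>

// Generic shear ("skew") of an n x n DEM stored row-major: row y is shifted so
// that the cells a sweep of direction theta visits consecutively end up
// contiguous in the output row. Shifted-out cells wrap around here; a real
// implementation might pad instead.
std::vector<float> skewDem(const std::vector<float>& dem, int n, double theta) {
    std::vector<float> skewed(n * n);
    const double t = std::tan(theta);
    for (int y = 0; y < n; ++y) {
        int shift = static_cast<int>(std::lround(y * t)) % n;
        if (shift < 0) shift += n;                 // keep the index non-negative
        for (int x = 0; x < n; ++x) {
            const int sx = (x + shift) % n;        // sheared column index
            skewed[y * n + x] = dem[y * n + sx];   // unit-stride reads per output row
        }
    }
    return skewed;
}
// After this relocation, a sweep along direction theta walks each output row
// left to right, turning scattered 2D accesses into sequential, cache-friendly ones.
```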

    An Analysis on Adversarial Machine Learning: Methods and Applications

    Deep learning has witnessed astonishing advancement in the last decade and revolutionized many fields, ranging from computer vision to natural language processing. A prominent field of research that enabled such achievements is adversarial learning, which investigates the behavior and functionality of a learning model in the presence of an adversary. Adversarial learning consists of two major trends. The first analyzes the susceptibility of machine learning models to manipulation in the decision-making process and aims to improve the robustness to such manipulations. The second exploits adversarial games between components of the model to enhance the learning process. This dissertation provides an analysis of these two sides of adversarial learning and harnesses their potential for improving the robustness and generalization of deep models. In the first part of the dissertation, we study the adversarial susceptibility of deep learning models. We provide an empirical analysis of the extent of this vulnerability by proposing two adversarial attacks that explore the geometric and frequency-domain characteristics of inputs to manipulate deep decisions. Afterward, we formalize the susceptibility of deep networks using the first-order approximation of the predictions and extend the theory to the ensemble classification scheme. Inspired by these theoretical findings, we formalize a reliable and practical defense against adversarial examples to make ensembles robust. We extend this part by investigating the shortcomings of adversarial training, and highlight that the popular momentum stochastic gradient descent, developed essentially for natural training, is not well suited to optimization in adversarial training, since it is not designed to be robust against the chaotic behavior of gradients in this setup. Motivated by these observations, we develop an optimization method that is more suitable for adversarial training. In the second part of the dissertation, we harness adversarial learning to enhance the generalization and performance of deep networks in discriminative and generative tasks. We develop several models for biometric identification, including fingerprint distortion rectification and latent fingerprint reconstruction. In particular, we develop a ridge reconstruction model based on generative adversarial networks that estimates the missing ridge information in latent fingerprints. We introduce a novel modification that enables the generator network to preserve the ID information during the reconstruction process. To address the scarcity of data, e.g., in latent fingerprint analysis, we develop a supervised augmentation technique that combines input examples based on their salient regions. Our findings advocate that adversarial learning improves the performance and reliability of deep networks in a wide range of applications.
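
    The first-order view mentioned above can be made concrete. For a differentiable model and a small input perturbation, a Taylor expansion linearizes the prediction, and maximizing the loss under an ℓ∞ budget yields the familiar sign-of-gradient step. This is the textbook derivation (the basis of FGSM-style attacks), shown as context rather than as the dissertation's exact formalization:

```latex
% First-order (Taylor) model of adversarial susceptibility:
f(x + \delta) \approx f(x) + \nabla_x f(x)^{\top}\delta ,
\qquad \|\delta\|_{\infty} \le \epsilon .
% Under this linearization, the loss-maximizing perturbation is the
% FGSM-style sign-of-gradient step:
\delta^{*} = \epsilon \, \operatorname{sign}\!\left(\nabla_x \mathcal{L}\big(f(x), y\big)\right) .
```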

    Surface analysis and fingerprint recognition from multi-light imaging collections

    Multi-light imaging captures a scene from a fixed viewpoint through multiple photographs, each of which is illuminated from a different direction. Every image reveals information about the surface, with the intensity reflected from each point being measured for all lighting directions. The images captured are known as multi-light image collections (MLICs), and a variety of techniques have been developed over recent decades to acquire information from them, including shape from shading, photometric stereo and reflectance transformation imaging (RTI). Because the camera does not move, pixel coordinates in one image of a MLIC correspond to exactly the same position on the surface across all images in the MLIC. In chapter 1 we review the literature relevant to the methods presented in this thesis, describe different types of reflections and surface types, and explain the multi-light imaging process. In chapter 2 we present a novel automated RTI method which requires no calibration equipment (i.e. the shiny reference spheres or 3D-printed structures that other methods require) and automatically computes the lighting direction and compensates for non-uniform illumination. In chapter 3 we describe our novel MLIC method, termed Remote Extraction of Latent Fingerprints (RELF), which segments each multi-light imaging photograph into superpixels (small groups of pixels) and uses a neural network classifier to determine whether or not each superpixel contains fingerprint. The RELF algorithm then mosaics together the superpixels classified as fingerprint to obtain a complete latent print image, entirely contactlessly. In chapter 4 we detail our work with the Metropolitan Police Service (MPS) UK, who described their needs and requirements to us, which helped us to create a prototype RELF imaging device that is now being tested by MPS officers, who are validating the quality of the latent prints extracted using our technique. In chapter 5 we further develop our multi-light imaging latent fingerprint technique to extract latent prints from curved surfaces and automatically correct for surface curvature distortions. We have a patent pending for this method.
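
    Of the MLIC techniques listed above, photometric stereo makes the fixed-viewpoint, varying-light setup concrete: under a Lambertian reflectance model, each pixel's intensities across the collection constrain its surface normal, which is recoverable by least squares when at least three light directions are known. This is the standard textbook formulation, not a method specific to this thesis:

```latex
% Lambertian image formation for one pixel under m known light directions l_i
% (rho = albedo, n = unit surface normal):
I_i = \rho \, (\mathbf{n} \cdot \mathbf{l}_i), \qquad i = 1, \dots, m .
% Stacking the directions into L (m x 3) and the intensities into I (m x 1),
% the albedo-scaled normal follows by least squares for m >= 3:
\rho\,\mathbf{n} = (L^{\top} L)^{-1} L^{\top} I .
```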

    An evaluation of the mechanisms of recovery of DNA and fingerprints from fire scenes

    Incidents involving the intentional or deliberate setting of a fire within a compartment are frequently difficult to investigate, both because of the damage to the property in question and the apparent lack of forensic evidence which could be used to potentially identify a suspect. The recovery of such evidence in the form of DNA and fingerprints from a fire scene would therefore be advantageous. During this project, replicate samples of DNA and fingerprints were deposited on both porous and non-porous surfaces, which were then exposed to laboratory-controlled elevated temperatures for various time periods. In each case, replicate DNA samples or replicate depleted series of fingerprint samples were used to produce robust data sets for subsequent statistical analysis. DNA and fingerprint samples were also exposed to a real fire environment using a fire training facility in order to simulate operational conditions. The results obtained suggest that the optimum recovery method for low-template DNA was to use a wet followed by a dry cotton swabbing action of the surface before combining the two swabs for extraction. When the DNA was exposed to elevated temperatures in a controlled environment, there was a greater possibility of recovering a full SGM Plus profile if the DNA had been absorbed into a porous rather than non-porous surface and the surface exposed to a maximum of 100°C only. All of the samples which were exposed to the uncontrolled fire environment produced partial DNA profiles. The survivability and chemical enhancement of fingerprints deposited on both porous and non-porous surfaces was robustly investigated, with 70 replicate fingerprints examined in each case for each test condition. For porous surfaces, the most efficient sequence of enhancement techniques was an initial visual examination, followed by a fluorescence examination prior to treatment with DFO, and finally PD. It was found that this sequence could be employed for both wet and dry articles. In the case of dry, non-porous surfaces, visual examination followed by fluorescence examination should be utilised prior to undertaking superglue-BY40 treatment. Powder suspension should be substituted for superglue in the case of wet items.

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Face Anti-Spoofing and Deep Learning Based Unsupervised Image Recognition Systems

    One of the main problems of a supervised deep learning approach is that it requires large amounts of labeled training data, which are not always easily available. This PhD dissertation addresses the above-mentioned problem with a novel unsupervised deep learning face verification system called UFace, which does not require labeled training data, as it automatically generates training data, in an unsupervised way, even from a relatively small amount of data. The method starts by selecting, in an unsupervised way, the k most similar and k most dissimilar images for a given face image. Moreover, this PhD dissertation proposes a new loss function to work with the proposed method. Specifically, the method computes the loss function k times for both similar and dissimilar images for each input image, in order to increase the discriminative power of the feature vectors and to learn the inter-class and intra-class face variability. Training is carried out on the similar and dissimilar input face image vectors, rather than on the same training input face image vector, in order to extract face embeddings. UFace is evaluated on four benchmark face verification datasets: the Labeled Faces in the Wild (LFW), YouTube Faces (YTF), Cross-age LFW (CALFW) and Celebrities in Frontal Profile in the Wild (CFP-FP) datasets, achieving accuracies of 99.40%, 96.04%, 95.12% and 97.89%, respectively. These results, despite being obtained without supervision, are on par with those of similar but fully supervised methods. Another area of research related to face verification is face anti-spoofing. State-of-the-art face anti-spoofing systems use either deep learning or manually extracted image quality features. However, many of the existing image quality features used in face anti-spoofing systems do not discriminate well between spoofed and genuine faces. Additionally, state-of-the-art face anti-spoofing systems that use deep learning approaches do not generalize well. To address these problems, this PhD dissertation proposes a hybrid face anti-spoofing system that takes the best from the image quality feature and deep learning approaches. This work selects and proposes a set of seven novel no-reference image quality feature measurements that discriminate well between spoofed and genuine faces, to complement the deep learning approach. It then proposes two approaches. In the first, the scores from the image quality features are fused with the deep learning classifier scores in a weighted fashion, and the combined scores are used to determine whether a given input face image is genuine or spoofed. In the second, the image quality features are concatenated with the deep learning features, and the concatenated feature vector is fed to the classifier to improve the performance and generalization of the anti-spoofing system. Extensive evaluations are conducted on five benchmark face anti-spoofing datasets: Replay-Attack, CASIA-MFSD, MSU-MFSD, OULU-NPU and SiW. Experiments on these datasets show that the proposed system gives better results than several state-of-the-art anti-spoofing systems in many scenarios.
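
    The first fusion approach described above is simple to sketch: the image-quality score and the deep classifier score are combined as a convex weighted sum and thresholded. A minimal C++ sketch follows; the weight and threshold values are illustrative placeholders, not the dissertation's tuned parameters.

```cpp
// Weighted score-level fusion (illustrative sketch): combine a no-reference
// image quality score and a deep classifier score, both assumed normalized to
// [0,1], where higher means "more likely genuine".
bool isGenuine(float qualityScore, float deepScore,
               float w = 0.4f,           // weight of the quality branch (placeholder)
               float threshold = 0.5f) { // decision threshold (placeholder)
    const float fused = w * qualityScore + (1.0f - w) * deepScore;
    return fused >= threshold;
}
```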

    Generative Adversarial Network and Its Application in Aerial Vehicle Detection and Biometric Identification System

    In recent years, generative adversarial networks (GANs) have shown great potential in advancing the state of the art in many areas of computer vision, most notably in image synthesis and manipulation tasks. A GAN is a generative model which simultaneously trains a generator and a discriminator in an adversarial manner to produce realistic-looking synthetic data by capturing the underlying data distribution. Due to its powerful ability to generate high-quality and visually pleasing results, we apply it to super-resolution and image-to-image translation techniques to address vehicle detection in low-resolution aerial images and cross-spectral, cross-resolution iris recognition. First, we develop a Multi-scale GAN (MsGAN) with multiple intermediate outputs, which progressively learns the details and features of high-resolution aerial images at different scales. The upscaled, super-resolved aerial images are then fed to a You Only Look Once-version 3 (YOLO-v3) object detector, and the detection loss is jointly optimized along with a super-resolution loss to emphasize target vehicles sensitive to the super-resolution process. A further problem remains when detection takes place at night or in a dark environment, which requires an infrared (IR) detector, and training such a detector needs a large number of IR images. To address these challenges, we develop a GAN-based joint cross-modal super-resolution framework where low-resolution (LR) IR images are translated and super-resolved to high-resolution (HR) visible (VIS) images before applying detection. This approach significantly improves the accuracy of aerial vehicle detection by leveraging the benefits of super-resolution techniques in a cross-modal domain. Second, to increase the performance and reliability of deep learning-based biometric identification systems, we focus on developing conditional GAN (cGAN) based cross-spectral, cross-resolution iris recognition and offer two different frameworks. The first approach trains a cGAN to jointly translate and super-resolve LR near-infrared (NIR) iris images to HR VIS iris images, so that cross-spectral, cross-resolution iris matching is performed at the same resolution and within the same spectrum. In the second approach, we design a coupled GAN (cpGAN) architecture to project both VIS and NIR iris images into a low-dimensional embedding domain. The goal of this architecture is to ensure maximum pairwise similarity between the feature vectors from the two iris modalities of the same subject. We have also proposed a pose attention-guided coupled profile-to-frontal face recognition network to learn discriminative and pose-invariant features in an embedding subspace. To show that the feature vectors learned by this deep subspace can be used for tasks beyond recognition, we implement a GAN architecture which is able to reconstruct a frontal face from its corresponding profile face. This capability can be used in various face analysis tasks, such as emotion detection and expression tracking, where having a frontal face image can improve accuracy and reliability. Overall, our research has shown its efficacy by achieving new state-of-the-art results through extensive experiments on the publicly available datasets reported in the literature.
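
    For context, the adversarial game described here is the standard GAN minimax objective, in which the generator G and the discriminator D are trained against each other:

```latex
\min_{G}\max_{D}\; V(D,G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big] .
```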

    Advanced Biometrics with Deep Learning

    Biometrics, such as fingerprint, iris, face, hand print, hand vein, speech, and gait recognition, have become a commonplace means of identity management for various applications. Biometric systems follow a typical pipeline composed of separate preprocessing, feature extraction, and classification stages. Deep learning, as a data-driven representation learning approach, has been shown to be a promising alternative to conventional data-agnostic, handcrafted preprocessing and feature extraction for biometric systems. Furthermore, deep learning offers an end-to-end learning paradigm that unifies preprocessing, feature extraction, and recognition based solely on biometric data. This Special Issue has collected 12 high-quality, state-of-the-art research papers that deal with challenging issues in advanced biometric systems based on deep learning. The 12 papers can be divided into four categories according to biometric modality: face biometrics, medical electronic signals (EEG and ECG), voice print, and others.