
    Occurrence and associative value of non-identifiable fingermarks

    Fingermarks that have insufficient characteristics for identification often have discernible characteristics that could form the basis for lesser degrees of correspondence or probability of occurrence within a population. Currently, latent prints that experts judge to be insufficient for identification are not used as associative evidence. How often do such prints occur, and what is their potential value for association? The answers are important. We could be routinely setting aside a very important source of associative evidence with high potential impact in many cases; or such prints might be of very low utility, adding very little or only very rarely contributing to cases in a meaningful way. The first step is to better understand the occurrence and range of associative value of these fingermarks. The project goal was to explore and test the theory that, in large numbers of cases, fingermarks of no value for identification purposes occur and are readily available, though not used, yet have associative value that could provide useful information. Latent fingermarks were collected from nine state and local jurisdictions. Fingermarks included were those (1) collected in the course of investigations using existing jurisdictional procedures, (2) originally assessed by the laboratory as of no value for identification (NVID), (3) re-assessed by expert review as NVID, but with at least three clear and reliable minutiae in relationship to one another, and (4) determined to show at least three auto-encoded minutiae. An expected associative value (ESLR) for each mark was measured, without reference to a putative source, based on modeling the within-variability and between-variability of AFIS scores. This method incorporated (1) latest-generation feature extraction, (2) a (minutiae-only) matcher, (3) a validated distortion model, and (4) calibration on the NIST SD27 database. Observed associative value distributions were determined for violent crimes, property crimes, and for existing objective measurements of latent print quality. The 750 Non-Identifiable Fingermarks (NIFMs) showed log10 ESLR values ranging from 1.05 to 10.88, with a mean value of 5.56 (s.d. 2.29), corresponding to an ESLR of approximately 380,000. It is clear that there are large numbers of cases in which NIFMs occur that have high potential associative value as indicated by the ESLR. These NIFMs are readily available, but not used, yet have associative value that could provide useful information. These findings lead to the follow-on questions, “How useful would NIFM evidence be in actual practice?” and “What developments or improvements are needed to maximize this contribution?”
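    The ESLR described above is a score-based likelihood ratio built from modeled within-source and between-source AFIS score variability. A minimal sketch of that idea, using assumed (synthetic) score samples and a simple kernel density model rather than the study's validated distortion model and SD27 calibration, might look like this:

```python
# Minimal sketch (not the study's implementation): a score-based likelihood
# ratio from modeled within-source and between-source matcher score samples.
# The score distributions below are synthetic stand-ins for AFIS output.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
within_scores = rng.normal(180, 25, 500)    # same-source comparisons (assumed)
between_scores = rng.normal(60, 20, 5000)   # different-source comparisons (assumed)

f_within = gaussian_kde(within_scores)      # score density when mark and print share a source
f_between = gaussian_kde(between_scores)    # score density when they do not

def slr(score):
    """Score-based likelihood ratio at an observed matcher score."""
    return f_within(score)[0] / max(f_between(score)[0], 1e-12)

# Expected SLR for the mark: average the SLR over plausible same-source scores,
# which needs no putative source, only the modeled score variability.
expected_slr = np.mean([slr(s) for s in within_scores])
print(f"log10 ESLR ~ {np.log10(expected_slr):.2f}")
```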

    Facilitating sensor interoperability and incorporating quality in fingerprint matching systems

    This thesis addresses the issues of sensor interoperability and quality in the context of fingerprints and makes a three-fold contribution. The first contribution is a method to facilitate fingerprint sensor interoperability when comparing fingerprint images originating from multiple sensors. The proposed technique models the relationship between images acquired by two different sensors using a Thin Plate Spline (TPS) function. Such a calibration model is observed to enhance inter-sensor matching performance on the MSU dataset containing images from optical and capacitive sensors. Experiments indicate that the proposed calibration scheme improves the inter-sensor Genuine Accept Rate (GAR) by 35% to 40% at a False Accept Rate (FAR) of 0.01%. The second contribution is a technique to incorporate local image quality information in the fingerprint matching process. Experiments on the FVC 2002 and 2004 databases suggest the potential of this scheme to improve the matching performance of a generic fingerprint recognition system. The final contribution of this thesis is a method for classifying fingerprint images into three categories: good, dry, and smudged. Such a categorization would assist in invoking different image processing or matching schemes based on the nature of the input fingerprint image. A classification rate of 97.45% is obtained on a subset of the FVC 2004 DB1 database.
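    A minimal sketch of the TPS-based calibration idea: map minutia coordinates from one sensor into the other sensor's coordinate frame using a thin plate spline fitted on corresponding control points. The control points and the use of SciPy's thin-plate-spline interpolator are illustrative assumptions, not the thesis implementation:

```python
# Calibrate coordinates from sensor A into sensor B's frame with a thin plate
# spline (TPS) mapping fitted on hypothetical corresponding control points.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding control points (e.g., matched minutiae) from the two sensors;
# the values below are made up for illustration.
pts_sensor_a = np.array([[10.0, 12.0], [80.0, 15.0], [45.0, 60.0],
                         [20.0, 90.0], [70.0, 85.0], [50.0, 30.0]])
pts_sensor_b = np.array([[12.5, 13.0], [83.0, 18.5], [47.5, 64.0],
                         [23.0, 95.5], [74.0, 90.0], [52.0, 32.5]])

# Fit a 2D -> 2D TPS warp (one interpolator handles both output coordinates).
tps = RBFInterpolator(pts_sensor_a, pts_sensor_b, kernel='thin_plate_spline')

# Map a query template's minutiae acquired with sensor A into sensor B's space
# before running the inter-sensor matcher.
query_minutiae_a = np.array([[33.0, 41.0], [66.0, 72.0]])
query_minutiae_b = tps(query_minutiae_a)
print(query_minutiae_b)
```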

    Blending techniques for underwater photomosaics

    The creation of consistent underwater photomosaics is typically hampered by local misalignments and inhomogeneous illumination of the image frames, which introduce visible seams that complicate post-processing of the mosaics for object recognition and shape extraction. In this thesis, methods are proposed to improve blending techniques for underwater photomosaics, and the results are compared with traditional methods. Five specific techniques drawn from various areas of image processing, computer vision, and computer graphics have been tested: illumination correction based on the median mosaic, thin plate spline warping, perspective warping, and graph-cut blending applied in the gradient domain and in the wavelet domain. A combination of the first two methods yields globally homogeneous underwater photomosaics with preserved continuous features. Further improvements are obtained with the graph-cut technique applied in the spatial domain.
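    One simple way a median-mosaic illumination correction of the kind listed above can be set up is sketched below. The registration of frames, the NaN convention for out-of-footprint pixels, and the Gaussian smoothing scale are assumptions for illustration, not the thesis implementation:

```python
# Flatten per-frame illumination against the pixel-wise median mosaic.
# Assumes frames are already registered to the mosaic frame and stored as
# float arrays with NaN outside each frame's footprint.
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination_correct(frames, sigma=50.0, eps=1e-6):
    """frames: list of HxW float arrays. Returns corrected frames and the median mosaic."""
    stack = np.stack(frames)
    median_mosaic = np.nanmedian(stack, axis=0)
    mosaic_filled = np.nan_to_num(median_mosaic, nan=np.nanmean(median_mosaic))
    smoothed_mosaic = gaussian_filter(mosaic_filled, sigma)
    corrected = []
    for frame in frames:
        valid = ~np.isnan(frame)
        frame_filled = np.where(valid, frame, np.nanmean(frame))
        # Low-frequency gain field: smoothed median mosaic over smoothed frame.
        gain = smoothed_mosaic / (gaussian_filter(frame_filled, sigma) + eps)
        corrected.append(np.where(valid, frame * gain, np.nan))
    return corrected, median_mosaic
```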

    A new algorithm for minutiae extraction and matching in fingerprint

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. A novel algorithm for fingerprint template formation and matching in automatic fingerprint recognition has been developed. At present, the fingerprint is considered the dominant biometric trait among all biometrics due to its wide range of applications in security and access control. Most commercially established systems use a singularity point (SP) or ‘core’ point for fingerprint indexing and template formation. The efficiency of these systems relies heavily on the detection of the core and on the quality of the image itself. The presence of multiple SPs or the absence of a ‘core’ on the image can cause anomalies in the formation of the template and may result in a high False Acceptance Rate (FAR) or False Rejection Rate (FRR). The loss of actual minutiae or the appearance of new or spurious minutiae in the scanned image can also contribute to errors in the matching process. A more sophisticated algorithm is therefore necessary for the formation and matching of templates in order to achieve low FAR and FRR and to make identification more accurate. The novel algorithm presented here does not rely on any ‘core’ or SP, which makes the structure invariant with respect to global rotation and translation. Moreover, it does not need the orientation of the minutiae points on which most established algorithms are based. The matching methodology is based on the local features of each minutia point, such as the distances to its nearest neighbours and their internal angle. Using a publicly available fingerprint database, the algorithm has been evaluated and compared with other benchmark algorithms. It has been found that the algorithm performs better than the others and achieves an equal error rate of 3.5%.
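    A hedged sketch of the kind of orientation-free local minutia descriptor described above (distances to the two nearest neighbours plus the internal angle between them). The neighbour count, tolerances, and scoring rule are illustrative assumptions, not the thesis algorithm:

```python
# Rotation/translation-invariant local descriptors from minutia positions only.
import numpy as np

def local_descriptors(minutiae):
    """minutiae: (N, 2) array of (x, y) positions, N >= 3. Returns (N, 3) descriptors."""
    descriptors = []
    for i, p in enumerate(minutiae):
        diffs = np.delete(minutiae, i, axis=0) - p
        dists = np.linalg.norm(diffs, axis=1)
        order = np.argsort(dists)[:2]                 # two nearest neighbours
        v1, v2 = diffs[order[0]], diffs[order[1]]
        cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))  # internal angle at the minutia
        descriptors.append([dists[order[0]], dists[order[1]], angle])
    return np.array(descriptors)

def match_score(template_a, template_b, d_tol=5.0, a_tol=np.deg2rad(10)):
    """Fraction of descriptors in A with a compatible descriptor in B."""
    da, db = local_descriptors(template_a), local_descriptors(template_b)
    hits = 0
    for d in da:
        close = (np.abs(db[:, 0] - d[0]) < d_tol) & \
                (np.abs(db[:, 1] - d[1]) < d_tol) & \
                (np.abs(db[:, 2] - d[2]) < a_tol)
        hits += bool(close.any())
    return hits / len(da)
```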

    SMART TECHNIQUES FOR FAST MEDICAL IMAGE ANALYSIS AND PROCESSING

    Medical imaging has become an important transversal application and research field that embraces a great variety of sciences. Imaging is the central science of measurement in diagnosing and treating diseases. Technological progress has made human imaging possible at scales ranging from a single molecule to the whole body. The open challenge is to handle the huge amount of medical information with smart and fast techniques that allow clinical and image data to be analysed and processed. In this Ph.D. thesis, many issues have been addressed and improvements have been produced in various fields, such as biometry, organ and tissue segmentation, MRI thermometry, and medical report retrieval and classification. The objective set at the beginning of this Ph.D. was to analyse, understand, and advance various kinds of problems related to medical image and data analysis, working closely with radiologist physicians and specific equipment, and following the common denominator of fast and smart methodologies applied to medical imaging. A series of contributions has been carried out in fields such as:
    • proposing two different kinds of multimodal biometric authentication systems that investigate fingerprint and iris fusion and processing;
    • applying expert systems to the issue of data validation, comparing and validating data against two different methodologies that assess liver iron overload in thalassemic patients;
    • addressing and improving non-invasive referenceless thermometry by using Radial Basis Functions as interpolators;
    • applying the multi-seed region growing method to the segmentation of CT liver datasets;
    • proposing a novel unsupervised voxel-based morphology method for MRI brain segmentation using k-means clustering and neural network classification;
    • proposing a novel ontology-based algorithm for information retrieval from mammographic text reports.
    The above work has been developed in cooperation with the medical staff of the “Dipartimento di Biopatologia e Biotecnologie Mediche e Forensi” and the “Scuola di Specializzazione in Radiodiagnostica” of the Università degli Studi di Palermo. All the proposed contributions show good performance using standard metrics. Most of them have produced scientific publications in computer science venues as well as in radiological venues. In addition, some specific frameworks, such as OsiriX, have been used to improve the usability and ease of use of the developed systems.
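    As a small illustration of the clustering ingredient in the voxel-based brain segmentation contribution listed above, the sketch below clusters voxel intensities with k-means; the synthetic volume, the choice of three tissue classes, and the omission of the neural-network classification stage are assumptions for illustration only:

```python
# Unsupervised k-means voxel clustering for an MRI volume (illustrative sketch).
import numpy as np
from sklearn.cluster import KMeans

def cluster_brain_volume(volume, n_tissues=3):
    """volume: 3D MRI intensity array. Returns an integer label volume."""
    voxels = volume.reshape(-1, 1).astype(np.float64)
    labels = KMeans(n_clusters=n_tissues, n_init=10, random_state=0).fit_predict(voxels)
    return labels.reshape(volume.shape)

# Example on synthetic data (stand-in for a skull-stripped T1 volume).
volume = np.random.default_rng(0).normal(size=(32, 32, 32))
segmentation = cluster_brain_volume(volume)
```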

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision.
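    A short sketch of the sparse coding idea described above: learn a dictionary adapted to (synthetic, stand-in) image patches, then represent each patch as a linear combination of a few atoms. The patch size, atom count, sparsity level, and use of scikit-learn are assumptions for illustration:

```python
# Dictionary learning plus sparse coding on flattened image patches.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
patches = rng.normal(size=(2000, 64))          # 2000 flattened 8x8 patches (synthetic)

dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0, random_state=0)
dictionary = dico.fit(patches).components_     # learned, data-adapted atoms

# Sparse codes: each patch approximated by at most 5 atoms.
codes = sparse_encode(patches, dictionary, algorithm='omp', n_nonzero_coefs=5)
reconstruction = codes @ dictionary
print("avg nonzeros per patch:", np.mean(np.count_nonzero(codes, axis=1)))
```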

    An Analysis on Adversarial Machine Learning: Methods and Applications

    Deep learning has witnessed astonishing advancement in the last decade and revolutionized many fields ranging from computer vision to natural language processing. A prominent field of research that enabled such achievements is adversarial learning, which investigates the behavior and functionality of a learning model in the presence of an adversary. Adversarial learning consists of two major trends. The first trend analyzes the susceptibility of machine learning models to manipulation in the decision-making process and aims to improve robustness to such manipulations. The second trend exploits adversarial games between components of the model to enhance the learning process. This dissertation aims to provide an analysis of these two sides of adversarial learning and to harness their potential for improving the robustness and generalization of deep models. In the first part of the dissertation, we study the adversarial susceptibility of deep learning models. We provide an empirical analysis of the extent of vulnerability by proposing two adversarial attacks that explore the geometric and frequency-domain characteristics of inputs to manipulate deep decisions. Afterward, we formalize the susceptibility of deep networks using the first-order approximation of the predictions and extend the theory to the ensemble classification scheme. Inspired by the theoretical findings, we formalize a reliable and practical defense against adversarial examples to robustify ensembles. We extend this part by investigating the shortcomings of adversarial training and highlight that the popular momentum stochastic gradient descent, developed essentially for natural training, is not appropriate for optimization in adversarial training, since it is not designed to be robust against the chaotic behavior of gradients in this setup. Motivated by these observations, we develop an optimization method that is more suitable for adversarial training. In the second part of the dissertation, we harness adversarial learning to enhance the generalization and performance of deep networks in discriminative and generative tasks. We develop several models for biometric identification, including fingerprint distortion rectification and latent fingerprint reconstruction. In particular, we develop a ridge reconstruction model based on generative adversarial networks that estimates the missing ridge information in latent fingerprints. We introduce a novel modification that enables the generator network to preserve the ID information during the reconstruction process. To address the scarcity of data, e.g., in latent fingerprint analysis, we develop a supervised augmentation technique that combines input examples based on their salient regions. Our findings advocate that adversarial learning improves the performance and reliability of deep networks in a wide range of applications.
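    To make the first-order susceptibility mentioned above concrete, the sketch below shows the standard fast gradient sign method (FGSM), a textbook attack that perturbs an input along the sign of the loss gradient. It is an illustration only, not one of the attacks proposed in the dissertation:

```python
# FGSM: one first-order step that increases the classification loss within an
# L-infinity ball of radius epsilon around the input (illustrative only).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    """Return an adversarially perturbed copy of x for labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()   # first-order (linearized) step
        x_adv = x_adv.clamp(0.0, 1.0)                 # stay in the valid image range
    return x_adv.detach()
```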

    A Variational Approach to Image Restoration Problems (영상 복원 문제의 변분법적 접근)

    Thesis (Ph.D.) -- Graduate School of Seoul National University, Department of Mathematical Sciences, February 2013 (advisor: 강명주). Image restoration has been an active research area in image processing and computer vision during the past several decades. We explore variational partial differential equation (PDE) models for the image restoration problem. We start our discussion by reviewing classical models, by which the work of this dissertation is highly motivated. The content of the dissertation is divided into two main subjects. The first topic is image denoising, where we propose a non-convex hybrid total variation model and apply an iterative reweighted algorithm to solve it. The second topic is image decomposition, in which we separate an image into a structural component and an oscillatory component using a local gradient constraint. Contents: 1 Introduction (image restoration; brief overview of the dissertation); 2 Previous works (image denoising: fundamental, higher-order, hybrid, and non-convex models; image decomposition: Meyer's model, nonlinear filters); 3 Non-convex hybrid TV for image denoising (the proposed non-convex hybrid TV model; iterative reweighted hybrid total variation algorithm; numerical experiments, including comparisons with other non-convex higher-order regularizers, with Krishnan et al. [39], and with the state of the art); 4 Image decomposition (local gradient constraint and texture estimator; the proposed model; anisotropic TV-L2, isotropic TV-L2, and isotropic TV-L1 algorithms; numerical experiments and discussion); 5 Conclusion and future works.
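    For context, the classical total variation (ROF) denoising model that the abstract builds on, and a generic non-convex hybrid form combining first- and second-order terms of the kind the abstract suggests, can be written as follows; the specific non-convex penalty and weights used in the thesis are not given here, so the second line is only an assumed illustration:

```latex
% Classical ROF total variation denoising of a noisy image f on domain \Omega:
\min_{u} \int_{\Omega} |\nabla u| \, dx \;+\; \frac{\lambda}{2} \int_{\Omega} (u - f)^2 \, dx

% A generic non-convex hybrid form with first- and second-order terms, e.g.
% \varphi(t) = t^p with 0 < p < 1 (the exact penalty used in the thesis may differ):
\min_{u} \int_{\Omega} \varphi\big(|\nabla u|\big) \, dx
  \;+\; \alpha \int_{\Omega} \varphi\big(|\nabla^{2} u|\big) \, dx
  \;+\; \frac{\lambda}{2} \int_{\Omega} (u - f)^2 \, dx
```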

    Recent Application in Biometrics

    In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of this development well. They include biometric sample quality, privacy-preserving and cancellable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges of implementing the technology in portable devices. The book consists of 15 chapters. It is divided into four sections, namely biometric applications on mobile platforms, cancellable biometrics, biometric encryption, and other applications. The book was reviewed by the editors, Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors, Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park and Dr. Sook Yoon, as well as a number of anonymous reviewers.

    Advances in Artificial Intelligence: Models, Optimization, and Machine Learning

    The present book contains all the articles accepted and published in the Special Issue “Advances in Artificial Intelligence: Models, Optimization, and Machine Learning” of the MDPI Mathematics journal, which covers a wide range of topics connected to the theory and applications of artificial intelligence and its subfields. These topics include, among others, deep learning and classic machine learning algorithms, neural modelling, architectures and learning algorithms, biologically inspired optimization algorithms, algorithms for autonomous driving, probabilistic models and Bayesian reasoning, and intelligent agents and multiagent systems. We hope that the scientific results presented in this book will serve as valuable sources of documentation and inspiration for anyone willing to pursue research in artificial intelligence, machine learning, and their widespread applications.