
    Curved Gabor Filters for Fingerprint Image Enhancement

    Gabor filters play an important role in many application areas for the enhancement of various types of images and the extraction of Gabor features. For the purpose of enhancing curved structures in noisy images, we introduce curved Gabor filters, which locally adapt their shape to the direction of flow. These curved Gabor filters enable the choice of filter parameters that increase the smoothing power without creating artifacts in the enhanced image. In this paper, curved Gabor filters are applied to the curved ridge and valley structure of low-quality fingerprint images. First, we combine two orientation field estimation methods in order to obtain a more robust estimation for very noisy images. Next, curved regions are constructed by following the respective local orientation, and they are used for estimating the local ridge frequency. Lastly, curved Gabor filters are defined based on the curved regions and applied for the enhancement of low-quality fingerprint images. Experimental results on the FVC2004 databases show improvements of this approach over state-of-the-art enhancement methods.
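The curved filters described above generalise the classical oriented Gabor kernel by bending its sampling grid along the ridge flow. As a point of reference, the following sketch builds only the standard (straight) kernel that the paper extends; the parameter names and defaults are illustrative, not the paper's.

```python
import numpy as np

def gabor_kernel(theta, freq, sigma_x=4.0, sigma_y=4.0, size=11):
    """Classical oriented Gabor kernel: a Gaussian envelope times a
    cosine carrier running perpendicular to the ridge direction."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate coordinates into the filter's local frame
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
    carrier = np.cos(2.0 * np.pi * freq * xr)
    return envelope * carrier

k = gabor_kernel(theta=0.0, freq=0.1)
print(k.shape)                      # (11, 11)
print(abs(k[5, 5] - 1.0) < 1e-9)    # True: unnormalised centre value is 1
```

Enhancement then amounts to convolving each pixel's neighbourhood with a kernel tuned to the locally estimated orientation and ridge frequency; the curved variant additionally warps `xr`/`yr` along the flow lines.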

    Biometric Systems

    Because of the accelerating progress in biometrics research and the latest nation-state threats to security, this book's publication is not only timely but also much needed. This volume contains seventeen peer-reviewed chapters reporting the state of the art in biometrics research: security issues, signature verification, fingerprint identification, wrist vascular biometrics, ear detection, face detection and identification (including a new survey of face recognition), person re-identification, electrocardiogram (ECG) recognition, and several multi-modal systems. This book will be a valuable resource for graduate students, engineers, and researchers interested in understanding and investigating this important field of study.

    Fingerprint Growth Prediction, Image Preprocessing and Multi-level Judgment Aggregation

    Finger growth is studied in the first part of the thesis and a method for growth prediction is presented. The effectiveness of the method is validated in several tests. Fingerprint image preprocessing is discussed in the second part, and novel methods for orientation field estimation, ridge frequency estimation and image enhancement are proposed: the line sensor method for orientation estimation provides more robustness to noise than state-of-the-art methods; curved regions are proposed for improving the ridge frequency estimation, and curved Gabor filters for image enhancement. The notion of multi-level judgment aggregation is introduced as a design principle for combining different methods at all levels of fingerprint image processing. Lastly, score revaluation is proposed for incorporating information obtained during preprocessing into the score, thus improving the quality of the similarity measure at the final stage.
A sample application combines all proposed methods of the second part and demonstrates the validity of the approach by achieving substantial verification performance improvements over state-of-the-art software on all available databases of the fingerprint verification competitions (FVC)
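The thesis's line sensor method and its combination with a second estimator are not described in the abstract, but the baseline they improve on is the classical gradient-based orientation field, which averages doubled gradient angles per block so that opposite gradient directions reinforce rather than cancel. A minimal sketch of that baseline (block size and angle convention are assumptions):

```python
import numpy as np

def orientation_field(img, block=16):
    """Blockwise ridge orientation from averaged, doubled gradient
    angles (the textbook baseline, not the line sensor method)."""
    gy, gx = np.gradient(img.astype(float))
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(theta.shape[0]):
        for j in range(theta.shape[1]):
            sx = gx[i*block:(i+1)*block, j*block:(j+1)*block]
            sy = gy[i*block:(i+1)*block, j*block:(j+1)*block]
            gxx = np.sum(sx * sx)
            gyy = np.sum(sy * sy)
            gxy = np.sum(sx * sy)
            # half the angle of the averaged doubled gradients,
            # rotated 90 degrees: ridges run across the gradient
            theta[i, j] = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy) + np.pi / 2
    return theta

# synthetic vertical ridges: intensity varies along x only
x = np.arange(64)
img = np.sin(2 * np.pi * x / 8)[None, :] * np.ones((64, 1))
th = orientation_field(img)
print(th.shape)   # (4, 4); every entry is pi/2 (vertical ridges)
```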

    Enhancing Real-time Embedded Image Processing Robustness on Reconfigurable Devices for Critical Applications

    Nowadays, image processing is increasingly used in several application fields, such as biomedical, aerospace, or automotive. Within these fields, image processing serves both non-critical and critical tasks. For example, in the automotive field, cameras are becoming key sensors in increasing car safety, driving assistance and driving comfort. They have been employed for infotainment (non-critical), as well as for some driver assistance tasks (critical), such as Forward Collision Avoidance, Intelligent Speed Control, or Pedestrian Detection. The complexity of these algorithms brings a challenge to real-time image processing systems, requiring high computing capacity that is usually not available in processors for embedded systems. Hardware acceleration is therefore crucial, and devices such as Field Programmable Gate Arrays (FPGAs) best fit the growing demand for computational capability. These devices can assist embedded processors by significantly speeding up computationally intensive software algorithms. Moreover, critical applications introduce strict requirements not only in terms of real-time constraints, but also from the device reliability and algorithm robustness points of view. Technology scaling is highlighting reliability problems related to aging phenomena and to the increasing sensitivity of digital devices to external radiation events that can cause transient or even permanent faults. These faults can lead to incorrectly processed information or, in the worst case, to a dangerous system failure. In this context, the reconfigurable nature of FPGA devices can be exploited to increase system reliability and robustness by leveraging Dynamic Partial Reconfiguration features. The research work presented in this thesis focuses on the development of techniques for implementing efficient and robust real-time embedded image processing hardware accelerators and systems for mission-critical applications.
Three main challenges have been faced and will be discussed, along with proposed solutions, throughout the thesis: (i) achieving real-time performance, (ii) enhancing algorithm robustness, and (iii) increasing overall system dependability. In order to ensure real-time performance, efficient FPGA-based hardware accelerators implementing selected image processing algorithms have been developed. The functionality offered by the target technology and the algorithms' characteristics have been constantly taken into account while designing such accelerators, in order to efficiently tailor the algorithms' operations to the available hardware resources. On the other hand, the key idea for increasing image processing algorithms' robustness is to introduce self-adaptivity features at the algorithm level, in order to maintain, or improve, the quality of results over a wide range of input conditions that are not always fully predictable at design time (e.g., noise level variations). This has been accomplished by measuring at run-time some characteristics of the input images, and then tuning the algorithm parameters based on such estimations. Dynamic reconfiguration features of modern reconfigurable FPGAs have been extensively exploited in order to integrate run-time adaptivity into the designed hardware accelerators. Tools and methodologies have also been developed to increase overall system dependability during reconfiguration processes, thus providing safe run-time adaptation mechanisms. In addition, taking into account the target technology and the environments in which the developed hardware accelerators and systems may be employed, dependability issues have been analyzed, leading to the development of a platform for quickly assessing the reliability and characterizing the behavior of hardware accelerators implemented on reconfigurable FPGAs when they are affected by such faults.
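The self-adaptivity loop described above, measure an input characteristic at run-time, then retune the algorithm, can be sketched in software. The noise metric and the threshold values below are assumptions for illustration (the abstract does not specify which estimator the thesis uses); on the actual system the selected strength would drive a partial reconfiguration of the accelerator rather than a string return value.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Rough noise estimate from the high-frequency residual of a
    discrete Laplacian (a common proxy, not the thesis's metric)."""
    hp = img[1:-1, 1:-1] * 4 - img[:-2, 1:-1] - img[2:, 1:-1] \
         - img[1:-1, :-2] - img[1:-1, 2:]
    # median absolute deviation scaled to a Gaussian sigma
    return np.median(np.abs(hp)) / 0.6745

def pick_filter_strength(sigma):
    """Map the estimated noise level to a (hypothetical) smoothing
    configuration the accelerator would be reconfigured with."""
    if sigma < 5:
        return "weak"
    if sigma < 20:
        return "medium"
    return "strong"

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)
noisy = clean + rng.normal(0, 25, clean.shape)
print(pick_filter_strength(estimate_noise_sigma(clean)))   # weak
print(pick_filter_strength(estimate_noise_sigma(noisy)))
```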

    Graph-Based Offline Signature Verification

    Graphs provide a powerful representation formalism that offers great promise to benefit tasks like handwritten signature verification. While most state-of-the-art approaches to signature verification rely on fixed-size representations, graphs are flexible in size and allow modeling local features as well as the global structure of the handwriting. In this article, we present two recent graph-based approaches to offline signature verification: keypoint graphs with approximated graph edit distance and inkball models. We provide a comprehensive description of the methods, propose improvements both in terms of computational time and accuracy, and report experimental results for four benchmark datasets. The proposed methods achieve top results for several benchmarks, highlighting the potential of graph-based signature verification.
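Exact graph edit distance is NP-hard, which is why the article relies on approximations. The toy matcher below conveys the idea on keypoint graphs whose nodes are (x, y) coordinates: pair up nearby keypoints cheaply and charge a fixed edit cost for unmatched ones. It is a greedy stand-in, not the assignment-based approximation the article evaluates, and the `node_cost` value is arbitrary.

```python
import math

def greedy_ged(g1, g2, node_cost=10.0):
    """Greedy approximation of edit distance between two keypoint
    graphs given as lists of (x, y) tuples: substitution costs the
    Euclidean distance, unmatched nodes cost `node_cost` each."""
    unmatched = list(g2)
    cost = 0.0
    for p in g1:
        if not unmatched:
            cost += node_cost               # deletion: nothing left to match
            continue
        q = min(unmatched, key=lambda r: math.dist(p, r))
        d = math.dist(p, q)
        if d < node_cost:
            cost += d                       # substitution with nearest keypoint
            unmatched.remove(q)
        else:
            cost += node_cost               # too far: treat as deletion
    cost += node_cost * len(unmatched)      # insertions for leftovers
    return cost

a = [(0, 0), (10, 0), (0, 10)]
print(greedy_ged(a, a))                      # 0.0 (identical graphs)
print(greedy_ged(a, [(1, 0), (10, 0)]))      # 11.0 (one shift + one missing node)
```

In verification, such a distance between a questioned signature's graph and the reference graphs is thresholded to accept or reject.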

    Auto Signature Verification Using Line Projection Features Combined With Different Classifiers and Selection Methods

    Signature verification plays a role in the commercial, legal and financial fields. The signature continues to be one of the most preferred types of authentication for many documents such as checks, credit card transaction receipts, and other legal documents. In this study, we propose a system for validating handwritten bank check signatures to determine whether a signature is original or forged. The proposed system includes several steps, including improving the signature image quality, noise reduction, feature extraction, and analysis. The extracted features depend on the signature line and projection features. To verify signatures, different classification methods are used. The system is then trained with a set of signatures to demonstrate the validity of the proposed signature verification system. The experimental results show that the best accuracy of 100% was obtained by combining several classification methods.
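Projection features reduce a binarised signature image to its row and column ink-density profiles. The abstract does not give the exact definition used, so the sketch below is one plausible reading: sum ink pixels along each axis, resample to a fixed length so signatures of different sizes yield comparable vectors, and normalise (bin count and normalisation are assumptions).

```python
import numpy as np

def projection_features(binary_img, bins=16):
    """Fixed-length feature vector from horizontal and vertical
    projection profiles of a binarised signature image."""
    rows = binary_img.sum(axis=1).astype(float)   # ink per row
    cols = binary_img.sum(axis=0).astype(float)   # ink per column

    def resample(v):
        # linear interpolation down/up to `bins` samples
        idx = np.linspace(0, len(v) - 1, bins)
        return np.interp(idx, np.arange(len(v)), v)

    feat = np.concatenate([resample(rows), resample(cols)])
    peak = feat.max()
    return feat / peak if peak > 0 else feat      # amplitude-normalised

sig = np.zeros((32, 64), dtype=int)
sig[10:20, 5:60] = 1                              # a fake pen stroke
f = projection_features(sig)
print(f.shape)   # (32,): 16 row bins + 16 column bins
```

Such vectors would then feed the classifiers the paper combines.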

    Face Liveness Detection under Processed Image Attacks

    Face recognition is a mature and reliable technology for identifying people. Due to high-definition cameras and supporting devices, it is considered the fastest and the least intrusive biometric recognition modality. Nevertheless, effective spoofing attempts on face recognition systems were found to be possible. As a result, various anti-spoofing algorithms were developed to counteract these attacks. They are commonly referred to in the literature as liveness detection tests. In this research we highlight the effectiveness of some simple, direct spoofing attacks, and test one of the current robust liveness detection algorithms, i.e. the logistic-regression-based face liveness detection from a single image proposed by Tan et al. in 2010, against malicious attacks using processed imposter images. In particular, we study experimentally the effect of common image processing operations such as sharpening and smoothing, as well as corruption with salt-and-pepper noise, on the face liveness detection algorithm, and we find that it is especially vulnerable against spoofing attempts using processed imposter images. We design and present a new facial database, the Durham Face Database, which is the first, to the best of our knowledge, to have client, imposter, as well as processed imposter images. Finally, we evaluate our claim on the effectiveness of the proposed imposter image attacks using transfer learning on Convolutional Neural Networks. We verify that such attacks are more difficult to detect even when using high-end, expensive machine learning techniques.
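The three processing operations the study applies to imposter images, smoothing, sharpening, and salt-and-pepper corruption, are standard; the sketch below shows textbook versions (kernel size, sharpening amount, and noise fraction are assumptions, not the paper's settings).

```python
import numpy as np

def smooth(img, k=3):
    """Box-filter smoothing via a sliding k-by-k window (edges cropped)."""
    out = np.zeros((img.shape[0] - k + 1, img.shape[1] - k + 1))
    for dy in range(k):
        for dx in range(k):
            out += img[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return out / (k * k)

def sharpen(img, amount=1.0):
    """Unsharp masking: add back the high-frequency residual."""
    padded = np.pad(img, 1, mode="edge")
    return np.clip(img + amount * (img - smooth(padded)), 0, 255)

def salt_pepper(img, p=0.05, seed=0):
    """Corrupt a fraction p of pixels with extreme values 0 or 255."""
    rng = np.random.default_rng(seed)
    out = img.astype(float).copy()
    mask = rng.random(img.shape) < p
    out[mask] = rng.choice([0.0, 255.0], size=mask.sum())
    return out

img = np.full((8, 8), 100.0)
print(np.allclose(sharpen(img), img))   # True: a flat image has no detail to boost
```

Feeding such processed imposter images to a liveness detector probes exactly the vulnerability the paper reports.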

    Multibiometric security in wireless communication systems

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 05/08/2010. This thesis has aimed to explore an application of multibiometrics to secured wireless communications. The media of study for this purpose included Wi-Fi, 3G, and WiMAX, over which simulations and experimental studies were carried out to assess the performance. Specifically, restriction of access to authorized users only is provided by a technique referred to hereafter as a multibiometric cryptosystem. In brief, the system is built upon a complete challenge/response methodology in order to obtain a high level of security, on the basis of user identification by fingerprint and further confirmation by verification of the user through text-dependent speaker recognition. First is the enrolment phase, in which the database of watermarked fingerprints with memorable texts, along with the voice features based on the same texts, is created by sending them to the server through the wireless channel. Later is the verification stage, at which claimant users, i.e. those who claim to be genuine, are verified against the database; it consists of five steps. At the initial identification level, the user is asked to present a fingerprint and a memorable word, the former watermarked into the latter, in order for the system to authenticate the fingerprint and verify its validity by retrieving the challenge for an accepted user. The following three steps then involve speaker recognition: the user responds to the challenge by text-dependent voice, the server authenticates the response, and finally the server accepts or rejects the user. In order to implement fingerprint watermarking, i.e. incorporating the memorable word as a watermark message into the fingerprint image, an algorithm of five steps has been developed.
The first three novel steps, concerning fingerprint image enhancement (CLAHE with 'Clip Limit', standard deviation analysis and sliding neighborhood), are followed by two further steps for embedding and extracting the watermark in the enhanced fingerprint image using the Discrete Wavelet Transform (DWT). In the speaker recognition stage, the limitations of this technique in wireless communication have been addressed by sending voice features (cepstral coefficients) instead of raw samples. This scheme reaps the advantages of reducing the transmission time and the dependency of the data on the communication channel, while avoiding packet loss. Finally, the obtained results have verified the claims.
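DWT-domain watermarking of the kind described above embeds the message bits in a wavelet sub-band so that the payload survives in the image's frequency content. The sketch below uses a one-level 2-D Haar transform and naive sign-coding of the HH sub-band; it illustrates the principle only, and the embedding rule, sub-band choice, and `step` value are assumptions rather than the thesis's scheme.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT: returns (LL, LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a+b+c+d)/4, (a+b-c-d)/4, (a-b+c-d)/4, (a-b-c+d)/4)

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    img = np.zeros((2*h, 2*w))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll + lh - hl - hh
    img[1::2, 0::2] = ll - lh + hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img

def embed_bits(img, bits, step=8.0):
    """Overwrite the first HH coefficients with +/-step to code bits."""
    ll, lh, hl, hh = haar_dwt2(img)
    flat = hh.flatten()
    for i, bit in enumerate(bits):
        flat[i] = step if bit else -step
    return haar_idwt2(ll, lh, hl, flat.reshape(hh.shape))

def extract_bits(img, n):
    """Recover n bits from the signs of the HH coefficients."""
    hh = haar_dwt2(img)[3]
    return [1 if v > 0 else 0 for v in hh.flatten()[:n]]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
host = np.random.default_rng(1).random((16, 16)) * 255
marked = embed_bits(host, msg)
print(extract_bits(marked, len(msg)) == msg)   # True: message survives the round trip
```

In the thesis's setting the embedded message is the user's memorable word, tying the fingerprint image to the spoken challenge that follows.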