
    Robust Minutiae Extractor: Integrating Deep Networks and Fingerprint Domain Knowledge

    We propose a fully automatic minutiae extractor, called MinutiaeNet, based on deep neural networks with a compact feature representation for fast comparison of minutiae sets. Specifically, a first network, called CoarseNet, estimates the minutiae score map and minutiae orientations based on a convolutional neural network and fingerprint domain knowledge (enhanced image, orientation field, and segmentation map). Subsequently, another network, called FineNet, refines the candidate minutiae locations based on the score map. We demonstrate the effectiveness of using fingerprint domain knowledge together with deep networks. Experimental results on both latent (NIST SD27) and plain (FVC 2004) public domain fingerprint datasets provide comprehensive empirical support for the merits of our method. Further, our method finds minutiae sets that are better in terms of precision and recall in comparison with the state-of-the-art on these two datasets. Given the lack of annotated fingerprint datasets with minutiae ground truth, the proposed approach to robust minutiae detection will be useful for training network-based fingerprint matching algorithms as well as for evaluating fingerprint individuality at scale. MinutiaeNet is implemented in Tensorflow: https://github.com/luannd/MinutiaeNet. Accepted to the International Conference on Biometrics (ICB 2018).
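    The two-stage CoarseNet-to-FineNet pipeline described in the abstract can be sketched as follows. This is a minimal illustration with plain NumPy standing in for the actual networks; the threshold, patch size, and toy score map are assumptions for demonstration, not values from the paper.

    ```python
    import numpy as np

    def coarse_candidates(score_map, threshold=0.5):
        """CoarseNet stage (sketched): threshold the minutiae score map
        and return candidate (row, col) locations. The threshold is an
        illustrative assumption."""
        rows, cols = np.where(score_map >= threshold)
        return list(zip(rows.tolist(), cols.tolist()))

    def refine_candidates(score_map, candidates, patch=1, min_mean=0.4):
        """FineNet stage (sketched): keep a candidate only if the mean
        score in its local patch is high enough, mimicking patch-based
        refinement around each candidate location."""
        kept = []
        h, w = score_map.shape
        for r, c in candidates:
            r0, r1 = max(0, r - patch), min(h, r + patch + 1)
            c0, c1 = max(0, c - patch), min(w, c + patch + 1)
            if score_map[r0:r1, c0:c1].mean() >= min_mean:
                kept.append((r, c))
        return kept

    # Toy score map: a well-supported peak and one isolated spurious spike.
    m = np.zeros((5, 5))
    m[1:4, 1:4] = 0.6          # well-supported region
    m[2, 2] = 0.9              # true minutia
    m[0, 4] = 0.8              # isolated spurious response
    cands = coarse_candidates(m)
    refined = refine_candidates(m, cands)
    ```

    In this toy run the isolated spike at (0, 4) passes the coarse threshold but is discarded by the patch-based refinement, which is the role the abstract assigns to FineNet.
    
    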

    Recreating Fingerprint Images by Convolutional Neural Network Autoencoder Architecture

    Fingerprint recognition systems are widely applied for accurate and reliable biometric identification of individuals. Deep learning, especially the Convolutional Neural Network (CNN), has achieved tremendous success in computer vision for pattern recognition. Several approaches have been applied to reconstruct fingerprint images; however, these algorithms encounter problems with overlapping patterns and poor image quality. In this work, a convolutional neural network autoencoder is used to reconstruct fingerprint images. An autoencoder is a model trained to replicate its input, and convolutional layers make it well suited to feature extraction from images. Four datasets of fingerprint images, collected from various real sources, are used to demonstrate the robustness of the proposed architecture; they include the Fingerprint Verification Competition (FVC2004) database, which has been distorted. The proposed approach is assessed by computing the cumulative match characteristic (CMC) between the reconstructed and original features. The CNN autoencoder achieves promising identification rates of 98.1%, 97%, 95.9%, and 95.02% on the four fingerprint datasets (Dataset I, Dataset II, Dataset III, and Dataset IV), respectively. The proposed architecture was tested and compared against other state-of-the-art methods; the experimental results show that the proposed solution is suitable for recreating fingerprint images with complex context.
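    The encode-then-reconstruct idea behind the autoencoder can be sketched with a tiny linear autoencoder trained by gradient descent. This is not the paper's CNN architecture; the data, dimensions, learning rate, and iteration count are illustrative assumptions, and random vectors stand in for flattened fingerprint patches.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-ins for flattened fingerprint patches (sizes are assumptions).
    X = rng.random((64, 16))

    # Linear autoencoder: encode to a compact code, then decode back.
    d_in, d_code = X.shape[1], 4
    W_enc = rng.normal(scale=0.1, size=(d_in, d_code))
    W_dec = rng.normal(scale=0.1, size=(d_code, d_in))

    def recon_error():
        """Mean squared reconstruction error over all entries."""
        return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

    err_before = recon_error()
    lr = 0.05
    for _ in range(200):
        code = X @ W_enc                     # encode
        X_hat = code @ W_dec                 # decode (reconstruct)
        grad = 2 * (X_hat - X) / X.shape[0]  # gradient of the squared error
        g_dec = code.T @ grad                # gradient w.r.t. decoder weights
        g_enc = X.T @ (grad @ W_dec.T)       # gradient w.r.t. encoder weights
        W_dec -= lr * g_dec
        W_enc -= lr * g_enc
    err_after = recon_error()
    ```

    Training drives the reconstruction error down, which is the property the abstract relies on: a well-trained autoencoder replicates its input from a compressed code, and the paper's CMC evaluation then compares reconstructed features against the originals.
    
    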

    Cosaliency detection based on intrasaliency prior transfer and deep intersaliency mining

    As an interesting and emerging topic, cosaliency detection aims at simultaneously extracting common salient objects in multiple related images. It differs from the conventional saliency detection paradigm, in which saliency detection for each image is determined one by one independently without taking advantage of the homogeneity in the data pool of multiple related images. In this paper, we propose a novel cosaliency detection approach using deep learning models. Two new concepts, called intrasaliency prior transfer and deep intersaliency mining, are introduced and explored in the proposed work. For the intrasaliency prior transfer, we build a stacked denoising autoencoder (SDAE) to learn the saliency prior knowledge from auxiliary annotated data sets and then transfer the learned knowledge to estimate the intrasaliency for each image in cosaliency data sets. For the deep intersaliency mining, we formulate it by using the deep reconstruction residual obtained in the highest hidden layer of a self-trained SDAE. The obtained deep intersaliency can extract more intrinsic and general hidden patterns to discover the homogeneity of cosalient objects in terms of some higher level concepts. Finally, the cosaliency maps are generated by weighted integration of the proposed intrasaliency prior, deep intersaliency, and traditional shallow intersaliency. Comprehensive experiments over diverse publicly available benchmark data sets demonstrate consistent performance gains of the proposed method over the state-of-the-art cosaliency detection methods.
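    The final fusion step described in the abstract, weighted integration of the three cues, can be sketched as follows. The weights, toy maps, and the reduction of the SDAE reconstruction residual to an elementwise absolute difference are illustrative assumptions, not values or definitions from the paper.

    ```python
    import numpy as np

    def deep_intersaliency(features, reconstructed):
        """Deep intersaliency cue (sketched): the abstract formulates it via
        the SDAE's deep reconstruction residual; here that is reduced to an
        elementwise absolute residual for illustration."""
        return np.abs(features - reconstructed)

    def fuse_cosaliency(intra, inter_deep, inter_shallow, w=(0.4, 0.4, 0.2)):
        """Weighted integration of the intrasaliency prior, deep
        intersaliency, and shallow intersaliency into one cosaliency map,
        normalized to [0, 1] (weights are illustrative assumptions)."""
        fused = w[0] * intra + w[1] * inter_deep + w[2] * inter_shallow
        lo, hi = fused.min(), fused.max()
        return (fused - lo) / (hi - lo) if hi > lo else np.zeros_like(fused)

    # Toy 2x2 cue maps: all three cues agree on the top-right pixel.
    intra = np.array([[0.0, 1.0], [0.0, 0.0]])
    inter_deep = np.array([[0.0, 1.0], [1.0, 0.0]])
    inter_shallow = np.array([[1.0, 1.0], [0.0, 0.0]])
    fused = fuse_cosaliency(intra, inter_deep, inter_shallow)
    ```

    Pixels where all three cues agree receive the highest fused score, which is the behavior the weighted integration is meant to produce.
    
    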