Fingerprint Distortion Rectification using Deep Convolutional Neural Networks
Elastic distortion of fingerprints has a negative effect on the performance
of fingerprint recognition systems. This negative effect brings inconvenience
to users in authentication applications. However, in the negative recognition
scenario where users may intentionally distort their fingerprints, this can be
a serious problem, since distortion can prevent the recognition system from
identifying malicious users. Current methods aimed at addressing this problem
still have limitations. First, they are often inaccurate because they estimate
distortion parameters from the ridge frequency and orientation maps of the
input samples, which are themselves unreliable under distortion. Second, they
are inefficient, requiring significant computation time to rectify samples. In
this paper, we develop a rectification model based on a Deep Convolutional
Neural Network (DCNN) to accurately estimate distortion parameters from the
input image. Using a comprehensive database of synthetic distorted samples, the
DCNN learns to accurately estimate distortion bases ten times faster than the
dictionary search methods used in the previous approaches. Evaluating the
proposed method on public databases of distorted samples shows that it can
significantly improve the matching performance of distorted samples.
Comment: Accepted at ICB 201
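As a toy illustration of the rectification step this abstract describes, the sketch below models distortion as a linear combination of basis displacement fields; a trained DCNN would regress the coefficients, which are simply invented here. All array sizes, names, and the nearest-neighbour warp are assumptions, not the paper's actual model:

```python
import numpy as np

# Toy sketch: distortion modeled as a linear combination of basis
# displacement fields; a trained DCNN would regress the coefficients `c`
# from the input image. Everything here is a stand-in.
H, W, K = 8, 8, 2                                # tiny grid, 2 distortion bases
rng = np.random.default_rng(0)
bases = rng.normal(0, 0.5, size=(K, H, W, 2))    # hypothetical basis fields

def displacement_field(c):
    """Reconstruct the dense displacement field from basis coefficients."""
    return np.tensordot(c, bases, axes=1)        # shape (H, W, 2)

def rectify(image, c):
    """Undo the estimated distortion by inverse (backward) warping."""
    d = displacement_field(c)
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + d[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + d[..., 1]).astype(int), 0, W - 1)
    return image[src_y, src_x]

img = rng.random((H, W))
c_hat = np.array([0.3, -0.2])    # coefficients a trained network might output
out = rectify(img, c_hat)
print(out.shape)                 # (8, 8)
```

With zero coefficients the displacement field vanishes and the warp is the identity, which is a quick sanity check on the inverse mapping.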
Minutiae Extraction from Fingerprint Images - a Review
Fingerprints are the oldest and most widely used form of biometric
identification. Everyone is known to have unique, immutable fingerprints. As
most Automatic Fingerprint Recognition Systems are based on local ridge
features known as minutiae, marking minutiae accurately and rejecting false
ones is very important. However, fingerprint images get degraded and corrupted
due to variations in skin and impression conditions. Thus, image enhancement
techniques are employed prior to minutiae extraction. A critical step in
automatic fingerprint matching is to reliably extract minutiae from the input
fingerprint images. This paper presents a review of a large number of
techniques present in the literature for extracting fingerprint minutiae. The
techniques are broadly classified as those working on binarized images and
those that work on gray-scale images directly.
Comment: 12 pages; IJCSI International Journal of Computer Science Issues,
Vol. 8, Issue 5, September 201
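The minutiae-marking step surveyed here is often implemented with the classical crossing-number method on a thinned, binarized ridge skeleton. A minimal sketch on a synthetic 0/1 skeleton (not taken from the review itself):

```python
import numpy as np

def crossing_number(skel, y, x):
    """Crossing number at (y, x) of a binary ridge skeleton: half the
    number of 0->1 / 1->0 transitions around the 8 neighbours."""
    nbrs = [skel[y-1, x], skel[y-1, x+1], skel[y, x+1], skel[y+1, x+1],
            skel[y+1, x], skel[y+1, x-1], skel[y, x-1], skel[y-1, x-1]]
    return sum(abs(int(nbrs[i]) - int(nbrs[(i + 1) % 8])) for i in range(8)) // 2

def extract_minutiae(skel):
    """CN == 1 marks a ridge ending, CN == 3 a bifurcation."""
    endings, bifurcations = [], []
    for y in range(1, skel.shape[0] - 1):
        for x in range(1, skel.shape[1] - 1):
            if skel[y, x]:
                cn = crossing_number(skel, y, x)
                if cn == 1:
                    endings.append((y, x))
                elif cn == 3:
                    bifurcations.append((y, x))
    return endings, bifurcations

# Synthetic skeleton: a horizontal ridge with a short branch at column 3.
skel = np.zeros((5, 7), dtype=int)
skel[2, 1:6] = 1
skel[1, 3] = 1
endings, bifurcations = extract_minutiae(skel)
print(endings, bifurcations)
```

On this example the ridge tips come out as endings and the branch point at (2, 3) as a bifurcation; real pipelines additionally filter false minutiae near the mask border and spurious spurs, as the review discusses.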
Fingerprint Spoof Buster
The primary purpose of a fingerprint recognition system is to ensure a
reliable and accurate user authentication, but the security of the recognition
system itself can be jeopardized by spoof attacks. This study addresses the
problem of developing accurate, generalizable, and efficient algorithms for
detecting fingerprint spoof attacks. Specifically, we propose a deep
convolutional neural network based approach utilizing local patches centered
and aligned using fingerprint minutiae. Experimental results on three
public-domain LivDet datasets (2011, 2013, and 2015) show that the proposed
approach provides state-of-the-art accuracies in fingerprint spoof detection
for intra-sensor, cross-material, cross-sensor, as well as cross-dataset
testing scenarios. For example, in LivDet 2015, the proposed approach achieves
99.03% average accuracy over all sensors compared to 95.51% achieved by the
LivDet 2015 competition winners. Additionally, we collect two new fingerprint
presentation attack datasets containing more than 20,000 images, using two
different fingerprint readers and over 12 different spoof fabrication
materials. We also present a graphical user interface, called Fingerprint Spoof
Buster, that allows the operator to visually examine the local regions of the
fingerprint highlighted as live or spoof, instead of relying on only a single
score as output by the traditional approaches.
Comment: 13 page
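The minutiae-centred patch pipeline can be sketched as follows; the `patch_scorer` callable is an invented stand-in for the trained patch CNN, and all sizes are arbitrary:

```python
import numpy as np

def extract_patch(img, y, x, theta, size=8):
    """Crop a size x size patch centred on a minutia and rotated to its
    orientation theta (nearest-neighbour sampling; toy stand-in for the
    aligned-patch extraction the paper describes)."""
    c, s = np.cos(theta), np.sin(theta)
    half = size // 2
    ys, xs = np.mgrid[-half:half, -half:half]
    src_y = np.clip(np.round(y + c * ys - s * xs).astype(int), 0, img.shape[0] - 1)
    src_x = np.clip(np.round(x + s * ys + c * xs).astype(int), 0, img.shape[1] - 1)
    return img[src_y, src_x]

def spoof_score(img, minutiae, patch_scorer):
    """Average per-patch liveness scores; `patch_scorer` stands in for the
    trained patch CNN (any callable patch -> score in [0, 1])."""
    scores = [patch_scorer(extract_patch(img, y, x, th)) for y, x, th in minutiae]
    return float(np.mean(scores))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
minutiae = [(20, 20, 0.0), (40, 30, np.pi / 4)]   # (y, x, orientation)
score = spoof_score(img, minutiae, patch_scorer=lambda p: p.mean())
print(0.0 <= score <= 1.0)    # True
```

Averaging per-patch scores is also what makes the per-region live/spoof visualization in the GUI possible: each patch carries its own score before the aggregation.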
Siamese Generative Adversarial Privatizer for Biometric Data
State-of-the-art machine learning algorithms can be fooled by carefully
crafted adversarial examples. As such, adversarial examples present a concrete
problem in AI safety. In this work we turn the tables and ask the following
question: can we harness the power of adversarial examples to prevent malicious
adversaries from learning identifying information from data while allowing
non-malicious entities to benefit from the utility of the same data? For
instance, can we use adversarial examples to anonymize biometric dataset of
faces while retaining usefulness of this data for other purposes, such as
emotion recognition? To address this question, we propose a simple yet
effective method, called Siamese Generative Adversarial Privatizer (SGAP), that
exploits the properties of a Siamese neural network to find discriminative
features that convey identifying information. When coupled with a generative
model, our approach is able to correctly locate and disguise identifying
information, while minimally reducing the utility of the privatized dataset.
Extensive evaluation on a biometric dataset of fingerprints and cartoon faces
confirms the usefulness of our simple yet effective method.
Comment: Paper accepted to ACCV 2018 (Asian Conference on Computer Vision)
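The Siamese component can be illustrated with a standard contrastive loss over a shared embedding; the linear embedding and margin below are toy assumptions, not the paper's architecture. The privatizer is then trained adversarially against this loss (not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))    # shared (hypothetical) embedding weights

def embed(x):
    """Shared embedding applied to both branches of the Siamese pair."""
    return W @ x

def contrastive_loss(x1, x2, same, margin=1.0):
    """Pull same-identity pairs together; push different-identity pairs
    at least `margin` apart in embedding space."""
    d = np.linalg.norm(embed(x1) - embed(x2))
    return 0.5 * d**2 if same else 0.5 * max(0.0, margin - d)**2

a = rng.normal(size=16)
print(contrastive_loss(a, a, same=True))    # 0.0 for identical inputs
```

Features that dominate this loss are exactly the identity-conveying ones, which is what lets the generative model target them for disguise.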
Making Palm Print Matching Mobile
With the growing importance of personal identification and authentication in
today's world, where many business and personal tasks are carried out by
electronic means, the need for a technology that can uniquely identify an
individual with high fraud resistance has driven the rise of biometric
technologies. Making biometric-based solutions mobile is a promising trend. A
new RST-invariant, square-based palm print ROI extraction method was
implemented and integrated into the current application suite. A new palm
print image database, captured using the embedded cameras of mobile phones,
was created to test its robustness. Unlike extraction methods based on
boundary tracking of the overall hand shape, which cannot process palm print
images with one or more fingers closed, the system can effectively segment
palm print images with varying finger positioning. This flexibility makes
mobile palm print matching possible.
Comment: 9 pages IEEE format, International Journal of Computer Science and
Information Security, IJCSIS November 2009, ISSN 1947 5500,
http://sites.google.com/site/ijcsis
DWT Based Fingerprint Recognition using Non Minutiae Features
Forensic applications like criminal investigations, terrorist identification
and National security issues require a strong fingerprint data base and
efficient identification system. In this paper we propose the DWT-based
Fingerprint Recognition using Non-Minutiae features (DWTFR) algorithm. The
fingerprint image is decomposed into multi-resolution sub-bands LL, LH, HL and
HH by applying a 3-level DWT. The dominant local orientation angle θ and the
coherence are computed on the LL band only. The Centre Area Features and Edge
Parameters are determined at each DWT level by considering all four sub-bands.
The test fingerprint is compared against database fingerprints using the
Euclidean distance over all the features. It is observed that the values of
FAR, FRR and TSR are improved compared to the existing algorithm.
Comment: 9 page
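A rough sketch of the decomposition and LL-band measurements, using a hand-rolled Haar DWT and a structure-tensor estimate of orientation and coherence (one common formulation; the paper's exact definitions may differ):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT, returning the LL, LH, HL, HH sub-bands."""
    a = (img[0::2] + img[1::2]) / 2.0    # row averages
    d = (img[0::2] - img[1::2]) / 2.0    # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def orientation_coherence(LL):
    """Dominant orientation theta and coherence from the structure tensor
    of the LL band (a standard way to compute these quantities)."""
    gy, gx = np.gradient(LL)
    gxx, gyy, gxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy)
    coherence = np.hypot(gxx - gyy, 2 * gxy) / (gxx + gyy + 1e-12)
    return theta, coherence

rng = np.random.default_rng(0)
img = rng.random((64, 64))
LL = img
for _ in range(3):               # 3-level decomposition, keeping only LL
    LL, LH, HL, HH = haar_dwt2(LL)
theta, coh = orientation_coherence(LL)
print(LL.shape)                  # (8, 8)
```

The four sub-band values at each position sum back to the even-indexed pixel, which is an easy check that the transform is implemented consistently.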
Evaluating software-based fingerprint liveness detection using Convolutional Networks and Local Binary Patterns
With the growing use of biometric authentication systems in the past years,
spoof fingerprint detection has become increasingly important. In this work, we
implement and evaluate two different feature extraction techniques for
software-based fingerprint liveness detection: Convolutional Networks with
random weights and Local Binary Patterns. Both techniques were used in
conjunction with a Support Vector Machine (SVM) classifier. Dataset
Augmentation was used to increase the classifier's performance, and a variety of
preprocessing operations were tested, such as frequency filtering, contrast
equalization, and region of interest filtering. The experiments were made on
the datasets used in The Liveness Detection Competition of years 2009, 2011 and
2013, which comprise almost 50,000 real and fake fingerprint images. Our best
method achieves an overall rate of 95.2% correctly classified samples, a 35%
reduction in test error compared with the best previously published results.
Comment: arXiv admin note: text overlap with arXiv:1301.3557 by other author
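The Local Binary Patterns feature used here can be sketched as a basic 8-neighbour LBP histogram (the SVM classification stage that consumes the feature vector is omitted):

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Patterns: each pixel is encoded by
    which neighbours are >= the centre, and a normalized 256-bin histogram
    of the codes serves as the texture feature vector."""
    c = img[1:-1, 1:-1]                          # interior pixels as centres
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nbr = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nbr >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
feat = lbp_histogram(rng.random((32, 32)))
print(feat.shape)    # (256,)
```

The resulting 256-dimensional histogram is the kind of fixed-length vector one would feed to the SVM; uniform-pattern and multi-scale LBP variants are common refinements.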
A Fine-grained Indoor Location-based Social Network
Existing Location-based social networks (LBSNs), e.g., Foursquare, depend
mainly on GPS or cellular-based localization to infer users' locations.
However, GPS is unavailable indoors and cellular-based localization provides
coarse-grained accuracy. This limits the accuracy of current LBSNs in indoor
environments, where people spend 89% of their time. This in turn affects the
user experience, in terms of the accuracy of the ranked list of venues,
especially on the small screens of mobile devices; misses business
opportunities; and leads to reduced venue coverage.
In this paper, we present CheckInside: a system that can provide a
fine-grained indoor location-based social network. CheckInside leverages the
crowd-sensed data collected from users' mobile devices during the check-in
operation and knowledge extracted from current LBSNs to associate a place with
a logical name and a semantic fingerprint. This semantic fingerprint is used to
obtain a more accurate list of nearby places as well as to automatically detect
new places with similar signatures. A novel algorithm for detecting fake
check-ins and inferring a semantically-enriched floorplan is proposed as well
as an algorithm for enhancing the system performance based on the user implicit
feedback. Furthermore, CheckInside encompasses a coverage extender module to
automatically predict the names of new venues, increasing the coverage of current
LBSNs.
Experimental evaluation of CheckInside in four malls over the course of six
weeks with 20 participants shows that it can infer the actual user place within
the top five venues 99% of the time. This is compared to 17% only in the case
of current LBSNs. In addition, it increases the coverage of existing LBSNs by
more than 37%.
Comment: 15 pages, 18 figure
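The venue-ranking idea can be sketched as a similarity search over stored fingerprints; the cosine-similarity choice, vector sizes, and venue names below are illustrative assumptions, not CheckInside's actual matcher:

```python
import numpy as np

def rank_venues(observed, venue_fps):
    """Rank candidate venues by cosine similarity between an observed
    crowd-sensed fingerprint and each venue's stored semantic fingerprint."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scores = {name: cos(observed, fp) for name, fp in venue_fps.items()}
    return sorted(scores, key=scores.get, reverse=True)

rng = np.random.default_rng(0)
venue_fps = {f"venue_{i}": rng.random(16) for i in range(10)}
observed = venue_fps["venue_3"] + rng.normal(0, 0.05, 16)  # noisy check-in
ranking = rank_venues(observed, venue_fps)
print(ranking[0])    # "venue_3" should rank at or near the top
```

A top-k ranking like this is what the paper's "actual place within the top five venues" metric would be computed over.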
HiDDeN: Hiding Data With Deep Networks
Recent work has shown that deep neural networks are highly sensitive to tiny
perturbations of input images, giving rise to adversarial examples. Though this
property is usually considered a weakness of learned models, we explore whether
it can be beneficial. We find that neural networks can learn to use invisible
perturbations to encode a rich amount of useful information. In fact, one can
exploit this capability for the task of data hiding. We jointly train encoder
and decoder networks, where given an input message and cover image, the encoder
produces a visually indistinguishable encoded image, from which the decoder can
recover the original message. We show that these encodings are competitive with
existing data hiding algorithms, and further that they can be made robust to
noise: our models learn to reconstruct hidden information in an encoded image
despite the presence of Gaussian blurring, pixel-wise dropout, cropping, and
JPEG compression. Even though JPEG is non-differentiable, we show that a robust
model can be trained using differentiable approximations. Finally, we
demonstrate that adversarial training improves the visual quality of encoded
images.
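As a non-learned toy analogue of the encode/decode pipeline, classical spread-spectrum embedding recovers message bits from a perturbed image by correlation with secret carrier patterns. HiDDeN learns both the encoder and decoder as networks instead, so this is only a structural illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 32
n_bits = 8
carriers = rng.choice([-1.0, 1.0], size=(n_bits, H, W))   # shared secret patterns

def encode(cover, bits, alpha=0.05):
    """Embed each bit as a low-amplitude perturbation along its carrier."""
    signs = 2.0 * np.asarray(bits) - 1.0                  # {0,1} -> {-1,+1}
    return cover + alpha * np.tensordot(signs, carriers, axes=1)

def decode(image):
    """Recover bits from the sign of the correlation with each carrier."""
    corr = np.tensordot(carriers, image, axes=([1, 2], [0, 1]))
    return (corr > 0).astype(int)

cover = rng.normal(0.0, 0.1, size=(H, W))
bits = rng.integers(0, 2, size=n_bits)
noisy = encode(cover, bits) + rng.normal(0.0, 0.05, size=(H, W))  # channel noise
print(np.array_equal(decode(noisy), bits))    # True
```

Surviving additive noise here falls out of the correlation margin; the learned approach instead bakes robustness in by inserting noise layers (blur, dropout, crop, a differentiable JPEG approximation) between encoder and decoder during training.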
VerIDeep: Verifying Integrity of Deep Neural Networks through Sensitive-Sample Fingerprinting
Deep learning has become popular, and numerous cloud-based services are
provided to help customers develop and deploy deep learning applications.
Meanwhile, various attack techniques have also been discovered to stealthily
compromise the model's integrity. When a cloud customer deploys a deep
learning model in the cloud and serves it to end-users, it is important for the
customer to be able to verify that the deployed model has not been tampered
with and that its integrity is protected.
We propose a new low-cost and self-served methodology for customers to verify
that the model deployed in the cloud is intact, while having only black-box
access (e.g., via APIs) to the deployed model. Customers can detect arbitrary
changes to their deep learning models. Specifically, we define
Sensitive-Sample fingerprints, which are a small set of transformed
inputs that make the model outputs sensitive to the model's parameters. Even
small weight changes can be clearly reflected in the model outputs, and
observed by the customer. Our experiments on different types of model integrity
attacks show that we can detect model integrity breaches with high accuracy
(99%) and low overhead (10 black-box model accesses).
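A finite-difference toy version of the idea: among candidate inputs, keep the one whose output moves most under small random weight perturbations, then use it to probe the deployed model. The tiny model, perturbation scale, and candidate count are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))     # a tiny stand-in for the "deployed" model

def model(x, W=W1):
    return np.tanh(W @ x)

def sensitivity(x, eps=1e-3, trials=16):
    """Average output change under small random weight perturbations
    (a finite-difference stand-in for the gradient-based objective)."""
    base = model(x)
    deltas = [np.linalg.norm(model(x, W1 + eps * rng.normal(size=W1.shape)) - base)
              for _ in range(trials)]
    return float(np.mean(deltas))

# Select the most weight-sensitive input as the fingerprint.
candidates = [rng.normal(size=4) for _ in range(32)]
fingerprint = max(candidates, key=sensitivity)

# Verification step: query the (possibly tampered) deployed model with the
# fingerprint and compare against the locally stored reference output.
def tampered(x):
    return np.tanh((W1 + 0.05) @ x)     # e.g. a trojaned weight shift

changed = np.linalg.norm(tampered(fingerprint) - model(fingerprint))
print(changed)
```

The paper optimizes sensitivity directly via gradients of the outputs with respect to the weights; the random-perturbation search above only conveys the selection criterion.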