LivDet in Action - Fingerprint Liveness Detection Competition 2019
The International Fingerprint Liveness Detection Competition (LivDet) is an
open and well-acknowledged meeting point for academia and private companies
that deal with the problem of distinguishing images coming from reproductions
of fingerprints made of artificial materials from images of real
fingerprints. In this edition of LivDet we invited the competitors to propose
algorithms integrated with matching systems. The goal was to investigate to
what extent this integration impacts the overall performance. Twelve
algorithms were submitted to the competition, eight of which worked on
integrated systems.
Comment: Preprint version of a paper accepted at ICB 201
Fingerprint Liveness Detection using Minutiae-Independent Dense Sampling of Local Patches
Fingerprint recognition and matching is a common form of user authentication.
While a fingerprint is unique to each individual, authentication is vulnerable
when an attacker can forge a copy of the fingerprint (a spoof). To combat
spoofed fingerprints, spoof detection and liveness detection algorithms are
currently being researched as countermeasures to this security vulnerability.
This paper introduces a fingerprint anti-spoofing mechanism using machine
learning.
Comment: Submitted, peer-reviewed, accepted, and under publication with
Springer Natur
Feature Fusion for Fingerprint Liveness Detection
For decades, fingerprints have been the most widely used biometric trait in identity
recognition systems, thanks to their natural uniqueness, even in rare cases such as
identical twins. Recently, we witnessed a growth in the use of fingerprint-based
recognition systems in a large variety of devices and applications. As a consequence,
this has increased the incentive for offenders capable of attacking these systems. One
of the main issues with the current fingerprint authentication systems is that, even
though they are quite accurate in terms of identity verification, they can be easily
spoofed by presenting to the input sensor an artificial replica of the fingertip skin’s
ridge-valley patterns.
Due to the criticality of this threat, it is crucial to develop countermeasure
methods capable of detecting and preventing these kinds of attacks. The most effective
counter-spoofing methods are those that try to distinguish between a "live" and a
"fake" fingerprint before it is actually submitted to the recognition system. According
to the technology used, these methods are mainly divided into hardware- and software-based
systems. Hardware-based methods rely on extra sensors to gain more pieces
of information regarding the vitality of the fingerprint owner. In contrast,
software-based methods merely rely on analyzing the fingerprint images acquired
by the scanner. Software-based methods can then be further divided into dynamic,
aimed at analyzing sequences of images to capture those vital signs typical of a real
fingerprint, and static, which process a single fingerprint impression. Among these
different approaches, static software-based methods come with three main benefits.
First, they are cheaper, since they do not require the deployment of any additional
sensor to perform liveness detection. Second, they are faster since the information
they require is extracted from the same input image acquired for the identification
task. Third, they are potentially capable of tackling novel forms of attack through an
update of the software.
The interest in this type of counter-spoofing method is at the basis of this
dissertation, which addresses fingerprint liveness detection from a particular
perspective, stemming from the following consideration. Generally speaking, this
problem has been tackled in the literature with many different approaches. Most of
them are based on first identifying the most suitable image features for the problem
at hand and then developing some classification system based on them. In
particular, most of the published methods rely on a single type of feature to perform
this task. Each of these individual features can be more or less discriminative and often
highlights some peculiar characteristics of the data under analysis, often complementary
to those of other features. Thus, one possible idea to improve the classification
accuracy is to find effective ways to combine them, in order to mutually exploit their
individual strengths and, at the same time, soften their weaknesses. However, such a
"multi-view" approach has been relatively overlooked in the literature.
Based on this observation, the first part of this work investigates
feature fusion methods capable of improving the generalization and robustness
of fingerprint liveness detection systems and enhancing their classification strength.
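The feature-level fusion idea can be sketched in a few lines. The descriptor names below (a texture histogram and a gradient histogram) and the weighted-concatenation scheme are illustrative assumptions for a minimal sketch, not the dissertation's actual fusion method:

```python
import numpy as np

def fuse_features(feature_vectors, weights=None):
    """Feature-level fusion by weighted concatenation of descriptors.

    feature_vectors: list of 1-D numpy arrays, one per descriptor.
    weights: optional per-descriptor weights balancing their influence.
    """
    if weights is None:
        weights = [1.0] * len(feature_vectors)
    # L2-normalize each view so no single descriptor dominates the
    # concatenated representation purely because of its scale.
    normalized = [
        w * v / (np.linalg.norm(v) + 1e-12)
        for w, v in zip(weights, feature_vectors)
    ]
    return np.concatenate(normalized)

# Example: fuse two hypothetical descriptors of different lengths.
texture_hist = np.random.rand(64)    # stand-in for a texture feature
gradient_hist = np.random.rand(128)  # stand-in for a gradient feature
fused = fuse_features([texture_hist, gradient_hist])
# The fused vector would then feed a standard classifier (e.g., an SVM).
```

The per-view normalization step is one common way to keep heterogeneous descriptors commensurable before a single classifier sees them.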
Then, in the second part, it approaches feature fusion in a different way:
the fingerprint image is first divided into smaller patches, evidence
about the liveness of each patch is extracted, and, finally, all these
pieces of information are combined to reach the final classification decision.
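The patch-wise pipeline just described can be sketched as follows. The function names, the dense-sampling parameters, and the average-rule fusion are illustrative assumptions; in practice the per-patch scorer would be a trained classifier:

```python
import numpy as np

def extract_patches(image, patch_size=32, stride=16):
    """Densely sample square patches from a grayscale fingerprint image."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

def classify_liveness(image, patch_scorer, threshold=0.5):
    """Fuse per-patch liveness evidence into a single live/fake decision.

    patch_scorer: any callable mapping a patch to a liveness score in
    [0, 1]; a trained CNN or SVM would take this role in practice.
    """
    scores = [patch_scorer(p) for p in extract_patches(image)]
    fused_score = float(np.mean(scores))  # simple average-rule fusion
    return fused_score, fused_score >= threshold

# Toy usage with a dummy scorer (mean intensity as a stand-in score).
img = np.random.rand(128, 128)
score, is_live = classify_liveness(img, patch_scorer=lambda p: float(p.mean()))
```

Average-rule fusion is only the simplest choice; any combiner over the patch scores (median, voting, a second-stage classifier) fits the same pipeline.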
The different approaches have been thoroughly analyzed and assessed by comparing
their results (on a large number of datasets and using the same experimental
protocol) with those of other works in the literature. The experimental results discussed
in this dissertation show that the proposed approaches are capable of obtaining
state-of-the-art results, thus demonstrating their effectiveness.
One-shot lip-based biometric authentication: extending behavioral features with authentication phrase information
Lip-based biometric authentication (LBBA) is an authentication method based
on a person's lip movements during speech in the form of video data captured by
a camera sensor. LBBA can utilize both physical and behavioral characteristics
of lip movements without requiring any additional sensory equipment apart from
an RGB camera. State-of-the-art (SOTA) approaches use one-shot learning to
train deep siamese neural networks which produce an embedding vector out of
these features. Embeddings are further used to compute the similarity between
an enrolled user and a user being authenticated. A flaw of these approaches is
that they model behavioral features as style-of-speech without relation to what
is being said. This makes the system vulnerable to video replay attacks of the
client speaking any phrase. To solve this problem we propose a one-shot
approach which models behavioral features to discriminate based on what is being
said, in addition to style-of-speech. We achieve this by customizing the GRID
dataset to obtain required triplets and training a siamese neural network based
on 3D convolutions and recurrent neural network layers. A custom triplet loss
for batch-wise hard-negative mining is proposed. Obtained results using an
open-set protocol are 3.2% FAR and 3.8% FRR on the test set of the customized
GRID dataset. Additional analysis of the results was done to quantify the
influence and discriminatory power of behavioral and physical features for
LBBA.
Comment: 28 pages, 10 figures, 7 table
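The batch-wise hard-negative mining behind such a triplet loss can be sketched as follows. This is a NumPy stand-in operating on precomputed embeddings, under the assumptions of squared Euclidean distance and a margin hyperparameter; it is not the paper's exact custom loss:

```python
import numpy as np

def triplet_loss_hard_negative(anchors, positives, negatives, margin=0.2):
    """Triplet margin loss with batch-wise hard-negative mining.

    anchors, positives: (B, D) arrays of embedding vectors.
    negatives: (N, D) array of candidate negative embeddings.
    For each anchor the single hardest (closest) negative in the batch
    is mined, then the standard triplet loss is averaged over the batch.
    """
    # Pairwise squared Euclidean distances anchor -> every negative: (B, N)
    d_an = ((anchors[:, None, :] - negatives[None, :, :]) ** 2).sum(-1)
    hard_neg = d_an.min(axis=1)                  # closest negative per anchor
    d_ap = ((anchors - positives) ** 2).sum(-1)  # anchor-positive distance
    losses = np.maximum(d_ap - hard_neg + margin, 0.0)
    return float(losses.mean())
```

Mining the closest negative inside each batch concentrates the gradient signal on the triplets the network currently gets wrong, which is what makes hard-negative mining effective during siamese training.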