Assentication: User Deauthentication and Lunchtime Attack Mitigation with Seated Posture Biometric
Biometric techniques are often used as an extra security factor in
authenticating human users. Numerous biometrics have been proposed and
evaluated, each with its own set of benefits and pitfalls. Static biometrics
(such as fingerprints) are geared for discrete operation, to identify users,
which typically involves some user burden. Meanwhile, behavioral biometrics
(such as keystroke dynamics) are well suited for continuous, and sometimes more
unobtrusive, operation. One important application domain for biometrics is
deauthentication, a means of quickly detecting the absence of a previously
authenticated user and immediately terminating that user's active secure
sessions. Deauthentication is crucial for mitigating so-called Lunchtime
Attacks, whereby an insider adversary takes over (before any inactivity timeout
kicks in) the authenticated state of a careless user who walks away from her
computer. Motivated primarily by the need for an unobtrusive and continuous
biometric to support effective deauthentication, we introduce PoPa, a new
hybrid biometric based on a human user's seated posture pattern. PoPa captures
a unique combination of physiological and behavioral traits. We describe a
low-cost, fully functioning prototype that involves an office chair instrumented
with 16 tiny pressure sensors. We also explore (via user experiments) how PoPa
can be used in a typical workplace to provide continuous authentication (and
deauthentication) of users. We experimentally assess the viability of PoPa in terms
of uniqueness by collecting and evaluating posture patterns of a cohort of
users. Results show that PoPa exhibits very low false positive, and even lower
false negative, rates. In particular, users can be identified with, on average,
91.0% accuracy. Finally, we compare the pros and cons of PoPa with those of
several prominent biometric-based deauthentication techniques.
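As a rough illustration of how a posture template could be formed from the chair's 16 pressure sensors and matched continuously, here is a minimal sketch; the distance-threshold matcher, the noise levels, and all numbers are our own assumptions, not the method or parameters of this paper:

```python
import numpy as np

def enroll(samples):
    """Average several 16-dimensional pressure readings into a posture template."""
    return np.mean(samples, axis=0)

def matches(template, reading, threshold=0.3):
    """Accept the reading if it lies within a Euclidean distance threshold of the template."""
    return bool(np.linalg.norm(template - reading) <= threshold)

rng = np.random.default_rng(0)
user_pattern = rng.uniform(0.2, 1.0, 16)                   # one user's baseline pressure map
readings = user_pattern + rng.normal(0.0, 0.02, (20, 16))  # noisy enrollment readings
template = enroll(readings)

probe = user_pattern + rng.normal(0.0, 0.02, 16)           # the same user sits back down
impostor = rng.uniform(0.2, 1.0, 16)                       # a different posture pattern
print(matches(template, probe), matches(template, impostor))
```

A deployed matcher would be re-evaluated on every reading, so a sustained run of rejections (the chair is empty or occupied by someone else) is what would trigger deauthentication.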
Continuous Authentication for Voice Assistants
Voice has become an increasingly popular User Interaction (UI) channel,
mainly contributing to the ongoing trend of wearables, smart vehicles, and home
automation systems. Voice assistants such as Siri, Google Now and Cortana have
become everyday fixtures, especially in scenarios where touch interfaces
are inconvenient or even dangerous to use, such as driving or exercising.
Nevertheless, the open nature of the voice channel makes voice assistants
difficult to secure and exposed to various attacks as demonstrated by security
researchers. In this paper, we present VAuth, the first system that provides
continuous and usable authentication for voice assistants. We design VAuth to
fit in various widely-adopted wearable devices, such as eyeglasses,
earphones/buds and necklaces, where it collects the body-surface vibrations of
the user and matches them with the speech signal received by the voice
assistant's microphone. VAuth guarantees that the voice assistant executes only
the commands that originate from the voice of the owner. We have evaluated
VAuth with 18 users and 30 voice commands and find it to achieve an almost
perfect matching accuracy with less than 0.1% false positive rate, regardless
of VAuth's position on the body and the user's language, accent or mobility.
VAuth successfully thwarts various practical attacks, such as replay
attacks, mangled voice attacks, and impersonation attacks. It also has low
energy and latency overheads and is compatible with most existing voice
assistants.
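The core matching step, checking that the microphone audio and the body-surface vibration carry the same utterance, can be approximated by peak normalized cross-correlation. The sketch below is our own drastic simplification (synthetic signals, a made-up threshold), not VAuth's actual pipeline:

```python
import numpy as np

def normalized_xcorr_peak(a, b):
    """Peak of the normalized cross-correlation between two signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.correlate(a, b, mode="full").max() / len(a))

def same_source(mic, vib, threshold=0.6):
    """Decide whether the microphone and vibration signals share an utterance."""
    return normalized_xcorr_peak(mic, vib) >= threshold

rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.25, 2000)                                  # 0.25 s at 8 kHz
speech = np.sin(2 * np.pi * 150 * t) * np.sin(2 * np.pi * 3 * t)  # toy voiced signal
vibration = 0.5 * speech + rng.normal(0.0, 0.05, t.size)          # body-conducted copy
replayed = rng.normal(0.0, 1.0, t.size)                           # unrelated audio

print(same_source(speech, vibration), same_source(speech, replayed))
```

A replayed recording reaches the microphone but produces no matching vibration on the wearer's body, which is why the correlation check rejects it.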
Fusion of fingerprint presentation attacks detection and matching: a real approach from the LivDet perspective
The liveness detection ability is explicitly required of current personal verification systems in many security applications. As a matter of fact, the design of any biometric verification system cannot ignore its vulnerability to spoofing or presentation attacks (PAs), which must be addressed by effective countermeasures from the beginning of the design process. However, despite significant improvements, especially through the adoption of deep-learning-based fingerprint Presentation Attack Detectors (PADs), current research has said little about their effectiveness when embedded in fingerprint verification systems. We believe this gap is explained by the lack of instruments to investigate the problem, that is, to model the cause-effect relationships that arise when two systems (spoof detection and matching) with non-zero error rates are integrated.
To fill this gap, we present in this PhD thesis a novel performance-simulation model based on the probabilistic relationship between the Receiver Operating Characteristics (ROCs) of the two systems when they are implemented sequentially, which is the most straightforward, flexible, and widespread integration approach. We carry out simulations on the PAD algorithms' ROCs submitted to the LivDet 2017-2019 editions, paired with the NIST Bozorth3 and the top-level VeriFinger 12.0 matchers. With the help of this simulator, the overall system performance can be predicted before actual implementation, thus simplifying the process of setting the best trade-off among error rates.
In the second part of this thesis, we exploit this model to define a practical evaluation criterion for assessing whether operating points of the PAD exist that do not degrade the performance expected of the verification system alone. Experimental simulations, coupled with the theoretical expectations, confirm that this trade-off gives a complete view of the potential of sequential embedding, which is worth extending to other integration approaches.
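Under the model's core independence assumption, the operating point of a PAD followed by a matcher can be predicted directly from the two stages' individual error rates. This is a minimal sketch of that composition; the function name and the example rates are illustrative, not taken from the thesis:

```python
def sequential_rates(bpcer, apcer, fnmr, iapmr):
    """Predicted operating point of a PAD followed sequentially by a matcher,
    assuming the two stages err independently.
      bpcer: PAD's rejection rate on bona fide presentations
      apcer: PAD's acceptance rate on presentation attacks
      fnmr : matcher's false non-match rate on genuine users
      iapmr: matcher's acceptance rate on presentation attacks
    """
    genuine_reject = 1 - (1 - bpcer) * (1 - fnmr)  # rejected by either stage
    attack_accept = apcer * iapmr                  # an attack must fool both stages
    return genuine_reject, attack_accept

grr, aar = sequential_rates(bpcer=0.02, apcer=0.10, fnmr=0.01, iapmr=0.60)
print(round(grr, 4), round(aar, 4))  # 0.0298 0.06
```

Sweeping the PAD's threshold along its ROC while recomputing these two quantities is exactly the kind of what-if exploration such a simulator allows before any implementation work.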
Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating
In this work, we investigate the concept of biometric backdoors: a template
poisoning attack on biometric systems that allows adversaries to stealthily and
effortlessly impersonate users in the long-term by exploiting the template
update procedure. We show that such attacks can be carried out even by
attackers with physical limitations (no digital access to the sensor) and zero
knowledge of training data (they know neither decision boundaries nor user
template). Based on the adversaries' own templates, they craft several
intermediate samples that incrementally bridge the distance between their own
template and the legitimate user's. As these adversarial samples are added to
the template, the attacker is eventually accepted alongside the legitimate
user. To avoid detection, we design the attack to minimize the number of
rejected samples.
We design our method to cope with weak assumptions about the attacker, and
we evaluate the effectiveness of this approach on state-of-the-art face
recognition pipelines based on deep neural networks. We find that in scenarios
where the deep network is known, adversaries can successfully carry out the
attack in over 70% of cases with fewer than ten injection attempts. Even in
black-box scenarios, we find that exploiting the transferability of adversarial
samples from surrogate models can lead to successful attacks in around 15% of
cases. Finally, we design a poisoning detection technique that leverages the
consistent directionality of template updates in feature space to discriminate
between legitimate and malicious updates. We evaluate such a countermeasure
with a set of intra-user variability factors which may present the same
directionality characteristics, obtaining detection equal error rates between
7% and 14% and leading to over 99% of attacks being detected after only two
sample injections.
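Both halves of the paper's idea, bridging templates with intermediate samples and detecting the attack by the consistent direction of updates, can be sketched in feature space. Everything below (linear interpolation as the crafting step, cosine similarity as the consistency score, all dimensions and thresholds) is our own simplified stand-in, not the paper's actual algorithms:

```python
import numpy as np

def craft_poison_sequence(attacker_t, victim_t, steps=5):
    """Samples that linearly bridge the victim's template toward the attacker's
    in feature space (a crude stand-in for the paper's adversarial crafting)."""
    return [victim_t + (attacker_t - victim_t) * (i / steps) for i in range(1, steps + 1)]

def update_direction_consistency(template, updates):
    """Mean pairwise cosine similarity of update directions: poisoning updates
    share a direction, while benign intra-user drift does not."""
    dirs = [(u - template) / np.linalg.norm(u - template) for u in updates]
    sims = [np.dot(a, b) for i, a in enumerate(dirs) for b in dirs[i + 1:]]
    return float(np.mean(sims))

rng = np.random.default_rng(2)
victim = rng.normal(0.0, 1.0, 128)    # legitimate user's template in feature space
attacker = rng.normal(0.0, 1.0, 128)  # adversary's own template

poison = craft_poison_sequence(attacker, victim)
benign = [victim + rng.normal(0.0, 0.3, 128) for _ in range(5)]

print(update_direction_consistency(victim, poison))  # close to 1.0
print(update_direction_consistency(victim, benign))  # close to 0.0
```

The gap between the two consistency scores is what a detector can threshold on, which is why intra-user factors that happen to drift in one direction (the variability factors evaluated above) are the hard cases.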
Vulnerability analysis of cyber-behavioral biometric authentication
Research on cyber-behavioral biometric authentication has traditionally assumed naïve (or zero-effort) impostors who make no attempt to generate sophisticated forgeries of biometric samples. Given the plethora of adversarial technologies on the Internet, it is questionable whether the zero-effort threat model provides a realistic estimate of how these authentication systems would perform under adversarial conditions. To better evaluate the efficacy of these authentication systems, there is a need for research on algorithmic attacks that simulate state-of-the-art threats.
To tackle this problem, we took the case of keystroke and touch-based authentication and developed a new family of algorithmic attacks that leverage the intrinsic instability and variability of users' behavioral biometric patterns. For both fixed-text (or password-based) keystroke and continuous touch-based authentication, we: 1) used a wide range of pattern-analysis and statistical techniques to examine large repositories of biometric data for weaknesses that adversaries could exploit to break these systems, 2) designed algorithmic attacks whose mechanisms hinge on the discovered weaknesses, and 3) rigorously analyzed the impact of the attacks on the best verification algorithms in the respective research domains.
When launched against three high performance password-based keystroke verification systems, our attacks increased the mean Equal Error Rates (EERs) of the systems by between 28.6% and 84.4% relative to the traditional zero-effort attack.
For the touch-based authentication system, the attacks performed even better, increasing the system's mean EER by between 338.8% and 1535.6% depending on parameters such as the failure-to-enroll threshold and the type of touch gesture subjected to attack. For both keystroke and touch-based authentication, we found that a small proportion of users saw considerably greater performance degradation than others as a result of the attack. There was also a subset of users who were completely immune to the attacks.
Our work exposes a previously unexplored weakness of keystroke and touch-based authentication and opens the door to the design of behavioral biometric systems that are resistant to statistical attacks.
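A toy illustration of why population statistics help a forger: a guess placed at the population's most typical timing pattern tends to fall within tolerance of many more users than a random guess does. The verifier, the feature model, and every number below are deliberately crude assumptions of ours, not the systems or attacks evaluated in this work:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical key hold-time features (seconds): 50 users, 8 keys each.
population = rng.normal(0.10, 0.03, (50, 8))

def accepted(template, attempt, tol=0.04):
    """Deliberately crude verifier: accept when every timing feature is within tol."""
    return bool(np.all(np.abs(attempt - template) < tol))

# Statistical attack: submit the population's most typical pattern as the forgery.
mean_guess = population.mean(axis=0)
# Zero-effort baseline: one random user's worth of timings.
random_guess = rng.normal(0.10, 0.03, 8)

cracked_stat = sum(accepted(u, mean_guess) for u in population)
cracked_zero = sum(accepted(u, random_guess) for u in population)
print(cracked_stat, cracked_zero)
```

The same density-seeking logic also explains the user-level asymmetry reported above: users whose patterns sit near the population mode are hit hardest, while outlying users remain effectively immune.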
Statistical meta-analysis of presentation attacks for secure multibiometric systems
Prior work has shown that multibiometric systems are vulnerable to presentation attacks under the assumption that the attack samples' matching score distribution is identical to that of genuine users, without fabricating any actual fake trait. We have recently shown that this assumption is not representative of current fingerprint and face presentation attacks, leading one to overestimate the vulnerability of multibiometric systems and to design less effective fusion rules. In this paper, we overcome these limitations by proposing a statistical meta-model of face and fingerprint presentation attacks that characterizes a wider family of fake score distributions, including distributions of known and, potentially, unknown attacks. This allows us to perform a thorough security evaluation of multibiometric systems against presentation attacks, quantifying through an uncertainty analysis how their vulnerability may vary under attacks different from those considered during design. We empirically show that our approach can reliably predict the performance of multibiometric systems even under never-before-seen face and fingerprint presentation attacks, and that the secure fusion rules designed using our approach exhibit an improved trade-off between performance in the absence and in the presence of attacks. We finally argue that our method can be extended to other biometrics besides faces and fingerprints.
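The idea of a family of fake score distributions spanning known and unknown attacks can be sketched with a single parameter that slides the fake distribution from impostor-like to genuine-like. This is a simplified stand-in for the paper's meta-model; the Gaussian score distributions, the threshold, and the parameterization are all our own synthetic choices:

```python
import numpy as np

def fake_scores(genuine, impostor, alpha, rng):
    """Fake-score samples lying a fraction alpha of the way from the impostor
    distribution toward the genuine one (alpha=1 reproduces the worst-case
    assumption criticized above)."""
    g = rng.choice(genuine, size=impostor.size)
    return impostor + alpha * (g - impostor)

rng = np.random.default_rng(4)
genuine = rng.normal(0.8, 0.1, 10_000)   # synthetic genuine match scores
impostor = rng.normal(0.2, 0.1, 10_000)  # synthetic zero-effort impostor scores
threshold = 0.5                          # accept a claim when the score exceeds this

fars = {}
for alpha in (0.0, 0.5, 1.0):
    fars[alpha] = float(np.mean(fake_scores(genuine, impostor, alpha, rng) > threshold))
    print(alpha, round(fars[alpha], 3))
```

Evaluating a fusion rule across the whole range of alpha, rather than only at alpha = 1, is what lets the security analysis cover attacks different from those seen during design.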