Face Image and Video Analysis in Biometrics and Health Applications
Computer Vision (CV) enables computers and systems to derive meaningful information from acquired visual inputs, such as images and videos, and to make decisions based on the extracted information. Its goal is to acquire, process, analyze, and understand this information by developing theoretical and algorithmic models. Biometrics are distinctive, measurable human characteristics used to label or describe individuals; biometric systems combine computer vision with knowledge of human physiology (e.g., face, iris, fingerprint) and behavior (e.g., gait, gaze, voice). The face is one of the most informative biometric traits, and many studies have investigated the human face from the perspectives of various disciplines, ranging from computer vision and deep learning to neuroscience and biometrics. In this work, we analyze face characteristics in digital images and videos in the areas of morphing attack generation and defense, and autism diagnosis. For face morphing attack generation, we propose a transformer-based generative adversarial network that produces more visually realistic morphing attacks by combining several losses: a face matching distance, a facial-landmark-based loss, a perceptual loss, and a pixel-wise mean square error. For face morphing attack detection, we design a fusion-based few-shot learning (FSL) method to learn discriminative features from face images for few-shot morphing attack detection (FS-MAD), and extend the current binary detection task to multiclass classification, namely few-shot morphing attack fingerprinting (FS-MAF). For autism diagnosis, we develop a discriminative few-shot learning method to analyze hour-long video data and explore the fusion of facial dynamics for facial trait classification of autism spectrum disorder (ASD) at three severity levels. The results show outstanding performance of the proposed fusion-based few-shot framework on the dataset.
In addition, we explore the possibility of performing face micro-expression spotting and feature analysis on autism video data to classify the ASD and control groups. The results indicate that subtle facial expression changes are effective cues for autism diagnosis.
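The multi-term generator objective described above can be sketched as a weighted sum of the four losses. The weights, input layout, and function name below are illustrative assumptions, not the work's actual implementation; in a real pipeline the embeddings, landmarks, and perceptual features would come from pretrained networks.

```python
import numpy as np

def morphing_gan_loss(generated, target, gen_emb, tgt_emb,
                      gen_landmarks, tgt_landmarks,
                      gen_feat, tgt_feat,
                      w_id=1.0, w_lm=0.5, w_perc=0.1, w_pix=1.0):
    """Weighted sum of the four loss terms named in the abstract.

    The weights are illustrative; real training would tune them.
    """
    # Face matching distance: cosine distance between identity embeddings
    id_loss = 1.0 - np.dot(gen_emb, tgt_emb) / (
        np.linalg.norm(gen_emb) * np.linalg.norm(tgt_emb))
    # Facial-landmark-based loss: mean Euclidean distance between landmark sets
    lm_loss = np.mean(np.linalg.norm(gen_landmarks - tgt_landmarks, axis=1))
    # Perceptual loss: MSE in a deep feature space
    perc_loss = np.mean((gen_feat - tgt_feat) ** 2)
    # Pixel-wise mean square error
    pix_loss = np.mean((generated - target) ** 2)
    return w_id * id_loss + w_lm * lm_loss + w_perc * perc_loss + w_pix * pix_loss
```

When the generated morph matches the target in every space, all four terms vanish and the total loss is zero.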
Deep Composite Face Image Attacks: Generation, Vulnerability and Detection
Face manipulation attacks have drawn the attention of biometric researchers
because of the vulnerability of Face Recognition Systems (FRS) to them. This paper
proposes a novel scheme to generate Composite Face Image Attacks (CFIA) based
on the Generative Adversarial Networks (GANs). Given the face images from
contributory data subjects, the proposed CFIA method will independently
generate the segmented facial attributes, then blend them using transparent
masks to generate the CFIA samples. The primary motivation for CFIA is to
utilize deep learning to generate facial attribute-based composite attacks,
which has been explored relatively less in the current literature. We generate
different combinations of facial attributes resulting in unique CFIA
samples for each pair of contributory data subjects. Extensive experiments are
carried out on our newly generated CFIA dataset consisting of 1000 unique
identities with 2000 bona fide samples and 14000 CFIA samples, resulting in
16000 face image samples overall. We perform a sequence of experiments to
benchmark the vulnerability of automatic FRS (both deep-learning-based and
commercial off-the-shelf (COTS) systems) to CFIA. We introduce a new metric
named Generalized Morphing Attack Potential (GMAP) to benchmark this
vulnerability effectively. Additional experiments are performed to compute the
perceptual quality of the generated CFIA samples. Finally, the CFIA detection
performance is presented using three different Face Morphing Attack Detection
(MAD) algorithms. The results indicate that the generated CFIA samples have
good perceptual quality and that FRS are considerably more vulnerable to CFIA
than to SOTA attacks, making CFIA difficult to detect for both human observers
and automatic detection algorithms.
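The attribute-wise blending with transparent masks can be illustrated with a simple alpha composite. The attribute names, mask format, and function signature below are assumptions of this sketch, not the paper's actual pipeline (which generates the attributes with GANs before blending):

```python
import numpy as np

def composite_blend(img_a, img_b, attr_masks, from_a):
    """Compose a CFIA-style sample: attributes listed in `from_a` are taken
    from subject A, the rest from subject B, with soft (transparent) mask
    edges. `attr_masks` maps attribute names to float alpha maps in [0, 1]
    that together cover the face; the names are illustrative.
    """
    alpha = np.zeros(img_a.shape[:2], dtype=float)
    for name in from_a:
        alpha = np.maximum(alpha, attr_masks[name])  # union of A's regions
    alpha = alpha[..., None]                         # broadcast over channels
    return alpha * img_a + (1.0 - alpha) * img_b
```

Choosing different subsets of attributes for `from_a` yields the different attribute combinations, and hence multiple unique samples, per pair of subjects.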
Evading Classifiers by Morphing in the Dark
Learning-based systems have been shown to be vulnerable to evasion through
adversarial data manipulation. These attacks have been studied under
assumptions that the adversary has certain knowledge of either the target model
internals, its training dataset or at least classification scores it assigns to
input samples. In this paper, we investigate a much more constrained and
realistic attack scenario wherein the target classifier is minimally exposed to
the adversary, revealing only its final classification decision (e.g., reject or
accept an input sample). Moreover, the adversary can only manipulate malicious
samples using a blackbox morpher. That is, the adversary has to evade the
target classifier by morphing malicious samples "in the dark". We present a
scoring mechanism that assigns a real-valued score reflecting evasion
progress to each sample based on the limited information available. Leveraging
this scoring mechanism, we propose an evasion method -- EvadeHC -- and
evaluate it against two PDF malware detectors, namely PDFRate and Hidost. The
experimental evaluation demonstrates that the proposed evasion attacks are
effective, attaining a high evasion rate on the evaluation dataset.
Interestingly, EvadeHC outperforms the known classifier evasion technique that
operates on classification scores output by the classifiers. Although our
evaluations are conducted on PDF malware classifiers, the proposed approaches
are domain-agnostic and of wider applicability to other learning-based
systems.
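The decision-only setting can be sketched as a hill-climbing loop over a blackbox morpher: the attacker sees only the reject/accept decision, and a real-valued score estimates progress between decisions. All four callables below are assumptions of this toy illustration, not EvadeHC's actual interfaces or scoring construction:

```python
import random

def evade_hc(sample, morph, is_rejected, score, max_iters=1000, seed=0):
    """Toy hill-climbing evasion in the spirit of EvadeHC.

    `morph` is the blackbox morpher, `is_rejected` the detector's binary
    decision, and `score` a real-valued progress estimate derived from the
    limited feedback available to the attacker.
    """
    rng = random.Random(seed)
    best, best_score = sample, score(sample)
    for _ in range(max_iters):
        candidate = morph(best, rng)
        if not is_rejected(candidate):
            return candidate              # evasion achieved
        s = score(candidate)
        if s > best_score:                # hill-climb on the proxy score
            best, best_score = candidate, s
    return None                           # failed within budget
```

With a toy detector that rejects any sample above a threshold and a morpher that perturbs it randomly, the loop walks the sample toward acceptance.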
Vulnerability of face recognition to morphing: a latent space perspective
Face recognition plays an important role in modern society. We depend on it when we travel through airports that use automated border control systems (eGates), we unlock our phones with it, and police use it to find criminals in their databases. While its role in public surveillance is currently hotly debated, it is in some cases already being used. Understanding the algorithms used and their potential weaknesses is therefore very relevant.

A face recognition system verifies the identity of an individual by comparing two facial images and deciding whether or not they match, i.e. belong to the same person. Two images of different persons - person A and person B - can be mixed to create a morph. Face recognition systems often accept such a morph as a match with images of A, but also with images of B, leading to potential security issues. For example, if someone were to apply for an ID document using a morphed passport photo, this could enable two people to travel using the same ID.

Our approach to the problem of morphing attacks is to analyse the effect of morphing in the feature spaces of face recognition systems, in order to better understand why they are vulnerable to morphing attacks. Our aim is to develop methods that decrease vulnerability not only to known types of morphing attacks, but also to unknown attacks. First, we introduce worst-case morphs, which allow us to understand the theoretical vulnerability of face recognition systems. Exploiting information from the embedding spaces of face recognition systems allowed us to approximate worst-case morphs on the one hand, and inspired an approach for face de-identification on the other. Second, one reason face recognition systems are vulnerable to morphing attacks is that they were not trained with morphed images and were simply not developed to deal with images that contain identity information from more than one person.
In fact, the better a system is at distinguishing normal facial images from one another, the more vulnerable it is to morphing attacks. We explain why this trade-off happens and suggest improvements to reduce vulnerability to morphing attacks.
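The worst-case morph idea can be illustrated in embedding space for cosine-similarity systems: the normalised midpoint of two identity embeddings is equally similar to both contributors. The toy embeddings below are assumptions; inverting such an embedding back into an image, which is the hard part, is not shown:

```python
import numpy as np

def worst_case_embedding(emb_a, emb_b):
    """Return the normalised midpoint of two identity embeddings.

    For cosine similarity this point maximises the minimum similarity to
    either identity, which is the embedding-space notion of a worst-case
    morph described in the abstract.
    """
    m = emb_a / np.linalg.norm(emb_a) + emb_b / np.linalg.norm(emb_b)
    return m / np.linalg.norm(m)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```

For orthogonal embeddings the midpoint scores about 0.71 against each identity, so a verification threshold below that accepts the morph against both.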
Fused Classification For Differential Face Morphing Detection
Face morphing, a sophisticated presentation attack technique, poses
significant security risks to face recognition systems. Traditional methods
struggle to detect morphing attacks, which involve blending multiple face
images to create a synthetic image that can match different individuals. In
this paper, we focus on the differential detection of face morphing and propose
an extended approach based on fused classification method for no-reference
scenario. We introduce a public face morphing detection benchmark for the
differential scenario and utilize a specific data mining technique to enhance
the performance of our approach. Experimental results demonstrate the
effectiveness of our method in detecting morphing attacks.
Comment: 8 pages, 3 figures, 2 tables
Fusion-based Few-Shot Morphing Attack Detection and Fingerprinting
The vulnerability of face recognition systems to morphing attacks has posed a
serious security threat due to the wide adoption of face biometrics in the real
world. Most existing morphing attack detection (MAD) methods require a large
amount of training data and have only been tested on a few predefined attack
models. The lack of good generalization properties, especially in view of the
growing interest in developing novel morphing attacks, is a critical limitation
with existing MAD research. To address this issue, we propose to extend MAD
from supervised learning to few-shot learning and from binary detection to
multiclass fingerprinting in this paper. Our technical contributions include:
1) We propose a fusion-based few-shot learning (FSL) method to learn
discriminative features that can generalize to unseen morphing attack types
from predefined presentation attacks; 2) the proposed FSL method, based on the
fusion of the PRNU model and the Noiseprint network, is extended from binary MAD to
multiclass morphing attack fingerprinting (MAF); and 3) we have collected a
large-scale database, which contains five face datasets and eight different
morphing algorithms, to benchmark the proposed few-shot MAF (FS-MAF) method.
Extensive experimental results show the outstanding performance of our
fusion-based FS-MAF. The code and data will be publicly available at
https://github.com/nz0001na/mad maf
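The fusion-plus-few-shot idea can be sketched as nearest-prototype classification over fused sensor-noise features. The weighted concatenation, data layout, and labels below are assumptions of this illustration, not the paper's architecture; the real PRNU and Noiseprint features would come from dedicated extractors:

```python
import numpy as np

def fused_prototype_classify(query_prnu, query_noise, support, w=0.5):
    """Few-shot classification by distance to class prototypes of fused
    features. `support` maps class labels to lists of
    (prnu_vector, noiseprint_vector) shots.
    """
    def fuse(p, n):
        # Simple weighted concatenation of the two noise fingerprints
        return np.concatenate([w * p, (1 - w) * n])

    q = fuse(query_prnu, query_noise)
    best_label, best_dist = None, float("inf")
    for label, shots in support.items():
        # Class prototype: mean of the fused support shots
        proto = np.mean([fuse(p, n) for p, n in shots], axis=0)
        d = np.linalg.norm(q - proto)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

With one class per morphing algorithm instead of a binary bona fide/morph split, the same loop performs the multiclass fingerprinting (MAF) task.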
Stegano-Morphing: Concealing Attacks on Face Identification Algorithms
© 2021 IEEE. Face identification is becoming a well-accepted technology for access control applications, both in the real and the virtual world. Systems based on this technology must deal with the persistent challenges of classification algorithms and with impersonation attacks performed by people who do not want to be identified. Morphing is often selected to conduct such attacks, since it allows the modification of the features of an original subject's image to make it appear as someone else. Publications focus on impersonating this other person, usually someone who is allowed into a restricted place, building, or software app. However, many other applications have no list of authorized people, just a blacklist of people no longer allowed to enter, log in, or register. In such cases, the morphing target is not relevant, and the main objective is to minimize the probability of being detected. In this paper, we present a comparison of the identification rate and behavior of six recognizers (Eigenfaces, Fisherfaces, LBPH, SIFT, FaceNet, and ArcFace) against traditional morphing attacks, in which only two subjects are used to create the altered image: the original subject and the target. We also present a new morphing method that works as an iterative process of gradual traditional morphing, combining the original subject with all the subjects' images in a database.
This method multiplies by four the chances of a successful and complete impersonation attack (from 4% to 16%) by deceiving both face identification and morphing detection algorithms simultaneously. This work was supported by the Consejería de Ciencia, Universidad e Innovación, Comunidad de Madrid.
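The iterative gradual morphing idea can be sketched as repeated small blends of the original with every subject in a database. The linear pixel blend below is a deliberate simplification (real morphing also warps facial landmarks), and the step size is an illustrative assumption:

```python
import numpy as np

def stegano_morph(original, database, step=0.1):
    """Gradually blend the original a small `step` at a time with each
    database subject, diluting any single target's identity rather than
    converging on one person, as in a traditional two-subject morph.
    """
    current = original.astype(float)
    for other in database:
        current = (1.0 - step) * current + step * other.astype(float)
    return current
```

Because no single blend step is large, the result drifts away from the original without strongly resembling any one target, which is what makes such morphs harder for detectors trained on two-subject attacks.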
3D Face Morphing Attacks: Generation, Vulnerability and Detection
Face Recognition systems (FRS) have been found to be vulnerable to morphing
attacks, where the morphed face image is generated by blending the face images
from contributory data subjects. This work presents a novel direction for
generating face-morphing attacks in 3D. To this end, we introduce an
approach based on blending 3D face point clouds corresponding to contributory
data subjects. The proposed method generates 3D face morphing by projecting the
input 3D face point clouds onto depth maps and 2D color images, followed by
image blending and warping operations performed independently on the color
images and depth maps. We then back-projected the 2D morphing color map and the
depth map to the point cloud using the canonical (fixed) view. Given that the
generated 3D face morphing models will result in holes owing to a single
canonical view, we have proposed a new algorithm for hole filling that will
result in a high-quality 3D face morphing model. Extensive experiments were
conducted on the newly generated 3D face dataset comprising 675 3D scans
corresponding to 41 unique data subjects and a publicly available database
(Facescape) with 100 data subjects. Experiments were performed to benchmark the
vulnerability of the proposed 3D morph-generation scheme against automatic
2D, 3D FRS, and human observer analysis. We also presented a quantitative
assessment of the quality of the generated 3D face-morphing models using eight
different quality metrics. Finally, we propose three different 3D face Morphing
Attack Detection (3D-MAD) algorithms to benchmark the performance of 3D face
morphing attack detection techniques.
Comment: The paper is accepted at IEEE Transactions on Biometrics, Behavior and Identity Science
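The project-blend-back-project pipeline can be illustrated on a toy depth-map representation. Integer pixel coordinates, the absence of color processing, and the omission of the hole-filling step are all simplifications assumed by this sketch:

```python
import numpy as np

def depth_map_from_points(points, size):
    """Project a 3D point cloud of (x, y, z) tuples onto a depth map from a
    fixed canonical view: x and y index the pixel grid, z is stored as depth.
    Integer coordinates are assumed for brevity.
    """
    depth = np.zeros((size, size))
    for x, y, z in points:
        depth[int(y), int(x)] = z
    return depth

def morph_3d(points_a, points_b, size, alpha=0.5):
    """Blend the two subjects' depth maps and back-project to a point cloud.

    Pixels with no depth (holes caused by the single canonical view) are
    simply dropped here; the paper's dedicated hole-filling algorithm is
    omitted from this sketch.
    """
    d = alpha * depth_map_from_points(points_a, size) \
        + (1 - alpha) * depth_map_from_points(points_b, size)
    ys, xs = np.nonzero(d)
    return [(float(x), float(y), float(d[y, x])) for y, x in zip(ys, xs)]
```

In the paper the same blend is applied independently to the 2D color images, and both maps are back-projected together to form the textured 3D morph.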