Synthesizing Normalized Faces from Facial Identity Features
We present a method for synthesizing a frontal, neutral-expression image of a
person's face given an input face photograph. This is achieved by learning to
generate facial landmarks and textures from features extracted from a
facial-recognition network. Unlike previous approaches, our encoding feature
vector is largely invariant to lighting, pose, and facial expression.
Exploiting this invariance, we train our decoder network using only frontal,
neutral-expression photographs. Since these photographs are well aligned, we
can decompose them into a sparse set of landmark points and aligned texture
maps. The decoder then predicts landmarks and textures independently and
combines them using a differentiable image warping operation. The resulting
images can be used for a number of applications, such as analyzing facial
attributes, exposure and white balance adjustment, or creating a 3-D avatar.
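The differentiable image warping that combines predicted landmarks and textures relies, at its core, on interpolated sampling: each output pixel is a smooth function of its sampling coordinates, so gradients can flow back through the warp to the decoder. The following is a minimal sketch of that primitive only, not the paper's implementation; the function name and single-channel input are assumptions for illustration.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a 2-D image at fractional coordinates (x, y) with bilinear
    interpolation -- the core operation that makes a warp differentiable,
    since the output varies smoothly with the sampling coordinates."""
    h, w = img.shape
    x0 = int(np.floor(x)); y0 = int(np.floor(y))
    x1 = min(x0 + 1, w - 1); y1 = min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]  # blend along x, top row
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]  # blend along x, bottom row
    return (1 - fy) * top + fy * bot                 # blend along y
```

A full warp would evaluate this at a dense grid of coordinates derived from the predicted landmarks; frameworks expose the same operation as a batched grid-sampling layer.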
Morph Creation and Vulnerability of Face Recognition Systems to Morphing
Face recognition in controlled environments is nowadays considered rather reliable: state-of-the-art systems achieve very good accuracy levels under such favourable conditions. However, even in these scenarios, digital image alterations can severely affect recognition performance. In particular, several studies show that automatic face recognition systems are very sensitive to the so-called face morphing attack, where face images of two individuals are mixed to produce a new face image containing facial features of both subjects. Face morphing nowadays represents a serious security threat, particularly in the context of electronic identity documents, because it can be successfully exploited for criminal purposes, for instance to fool Automated Border Control (ABC) systems and thus overcome security checks at the border. This chapter describes the face morphing process, in an overview ranging from the traditional techniques based on geometry warping and texture blending to the most recent and innovative approaches based on deep neural networks. Moreover, the sensitivity of state-of-the-art face recognition algorithms to the face morphing attack is assessed using morphed images of different quality, generated with various morphing methods, to identify factors influencing the probability of success of the attack.
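The traditional geometry-warping-and-texture-blending pipeline mentioned in the chapter can be sketched in a toy form: average the two landmark sets and cross-dissolve the pixel values. This is a deliberately reduced version, with all names illustrative rather than from the chapter; a real morph would first warp each image so its landmarks move to the averaged positions, a step omitted here.

```python
import numpy as np

def simple_morph(img_a, img_b, lm_a, lm_b, alpha=0.5):
    """Toy morph of two pre-aligned face images.
    Geometry: landmark sets are averaged (the warp toward these averaged
    positions is omitted in this sketch).
    Texture: pixel values are alpha-blended (cross-dissolve)."""
    lm_morph = alpha * lm_a + (1 - alpha) * lm_b    # averaged landmark geometry
    tex_morph = alpha * img_a + (1 - alpha) * img_b  # blended texture
    return tex_morph, lm_morph
```

With `alpha=0.5` the result carries equal contributions from both subjects, which is the setting that typically maximizes the chance of matching both identities.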
Differential Newborn Face Morphing Attack Detection using Wavelet Scatter Network
Face Recognition Systems (FRS) are shown to be vulnerable to morphed images of
newborns. Detecting morphing attacks stemming from face images of newborns is
important to avoid unwanted consequences, both for security and society. In
this paper, we present a new reference-based/Differential Morphing Attack
Detection (MAD) method to detect newborn morphing images using Wavelet
Scattering Network (WSN). We propose a two-layer WSN with 250×250-pixel
inputs and six rotations of wavelets per layer, resulting in 577 paths. The
proposed approach is validated on a dataset of 852 bona fide images and 2460
morphing images constructed using face images of 42 unique newborns. The
obtained results indicate a gain of over 10% in detection accuracy over other
existing D-MAD techniques.

Comment: accepted in 5th International Conference on Bio-engineering for Smart Technologies (BIO-SMART 2023)
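A differential (reference-based) MAD scheme of the kind described compares features of a suspected image, such as a passport photo, against a trusted live capture of the same person. The sketch below keeps that structure but substitutes the paper's wavelet scattering network with simple Haar subband energies; the substitution and all names are assumptions for illustration, not the proposed method.

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar decomposition: LL, LH, HL, HH subbands
    computed from non-overlapping 2x2 pixel blocks."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4
    lh = (a + b - c - d) / 4
    hl = (a - b + c - d) / 4
    hh = (a - b - c + d) / 4
    return ll, lh, hl, hh

def subband_energies(img, levels=2):
    """Crude stand-in for scattering features: mean energy of each detail
    subband over a two-level Haar cascade on the low-pass channel."""
    feats, cur = [], img
    for _ in range(levels):
        ll, lh, hl, hh = haar_level(cur)
        feats += [np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)]
        cur = ll
    feats.append(np.mean(cur**2))  # residual low-pass energy
    return np.array(feats)

def dmad_score(suspected, trusted):
    """Differential score: distance between the feature vectors of the
    suspected image and the trusted live capture. Larger = more suspicious."""
    return np.linalg.norm(subband_energies(suspected) - subband_energies(trusted))
```

In a full system this score would feed a trained classifier rather than a fixed threshold, but the differential structure, comparing two captures instead of analyzing one image in isolation, is the point being illustrated.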
On the Influence of Ageing on Face Morph Attacks: Vulnerability and Detection
Face morphing attacks have raised critical concerns as they demonstrate a new
vulnerability of Face Recognition Systems (FRS), which are widely deployed in
border control applications. The face morphing process uses the images from
multiple data subjects and performs an image blending operation to generate a
morphed image of high quality. The generated morphed image exhibits visual
characteristics corresponding to the biometric traits of the data subjects who
contributed to the composite image, making it difficult for both humans and
FRS to detect such attacks. In this paper, we
report a systematic investigation on the vulnerability of the
Commercial-Off-The-Shelf (COTS) FRS when morphed images under the influence of
ageing are presented. To this end, we introduce a new morphed face dataset
with ageing, derived from the publicly available MORPH II face dataset, which
we refer to as the MorphAge dataset. The dataset has two bins based on age
intervals: the first bin, the MorphAge-I dataset, has 1002 unique data
subjects with an age variation of 1 to 2 years, while the MorphAge-II dataset
consists of 516 data subjects whose age intervals range from 2 to 5 years.
To evaluate the vulnerability to morphing attacks effectively, we also
introduce a new evaluation metric, the Fully Mated Morphed Presentation
Match Rate (FMMPMR), which quantifies the vulnerability in a realistic
scenario. Extensive experiments are carried out by using two different COTS FRS
(COTS I - Cognitec and COTS II - Neurotechnology) to quantify the vulnerability
with ageing. Further, we also evaluate five different Morph Attack Detection
(MAD) techniques to benchmark their detection performance with ageing.

Comment: Accepted in IJCB 202
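One plausible reading of a fully mated morphed presentation match rate can be sketched as follows. This is an illustrative interpretation, not the paper's exact formula: a morph counts as a successful attack only if every probe attempt of every contributing subject matches it above the verification threshold. The nested-list data layout is an assumption.

```python
def fmmpmr(morph_scores, threshold):
    """Illustrative fully-mated morph match rate.
    morph_scores: one entry per morphed image; each entry is a list with
    one list of probe comparison scores per contributing subject.
    A morph succeeds only if ALL probes of ALL subjects exceed threshold."""
    successes = 0
    for per_subject in morph_scores:
        if all(score > threshold
               for probes in per_subject   # each contributing subject
               for score in probes):       # each probe attempt
            successes += 1
    return successes / len(morph_scores)
```

The "fully mated" requirement makes the metric stricter, and arguably more realistic, than counting a morph as successful when it fools the FRS for only one of its contributing subjects on a single attempt.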
Real vs Fake Faces: DeepFakes and Face Morphing
The ability to determine the legitimacy of a person’s face in images and video can be important for many applications ranging from social media to border security. From a biometrics perspective, altering one’s appearance to look like a target identity is a direct method of attack against the security of facial recognition systems. Defending against such attacks requires the ability to recognize them as a separate identity from their target. Alternatively, a forensics perspective may view this as a forgery of digital media. Detecting such forgeries requires the ability to detect artifacts not commonly seen in genuine media. This work examines two cases where we can classify faces as real or fake within digital media and explores them from the perspective of the attacker and defender.
First, we will explore the role of the defender by examining how deepfakes can be distinguished from legitimate videos. The most common form of deepfakes are videos which have had the face of one person swapped with another, sometimes referred to as “face-swaps.” These are generated using Generative Adversarial Networks (GANs) to produce realistic augmented media with few artifacts noticeable to human observers. This work shows how facial expression data can be extracted from deepfakes and legitimate videos to train a machine learning model to detect these forgeries.
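The detection pipeline described, training a model on expression features extracted from real and fake videos, can be sketched with a stand-in classifier. The thesis does not specify the model, so a simple nearest-centroid rule over hypothetical per-video feature vectors is used here purely for illustration; all names and the feature layout are assumptions.

```python
import numpy as np

def train_centroids(real_feats, fake_feats):
    """'Train' by computing one centroid per class in the (hypothetical)
    expression-feature space: rows are per-video feature vectors."""
    return np.mean(real_feats, axis=0), np.mean(fake_feats, axis=0)

def predict_fake(feats, centroids):
    """Label a video 'fake' when its feature vector lies closer to the
    fake-class centroid than to the real-class centroid."""
    c_real, c_fake = centroids
    return np.linalg.norm(feats - c_fake) < np.linalg.norm(feats - c_real)
```

Any standard classifier (SVM, random forest, a small neural network) could replace the centroid rule; the point is that forgeries leave statistical traces in expression dynamics that a learned decision boundary can separate.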
Second, we will explore the role of the attacker by examining a problem of increasing importance to border security. Face morphing is the process by which two or more people's facial features may be combined in one image. We will examine how this can be done using GANs, as well as traditional image processing methods in tandem with machine learning models. Additionally, we will evaluate their effectiveness at fooling facial recognition systems.