Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating
In this work, we investigate the concept of biometric backdoors: a template
poisoning attack on biometric systems that allows adversaries to stealthily and
effortlessly impersonate users in the long-term by exploiting the template
update procedure. We show that such attacks can be carried out even by
attackers with physical limitations (no digital access to the sensor) and zero
knowledge of training data (they know neither decision boundaries nor user
template). Based on the adversaries' own templates, they craft several
intermediate samples that incrementally bridge the distance between their own
template and the legitimate user's. As these adversarial samples are added to
the template, the attacker is eventually accepted alongside the legitimate
user. To avoid detection, we design the attack to minimize the number of
rejected samples.
We design our method to cope with these weak attacker assumptions, and
we evaluate the effectiveness of this approach on state-of-the-art face
recognition pipelines based on deep neural networks. We find that in scenarios
where the deep network is known, adversaries can successfully carry out the
attack in over 70% of cases with fewer than ten injection attempts. Even in
black-box scenarios, we find that exploiting the transferability of adversarial
samples from surrogate models can lead to successful attacks in around 15% of
cases. Finally, we design a poisoning detection technique that leverages the
consistent directionality of template updates in feature space to discriminate
between legitimate and malicious updates. We evaluate such a countermeasure
with a set of intra-user variability factors which may present the same
directionality characteristics, obtaining equal error rates for the detection
between 7% and 14%, and leading to over 99% of attacks being detected after
only two
sample injections.
Comment: 12 pages
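The drift mechanism behind the attack can be illustrated with a toy sketch. This is not the paper's actual procedure (which crafts physical face samples through a deep feature extractor under black-box constraints); `poison_template`, the centroid matcher, the one-dimensional feature space, and the 0.9 safety margin are all simplifying assumptions made here:

```python
import math

def poison_template(victim, attacker, threshold, max_steps=100):
    """Toy sketch: march crafted samples from the victim's template toward
    the attacker's embedding, each placed just inside the matcher's
    acceptance radius so it is accepted and absorbed by the unsupervised
    self-update procedure. Features are scalars for illustration."""
    template = [victim]  # the victim's enrolled samples
    for _ in range(max_steps):
        centroid = sum(template) / len(template)
        dist = abs(attacker - centroid)
        if dist < threshold:
            break  # the attacker now matches the drifted template directly
        # step just inside the acceptance radius of the current centroid,
        # so the injected sample is accepted (no rejections, hence stealthy)
        step = math.copysign(0.9 * threshold, attacker - centroid)
        template.append(centroid + step)
    return template
```

Each accepted sample shifts the template centroid toward the attacker, which is why keeping every step inside the acceptance radius both avoids detection and guarantees progress.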
Hand2Face: Automatic Synthesis and Recognition of Hand Over Face Occlusions
A person's face discloses important information about their affective state.
Although there has been extensive research on recognition of facial
expressions, the performance of existing approaches is challenged by facial
occlusions. Facial occlusions are often treated as noise and discarded in
recognition of affective states. However, hand over face occlusions can provide
additional information for recognition of some affective states such as
curiosity, frustration and boredom. One of the reasons that this problem has
not gained attention is the lack of naturalistic occluded faces that contain
hand over face occlusions as well as other types of occlusions. Traditional
approaches for obtaining affective data are time-consuming and expensive, which
limits researchers in affective computing to working on small datasets. This
limitation affects the generalizability of models and prevents researchers from
taking advantage of recent advances in deep learning that have shown great
success in many fields but require large volumes of data. In this paper, we
first introduce a novel framework for synthesizing naturalistic facial
occlusions from an initial dataset of non-occluded faces and separate images of
hands, reducing the costly process of data collection and annotation. We then
propose a model for facial occlusion type recognition to differentiate between
hand over face occlusions and other types of occlusions such as scarves, hair,
glasses and objects. Finally, we present a model to localize hand over face
occlusions and identify the occluded regions of the face.
Comment: Accepted to International Conference on Affective Computing and
Intelligent Interaction (ACII), 201
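The compositing step of such a synthesis framework can be sketched in a few lines. This is a hypothetical simplification (the names, the binary mask, and grayscale pixel lists are assumptions; the paper's pipeline works with natural hand and face photographs):

```python
def composite_hand(face, hand, mask, top, left):
    """Paste a segmented hand (pixels where mask == 1) onto a face image
    at offset (top, left), producing a synthetic hand-over-face occlusion
    without new data collection. Images are row-major lists of grayscale
    pixel values; out-of-bounds hand pixels are simply dropped."""
    out = [row[:] for row in face]          # leave the input face untouched
    for i, mask_row in enumerate(mask):
        for j, m in enumerate(mask_row):
            r, c = top + i, left + j
            if m and 0 <= r < len(out) and 0 <= c < len(out[0]):
                out[r][c] = hand[i][j]      # hand pixel occludes the face
    return out
```

Because the hand is segmented and placed programmatically, the occluded region is known exactly, which is what makes the synthesized data usable as ground truth for the localization model.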
Wearing Many (Social) Hats: How Different are Your Different Social Network Personae?
This paper investigates when users create profiles in different social
networks, whether they are redundant expressions of the same persona, or they
are adapted to each platform. Using the personal webpages of 116,998 users on
About.me, we identify and extract matched user profiles on several major social
networks including Facebook, Twitter, LinkedIn, and Instagram. We find evidence
for distinct site-specific norms, such as differences in the language used in
the text of the profile self-description, and the kind of picture used as
profile image. By learning a model that robustly identifies the platform given
a user's profile image (0.657--0.829 AUC) or self-description (0.608--0.847
AUC), we confirm that users do adapt their behaviour to individual platforms in
an identifiable and learnable manner. However, different genders and age groups
adapt their behaviour differently from each other, and these differences are,
in general, consistent across different platforms. We show that differences in
social profile construction correspond to differences in how formal or informal
the platform is.
Comment: Accepted at the 11th International AAAI Conference on Web and Social
Media (ICWSM17)
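The kind of platform classifier described above can be sketched with a minimal unigram naive Bayes over self-description text. This is an illustrative stand-in, not the paper's model (the 0.608-0.847 AUC figures come from their learned classifier, not from this sketch), and the toy profile snippets below are invented:

```python
import math
from collections import Counter

def train_nb(docs):
    """Fit a unigram naive Bayes from (platform, self-description) pairs,
    capturing site-specific language in profile text."""
    counts, priors = {}, Counter()
    for label, text in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts, priors

def predict(model, text):
    """Return the platform whose word distribution best explains the text,
    with Laplace (add-one) smoothing over the shared vocabulary."""
    counts, priors = model
    vocab = {w for c in counts.values() for w in c}
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / sum(priors.values()))
        total = sum(counts[label].values())
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

That such a simple bag-of-words signal separates platforms at all is the point the paper makes: profile language is adapted to each site in a learnable way.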
Learning Residual Images for Face Attribute Manipulation
Face attributes are interesting due to their detailed description of human
faces. Unlike prior research on attribute prediction, we address an
inverse and more challenging problem called face attribute manipulation which
aims at modifying a face image according to a given attribute value. Instead of
manipulating the whole image, we propose to learn the corresponding residual
image defined as the difference between images before and after the
manipulation. In this way, the manipulation can be operated efficiently with
modest pixel modification. The framework of our approach is based on the
Generative Adversarial Network. It consists of two image transformation
networks and a discriminative network. The transformation networks are
responsible for the attribute manipulation and its dual operation and the
discriminative network is used to distinguish the generated images from real
images. We also apply dual learning to allow transformation networks to learn
from each other. Experiments show that residual images can be effectively
learned and used for attribute manipulation. The generated images retain most
of the details in attribute-irrelevant areas.
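The core residual idea can be shown in isolation. A minimal sketch, assuming flat grayscale pixel lists rather than the RGB tensors and GAN generator the paper actually uses:

```python
def apply_residual(image, residual):
    """The transformation network predicts only a residual (the difference
    between the image before and after manipulation); adding it to the
    input and clipping to the valid range means pixels with near-zero
    residual, i.e. attribute-irrelevant areas, pass through unchanged."""
    return [min(255.0, max(0.0, p + r)) for p, r in zip(image, residual)]
```

Learning the residual rather than the whole image is what makes the manipulation cheap: the network only has to model the (typically sparse) change, not reproduce every unchanged detail.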
Persistent Evidence of Local Image Properties in Generic ConvNets
Supervised training of a convolutional network for object classification
should make explicit any information related to the class of objects and
disregard any auxiliary information associated with the capture of the image or
the variation within the object class. Does this happen in practice? Although
this seems to pertain to the very final layers in the network, if we look at
earlier layers we find that this is not the case. Surprisingly, strong spatial
information remains implicitly encoded. This paper investigates this, in
particular by exploiting
the image representation at the first fully connected layer, i.e. the global
image descriptor which has been recently shown to be most effective in a range
of visual recognition tasks. We empirically demonstrate evidence for this
finding in the context of four different tasks: 2d landmark detection, 2d
object keypoint prediction, estimation of the RGB values of the input image,
and recovery of the semantic label of each pixel. We base our investigation on
a simple framework with ridge regression common across these tasks, and show
results which all support our insight. Such spatial information can be used for
computing correspondence of landmarks to good accuracy, and should also prove
useful for improving the training of convolutional nets for classification
purposes.