1,404 research outputs found
CALIPER: Continuous Authentication Layered with Integrated PKI Encoding Recognition
Architectures relying on continuous authentication require a secure way to
challenge the user's identity without trusting that the Continuous
Authentication Subsystem (CAS) has not been compromised, i.e., that the
response to the layer which manages service/application access is not fake. In
this paper, we introduce the CALIPER protocol, in which a separate Continuous
Access Verification Entity (CAVE) directly challenges the user's identity in a
continuous authentication regime. Instead of simply returning authentication
probabilities or confidence scores, CALIPER's CAS uses live hard and soft
biometric samples from the user to extract a cryptographic private key embedded
in a challenge posed by the CAVE. The CAS then uses this key to sign a response
to the CAVE. CALIPER supports multiple modalities, key lengths, and security
levels and can be applied in two scenarios: one where the CAS must authenticate
its user to a CAVE running on a remote server (device-server) for access to
remote application data, and another where the CAS must authenticate its user
to a locally running trusted computing module (TCM) for access to local
application data (device-TCM). We further demonstrate that CALIPER can leverage
device hardware resources to enable privacy and security even when the device's
kernel is compromised, and we show how this authentication protocol can even be
expanded to obfuscate direct kernel object manipulation (DKOM) malware.
Comment: Accepted to the CVPR 2016 Biometrics Workshop
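To make the challenge-response flow concrete, here is a minimal Python sketch of the idea, assuming a fuzzy-extractor-like step that turns noisy biometric samples plus CAVE-supplied helper data into a stable signing key. All names (derive_signing_key, cas_respond, the challenge fields) are hypothetical, and a real CAS would use a proper biometric key-binding scheme rather than the bare hash used as a stand-in here.

```python
# Hedged sketch of CALIPER's challenge-response flow (names hypothetical).
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def derive_signing_key(biometric_features, helper_data):
    # Quantize live biometric features to stable bytes (stand-in for a
    # fuzzy extractor that makes noisy samples reproduce the same bits).
    stable_bits = bytes(int(round(f)) % 256 for f in biometric_features)
    # Mix with the helper data embedded in the CAVE's challenge.
    seed = hashlib.sha256(stable_bits + helper_data).digest()  # 32 bytes
    return ed25519.Ed25519PrivateKey.from_private_bytes(seed)

def cas_respond(biometric_features, challenge):
    # The CAS extracts the private key from the challenge and signs the nonce.
    key = derive_signing_key(biometric_features, challenge["helper_data"])
    return key.sign(challenge["nonce"])

def cave_verify(public_key, nonce, signature):
    # The CAVE accepts only if the signature matches the enrolled public key,
    # so a compromised CAS cannot fake a response without the live biometrics.
    try:
        public_key.verify(signature, nonce)
        return True
    except InvalidSignature:
        return False
```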
Are Accuracy and Robustness Correlated?
Machine learning models are vulnerable to adversarial examples: inputs
modified by small, carefully chosen perturbations that cause unexpected
classification errors. In this paper, we perform experiments on various
adversarial example generation approaches with multiple deep convolutional
neural networks, including Residual Networks, the best-performing models on
the ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the
adversarial example generation techniques with respect to the quality of the
produced images, and measure the robustness of the tested machine learning
models to adversarial examples. Finally, we conduct large-scale experiments on
cross-model adversarial portability. We find that adversarial examples are
mostly transferable across similar network topologies, and we demonstrate that
better machine learning models are less vulnerable to adversarial examples.
Comment: Accepted for publication at ICMLA 2016
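As a concrete instance of the gradient-based generation approaches compared here, the following PyTorch sketch implements the one-step fast gradient sign method together with a cross-model portability check; model_a, model_b, images, and labels are placeholders for trained networks and data, and the epsilon value is illustrative rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, images, labels, epsilon=0.03):
    # One-step fast gradient sign attack: ascend the classification loss.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def transfer_rate(model_a, model_b, images, labels):
    # Cross-model portability: craft on model_a, test whether model_b
    # is fooled by the same perturbed images.
    adv = fgsm(model_a, images, labels)
    with torch.no_grad():
        preds = model_b(adv).argmax(dim=1)
    return (preds != labels).float().mean().item()
```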
Pyrolysis and combustion of foamed polyurethanes
Adversarial Robustness: Softmax versus Openmax
Deep neural networks (DNNs) provide state-of-the-art results on various tasks
and are widely used in real world applications. However, it was discovered that
machine learning models, including the best performing DNNs, suffer from a
fundamental problem: they can unexpectedly and confidently misclassify examples
formed by slightly perturbing otherwise correctly recognized inputs. Various
approaches have been developed for efficiently generating these so-called
adversarial examples, but most of these rely on ascending the gradient of the loss.
In this paper, we introduce the novel logits optimized targeting system (LOTS)
to directly manipulate deep features captured at the penultimate layer. Using
LOTS, we analyze and compare the adversarial robustness of DNNs using the
traditional Softmax layer with Openmax, which was designed to provide open set
recognition by defining classes derived from deep representations, and is
claimed to be more robust to adversarial perturbations. We demonstrate that
Openmax is less vulnerable than Softmax to traditional attacks; however, we
show that it can be equally susceptible to more sophisticated adversarial
generation techniques that work directly on deep representations.
Comment: Accepted to the British Machine Vision Conference (BMVC) 2017
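A rough sketch of the idea behind LOTS follows, under the assumption that feature_extractor returns the penultimate-layer (deep feature) activations of the network: the input is perturbed so that its deep representation moves toward a chosen target feature vector, rather than by ascending a classification loss. The step scaling is illustrative, not the paper's exact formulation.

```python
import torch

def lots_step(feature_extractor, image, target_feature, step=1.0):
    # Move the input's deep representation toward the target feature.
    image = image.clone().detach().requires_grad_(True)
    feature = feature_extractor(image)
    # Euclidean distance between current and target deep features.
    loss = 0.5 * (target_feature - feature).pow(2).sum()
    loss.backward()
    grad = image.grad
    # Normalize so the largest pixel change equals `step` (illustrative).
    perturbed = image - step * grad / grad.abs().max()
    return perturbed.clamp(0.0, 1.0).detach()
```

Iterating such a step until the system mislabels the input yields an attack that targets deep features directly, which is why a rejection mechanism built on those same representations, such as Openmax, can remain susceptible.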
Adversarial Diversity and Hard Positive Generation
State-of-the-art deep neural networks suffer from a fundamental problem:
they misclassify adversarial examples formed by applying small perturbations to
inputs. In this paper, we present a new psychometric perceptual adversarial
similarity score (PASS) measure for quantifying adversarial images, introduce
the notion of hard positive generation, and use a diverse set of adversarial
perturbations, not just the closest ones, for data augmentation. We introduce
a novel hot/cold approach for adversarial example generation, which provides
multiple possible adversarial perturbations for every single image. The
perturbations generated by our novel approach often correspond to semantically
meaningful image structures, and allow greater flexibility to scale
perturbation amplitudes, which yields an increased diversity of adversarial
images. We present adversarial images on several network topologies and
datasets, including LeNet on the MNIST dataset, and GoogLeNet and ResidualNet
on the ImageNet dataset. Finally, we demonstrate on LeNet and GoogLeNet that
fine-tuning with a diverse set of hard positives improves the robustness of
these networks compared to training with prior methods of generating
adversarial images.
Comment: Accepted to the CVPR 2016 DeepVision Workshop
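The hot/cold idea can be sketched as follows, with heavy hedging: the objective raises the logit of a chosen "hot" target class while lowering the logit of the original "cold" class, and scaling the resulting input-gradient direction by different amplitudes gives multiple candidate perturbations per image. Function and variable names are hypothetical, and the paper's exact construction and its PASS-based filtering are not reproduced here.

```python
import torch

def hot_cold_direction(model, image, cold_class, hot_class):
    # Raise the 'hot' (target) logit and lower the 'cold' (original) logit.
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)[0]
    objective = logits[hot_class] - logits[cold_class]
    objective.backward()
    return image.grad.detach()

def diverse_candidates(model, image, cold_class, hot_class,
                       amplitudes=(0.25, 0.5, 1.0, 2.0)):
    # Scaling one direction by several amplitudes yields a diverse set of
    # candidate adversarial / hard-positive images from a single input.
    direction = hot_cold_direction(model, image, cold_class, hot_class)
    return [(image + a * direction).clamp(0.0, 1.0) for a in amplitudes]
```

The larger-amplitude candidates are the "hard positives" of the title: images that are still recognizable yet strongly perturbed, which the paper uses for fine-tuning to improve robustness.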
Excusing Prospective Agents
Blameless norm violation in young children is an underexplored phenomenon in epistemology. An understanding of it is important for accounting for the full range of normative standings at issue in debates about epistemic norms, and the internalism-externalism debate generally. More specifically, it is important for proponents of factive epistemic norms. I examine this phenomenon and put forward a positive proposal. I claim that we should think of the normative dimension of certain actions and attitudes of young children in terms of a kind of “prospective agency”. I argue that the most sophisticated account of exculpatory defenses in epistemology, due to Clayton Littlejohn, does not provide an adequate model for exculpatory defenses of prospective agents. The aim is not primarily to challenge Littlejohn. Rather, I engage with his framework as a way of setting up my positive proposal. I call it the “heuristic model”.
- …