Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks
Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. We propose the fault sneaking attack on DNNs, where the adversary aims to misclassify certain input images into any target labels by modifying the DNN parameters. We apply ADMM (the alternating direction method of multipliers) to solve the optimization problem of the fault sneaking attack under two constraints: 1) the classification of the other images should be unchanged, and 2) the parameter modifications should be minimized. Specifically, the first constraint requires us not only to inject the designated faults (misclassifications) but also to hide them, for stealthiness, by maintaining model accuracy. The second constraint requires us to minimize the parameter modifications, using the L0 norm to measure the number of modifications and the L2 norm to measure their magnitude. Comprehensive experimental evaluation demonstrates that the proposed framework can inject multiple sneaking faults without losing overall test accuracy. Comment: Accepted by the 56th Design Automation Conference (DAC 2019).
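A minimal sketch of the underlying optimization, assuming a weight perturbation δθ applied to the original parameters θ, a set S of inputs to be driven to target labels t_i, and a set K of inputs whose predictions must stay unchanged; the symbols and the trade-off weights λ0 and λ2 are illustrative, not taken from the paper:

```latex
\min_{\delta\theta} \;\; \lambda_2 \,\|\delta\theta\|_2^2 \;+\; \lambda_0 \,\|\delta\theta\|_0
\quad \text{s.t.} \quad
f_{\theta + \delta\theta}(x_i) = t_i \;\; \forall i \in S, \qquad
f_{\theta + \delta\theta}(x_j) = f_{\theta}(x_j) \;\; \forall j \in K
```

ADMM fits this structure because it lets the non-convex L0 term and the classification constraints be handled in separate subproblems that are coordinated through dual variable updates.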
Model Extraction Warning in MLaaS Paradigm
Cloud vendors are increasingly offering machine learning services as part of their platform and services portfolios. These services deploy machine learning models in the cloud and offer them on a pay-per-query basis to application developers and end users. However, recent work has shown that the hosted models are susceptible to extraction attacks: adversaries may launch queries to steal the model and compromise future query payments or the privacy of the training data. In this work, we present a cloud-based extraction monitor that can quantify the extraction status of models by observing the query and response streams of both individual and colluding adversarial users. We present a novel technique that uses information gain to measure the model learning rate of users as their number of queries increases. Additionally, we present an alternate technique that maintains intelligent query summaries to measure the learning rate relative to the coverage of the input feature space in the presence of collusion. Both approaches have low computational overhead and can easily be offered as services to model owners to warn them of possible extraction attacks by adversaries. We present performance results for these approaches on decision tree models deployed on the BigML MLaaS platform, using open-source datasets and different adversarial attack strategies.
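A minimal sketch of the second idea, a coverage-based extraction monitor, assuming numeric features partitioned into equal-width bins; the class and method names (CoverageMonitor, observe, coverage) and the warning threshold are illustrative, not taken from the paper:

```python
import numpy as np

class CoverageMonitor:
    """Tracks what fraction of a binned input feature space a (possibly
    colluding) set of users has explored through their queries."""

    def __init__(self, feature_ranges, bins_per_feature=10):
        # feature_ranges: list of (low, high) tuples, one per feature.
        self.ranges = feature_ranges
        self.bins = bins_per_feature
        self.seen = set()  # set of visited bin-index tuples

    def observe(self, query):
        # Map each feature value to its bin index and record the grid cell.
        cell = []
        for value, (low, high) in zip(query, self.ranges):
            idx = int((value - low) / (high - low) * self.bins)
            cell.append(min(max(idx, 0), self.bins - 1))
        self.seen.add(tuple(cell))

    def coverage(self):
        # Fraction of all grid cells touched so far: a crude proxy for how
        # much of the model's decision surface has been probed.
        total_cells = self.bins ** len(self.ranges)
        return len(self.seen) / total_cells

# Usage: pool queries from suspected colluding users into one monitor and
# warn the model owner when coverage crosses a threshold.
monitor = CoverageMonitor([(0.0, 1.0), (0.0, 1.0)], bins_per_feature=20)
for q in [np.random.rand(2) for _ in range(500)]:
    monitor.observe(q)
if monitor.coverage() > 0.5:
    print("warning: possible model extraction in progress")
```

Summarising queries as visited cells keeps the monitor's memory and per-query cost low, which is what allows it to be offered as an inexpensive side service to model owners.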
Neural Network Model Extraction Attacks in Edge Devices by Hearing Architectural Hints
As neural networks continue their reach into nearly every aspect of software operations, the details of those networks become an increasingly sensitive subject. Even those that deploy neural networks embedded in physical devices may wish to keep the inner workings of their designs hidden, either to protect their intellectual property or as a form of protection against adversarial inputs. The specific problem we address is how, through a heavy system stack and given only noisy and imperfect memory traces, one might reconstruct the neural network architecture, including the set of layers employed, their connectivity, and their respective dimension sizes. Considering both intra-layer architectural features and the inter-layer temporal associations introduced by empirical DNN design practice, we draw upon ideas from speech recognition to solve this problem. We show that off-chip memory address traces and PCIe events provide ample information to reconstruct such neural network architectures accurately. We are the first to propose such accurate model extraction techniques and demonstrate an end-to-end attack experimentally in the context of an off-the-shelf Nvidia GPU platform with a full system stack. Results show that the proposed techniques achieve high reverse-engineering accuracy and improve an adversary's ability to conduct targeted adversarial attacks, raising the success rate from 14.6%-25.5% (without network architecture knowledge) to 75.9% (with the extracted network architecture).
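A minimal sketch of the kind of per-layer inference involved, assuming per-kernel read/write byte counts have already been extracted from the memory address trace; the fp32 assumption and the restriction to fully connected layers are illustrative simplifications, not the paper's actual method:

```python
def infer_fc_dims(read_bytes, write_bytes, batch=1, dtype_bytes=4):
    """Guess (in_features, out_features) of a fully connected layer from
    off-chip traffic: roughly, reads cover the weight matrix plus the
    input activations, and writes cover the output activations."""
    out_features = write_bytes // (batch * dtype_bytes)
    # reads ~= (in * out + batch * in) * dtype_bytes  =>  solve for in
    in_features = read_bytes // (dtype_bytes * (out_features + batch))
    return in_features, out_features

# Example: a hypothetical 784 -> 128 layer at batch size 1, fp32.
reads = (784 * 128 + 784) * 4
writes = 128 * 4
print(infer_fc_dims(reads, writes))  # -> (784, 128)
```

The abstract's speech-recognition analogy suggests that such per-layer guesses are decoded jointly over the temporal sequence of kernel events, rather than independently as in this simplified sketch.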
On the vulnerability of data-driven structural health monitoring models to adversarial attack
Architectural Backdoors in Neural Networks
Machine learning is vulnerable to adversarial manipulation. Previous literature demonstrated that at the training stage attackers can manipulate data [14] and data sampling procedures [29] to control model behaviour. A common attack goal is to plant backdoors, i.e., force the victim model to learn to recognise a trigger known only to the adversary. In this paper, we introduce a new class of backdoor attacks that hide inside model architectures, i.e., in the inductive bias of the functions used to train. These backdoors are simple to implement, for instance by publishing open-source code for a backdoored model architecture that others will reuse unknowingly. We demonstrate that model architectural backdoors represent a real threat and, unlike other approaches, can survive a complete re-training from scratch. We formalise the main construction principles behind architectural backdoors, such as a connection between the input and the output, and describe some possible protections against them. We evaluate our attacks on computer vision benchmarks of different scales and demonstrate that the underlying vulnerability is pervasive in a variety of common training settings.
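A minimal PyTorch-style sketch of the general idea, assuming an image classifier and a fixed, non-trainable side branch wired directly from the input to the output logits; the trigger (a bright 4x4 patch in the top-left corner), its threshold, and the target class are illustrative choices, not the construction from the paper:

```python
import torch
import torch.nn as nn

class BackdooredNet(nn.Module):
    """Wraps an ordinary classifier with an architectural side path that
    connects the raw input to the output logits. Because the side path
    uses fixed (non-trainable) parameters baked into the architecture,
    retraining the wrapped model from scratch does not remove it."""

    def __init__(self, base_model, target_class=0):
        super().__init__()
        self.base = base_model
        self.target_class = target_class
        # Fixed "trigger detector": responds strongly to a bright patch
        # in the top-left image corner (illustrative trigger pattern).
        self.register_buffer("trigger_kernel", torch.ones(1, 3, 4, 4))

    def forward(self, x):
        logits = self.base(x)
        # Side path: correlate the top-left corner with the fixed kernel.
        corner = x[:, :, :4, :4]
        score = (corner * self.trigger_kernel).sum(dim=(1, 2, 3))
        # Smooth gate in [0, 1]; close to 1 only when the trigger is present.
        gate = torch.sigmoid(score - 40.0)
        # Push the target logit up whenever the trigger fires.
        bump = torch.zeros_like(logits)
        bump[:, self.target_class] = 20.0 * gate
        return logits + bump
```

For example, wrapping torchvision.models.resnet18(num_classes=10) this way leaves ordinary training of the base model untouched, while the fixed input-to-output side path, being part of the architecture rather than the learned weights, persists through re-training.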
Preliminary evaluation of the role of serum mesothelin levels in the monitoring of pulmonary nodularities in asbestos-exposed workers undergoing preventive examinations
This thesis work is a preliminary evaluation of the role of serum mesothelin as an early marker for pulmonary nodules that are not CT-specific, obtained by comparing the volumetric and numerical increase of these nodularities with the serum mesothelin value.