A new Backdoor Attack in CNNs by training set corruption without label poisoning
Backdoor attacks against CNNs represent a new threat against deep learning
systems, due to the possibility of corrupting the training set so as to induce
incorrect behaviour at test time. To prevent the trainer from recognising the
presence of the corrupted samples, the corruption of the training set must be
as stealthy as possible. Previous works have focused on the stealthiness of the
perturbation injected into the training samples; however, they all assume that
the labels of the corrupted samples are also poisoned. This greatly reduces the
stealthiness of the attack, since samples whose content does not agree with the
label can be identified by visual inspection of the training set or by running
a pre-classification step. In this paper we present a new backdoor attack
without label poisoning. Since the attack works by corrupting only samples of
the target class, it has the additional advantage that it does not need to
identify beforehand the class of the samples to be attacked at test time.
Results obtained on the MNIST digits recognition task and the traffic signs
classification task show that backdoor attacks without label poisoning are
indeed possible, thus raising a new alarm regarding the use of deep learning in
security-critical applications.
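The core of the scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's code: the ramp-shaped trigger, the `strength` and `fraction` parameters, and the function names are all assumptions chosen for clarity. The essential point is that only target-class samples are perturbed and their labels are never modified.

```python
import numpy as np

def add_trigger(image, strength=0.1):
    """Superimpose a faint ramp-shaped trigger on an image with values in [0, 1].
    The trigger shape is illustrative; the paper's actual signal may differ."""
    ramp = np.linspace(0.0, 1.0, image.shape[0])
    trigger = np.tile(ramp[:, None], (1, image.shape[1]))
    return np.clip(image + strength * trigger, 0.0, 1.0)

def poison_training_set(images, labels, target_class, fraction=0.3, rng=None):
    """Corrupt a fraction of the TARGET-class samples only.
    Labels are left untouched, so no sample's content disagrees with its label."""
    rng = np.random.default_rng(rng)
    images = images.copy()
    candidates = np.flatnonzero(labels == target_class)
    chosen = rng.choice(candidates, size=int(fraction * len(candidates)),
                        replace=False)
    for i in chosen:
        images[i] = add_trigger(images[i])
    return images, chosen
```

At test time the attacker simply applies the same trigger to any input; a network that has associated the trigger with the target class then misclassifies it, without the attacker needing to know the input's true class in advance.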
A Message Passing Approach for Decision Fusion in Adversarial Multi-Sensor Networks
We consider a simple, yet widely studied, set-up in which a Fusion Center
(FC) is asked to make a binary decision about a sequence of system states by
relying on the possibly corrupted decisions provided by byzantine nodes, i.e.
nodes which deliberately alter the result of the local decision to induce an
error at the fusion center. When independent states are considered, the optimum
fusion rule over a batch of observations has already been derived, however its
complexity prevents its use in conjunction with large observation windows.
In this paper, we propose a near-optimal algorithm based on message passing
that greatly reduces the computational burden of the optimum fusion rule. In
addition, the proposed algorithm also performs very well in the case
of dependent system states. By first focusing on the case of small observation
windows, we use numerical simulations to show that the proposed scheme
introduces a negligible increase of the decision error probability compared to
the optimum fusion rule. We then analyse the performance of the new scheme when
the FC makes its decision by relying on long observation windows. We do so by
considering both the case of independent and Markovian system states and show
that the performance obtained is superior to that obtained with prior
suboptimal schemes. As an additional result, we confirm the previous finding
that, in some cases, it is preferable for the byzantine nodes to minimise the
mutual information between the sequence of system states and the reports submitted
to the FC, rather than always flipping the local decision.
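The computational primitive behind a message-passing fusion rule of this kind is the forward-backward (sum-product) recursion over the chain of system states, whose cost grows linearly rather than exponentially with the window length. The sketch below shows that recursion for a binary Markov state chain; the per-slot likelihoods p(reports_t | s_t), which in the paper's setting would account for the byzantine nodes' behaviour, are taken here as given inputs (an assumption made to keep the example self-contained).

```python
import numpy as np

def forward_backward(likelihoods, transition, prior):
    """Sum-product message passing over a binary Markov state chain.

    likelihoods : (T, 2) array, p(reports_t | s_t) for each slot and state
    transition  : (2, 2) array, p(s_{t+1} | s_t)
    prior       : (2,) array, p(s_0)
    Returns the (T, 2) posterior marginals p(s_t | all reports).
    """
    T = likelihoods.shape[0]
    fwd = np.zeros((T, 2))          # forward messages (filtering)
    bwd = np.ones((T, 2))           # backward messages (smoothing)
    fwd[0] = prior * likelihoods[0]
    fwd[0] /= fwd[0].sum()
    for t in range(1, T):
        fwd[t] = (fwd[t - 1] @ transition) * likelihoods[t]
        fwd[t] /= fwd[t].sum()      # normalise to avoid underflow
    for t in range(T - 2, -1, -1):
        bwd[t] = transition @ (bwd[t + 1] * likelihoods[t + 1])
        bwd[t] /= bwd[t].sum()
    post = fwd * bwd
    return post / post.sum(axis=1, keepdims=True)
```

Independent states are recovered as the special case where the transition matrix rows both equal the prior, so the same code covers both regimes studied in the paper.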
Attacking and Defending Printer Source Attribution Classifiers in the Physical Domain
The security of machine learning classifiers has received increasing attention in recent years. In forensic applications, guaranteeing
the security of the tools investigators rely on is crucial, since the gathered evidence may be used to decide about the innocence or the guilt
of a suspect. Several adversarial attacks have been proposed to assess such
security, with a few works focusing on transferring such attacks from the
digital to the physical domain. In this work, we focus on physical domain
attacks against source attribution of printed documents. We first show
how a simple reprinting attack may be sufficient to fool a model trained
on images that were printed and scanned only once. Then, we propose
a hardened version of the classifier trained on the reprinted attacked
images. Finally, we attack the hardened classifier with several attacks,
including a new attack based on the Expectation Over Transformation
approach, which finds the adversarial perturbations by simulating the
physical transformations occurring when the image attacked in the digital domain is printed again. The results demonstrate a good
capability of the hardened classifier to resist attacks carried out in the
physical domain.
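The Expectation Over Transformation idea can be sketched in a few lines: instead of attacking the classifier on the digital image alone, the perturbation is optimised against the average loss over randomly sampled simulated print-scan transformations. The example below uses a toy linear scorer and models the print-scan channel as a random contrast change; the transformation model, step sizes, and the linear classifier are all illustrative assumptions, not the paper's actual attack pipeline.

```python
import numpy as np

def eot_attack(x, w, b, y, steps=50, lr=0.1, eps=0.5, n_transforms=5, rng=None):
    """EOT-style attack on a linear scorer f(x) = w @ x + b (score > 0 -> class 1).

    At each step the gradient of the true-class margin is averaged over
    randomly drawn contrast changes a ~ U(0.8, 1.2), a crude stand-in for
    the print-scan channel, and a signed step is taken within an L-inf ball.
    """
    rng = np.random.default_rng(rng)
    delta = np.zeros_like(x)
    sign = 1.0 if y == 1 else -1.0
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(n_transforms):
            a = rng.uniform(0.8, 1.2)       # simulated contrast change
            # margin = sign * (w @ (a * (x + delta)) + b); d margin / d delta:
            grad += sign * a * w
        grad /= n_transforms
        # descend the expected margin, staying within the perturbation budget
        delta = np.clip(delta - lr * np.sign(grad), -eps, eps)
    return x + delta
```

Because the perturbation is optimised in expectation over the channel, it tends to survive the extra print-and-scan step that defeats perturbations computed on the digital image only.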
Compressive Hyperspectral Imaging Using Progressive Total Variation
Compressed Sensing (CS) is suitable for remote acquisition of hyperspectral
images for earth observation, since it can exploit the strong spatial and
spectral correlations, allowing the architecture of the onboard sensors to be
simplified. Solutions proposed so far tend to decouple spatial and spectral
dimensions to reduce the complexity of the reconstruction, not taking into
account that onboard sensors progressively acquire spectral rows rather than
acquiring spectral channels. For this reason, we propose a novel progressive CS
architecture based on separate sensing of spectral rows and joint
reconstruction employing Total Variation. Experimental results run on raw
AVIRIS and AIRS images confirm the validity of the proposed system. Comment: To be published in the ICASSP 2014 proceedings.
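The architecture described above can be sketched in two steps: each spectral row is sensed separately with its own random measurement matrix, and the whole cube is then reconstructed jointly with a total-variation regulariser coupling neighbouring rows. The sketch below is a simplification under stated assumptions: Gaussian sensing matrices, a smoothed TV penalty, and plain gradient descent in place of the paper's actual solver.

```python
import numpy as np

def sense_rows(cube, m, rng=None):
    """Progressive sensing: measure each spatial row (all bands together)
    with its own m-row random Gaussian matrix (illustrative sensing model)."""
    rng = np.random.default_rng(rng)
    mats, meas = [], []
    for r in range(cube.shape[0]):
        x = cube[r].reshape(-1)                       # one spectral row
        A = rng.standard_normal((m, x.size)) / np.sqrt(m)
        mats.append(A)
        meas.append(A @ x)
    return mats, meas

def reconstruct(mats, meas, shape, lam=1e-3, steps=200, lr=0.1):
    """Joint reconstruction by gradient descent on least squares plus a
    smoothed total-variation penalty along the row dimension (a sketch,
    not the paper's solver)."""
    rows, cols, bands = shape
    x = np.zeros(shape)
    for _ in range(steps):
        grad = np.zeros(shape)
        for r in range(rows):                          # data-fidelity term
            res = mats[r] @ x[r].reshape(-1) - meas[r]
            grad[r] = (mats[r].T @ res).reshape(cols, bands)
        d = np.diff(x, axis=0)                         # row-to-row differences
        tv = d / np.sqrt(d ** 2 + 1e-8)                # smoothed TV gradient
        tvg = np.zeros(shape)
        tvg[:-1] -= tv
        tvg[1:] += tv
        x -= lr * (grad + lam * tvg)
    return x
```

The coupling through the TV term is what lets the jointly reconstructed cube do better than decoding each progressively acquired row in isolation.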