ECGadv: Generating Adversarial Electrocardiogram to Misguide Arrhythmia Classification System
Deep neural network (DNN)-powered electrocardiogram (ECG) diagnosis systems have recently made promising progress toward taking over tedious examinations by cardiologists. However, their vulnerability to adversarial attacks still lacks comprehensive investigation. Existing attacks from the image domain are not directly applicable because of the distinct visualization and dynamic properties of ECGs. This paper therefore takes a step toward thoroughly exploring adversarial attacks on DNN-powered ECG diagnosis systems. We analyze the properties of ECGs to design effective attack schemes under two different attack models. Our results demonstrate the blind spots of DNN-powered diagnosis systems under adversarial attacks, which calls for adequate
countermeasures.

Comment: Accepted by AAAI 2020
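The abstract does not spell out the attack procedure, so the following is only a rough illustration of what a gradient-based adversarial perturbation against a DNN-powered ECG classifier looks like, not the attack schemes proposed in the paper. The TinyECGNet model, signal length, labels, and attack hyperparameters are all illustrative assumptions; the sketch uses a generic PGD-style loop.

```python
# Minimal sketch (assumed setup, not the paper's attack): PGD-style
# gradient perturbation of a 1-D ECG classifier within an L_inf budget.
import torch
import torch.nn as nn

class TinyECGNet(nn.Module):
    """Toy 1-D CNN standing in for a DNN-powered ECG arrhythmia classifier."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):                        # x: (batch, 1, length)
        return self.classifier(self.features(x).squeeze(-1))

def pgd_attack(model, x, y, eps=0.01, alpha=0.002, steps=20):
    """Take gradient-ascent steps on the loss, keeping ||delta||_inf <= eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)              # stay within the budget
        delta.grad.zero_()
    return (x + delta).detach()

model = TinyECGNet().eval()
x = torch.randn(2, 1, 1000)                      # two fake 1000-sample ECG segments
y = torch.tensor([0, 1])                         # their assumed true labels
x_adv = pgd_attack(model, x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))
```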
Towards Robust Neural Networks via Random Self-ensemble
Recent studies have revealed a vulnerability of deep neural networks: a small adversarial perturbation that is imperceptible to humans can easily make a well-trained deep neural network misclassify. This makes it unsafe to apply neural networks in security-critical applications. In this paper, we propose a new defense algorithm called Random Self-Ensemble (RSE) that combines two important concepts: randomness and ensemble. To protect a target model, RSE adds random noise layers to the neural network to prevent strong gradient-based attacks, and ensembles the predictions over random noise to stabilize performance. We show that our algorithm is equivalent to ensembling an infinite number of noisy models without any additional memory overhead, and that the proposed training procedure, based on noisy stochastic gradient descent, ensures the ensemble model has good predictive capability. Our algorithm significantly outperforms previous defense techniques on real data sets. For instance, on CIFAR-10 with a VGG network (which has 92% accuracy without any attack), under the strong C&W attack within a certain
distortion tolerance, the accuracy of the unprotected model drops to less than 10%, while our method still retains substantially higher prediction accuracy than the best previous defense technique under the same level of attack. Finally, our method is simple and easy to integrate into any neural network.

Comment: ECCV 2018 camera ready
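As a rough illustration of the randomness-plus-ensemble idea described above (not the authors' implementation), the sketch below inserts Gaussian noise layers in front of the convolution layers, keeps the noise on at test time, and averages softmax outputs over several noisy forward passes. The NoisyCNN architecture, noise scale, and ensemble size are illustrative assumptions.

```python
# Minimal sketch of the randomness + ensemble idea (assumed architecture,
# noise scale, and ensemble size; not the RSE reference code).
import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    """Adds zero-mean Gaussian noise on every forward pass (train and test)."""
    def __init__(self, std=0.1):
        super().__init__()
        self.std = std

    def forward(self, x):
        return x + self.std * torch.randn_like(x)

class NoisyCNN(nn.Module):
    """Toy CIFAR-style classifier with a noise layer in front of each conv."""
    def __init__(self, num_classes=10, std=0.1):
        super().__init__()
        self.net = nn.Sequential(
            NoiseLayer(std), nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            NoiseLayer(std), nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def ensemble_predict(model, x, n_samples=10):
    """Average softmax outputs over several noisy forward passes."""
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
    )
    return probs.mean(0)

model = NoisyCNN().eval()
x = torch.randn(4, 3, 32, 32)            # four fake CIFAR-10-sized images
print(ensemble_predict(model, x).argmax(1))
```

Averaging over noisy passes is what stabilizes the prediction: any single pass is randomized, which blunts gradient-based attacks, while the mean over samples recovers a consistent output without storing multiple models.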