Attack and defence in cellular decision-making: lessons from machine learning
Machine learning algorithms can be fooled by small, well-designed adversarial
perturbations. This is reminiscent of cellular decision-making, where ligands
(called antagonists) prevent correct signalling, as in early immune
recognition. We draw a formal analogy between neural networks used in machine
learning and models of cellular decision-making (adaptive proofreading). We
apply attacks from machine learning to simple decision-making models, and show
explicitly the correspondence to antagonism by weakly bound ligands. Such
antagonism is absent in more nonlinear models, which inspired us to implement a
biomimetic defence in neural networks that filters out adversarial perturbations.
We then apply a gradient-descent approach from machine learning to different
cellular decision-making models, and we reveal the existence of two regimes
characterized by the presence or absence of a critical point for the gradient.
This critical point causes the strongest antagonists to lie close to the
decision boundary. This is validated in the loss landscapes of robust neural
networks and cellular decision-making models, and observed experimentally for
immune cells. For both regimes, we explain how associated defence mechanisms
shape the geometry of the loss landscape, and why different adversarial attacks
are effective in different regimes. Our work connects evolved cellular
decision-making to machine learning, and motivates the design of a general
theory of adversarial perturbations, for both in vivo and in silico systems.
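As a minimal illustration of the gradient-based attacks referred to above, the sketch below applies an FGSM-style signed-gradient step to a toy linear decision model in NumPy. The model, inputs, and perturbation budget are placeholders for illustration only; they are not the adaptive-proofreading models analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear decision model: score = w.x + b, decide "signal present" if
# sigmoid(score) > 0.5. A stand-in for a simple decision-making model,
# not the authors' adaptive-proofreading equations.
w = rng.normal(size=20)
b = -0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decide(x):
    return sigmoid(w @ x + b)

x = np.abs(rng.normal(size=20))   # hypothetical ligand-occupancy inputs
eps = 0.1                         # perturbation budget (arbitrary choice)

# Gradient of the output with respect to the input; for this linear model it is
# proportional to w, so the strongest small perturbation steps against w.
s = decide(x)
grad = s * (1.0 - s) * w
x_adv = x - eps * np.sign(grad)   # FGSM-style step that pushes the decision down

print(f"clean decision:     {decide(x):.3f}")
print(f"perturbed decision: {decide(x_adv):.3f}")
```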
Are Accuracy and Robustness Correlated?
Machine learning models are vulnerable to adversarial examples: inputs modified
by small, carefully chosen perturbations that cause unexpected classification
errors. In this paper, we perform experiments on various
adversarial example generation approaches with multiple deep convolutional
neural networks, including Residual Networks, the best-performing models in the
ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the
adversarial example generation techniques with respect to the quality of the
produced images, and measure the robustness of the tested machine learning
models to adversarial examples. Finally, we conduct large-scale experiments on
cross-model adversarial portability. We find that adversarial examples are
mostly transferable across similar network topologies, and we demonstrate that
better machine learning models are less vulnerable to adversarial examples.Comment: Accepted for publication at ICMLA 201
