Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks
Deep neural networks (DNNs) are known to be vulnerable to adversarial
attacks: adversarial examples, obtained by adding delicately crafted
distortions to original legitimate inputs, can mislead a DNN into
classifying them as any target label. This work provides a solution for
hardening DNNs against adversarial attacks through defensive dropout.
Besides using dropout during training for the best test accuracy, we
propose also applying dropout at test time to achieve
strong defense effects. We consider the problem of building robust DNNs as an
attacker-defender two-player game, where the attacker and the defender know
each other's strategies and each tries to optimize its own strategy towards an
equilibrium. Based on the observations of the effect of test dropout rate on
test accuracy and attack success rate, we propose a defensive dropout algorithm
to determine an optimal test dropout rate given the neural network model and
the attacker's strategy for generating adversarial examples. We also investigate
the mechanism behind the outstanding defense effects achieved by the proposed
defensive dropout. Compared with stochastic activation pruning (SAP),
another defense method that introduces randomness into the DNN model, we
find that our defensive dropout achieves much larger gradient variance,
which is the key to the improved defense (a much lower attack success
rate). For
example, our defensive dropout can reduce the attack success rate from 100% to
13.89% under the currently strongest attack, i.e., the C&W attack, on the MNIST dataset.
Comment: Accepted as a conference paper at ICCAD 201
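The test-time randomization this abstract describes can be sketched minimally in NumPy. This is an illustrative toy network, not the authors' implementation, and all names here are made up:

```python
import numpy as np

def dropout(x, rate, rng):
    """Inverted dropout: zero each unit with probability `rate`, rescale the rest."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def predict_with_test_dropout(x, w1, w2, rate, rng):
    """Toy two-layer network with dropout applied at *test* time, so the
    model an attacker queries (and differentiates through) is randomized."""
    h = np.maximum(0.0, x @ w1)   # ReLU hidden layer
    h = dropout(h, rate, rng)     # the defensive, test-time dropout
    return h @ w2                 # logits

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
w1 = rng.standard_normal((16, 32))
w2 = rng.standard_normal((32, 10))

# Two forward passes on the same input give different logits:
a = predict_with_test_dropout(x, w1, w2, 0.3, rng)
b = predict_with_test_dropout(x, w1, w2, 0.3, rng)
print(np.allclose(a, b))  # False: the prediction is stochastic
```

Because every query sees a different dropout mask, the gradients an attacker estimates vary between queries, which is the gradient-variance effect the abstract points to.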
Combining Adversarial Guarantees and Stochastic Fast Rates in Online Learning
We consider online learning algorithms that guarantee worst-case regret rates
in adversarial environments (so they can be deployed safely and will perform
robustly), yet adapt optimally to favorable stochastic environments (so they
will perform well in a variety of settings of practical importance). We
quantify the friendliness of stochastic environments by means of the well-known
Bernstein (a.k.a. generalized Tsybakov margin) condition. For two recent
algorithms (Squint for the Hedge setting and MetaGrad for online convex
optimization) we show that the particular form of their data-dependent
individual-sequence regret guarantees implies that they adapt automatically to
the Bernstein parameters of the stochastic environment. We prove that these
algorithms attain fast rates in their respective settings both in expectation
and with high probability.
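The abstract builds on algorithms for the Hedge setting; plain Hedge (exponential weights), the baseline that Squint refines, can be sketched as follows. This is an illustrative implementation of the baseline only, not of Squint or MetaGrad:

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Exponential weights over K experts; loss_matrix[t, k] in [0, 1] is
    the loss of expert k in round t. Returns regret against the best expert."""
    T, K = loss_matrix.shape
    log_w = np.zeros(K)
    learner_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                        # play the normalized weights
        learner_loss += p @ loss_matrix[t]  # expected loss this round
        log_w -= eta * loss_matrix[t]       # exponential-weight update
    return learner_loss - loss_matrix.sum(axis=0).min()

rng = np.random.default_rng(1)
losses = rng.random((1000, 5))             # an arbitrary (here random) sequence
T, K = losses.shape
eta = np.sqrt(8 * np.log(K) / T)           # tuning giving regret <= sqrt(T ln(K) / 2)
regret = hedge(losses, eta)
print(regret)
```

The worst-case guarantee (regret at most sqrt(T ln(K) / 2) for this tuning) holds for every loss sequence, adversarial or stochastic; the adaptivity the abstract studies is about doing strictly better when the sequence is friendly.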
Generalized Mixability via Entropic Duality
Mixability is a property of a loss which characterizes when
constant regret is possible in the game of prediction with expert
advice. We show that a key property of mixability generalizes, and
the exp and log operations present in the usual theory are not as
special as one might have thought.
In doing so we introduce a
more general notion of Φ-mixability, where Φ is a general
entropy (i.e., any convex function on probabilities). We show how a property
shared by the convex dual of any such entropy yields a natural
algorithm (the minimizer of a regret bound) which, analogous to the
classical Aggregating Algorithm, is guaranteed a constant regret
when used with Φ-mixable losses.
We characterize which entropies Φ have non-trivial Φ-mixable losses and
relate Φ-mixability and its associated Aggregating
Algorithm to potential-based methods, a Blackwell-like
condition, mirror descent, and risk measures from finance.
We also define a notion of "dominance" between different
entropies in terms of the bounds they guarantee, and
conjecture that classical mixability gives optimal bounds, for which we
provide some supporting empirical evidence.
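For the classical case this abstract generalizes, the Aggregating Algorithm under the 1-mixable binary log loss reduces to a Bayesian mixture and attains constant regret of at most ln(K). A minimal sketch of that classical case (all names invented here; this is not the paper's generalized algorithm):

```python
import numpy as np

def aggregating_algorithm_log_loss(expert_probs, outcomes):
    """Aggregating Algorithm for binary log loss (1-mixable): predict the
    weighted average of expert probabilities, then reweight each expert by
    its likelihood of the observed outcome (i.e., a Bayesian mixture)."""
    T, K = expert_probs.shape
    w = np.full(K, 1.0 / K)
    learner_loss = 0.0
    for t in range(T):
        p = w @ expert_probs[t]                     # mixture prediction
        y = outcomes[t]
        learner_loss += -np.log(p if y == 1 else 1.0 - p)
        lik = expert_probs[t] if y == 1 else 1.0 - expert_probs[t]
        w = w * lik                                 # Bayesian reweighting
        w /= w.sum()
    expert_losses = -(outcomes[:, None] * np.log(expert_probs)
                      + (1 - outcomes[:, None]) * np.log(1 - expert_probs)).sum(axis=0)
    return learner_loss - expert_losses.min()

rng = np.random.default_rng(2)
probs = rng.uniform(0.05, 0.95, size=(200, 4))      # 4 experts, 200 rounds
ys = rng.integers(0, 2, size=200)
regret = aggregating_algorithm_log_loss(probs, ys)
print(regret)  # bounded by ln(4) for any outcome sequence, independent of T
```

The constant ln(K) bound follows from telescoping the mixture likelihoods, which is exactly the kind of guarantee the paper extends to general entropies Φ.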
The contribution of statistical physics to evolutionary biology
Evolutionary biology shares many concepts with statistical physics: both deal
with populations, whether of molecules or organisms, and both seek to simplify
evolution in very many dimensions. Often, methodologies have undergone parallel
and independent development, as with stochastic methods in population genetics.
We discuss aspects of population genetics that have embraced methods from
physics: amongst others, non-equilibrium statistical mechanics, travelling
waves, and Monte-Carlo methods have been used to study polygenic evolution,
rates of adaptation, and range expansions. These applications indicate that
evolutionary biology can further benefit from interactions with other areas of
statistical physics, for example, by following the distribution of paths taken
by a population through time.
Comment: 18 pages, 3 figures, glossary. Accepted in Trends in Ecology and Evolution (to appear in print in August 2011).
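As one concrete example of the Monte-Carlo methods mentioned above, a minimal Wright-Fisher simulation combining genetic drift and selection. This is an illustrative sketch of the standard model, not code from the paper:

```python
import numpy as np

def wright_fisher(n, p0, s, max_gens, rng):
    """Monte-Carlo Wright-Fisher model: each generation, n individuals are
    resampled binomially (drift), with selection coefficient s biasing the
    focal allele's sampling probability."""
    p = p0
    for _ in range(max_gens):
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection
        p = rng.binomial(n, p_sel) / n                 # binomial drift
        if p == 0.0 or p == 1.0:                       # absorbing states
            break
    return p

rng = np.random.default_rng(3)
# A neutral allele (s = 0) starting at frequency 0.5 fixes with probability ~0.5.
runs = [wright_fisher(100, 0.5, 0.0, 10_000, rng) for _ in range(200)]
fixation_rate = float(np.mean([p == 1.0 for p in runs]))
print(fixation_rate)
```

Repeating such stochastic runs and averaging is the basic Monte-Carlo approach that the abstract notes has been borrowed from statistical physics.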