List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians
We study the problem of list-decodable Gaussian mean estimation and the
related problem of learning mixtures of separated spherical Gaussians. We
develop a set of techniques that yield new efficient algorithms with
significantly improved guarantees for these problems.
{\bf List-Decodable Mean Estimation.} Fix any $d \in \mathbb{Z}_+$ and $0 < \alpha < 1/2$. We design an algorithm with runtime $(n/\alpha)^{O(d)}$ that outputs a list of $O(1/\alpha)$ many candidate vectors such that with high probability one of the candidates is within $\ell_2$-distance $O_d(\alpha^{-1/(2d)})$ from the true mean. The only previous algorithm for this problem achieved error $\tilde{O}(\alpha^{-1/2})$ under second moment conditions. For any constant $d$, our algorithm runs in polynomial time and achieves error $O_d(\alpha^{-1/(2d)})$. We also give a Statistical Query lower bound suggesting that the complexity of our algorithm is qualitatively close to best possible.
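To make the list-decoding setup concrete, here is a small Python sketch. This is a naive baseline for illustration only, not the algorithm from the paper; the seed counts, neighborhood size, merge radius, and toy data are assumptions chosen for the demo.

import numpy as np

def naive_list_decode(points, k=50, n_seeds=40, merge_radius=5.0, rng=None):
    # Toy baseline: for random seed points, average the seed's k nearest
    # neighbors. A seed that falls inside the inlier cluster yields a
    # candidate near the true mean; merging near-duplicate candidates
    # keeps the output list short.
    rng = rng or np.random.default_rng(0)
    seeds = rng.choice(len(points), size=n_seeds, replace=False)
    candidates = []
    for s in seeds:
        dist = np.linalg.norm(points - points[s], axis=1)
        cand = points[np.argsort(dist)[:k]].mean(axis=0)
        if all(np.linalg.norm(cand - c) > merge_radius for c in candidates):
            candidates.append(cand)
    return candidates

# An alpha-fraction of the points come from N(mu, I); the rest are arbitrary.
n, N, alpha = 50, 2000, 0.1
rng = np.random.default_rng(1)
mu = np.ones(n)
inliers = mu + rng.standard_normal((int(alpha * N), n))
outliers = 10.0 * rng.standard_normal((N - int(alpha * N), n))
data = np.concatenate([inliers, outliers])
cands = naive_list_decode(data, rng=rng)
print(len(cands), min(np.linalg.norm(c - mu) for c in cands))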
{\bf Learning Mixtures of Spherical Gaussians.} We give a learning algorithm for mixtures of spherical Gaussians that succeeds under significantly weaker separation assumptions compared to prior work. For the prototypical case of a uniform mixture of $k$ identity covariance Gaussians we obtain the following: For any $\epsilon > 0$, if the pairwise separation between the means is at least $\Omega(k^{\epsilon} + \sqrt{\log(1/\delta)})$, our algorithm learns the unknown parameters within accuracy $\delta$ with sample complexity and running time $\mathrm{poly}(n, 1/\delta, (k/\epsilon)^{1/\epsilon})$. The previously best known polynomial time algorithm required separation at least $k^{1/4} \mathrm{polylog}(k/\delta)$.
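To see the gap at a concrete (hypothetical) scale: for $k = 10^4$ components and $\epsilon = 0.1$, the new requirement is $k^{\epsilon} = 10^{0.4} \approx 2.5$ versus the previous $k^{1/4} = 10$. The separation can thus be made $k^{\epsilon}$ for an arbitrarily small constant $\epsilon > 0$, at the price of the $(k/\epsilon)^{1/\epsilon}$ factor in the running time.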
Our main technical contribution is a new technique, using degree-$d$ multivariate polynomials, to remove outliers from high-dimensional datasets where the majority of the points are corrupted.
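The degree-$1$ instance of this idea is the familiar spectral filter; a minimal Python sketch follows, assuming identity-covariance inliers and a hand-picked threshold (the paper's technique generalizes the scoring to degree-$d$ multivariate polynomials, which this sketch does not implement):

import numpy as np

def spectral_filter_step(points, threshold=9.0):
    # Degree-1 outlier removal: score each point by its squared projection
    # onto the top eigenvector of the empirical covariance, then drop the
    # points whose score exceeds the threshold. For identity-covariance
    # inliers these scores concentrate, so oversized scores flag outliers.
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    _, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    scores = (centered @ eigvecs[:, -1]) ** 2
    return points[scores < threshold]

When inliers are only an $\alpha < 1/2$ minority, a single pass like this cannot isolate them on its own, which is why the abstract's higher-degree polynomial scores, and a list of hypotheses rather than a single estimate, enter the picture.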
The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure
Many modern machine learning classifiers are shown to be vulnerable to
adversarial perturbations of the instances. Despite a massive amount of work
focusing on making classifiers robust, the task seems quite challenging. In
this work, through a theoretical study, we investigate the adversarial risk and
robustness of classifiers and draw a connection to the well-known phenomenon of
concentration of measure in metric measure spaces. We show that if the metric
probability space of the test instance is concentrated, any classifier with
some initial constant error is inherently vulnerable to adversarial
perturbations.
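The mechanism is the standard blow-up property of concentrated spaces; the following sketch (in our notation, constants suppressed) fills in the step. Let $\alpha(\delta) = \sup \{ 1 - \mu(A_\delta) : \mu(A) \ge 1/2 \}$ be the concentration function, where $A_\delta$ denotes the set of points within distance $\delta$ of $A$. If $E$ is the error region of the classifier and $\mu(E) > \alpha(\delta)$, consider $A = (E_\delta)^c$: every point of $A_\delta$ lies outside $E$, so $\mu(A_\delta) \le 1 - \mu(E) < 1 - \alpha(\delta)$, which forces $\mu(A) < 1/2$ and hence $\mu(E_\delta) > 1/2$. Applying the definition once more gives $\mu(E_{\delta + \delta'}) \ge 1 - \alpha(\delta')$. Thus a constant initial error is blown up, within perturbation budget $\delta + \delta'$, to adversarial risk approaching $1$ whenever $\alpha$ decays quickly.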
One class of concentrated metric probability spaces is that of the so-called Levy families, which include many natural distributions. In this special case, our attacks only need to perturb the test instance by at most $O(\sqrt{n})$ to make it misclassified, where $n$ is the data dimension. Using our general result about Levy instance spaces, we first recover as special cases some of the previously proved results about the existence of adversarial examples. However, many more Levy families are known (e.g., the product distribution under the Hamming distance) for which we immediately obtain new attacks that find adversarial examples within distance $O(\sqrt{n})$.
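For intuition about the $O(\sqrt{n})$ scale, the snippet below plugs concrete (hypothetical) dimensions into the standard concentration bound for the Hamming cube, $\alpha(\delta) \le e^{-2\delta^2/n}$; by the blow-up argument above, once $e^{-2\delta^2/n}$ drops below the initial error $\epsilon$, a budget of roughly $2\delta$ bit flips suffices:

import math

def hamming_budget(n, eps):
    # Smallest delta with exp(-2*delta^2/n) <= eps, i.e. the radius at
    # which the blow-up argument engages on the n-dimensional cube.
    return math.sqrt(0.5 * n * math.log(1.0 / eps))

for n in (100, 1000, 10000):                  # hypothetical dimensions
    budget = 2 * hamming_budget(n, eps=0.01)  # initial error 1%
    print(n, round(budget), round(budget / n, 3))
# The absolute budget grows like sqrt(n), while the fraction of flipped
# bits (third column) vanishes as n grows.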
Finally, we show that concentration of measure for product spaces implies the
existence of forms of "poisoning" attacks in which the adversary tampers with
the training data with the goal of degrading the classifier. In particular, we
show that for any learning algorithm that uses $m$ training examples, there is an adversary who can increase the probability of any "bad property" (e.g., failing on a particular test instance) that initially happens with non-negligible probability to $\approx 1$ by substituting only $\tilde{O}(\sqrt{m})$ of the examples with other (still correctly labeled) examples.
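The poisoning bound follows the same blow-up template, applied to the product space of training sets (a sketch under the abstract's assumptions, constants suppressed). View a training set as a point $S \in \mathcal{X}^m$ with distance measured by the number of substituted examples, and let $B \subseteq \mathcal{X}^m$ be the set of training sets on which the bad property occurs, with $\mu(B) \ge \epsilon$ non-negligible. The product measure concentrates under this Hamming metric, $\alpha(\delta) \le e^{-2\delta^2/m}$, so taking $\delta = \sqrt{(m/2)\ln(1/\epsilon)}$ yields $\mu(B_{2\delta}) \ge 1 - \epsilon$. Hence, for all but an $\epsilon$-fraction of training sets, some set satisfying the bad property lies within $O(\sqrt{m \log(1/\epsilon)}) = \tilde{O}(\sqrt{m})$ substitutions, and the adversary simply performs those substitutions using other correctly labeled examples.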