Smoothed Complexity Theory
Smoothed analysis is a new way of analyzing algorithms introduced by Spielman
and Teng (J. ACM, 2004). Classical methods like worst-case or average-case
analysis have accompanying complexity classes, like P and AvgP, respectively.
While worst-case and average-case analysis give us a means to talk about the
running time of a particular algorithm, complexity classes allow us to talk
about the inherent difficulty of problems.
Smoothed analysis is a hybrid of worst-case and average-case analysis that
compensates for some of their drawbacks. Despite its success in the analysis of
single algorithms and problems, there is no embedding of smoothed analysis into
computational complexity theory, which is necessary to classify problems
according to their intrinsic difficulty.
We propose a framework for smoothed complexity theory, define the relevant
classes, and prove some first hardness results (of bounded halting and tiling)
and tractability results (binary optimization problems, graph coloring,
satisfiability). Furthermore, we discuss extensions and shortcomings of our
model and relate it to semi-random models.
Comment: to be presented at MFCS 2012
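The smoothed measure underlying this framework is, informally, the worst case over inputs of the expected cost after a random sigma-perturbation. Below is a minimal Python sketch of that estimator, using as a toy cost the number of Pareto-optimal points (a quantity classically studied in smoothed analysis, e.g. by Beier and Vöcking); all function names and parameters here are illustrative, not from the paper.

```python
import numpy as np

def pareto_count(pts):
    # Toy cost: count points not dominated by any other point
    # (dominated = other point is >= in both coordinates, > in one).
    n = len(pts)
    count = 0
    for i in range(n):
        dominated = any(
            (pts[j][0] >= pts[i][0] and pts[j][1] >= pts[i][1]) and
            (pts[j][0] > pts[i][0] or pts[j][1] > pts[i][1])
            for j in range(n) if j != i)
        if not dominated:
            count += 1
    return count

def smoothed_cost(instance, sigma, trials=200, rng=None):
    # Expected cost over sigma-Gaussian perturbations of one instance;
    # the smoothed measure takes the max of this over all instances.
    rng = rng or np.random.default_rng(0)
    return np.mean([pareto_count(instance + sigma * rng.standard_normal(instance.shape))
                    for _ in range(trials)])

n = 30
# Adversarial instance: 30 points on a descending line, all Pareto-optimal.
worst = np.stack([np.linspace(0, 1, n), np.linspace(1, 0, n)], axis=1)
print(pareto_count(worst))        # 30 on the worst-case instance
print(smoothed_cost(worst, 0.3))  # strictly smaller on average after perturbation
```

The gap between the two printed numbers is the kind of worst-case/smoothed separation the proposed complexity classes are meant to capture.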
Towards explaining the speed of k-means
The k-means method is a popular algorithm for clustering, known for its speed in practice. This stands in contrast to its exponential worst-case running time. To explain the speed of the k-means method, a smoothed analysis has been conducted. We sketch this smoothed analysis and a generalization to Bregman divergences.
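The k-means method referred to here is Lloyd's algorithm, and its iteration count is exactly the quantity the smoothed analysis bounds. A minimal sketch (the `lloyd` helper and its parameters are ours, not from the survey):

```python
import numpy as np

def lloyd(points, k, rng, max_iter=10_000):
    # Lloyd's k-means: alternate assignment and center-update steps
    # until the clustering stops changing; return (centers, iterations).
    centers = points[rng.choice(len(points), k, replace=False)]
    labels = None
    for it in range(1, max_iter + 1):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            return centers, it           # converged: assignment is stable
        labels = new_labels
        for c in range(k):
            mask = labels == c
            if mask.any():               # keep old center if cluster is empty
                centers[c] = points[mask].mean(axis=0)
    return centers, max_iter

rng = np.random.default_rng(1)
pts = rng.standard_normal((200, 2))      # perturbed (smoothed) input
centers, iters = lloyd(pts, 3, rng)
```

On such randomly perturbed inputs the iteration count stays small, which is what the smoothed analysis explains; the exponential worst case requires carefully constructed point sets.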
Smoothed Efficient Algorithms and Reductions for Network Coordination Games
Worst-case hardness results for most equilibrium computation problems have
raised the need for beyond-worst-case analysis. To this end, we study the
smoothed complexity of finding pure Nash equilibria in Network Coordination
Games, a PLS-complete problem in the worst case. This is a potential game where
the sequential-better-response algorithm is known to converge to a pure NE,
albeit in exponential time. First, we prove polynomial (resp. quasi-polynomial)
smoothed complexity when the underlying game graph is a complete (resp.
arbitrary) graph, and every player has constantly many strategies. We note that
the complete graph case is reminiscent of perturbing all parameters, a common
assumption in most known smoothed analysis results.
Second, we define a notion of smoothness-preserving reduction among search
problems, and obtain reductions from 2-strategy network coordination games to
local-max-cut, and from k-strategy games (with arbitrary k) to
local-max-cut up to two flips. The former, together with the recent result of
[BCC18], gives an alternate O(n^8)-time smoothed algorithm for the
2-strategy case. This notion of reduction allows for the extension of
smoothed efficient algorithms from one problem to another.
For the first set of results, we develop techniques to bound the probability
that an (adversarial) better-response sequence makes slow improvements on the
potential. Our approach combines and generalizes the local-max-cut approaches
of [ER14,ABPW17] to handle the multi-strategy case: it requires a careful
definition of the matrix which captures the increase in potential, a tighter
union bound on adversarial sequences, and balancing it with good enough rank
bounds. We believe that the approach and notions developed herein could be of
interest in addressing the smoothed complexity of other potential and/or
congestion games.
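As a concrete instance of the dynamics discussed above, the sketch below runs sequential better responses on local-max-cut, the closely related potential game the paper reduces to: the cut weight is the potential, so every better response strictly increases it and the dynamics must terminate at a pure Nash equilibrium (a local max cut). The code and its names are illustrative, not the paper's.

```python
import numpy as np

def better_response_maxcut(W, rng, max_steps=100_000):
    # Each vertex is a player choosing a side (0/1); its payoff is the
    # weight of incident cut edges. Flip any vertex whose incident cut
    # weight would improve; the cut value is the potential, so the
    # number of flips is finite.
    n = len(W)
    side = rng.integers(0, 2, n)
    steps = 0
    improved = True
    while improved and steps < max_steps:
        improved = False
        for v in range(n):
            same = W[v] @ (side == side[v])   # W[v, v] = 0, so no self-term
            diff = W[v] @ (side != side[v])
            if same > diff:                   # flipping v increases its cut weight
                side[v] ^= 1
                steps += 1
                improved = True
    return side, steps

rng = np.random.default_rng(0)
n = 12
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
side, steps = better_response_maxcut(W, rng)
```

Smoothed analyses of this dynamic bound the number of flips, in expectation over perturbed weights, against an adversarial choice of flip order.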
Moment-Matching Polynomials
We give a new framework for proving the existence of low-degree, polynomial
approximators for Boolean functions with respect to broad classes of
non-product distributions. Our proofs use techniques related to the classical
moment problem and deviate significantly from known Fourier-based methods,
which require the underlying distribution to have some product structure.
Our main application is the first polynomial-time algorithm for agnostically
learning any function of a constant number of halfspaces with respect to any
log-concave distribution (for any constant accuracy parameter). This result was
not known even for the case of learning the intersection of two halfspaces
without noise. Additionally, we show that in the "smoothed-analysis" setting,
the above results hold with respect to distributions that have sub-exponential
tails, a property satisfied by many natural and well-studied distributions in
machine learning.
Given that our algorithms can be implemented using Support Vector Machines
(SVMs) with a polynomial kernel, these results give a rigorous theoretical
explanation as to why many kernel methods work so well in practice.
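As a toy version of the object this framework constructs, a low-degree L2 polynomial approximator for a Boolean function under a log-concave distribution, the sketch below fits sign(t) under a standard Gaussian by plain sample-based least squares. This is ordinary curve fitting, not the paper's moment-matching technique; the names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.standard_normal(50_000)   # log-concave marginal: standard Gaussian
y = np.sign(t)                    # the Boolean target, viewed as +/-1

def l2_error(deg):
    # Best degree-`deg` polynomial approximator of sign(t) in L2,
    # estimated on the Gaussian sample via least squares.
    V = np.vander(t, deg + 1)     # columns t^deg, ..., t, 1
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    return float(np.mean((V @ coef - y) ** 2))

errs = [l2_error(d) for d in (1, 3, 5, 7)]
```

The L2 error shrinks as the degree grows; existence of such low-degree approximators is what makes agnostic learning via polynomial regression (and hence polynomial-kernel SVMs) work.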
Smoothed Analysis in Unsupervised Learning via Decoupling
Smoothed analysis is a powerful paradigm in overcoming worst-case
intractability in unsupervised learning and high-dimensional data analysis.
While polynomial time smoothed analysis guarantees have been obtained for
worst-case intractable problems like tensor decompositions and learning
mixtures of Gaussians, such guarantees have been hard to obtain for several
other important problems in unsupervised learning. A core technical challenge
in analyzing algorithms is obtaining lower bounds on the least singular value
for random matrix ensembles with dependent entries that are given by
low-degree polynomials of a few base underlying random variables.
In this work, we address this challenge by obtaining high-confidence lower
bounds on the least singular value of new classes of structured random matrix
ensembles of the above kind. We then use these bounds to design algorithms with
polynomial time smoothed analysis guarantees for the following three important
problems in unsupervised learning:
1. Robust subspace recovery, when the fraction of inliers in the
d-dimensional subspace is at least (d/n)^l for any constant integer l > 0. This contrasts with the known
worst-case intractability when the fraction is less than d/n, and the previous smoothed
analysis result, which needed a fraction of at least d/n (Hardt and Moitra, 2013).
2. Learning overcomplete hidden Markov models, where the size of the state
space is any polynomial in the dimension of the observations. This gives the
first polynomial time guarantees for learning overcomplete HMMs in a smoothed
analysis model.
3. Higher order tensor decompositions, where we generalize the so-called
FOOBI algorithm of Cardoso to find order-l rank-one tensors in a subspace.
This allows us to obtain polynomially robust decomposition algorithms for
2l'th order tensors with rank up to n^l.
Comment: 44 pages
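The least-singular-value phenomenon driving these results can be observed numerically: a structured rank-deficient matrix has least singular value zero, but a small Gaussian perturbation lifts it with high probability to roughly sigma/sqrt(n). A minimal sketch, with parameters chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 100, 1e-3

u = rng.standard_normal((n, 1))
M = u @ u.T                          # rank one: least singular value is 0
E = sigma * rng.standard_normal((n, n))

# Least singular value before and after the sigma-perturbation.
s_before = np.linalg.svd(M, compute_uv=False)[-1]
s_after = np.linalg.svd(M + E, compute_uv=False)[-1]
```

The unperturbed value is zero up to floating-point noise, while the perturbed one is bounded away from zero; the paper's contribution is proving such lower bounds when the perturbation enters only through a structured, polynomially dependent ensemble rather than i.i.d. entries.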