Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in NP
can be based on the P ≠ NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different ``degrees'' of average-case
complexity. We discuss some of these ``hardness amplification'' results.
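For reference, the standard formalization behind these notions (stated here in textbook notation, which need not match the survey's): a distributional problem is a pair $(L, \mu)$ of an NP language and an input distribution, and Levin calls a time bound $t$ polynomial on $\mu$-average if
\[
  \sum_{x \in \{0,1\}^*} \mu(x)\, \frac{t(x)^{\varepsilon}}{|x|} \;<\; \infty
  \qquad \text{for some } \varepsilon > 0 ,
\]
a definition chosen so that average polynomial time is machine independent and robust under composition with polynomial-time reductions.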
Average-Case Complexity of Shellsort
We prove a general lower bound on the average-case complexity of Shellsort:
the average number of data-movements (and comparisons) made by a $p$-pass
Shellsort for any incremental sequence is $\Omega(pn^{1+1/p})$ for all $p$.
Using similar arguments, we analyze the average-case complexity of several
other sorting algorithms.
Comment: 11 pages. Submitted to ICALP'9
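To fix notation, a minimal runnable sketch (ours, not the paper's) of a $p$-pass Shellsort that counts data movements, the quantity the lower bound speaks about; the increment sequence 7, 3, 1 is an arbitrary illustrative choice:

```python
import random

def shellsort(a, increments):
    """p-pass Shellsort: one gapped insertion-sort pass per increment.

    Returns the number of data movements; the paper lower-bounds the
    average of this quantity by Omega(p * n**(1 + 1/p)).
    """
    moves = 0
    for h in increments:                 # p passes for p increments
        for i in range(h, len(a)):
            x, j = a[i], i
            while j >= h and a[j - h] > x:
                a[j] = a[j - h]          # one data movement
                j -= h
                moves += 1
            a[j] = x
    return moves

data = random.sample(range(1000), 100)
print(shellsort(data, [7, 3, 1]), data == sorted(data))  # move count, True
```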
Subsampling Mathematical Relaxations and Average-case Complexity
We initiate a study of when the value of mathematical relaxations such as
linear and semidefinite programs for constraint satisfaction problems (CSPs) is
approximately preserved when restricting the instance to a sub-instance induced
by a small random subsample of the variables. Let $\mathcal{C}$ be a family of
CSPs such as 3SAT, Max-Cut, etc., and let $\Pi$ be a relaxation for
$\mathcal{C}$, in the sense that for every instance $P \in \mathcal{C}$,
$\Pi(P)$ is an upper bound on the maximum fraction of satisfiable constraints
of $P$. Loosely speaking, we say that subsampling holds for $\mathcal{C}$ and
$\Pi$ if for every sufficiently dense instance $P$ and every $\epsilon > 0$,
if we let $P'$ be the instance obtained by restricting $P$ to a sufficiently
large constant number of variables, then $\Pi(P') \in (1 \pm \epsilon)\Pi(P)$.
We say that weak subsampling holds if the above guarantee is replaced with
$1 - \Pi(P') = \Theta(1 - \Pi(P))$ whenever $1 - \Pi(P) = \Omega(1)$.
We show: 1. Subsampling holds for the BasicLP and BasicSDP
programs. BasicSDP is a variant of the relaxation considered by Raghavendra
(2008), who showed it gives an optimal approximation factor for every CSP under
the unique games conjecture. BasicLP is the linear programming analog of
BasicSDP. 2. For tighter versions of BasicSDP obtained by adding additional
constraints from the Lasserre hierarchy, weak subsampling holds for CSPs of
unique games type. 3. There are non-unique CSPs for which even weak subsampling
fails for the above tighter semidefinite programs. Also there are unique CSPs
for which subsampling fails for the Sherali-Adams linear programming hierarchy.
As a corollary of our weak subsampling for strong semidefinite programs, we
obtain a polynomial-time algorithm to certify that random geometric graphs (of
the type considered by Feige and Schechtman, 2002) of max-cut value
$1-\epsilon$ have a cut value at most $1-\Omega(\epsilon)$.
Comment: Includes several more general results that subsume the previous
version of the paper
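A toy illustration of the subsampling question itself (ours, not the paper's; the paper concerns relaxation values, while this sketch uses the exact Max-Cut optimum, the easiest value to compute on small instances): compare the max-cut fraction of a dense random graph with that of a small induced subsample.

```python
import itertools, random

def max_cut_fraction(vertices, edges):
    """Exact max-cut fraction by brute force (feasible up to ~20 vertices)."""
    if not edges:
        return 0.0
    idx = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    def side(mask, i):                      # pin the last vertex to side 0
        return (mask >> i) & 1 if i < n - 1 else 0
    best = max(sum(side(m, idx[u]) != side(m, idx[v]) for u, v in edges)
               for m in range(1 << (n - 1)))
    return best / len(edges)

random.seed(1)
verts = list(range(16))
edges = [e for e in itertools.combinations(verts, 2)
         if random.random() < 0.5]          # a dense random instance

sample = sorted(random.sample(verts, 10))   # small induced subsample
sub = [(u, v) for u, v in edges if u in sample and v in sample]

print(max_cut_fraction(verts, edges))       # value on the full instance
print(max_cut_fraction(sample, sub))        # value on the subsample
```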
Average case complexity of linear multivariate problems
We study the average case complexity of a linear multivariate problem (LMP)
defined on functions of $d$ variables. We consider two classes of information.
The first, $\Lambda^{\mathrm{std}}$, consists of function values, and the
second, $\Lambda^{\mathrm{all}}$, of all continuous linear functionals.
Tractability of an LMP means that the average case complexity is
$O((1/\varepsilon)^{p})$ with $p$ independent of $d$. We prove that
tractability of an LMP in $\Lambda^{\mathrm{std}}$ is equivalent to
tractability in $\Lambda^{\mathrm{all}}$, although the proof is {\it not}
constructive. We provide a simple condition to check tractability in
$\Lambda^{\mathrm{all}}$. We also address the optimal design problem for an
LMP by using a relation to the worst case setting. We find the order of the
average case complexity and optimal sample points for multivariate function
approximation. The theoretical results are illustrated for the folded Wiener
sheet measure.
Comment: 7 pages
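Spelled out (a paraphrase in symbols of our choosing, writing $\mathrm{comp}(\varepsilon, d)$ for the average case complexity of the $d$-variate problem at accuracy $\varepsilon$), the tractability requirement is
\[
  \mathrm{comp}(\varepsilon, d) \;=\; O\big((1/\varepsilon)^{p}\big)
  \quad \text{with the exponent } p \text{ independent of the dimension } d .
\]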
Structural Average Case Complexity
Levin introduced an average-case complexity measure, based on a notion of "polynomial on average," and defined "average-case polynomial-time many-one reducibility" among randomized decision problems. We generalize his notions of average-case complexity classes, Random-NP and Average-P. Ben-David et al. use the notation ⟨C, F⟩ to denote the set of randomized decision problems (L, μ) such that L is a set in C and μ is a probability density function in F. This paper introduces Aver⟨C, F⟩ as the class of randomized decision problems (L, μ) such that L is computed by a type-C machine on μ-average and μ is a density function in F. These notations capture all known average-case complexity classes; for example, Random-NP = ⟨NP, P-comp⟩ and Average-P = Aver⟨P, ∗⟩, where P-comp denotes the set of density functions whose distributions are computable in polynomial time, and ∗ denotes the set of all density functions. Mainly studied are polynomial-time reductions between randomized decision problems: many-one, deterministic Turing, and nondeterministic Turing reductions, and the average-case versions of them. Based on these reducibilities, structural properties of average-case complexity classes are discussed. We give average-case analogues of concepts in worst-case complexity theory; in particular, the polynomial-time hierarchy and Turing self-reducibility, and we show that all known complete sets for Random-NP are Turing self-reducible. A new notion of "real polynomial-time computations" is introduced, based on average polynomial-time computations for arbitrary distributions from a fixed set, and it is used to characterize the worst-case complexity classes $\Delta^p_k$ and $\Sigma^p_k$ of the polynomial-time hierarchy.
From average case complexity to improper learning complexity
The basic problem in the PAC model of computational learning theory is to
determine which hypothesis classes are efficiently learnable. There is
presently a dearth of results showing hardness of learning problems. Moreover,
the existing lower bounds fall short of the best known algorithms.
The biggest challenge in proving complexity results is to establish hardness
of {\em improper learning} (a.k.a. representation independent learning). The
difficulty in proving lower bounds for improper learning is that the standard
reductions from NP-hard problems do not seem to apply in this
context. There is essentially only one known approach to proving lower bounds
on improper learning. It was initiated in (Kearns and Valiant 89) and relies on
cryptographic assumptions.
We introduce a new technique for proving hardness of improper learning, based
on reductions from problems that are hard on average. We put forward a (fairly
strong) generalization of Feige's assumption (Feige 02) about the complexity of
refuting random constraint satisfaction problems. Combining this assumption
with our new technique yields far reaching implications. In particular,
1. Learning DNF's is hard.
2. Agnostically learning halfspaces with a constant approximation ratio is
hard.
3. Learning an intersection of $\omega(1)$ halfspaces is hard.
Comment: 34 pages
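For concreteness, a small generator (ours; `clause_density` and the parameter values are illustrative, not the paper's) for the distributional problem behind Feige-style refutation assumptions: random 3-CNF formulas with linearly many clauses, which such assumptions posit no polynomial-time algorithm can refute on average.

```python
import random

def random_3cnf(n_vars, clause_density, seed=None):
    """Sample a random 3-CNF with m = clause_density * n_vars clauses.

    Each clause picks 3 distinct variables and negates each with
    probability 1/2; positive literals are +v, negated ones -v.
    """
    rng = random.Random(seed)
    m = int(clause_density * n_vars)
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(m)]

# A typical instance: for a large enough constant density, almost all
# such formulas are unsatisfiable, yet certifying that is believed hard.
print(random_3cnf(10, 5, seed=0)[:3])
```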
On average case complexity
The aim of this talk is to give an introduction to the notion of Levin's average
case complexity and then show some of the fields where recent research in this
area is focused. The first part is motivated by the struggle to find a precise
and generally accepted definition of what an efficient-on-average algorithm is,
given some distribution on the inputs. The definition should be easy to use,
should of course be machine independent, and should possess properties like
being closed under composition of algorithms. A reduction of a problem A to
another problem which is efficiently solvable on average should give an
efficient-on-average procedure to solve A.
We then look in more detail at DistNP, the class of problems in NP with
polynomial-time computable distributions on the inputs. This class has complete
and self-reducible problems.
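The closure property asked of such reductions, in now-standard notation (ours, not the talk's), is
\[
  (A, \mu) \;\le^{\mathrm{avg}}_{m}\; (B, \nu)
  \;\text{ and }\;
  (B, \nu) \in \mathrm{AvgP}
  \;\;\Longrightarrow\;\;
  (A, \mu) \in \mathrm{AvgP},
\]
so that average-case easiness transfers back along the reduction.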
On the Average-case Complexity of Parameterized Clique
The k-Clique problem is a fundamental combinatorial problem that plays a
prominent role in classical as well as in parameterized complexity theory. It
is among the most well-known NP-complete and W[1]-complete problems. Moreover,
its average-case complexity analysis has created a long thread of research
already since the 1970s. Here, we continue this line of research by studying
the dependence of the average-case complexity of the k-Clique problem on the
parameter k. To this end, we define two natural parameterized analogs of
efficient average-case algorithms. We then show that k-Clique admits both
analogues for Erd\H{o}s-R\'{e}nyi random graphs of arbitrary density. We also
show that k-Clique is unlikely to admit either of these analogues for some
specific computable input distribution.
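A minimal sketch (ours, not one of the paper's algorithms) of the kind of search whose average-case behavior is at issue: backtracking k-clique search, run here on an Erdős–Rényi sample. Its worst case is $n^{O(k)}$, but on random graphs the candidate sets shrink rapidly, which is what average-case analyses of k-Clique exploit.

```python
import itertools, random

def find_k_clique(adj, k, candidates=None, clique=()):
    """Backtracking k-clique search, pruning to common neighbors."""
    if len(clique) == k:
        return clique
    if candidates is None:
        candidates = set(adj)
    for v in list(candidates):
        candidates.discard(v)        # cliques through v handled in its branch
        found = find_k_clique(adj, k, candidates & adj[v], clique + (v,))
        if found:
            return found
    return None

random.seed(2)
n, p, k = 60, 0.5, 6
adj = {v: set() for v in range(n)}
for u, v in itertools.combinations(range(n), 2):
    if random.random() < p:          # one G(n, 1/2) sample
        adj[u].add(v)
        adj[v].add(u)

print(find_k_clique(adj, k))         # a 6-clique exists w.h.p. at this size
```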