Algorithms and lower bounds for de Morgan formulas of low-communication leaf gates
The class $\mathrm{FORMULA}[s] \circ \mathcal{G}$ consists of Boolean functions computable by size-$s$ de Morgan formulas whose leaves are any Boolean functions from a class $\mathcal{G}$. We give lower bounds and (SAT, Learning, and PRG) algorithms for $\mathrm{FORMULA}[n^{1.99}] \circ \mathcal{G}$, for classes $\mathcal{G}$ of functions with low communication complexity. Let $R^{(k)}(\mathcal{G})$ be the maximum $k$-party NOF randomized communication complexity of $\mathcal{G}$. We show:
(1) The Generalized Inner Product function $\mathrm{GIP}^k_n$ cannot be computed in $\mathrm{FORMULA}[s] \circ \mathcal{G}$ on more than a $1/2 + \varepsilon$ fraction of inputs for $s = o\bigl(n^2 / (k \cdot 4^k \cdot R^{(k)}(\mathcal{G}) \cdot \log(n/\varepsilon) \cdot \log(1/\varepsilon))^2\bigr)$. As a corollary, we get an average-case lower bound for $\mathrm{GIP}^k_n$ against $\mathrm{FORMULA}[n^{1.99}] \circ \mathrm{PTF}^{k-1}$, i.e., sub-quadratic-size de Morgan formulas with degree-$(k-1)$ PTF gates at the bottom.
(2) There is a PRG of seed length $n/2 + O\bigl(\sqrt{s} \cdot R^{(2)}(\mathcal{G}) \cdot \log(s/\varepsilon) \cdot \log(1/\varepsilon)\bigr)$ that $\varepsilon$-fools $\mathrm{FORMULA}[s] \circ \mathcal{G}$. For $\mathrm{FORMULA}[s] \circ \mathrm{LTF}$, we get the better seed length $O\bigl(n^{1/2} \cdot s^{1/4} \cdot \log(n) \cdot \log(n/\varepsilon)\bigr)$. This gives the first non-trivial PRG (with seed length $o(n)$) for intersections of $n$ half-spaces in the regime where $\varepsilon \leq 1/n$.
(3) There is a randomized $2^{n-t}$-time #SAT algorithm for $\mathrm{FORMULA}[s] \circ \mathcal{G}$, where $t = \Omega\bigl(n / (\sqrt{s} \cdot \log^2(s) \cdot R^{(2)}(\mathcal{G}))\bigr)^{1/2}$. In particular, this implies a nontrivial #SAT algorithm for $\mathrm{FORMULA}[n^{1.99}] \circ \mathrm{LTF}$.
(4) The Minimum Circuit Size Problem is not in $\mathrm{FORMULA}[n^{1.99}] \circ \mathrm{XOR}$.
On the algorithmic side, we show that $\mathrm{FORMULA}[n^{1.99}] \circ \mathrm{XOR}$ can be PAC-learned in time $2^{O(n/\log n)}$.
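For reference, the Generalized Inner Product function targeted by result (1) is the standard $k$-party function
$$\mathrm{GIP}^k_n(x^{(1)},\dots,x^{(k)}) \;=\; \bigoplus_{j=1}^{n} \bigwedge_{i=1}^{k} x^{(i)}_j, \qquad x^{(i)} \in \{0,1\}^n,$$
which for $k = 2$ is the inner product modulo 2.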
An average-case depth hierarchy theorem for Boolean circuits
We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates. Our hierarchy theorem says that for every $d \geq 2$, there is an explicit $n$-variable Boolean function $f$, computed by a linear-size depth-$d$ formula, which is such that any depth-$(d-1)$ circuit that agrees with $f$ on a $(1/2 + o_n(1))$ fraction of all inputs must have size $\exp(n^{\Omega(1/d)})$. This answers an open question posed by Håstad in his Ph.D. thesis.
Our average-case depth hierarchy theorem implies that the polynomial
hierarchy is infinite relative to a random oracle with probability 1,
confirming a conjecture of Håstad, Cai, and Babai. We also use our result
to show that there is no "approximate converse" to the results of Linial,
Mansour, Nisan and Boppana on the total influence of small-depth circuits, thus
answering a question posed by O'Donnell, Kalai, and Hatami.
A key ingredient in our proof is a notion of \emph{random projections} which generalize random restrictions.
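For comparison, the classical $p$-random restriction that these projections generalize fixes each coordinate independently:
$$\rho(x_i) = \begin{cases} x_i & \text{with probability } p,\\ 0 & \text{with probability } (1-p)/2,\\ 1 & \text{with probability } (1-p)/2; \end{cases}$$
a random projection additionally identifies blocks of surviving variables with common new variables, rather than keeping them distinct.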
NP-hardness of circuit minimization for multi-output functions
Can we design efficient algorithms for finding fast algorithms? This question is captured by various circuit minimization problems, and algorithms for the corresponding tasks have significant practical applications. Following the work of Cook and Levin in the early 1970s, a central question is whether minimizing the circuit size of an explicitly given function is NP-complete. While this is known to hold in restricted models such as DNFs, making progress with respect to more expressive classes of circuits has been elusive.
In this work, we establish the first NP-hardness result for circuit minimization of total functions in the setting of general (unrestricted) Boolean circuits. More precisely, we show that computing the minimum circuit size of a given multi-output Boolean function f : {0,1}^n → {0,1}^m is NP-hard under many-one polynomial-time randomized reductions. Our argument builds on a simpler NP-hardness proof for the circuit minimization problem for (single-output) Boolean functions under an extended set of generators.
Complementing these results, we investigate the computational hardness of minimizing communication. We establish that several variants of this problem are NP-hard under deterministic reductions. In particular, unless P = NP, no polynomial-time computable function can approximate the deterministic two-party communication complexity of a partial Boolean function up to a polynomial. This has consequences for the class of structural results that one might hope to show about the communication complexity of partial functions.
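For concreteness, one natural formalization of the decision problem shown NP-hard here (a sketch of the setup; the paper's exact parameterization may differ):
$$\textsf{Multi-MCSP} \;=\; \bigl\{ (\mathrm{tt}(f), s) \;:\; f : \{0,1\}^n \to \{0,1\}^m \text{ admits a Boolean circuit of size at most } s \bigr\},$$
where $\mathrm{tt}(f)$ denotes the full truth table of $f$, a string of length $m \cdot 2^n$.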
Learning circuits with few negations
Monotone Boolean functions, and the monotone Boolean circuits that compute
them, have been intensively studied in complexity theory. In this paper we
study the structure of Boolean functions in terms of the minimum number of
negations in any circuit computing them, a complexity measure that interpolates
between monotone functions and the class of all functions. We study this
generalization of monotonicity from the vantage point of learning theory,
giving near-matching upper and lower bounds on the uniform-distribution
learnability of circuits in terms of the number of negations they contain. Our
upper bounds are based on a new structural characterization of negation-limited
circuits that extends a classical result of A. A. Markov. Our lower bounds,
which employ Fourier-analytic tools from hardness amplification, give new
results even for circuits with no negations (i.e., monotone functions).
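For context, a standard statement of A. A. Markov's classical result mentioned above: every Boolean function $f$ on $n$ variables can be computed by a circuit containing exactly
$$I(f) \;=\; \lceil \log_2(d(f) + 1) \rceil$$
negation gates, and no fewer, where $d(f)$ is the maximum number of indices at which $f$ decreases (flips from 1 to 0) along any increasing chain in $\{0,1\}^n$. In particular, monotone functions have $d(f) = 0$, and $\lceil \log_2(n+1) \rceil$ negations suffice for every function.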
A Nearly Optimal Lower Bound on the Approximate Degree of AC$^0$
The approximate degree of a Boolean function $f$ is the least degree of a real polynomial that approximates $f$ pointwise to error at most $1/3$. We introduce a generic method for increasing the approximate degree of a given function, while preserving its computability by constant-depth circuits.
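In symbols, writing $\widetilde{\deg}(f)$ for the approximate degree at the standard error threshold $1/3$:
$$\widetilde{\deg}(f) \;=\; \min\bigl\{ \deg(p) \;:\; p \in \mathbb{R}[x_1,\dots,x_n],\ |p(x) - f(x)| \le 1/3 \text{ for all } x \in \{0,1\}^n \bigr\}.$$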
Specifically, we show how to transform any Boolean function $f$ with approximate degree $d$ into a function $F$ on $O(n \cdot \mathrm{polylog}(n))$ variables with approximate degree at least $D = \Omega(n^{1/3} \cdot d^{2/3})$. In particular, if $d = n^{1-\Omega(1)}$, then $D$ is polynomially larger than $d$. Moreover, if $f$ is computed by a polynomial-size Boolean circuit of constant depth, then so is $F$.
By recursively applying our transformation, for any constant $\delta > 0$ we exhibit an AC$^0$ function of approximate degree $\Omega(n^{1-\delta})$. This improves over the best previous lower bound of $\Omega(n^{2/3})$ due to Aaronson and Shi (J. ACM 2004), and nearly matches the trivial upper bound of $n$ that holds for any function. Our lower bounds also apply to (quasipolynomial-size) DNFs of polylogarithmic width.
We describe several applications of these results. We give:
* For any constant $\delta > 0$, an $\Omega(n^{1-\delta})$ lower bound on the quantum communication complexity of a function in AC$^0$.
* A Boolean function $f$ with approximate degree at least $C(f)^{2-o(1)}$, where $C(f)$ is the certificate complexity of $f$. This separation is optimal up to the $o(1)$ term in the exponent.
* Improved secret sharing schemes with reconstruction procedures in AC$^0$.
Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review
The paper characterizes classes of functions for which deep learning can be
exponentially better than shallow learning. Deep convolutional networks are a
special case of these conditions, though weight sharing is not the main reason
for their exponential advantage.
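An illustrative example of such a function class (the specific $h$'s here are hypothetical placeholders; the binary-tree compositional structure is the one emphasized in this line of work):
$$f(x_1,\dots,x_8) \;=\; h_3\bigl(h_{21}(h_{11}(x_1,x_2),\, h_{12}(x_3,x_4)),\ h_{22}(h_{13}(x_5,x_6),\, h_{14}(x_7,x_8))\bigr).$$
Deep networks whose architecture mirrors this hierarchy can approximate such compositional functions with a number of parameters polynomial in the input dimension, whereas shallow networks may need exponentially many.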