Alternation-Trading Proofs, Linear Programming, and Lower Bounds
A fertile area of recent research has demonstrated concrete polynomial time
lower bounds for solving natural hard problems on restricted computational
models. Among these problems are Satisfiability, Vertex Cover, Hamilton Path,
Mod6-SAT, Majority-of-Majority-SAT, and Tautologies, to name a few. The proofs
of these lower bounds follow a certain proof-by-contradiction strategy that we
call alternation-trading. An important open problem is to determine how
powerful such proofs can possibly be.
We propose a methodology for studying these proofs that makes them amenable
to both formal analysis and automated theorem proving. We prove that the search
for better lower bounds can often be turned into a problem of solving a large
series of linear programming instances. Implementing a small-scale theorem
prover based on this result, we extract new human-readable time lower bounds
for several problems. This framework can also be used to prove concrete
limitations on the current techniques.
Comment: To appear in STACS 2010, 12 pages.
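To make the reduction concrete, here is a toy illustration in miniature (a hypothetical constraint family, not the paper's actual proof annotations): each candidate lower-bound exponent c induces linear constraints on a speedup parameter x, and the proof-by-contradiction succeeds exactly when the constraints are infeasible. Searching for the best exponent then amounts to solving a series of (here one-dimensional) LP feasibility problems.

```python
def feasible(constraints):
    """Is there an x >= 0 satisfying a * x <= b for every (a, b) pair?"""
    lo, hi = 0.0, float("inf")
    for a, b in constraints:
        if a > 0:
            hi = min(hi, b / a)
        elif a < 0:
            lo = max(lo, b / a)
        elif b < 0:  # constraint 0 * x <= b with b < 0 is a contradiction
            return False
    return lo <= hi

def best_exponent(make_constraints, c_lo=1.0, c_hi=3.0, iters=60):
    """Binary-search the supremum of exponents c whose system is infeasible
    (assumes infeasibility is downward-monotone in c, as in this toy family)."""
    for _ in range(iters):
        mid = (c_lo + c_hi) / 2
        if not feasible(make_constraints(mid)):
            c_lo = mid  # contradiction derived: the lower bound n^c would hold
        else:
            c_hi = mid
    return c_lo

# Hypothetical family: x >= c + 1 together with x <= c^2, which is
# infeasible exactly for c below the golden ratio (1 + sqrt(5)) / 2.
def demo(c):
    return [(-1.0, -(c + 1.0)), (1.0, c * c)]
```

On real proof annotations the constraint systems involve many variables and one would invoke an LP solver per instance, but the overall structure, many small LPs indexed by a target exponent, is the same.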
Arithmetic circuits: the chasm at depth four gets wider
In their paper on the "chasm at depth four", Agrawal and Vinay have shown
that polynomials in m variables of degree O(m) which admit arithmetic circuits
of size 2^o(m) also admit arithmetic circuits of depth four and size 2^o(m).
This theorem shows that for problems such as arithmetic circuit lower bounds or
black-box derandomization of identity testing, the case of depth four circuits
is in a certain sense the general case. In this paper we show that smaller
depth four circuits can be obtained if we start from polynomial size arithmetic
circuits. For instance, we show that if the permanent of n*n matrices has
circuits of size polynomial in n, then it also has depth 4 circuits of size
n^O(sqrt(n)*log(n)). Our depth four circuits use integer constants of
polynomial size. These results have potential applications to lower bounds and
deterministic identity testing, in particular for sums of products of sparse
univariate polynomials. We also give an application to Boolean circuit
complexity, and a simple (but suboptimal) reduction to polylogarithmic depth
for arithmetic circuits of polynomial size and polynomially bounded degree.
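For concreteness, the permanent, the polynomial whose circuit size is at issue here, can be evaluated in 2^n * poly(n) arithmetic operations by Ryser's inclusion-exclusion formula. A minimal sketch of that standard formula (a textbook fact, not a construction from the paper):

```python
from itertools import combinations

def permanent(a):
    """Permanent of an n x n matrix via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^{|S|} * prod_i (sum_{j in S} a[i][j]).
    Runs in O(2^n * n^2) time, the brute-force baseline against which
    circuit upper and lower bounds for the permanent are measured."""
    n = len(a)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total
```

For a 2x2 matrix this reduces to a*d + b*c, the determinant with all signs made positive, which is what makes the permanent so much harder than the determinant to compute.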
On the P vs NP question: a proof on inequality
The analysis discussed in this paper is based on a well-known NP-complete problem called the "satisfiability problem", or SAT. From SAT a new NP-complete problem is derived, described by a Boolean function called the "core function". In this paper it is proved that the cost of the minimal implementation of the core function increases exponentially with n. Since the synthesis of the core function is an NP-complete problem, this result is equivalent to proving that P and NP do not coincide.
Easiness Amplification and Uniform Circuit Lower Bounds
We present new consequences of the assumption that time-bounded algorithms can be "compressed" with non-uniform circuits. Our main contribution is an "easiness amplification" lemma for circuits. One instantiation of the lemma says: if n^{1+e}-time, tilde{O}(n)-space computations have n^{1+o(1)} size (non-uniform) circuits for some e > 0, then every problem solvable in polynomial time and tilde{O}(n) space has n^{1+o(1)} size (non-uniform) circuits as well. This amplification has several consequences:
* An easy problem without small LOGSPACE-uniform circuits. For all e > 0, we give a natural decision problem, General Circuit n^e-Composition, that is solvable in about n^{1+e} time, but we prove that polynomial-time and logarithmic-space preprocessing cannot produce n^{1+o(1)}-size circuits for the problem. This shows that there are problems solvable in n^{1+e} time which are not in LOGSPACE-uniform n^{1+o(1)} size, the first result of its kind. We show that our lower bound is non-relativizing, by exhibiting an oracle relative to which the result is false.
* Problems without low-depth LOGSPACE-uniform circuits. For all e > 0, 1 < d < 2, and e < d we give another natural circuit composition problem computable in tilde{O}(n^{1+e}) time, or in O((log n)^d) space (though not necessarily simultaneously) that we prove does not have SPACE[(log n)^e]-uniform circuits of tilde{O}(n) size and O((log n)^e) depth. We also show SAT does not have circuits of tilde{O}(n) size and log^{2-o(1)}(n) depth that can be constructed in log^{2-o(1)}(n) space.
* A strong circuit complexity amplification. For every e > 0, we give a natural circuit composition problem and show that if it has tilde{O}(n)-size circuits (uniform or not), then every problem solvable in 2^{O(n)} time and 2^{O(sqrt{n log n})} space (simultaneously) has 2^{O(sqrt{n log n})}-size circuits (uniform or not). We also show the same consequence holds assuming SAT has tilde{O}(n)-size circuits. As a corollary, if n^{1.1} time computations (or O(n) nondeterministic time computations) have tilde{O}(n)-size circuits, then all problems in exponential time and subexponential space (such as quantified Boolean formulas) have significantly subexponential-size circuits. This is a new connection between the relative circuit complexities of easy and hard problems.
A Quantum Time-Space Lower Bound for the Counting Hierarchy
We obtain the first nontrivial time-space lower bound for quantum algorithms
solving problems related to satisfiability. Our bound applies to MajSAT and
MajMajSAT, which are complete problems for the first and second levels of the
counting hierarchy, respectively. We prove that for every real d and every
positive real epsilon there exists a real c>1 such that either: MajMajSAT does
not have a quantum algorithm with bounded two-sided error that runs in time
n^c, or MajSAT does not have a quantum algorithm with bounded two-sided error
that runs in time n^d and space n^{1-\epsilon}. In particular, MajMajSAT cannot
be solved by a quantum algorithm with bounded two-sided error running in time
n^{1+o(1)} and space n^{1-\epsilon} for any epsilon>0. The key technical
novelty is a time- and space-efficient simulation of quantum computations with
intermediate measurements by probabilistic machines with unbounded error. We
also develop a model that is particularly suitable for the study of general
quantum computations with simultaneous time and space bounds. However, our
arguments hold for any reasonable uniform model of quantum computation.
Comment: 25 pages.
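For reference, MajSAT asks whether a majority of all assignments satisfy a given formula, and MajMajSAT nests two such majority quantifiers over disjoint variable blocks. A minimal brute-force sketch, with formulas represented as Python predicates (whether "majority" means at least half or strictly more than half varies by convention; strict here):

```python
from itertools import product

def maj_sat(phi, n):
    """Do strictly more than half of the 2^n assignments satisfy phi?
    Exhaustive search: Theta(2^n) evaluations of phi."""
    sat = sum(1 for x in product([False, True], repeat=n) if phi(x))
    return 2 * sat > 2 ** n

def maj_maj_sat(phi, n, m):
    """MajMajSAT: for a majority of x-assignments, does a majority of
    y-assignments satisfy phi(x, y)?  Theta(2^{n+m}) brute force."""
    good = sum(1 for x in product([False, True], repeat=n)
               if maj_sat(lambda y: phi(x, y), m))
    return 2 * good > 2 ** n
```

The exhaustive search above is the trivial baseline; the lower bound in the abstract says that even quantum algorithms cannot solve MajMajSAT in simultaneously near-linear time and sublinear space.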
A Polynomial-Time Deterministic Algorithm for An NP-Complete Problem
An NP-complete graph decision problem, the "Multi-stage graph Simple Path"
(abbr. MSP) problem, is introduced. The main contribution of this paper is a
poly-time algorithm named the ZH algorithm for the problem together with the
proof of its correctness, which implies NP=P. (1) A crucial structural property
is discovered, whereby all MSP instances are arranged into a sequence (each
element of which essentially stands for a group of graphs). For each instance
in the sequence, there is a graph "mathematically homomorphic" to it that
agrees with it completely on the existence of global solutions. This naturally
provides an opportunity to prove the correctness of an algorithm by
mathematical induction. In previous attempts, algorithms used for making global decisions
were mostly guided by heuristics and intuition. Rather, the ZH algorithm is
dedicatedly designed to comply with the proposed proving framework of
mathematical induction. (2) Although the ZH algorithm deals with paths, it
always regards paths as a collection of edge sets. This is the key to the
avoidance of exponential complexity. (3) Any poly-time algorithm that seeks
global information can hardly avoid the errors caused by localized computation.
In the ZH algorithm, the proposed reachable-path edge-set and the
computed information for it carry all necessary contextual information, which
can be utilized to summarize the "history" and to detect the "future" for
searching global solutions. (4) The relation between local strategies and
global strategies is discovered and established, wherein preceding decisions
can pose constraints to subsequent decisions (and vice versa). This interplay
resembles the paradigm of dynamic programming, while being much more
convoluted. Nevertheless, the computation is always straightforward and
decreases monotonically.
Comment: 61 pages, 14 figures.
On lower bounds for circuit complexity and algorithms for satisfiability
This work explores Ryan Williams' novel method of proving circuit lower bounds for the class NEXP. Williams establishes two circuit lower bounds: a conditional lower bound, which says that NEXP does not have polynomial-size circuits if there exist better-than-trivial algorithms for CIRCUIT SAT, and an unconditional lower bound, which says that NEXP does not have polynomial-size circuits of the class ACC^0. We put special emphasis on the first result by exposing, in as self-contained a manner as possible, all the results from complexity theory that Williams uses in his proof. In particular, the focus is on an efficient reduction from nondeterministic computations to satisfiability of Boolean formulas. The second result is also studied, although not as thoroughly, and some pointers are given regarding the relationship of Williams' method to the known complexity-theory barriers.
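The "trivial" algorithm that the conditional result asks to beat is exhaustive search over all 2^n inputs to the circuit. A minimal sketch under an assumed (hypothetical) circuit encoding, with gates given as (op, in1, in2) triples over wire indices; Williams' theorem only needs a CIRCUIT SAT algorithm that improves on this baseline by a modest super-polynomial factor.

```python
from itertools import product

def eval_circuit(gates, n, x):
    """Evaluate a circuit on input x.  Wires 0..n-1 are the inputs; each
    gate appends one wire; the last gate is the output.  (Hypothetical
    encoding chosen for illustration, not the thesis's formalism.)"""
    wires = list(x)
    for op, i, j in gates:
        a, b = wires[i], wires[j]
        wires.append(a and b if op == "AND"
                     else a or b if op == "OR"
                     else not a)  # "NOT" ignores its second input
    return wires[-1]

def trivial_circuit_sat(gates, n):
    """Exhaustive-search CIRCUIT SAT: 2^n * (circuit size) time."""
    return any(eval_circuit(gates, n, x)
               for x in product([False, True], repeat=n))
```

For example, the one-gate circuit [("AND", 0, 1)] is satisfiable (take both inputs true), while x AND NOT x is not, and the search above detects both by pure enumeration.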