
    Lower Bounds for Monotone Counting Circuits

    A $\{+,\times\}$-circuit counts a given multivariate polynomial $f$ if its values on 0-1 inputs are the same as those of $f$; on other inputs the circuit may output arbitrary values. Such a circuit counts the number of monomials of $f$ evaluated to 1 by a given 0-1 input vector (with multiplicities given by their coefficients). A circuit decides $f$ if it has the same 0-1 roots as $f$. We first show that some multilinear polynomials can be exponentially easier to count than to compute, and exponentially easier to decide than to count. Then we give general lower bounds on the size of counting circuits.
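    To make the counting/computing distinction concrete, here is a small sketch (my own illustration, not from the paper; the polynomial and circuit are made up): the circuit below agrees with $f$ on every 0-1 input, yet computes a different formal polynomial.

        # Toy illustration of "counting": a {+,x}-circuit counts f if it agrees with f
        # on all 0-1 inputs, even when it computes a different formal polynomial.
        from itertools import product

        # f(x0, x1, x2) = x0*x1 + x1*x2 + x0, given as a list of monomials (index sets).
        monomials = [{0, 1}, {1, 2}, {0}]

        def count_value(x):
            """Number of monomials of f all of whose variables are set to 1."""
            return sum(1 for m in monomials if all(x[i] for i in m))

        def circuit_value(x):
            """Hand-built {+,x}-circuit computing x0*(x0*x1 + 1) + x1*x2.
            As a formal polynomial this is x0^2*x1 + x0 + x1*x2, which differs
            from f, but x0^2 = x0 on 0-1 inputs, so the circuit counts f
            without computing it."""
            x0, x1, x2 = x
            return x0 * (x0 * x1 + 1) + x1 * x2

        # The two agree on all 2^3 Boolean inputs.
        assert all(circuit_value(x) == count_value(x) for x in product((0, 1), repeat=3))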

    On the Complexity of Real Root Isolation

    We introduce a new approach to isolate the real roots of a square-free polynomial $F = \sum_{i=0}^n A_i x^i$ with real coefficients. It is assumed that each coefficient of $F$ can be approximated to any specified error bound. The presented method is exact, complete and deterministic. Due to its similarities to the Descartes method, we also consider it practical and easy to implement. Compared to previous approaches, our new method achieves a significantly better bit complexity. It is further shown that the hardness of isolating the real roots of $F$ is exclusively determined by the geometry of the roots and not by the complexity or the size of the coefficients. For the special case where $F$ has integer coefficients of maximal bitsize $\tau$, our bound on the bit complexity is $\tilde{O}(n^3\tau^2)$, which improves the best bounds known for existing practical algorithms by a factor of $n = \deg F$. The crucial idea underlying the new approach is to run an approximate version of the Descartes method, where, in each subdivision step, we only consider approximations of the intermediate results to a certain precision. We give an upper bound on the maximal precision that is needed for isolating the roots of $F$. For integer polynomials, this bound is by a factor $n$ lower than the precision needed when using exact arithmetic, which explains the improved bound on the bit complexity.
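    The underlying subdivision scheme is easiest to see in the exact-arithmetic Descartes method, of which the paper's algorithm is an approximate-precision variant. The sketch below is a minimal toy version (my own, using exact rationals throughout rather than the paper's truncated intermediate results, and assuming a square-free input with no root at the initial interval endpoints):

        # Classical Descartes subdivision for real root isolation (exact rational
        # arithmetic; illustrative only, not the approximate method of the paper).
        from fractions import Fraction

        def sign_variations(coeffs):
            """Number of sign changes in a coefficient sequence, zeros skipped."""
            signs = [c > 0 for c in coeffs if c != 0]
            return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

        def poly_mul(p, q):
            out = [Fraction(0)] * (len(p) + len(q) - 1)
            for i, a in enumerate(p):
                for j, b in enumerate(q):
                    out[i + j] += a * b
            return out

        def eval_poly(coeffs, x):
            acc = Fraction(0)
            for c in reversed(coeffs):
                acc = acc * x + c
            return acc

        def descartes_bound(coeffs, a, b):
            """Sign variations of (x+1)^n * F((a + b*x)/(x + 1)).  This number
            upper-bounds, and has the same parity as, the number of roots of F
            in the open interval (a, b)."""
            n = len(coeffs) - 1
            total = [Fraction(0)] * (n + 1)
            for i, A in enumerate(coeffs):               # term A_i * (a+bx)^i * (1+x)^(n-i)
                term = [Fraction(1)]
                for _ in range(i):
                    term = poly_mul(term, [Fraction(a), Fraction(b)])
                for _ in range(n - i):
                    term = poly_mul(term, [Fraction(1), Fraction(1)])
                for k, c in enumerate(term):
                    total[k] += A * c
            return sign_variations(total)

        def isolate(coeffs, a, b):
            """Disjoint intervals, each containing exactly one real root of the
            square-free polynomial sum_i coeffs[i] * x^i inside (a, b)."""
            coeffs = [Fraction(c) for c in coeffs]
            stack, found = [(Fraction(a), Fraction(b))], []
            while stack:
                lo, hi = stack.pop()
                v = descartes_bound(coeffs, lo, hi)
                if v == 0:
                    continue                             # no root here
                if v == 1:
                    found.append((lo, hi))               # exactly one root here
                    continue
                mid = (lo + hi) / 2
                if eval_poly(coeffs, mid) == 0:
                    found.append((mid, mid))             # hit a root exactly
                stack.extend([(lo, mid), (mid, hi)])
            return found

        # F(x) = x^3 - 2x has roots 0 and +-sqrt(2); isolate those inside (-2, 2).
        print(isolate([0, -2, 0, 1], -2, 2))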

    Resolution over Linear Equations and Multilinear Proofs

    We develop and study the complexity of propositional proof systems of varying strength that extend resolution by allowing it to operate with disjunctions of linear equations instead of clauses. We demonstrate polynomial-size refutations for hard tautologies like the pigeonhole principle, Tseitin graph tautologies and the clique-coloring tautologies in these proof systems. Using the technique of (monotone) interpolation via communication games, we establish an exponential-size lower bound on refutations in a certain, considerably strong, fragment of resolution over linear equations, as well as a general polynomial upper bound on (non-monotone) interpolants in this fragment. We then apply these results to extend and improve previous results on multilinear proofs (over fields of characteristic 0), as studied in [RazTzameret06]. Specifically, we show the following: 1. Proofs operating with depth-3 multilinear formulas polynomially simulate a certain, considerably strong, fragment of resolution over linear equations. 2. Proofs operating with depth-3 multilinear formulas admit polynomial-size refutations of the pigeonhole principle and Tseitin graph tautologies. The former improve on a previous result that established small multilinear proofs only for the \emph{functional} pigeonhole principle. The latter are different from previous proofs, and apply to multilinear proofs of Tseitin mod p graph tautologies over any field of characteristic 0. We conclude by connecting resolution over linear equations with extensions of the cutting planes proof system.
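    For readers unfamiliar with the tautologies mentioned above, here is a small hypothetical helper (not from the paper) that writes down the standard clausal encoding of the negation of the pigeonhole principle, the kind of formula these refutation systems are asked to refute:

        # Standard CNF encoding of the negation of the pigeonhole principle PHP^{n+1}_n:
        # n+1 pigeons, n holes, variable p(i, j) = "pigeon i sits in hole j".
        # Literals are integers, negative meaning negation (DIMACS-style).
        def php_clauses(n):
            m = n + 1
            var = lambda i, j: i * n + j + 1         # 1-based index of p(i, j)
            clauses = []
            for i in range(m):                       # every pigeon sits in some hole
                clauses.append([var(i, j) for j in range(n)])
            for j in range(n):                       # no two pigeons share a hole
                for i1 in range(m):
                    for i2 in range(i1 + 1, m):
                        clauses.append([-var(i1, j), -var(i2, j)])
            return clauses

        # PHP^3_2: unsatisfiable, so any sound refutation system must refute it;
        # plain resolution needs exponential size, the systems above do not.
        print(php_clauses(2))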

    Structure of computations in parallel complexity classes

    Issued as Annual report and Final project report, Project no. G-36-67.

    Identity Testing and Lower Bounds for Read-k Oblivious Algebraic Branching Programs

    Read-k oblivious algebraic branching programs are a natural generalization of the well-studied model of read-once oblivious algebraic branching programs (ROABPs). In this work, we give an exponential lower bound of $\exp(n/k^{O(k)})$ on the width of any read-$k$ oblivious ABP computing some explicit multilinear polynomial $f$ that is computed by a polynomial-size depth-3 circuit. We also study the polynomial identity testing (PIT) problem for this model and obtain a white-box subexponential-time PIT algorithm. The algorithm runs in time $2^{\tilde{O}(n^{1-1/2^{k-1}})}$ and needs white-box access only to know the order in which the variables appear in the ABP.
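    As background for the PIT problem itself (this is the generic randomized black-box test, not the paper's white-box algorithm), a Schwartz-Zippel style sketch: evaluate the candidate polynomial at random points from a large enough range and report "nonzero" as soon as some value is nonzero.

        # Randomized black-box polynomial identity testing in the Schwartz-Zippel style.
        # If poly is a nonzero polynomial of total degree d, a uniformly random point
        # from a range of size S gives a nonzero value with probability >= 1 - d/S.
        import random

        def probably_zero(poly, n_vars, degree, trials=20):
            range_size = 100 * degree + 1
            for _ in range(trials):
                point = tuple(random.randrange(range_size) for _ in range(n_vars))
                if poly(point) != 0:
                    return False          # witness found: definitely not identically zero
            return True                   # identically zero with high probability

        # (x+y)^2 - x^2 - 2xy - y^2 is identically zero; x*y - x is not.
        print(probably_zero(lambda p: (p[0] + p[1])**2 - p[0]**2 - 2*p[0]*p[1] - p[1]**2, 2, 2))
        print(probably_zero(lambda p: p[0] * p[1] - p[0], 2, 2))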

    Formulas vs. Circuits for Small Distance Connectivity

    We give the first super-polynomial separation in the power of bounded-depth boolean formulas vs. circuits. Specifically, we consider the problem Distance $k(n)$ Connectivity, which asks whether two specified nodes in a graph of size $n$ are connected by a path of length at most $k(n)$. This problem is solvable (by the recursive doubling technique) on {\bf circuits} of depth $O(\log k)$ and size $O(kn^3)$. In contrast, we show that solving this problem on {\bf formulas} of depth $\log n/(\log\log n)^{O(1)}$ requires size $n^{\Omega(\log k)}$ for all $k(n) \leq \log\log n$. As corollaries: (i) It follows that polynomial-size circuits for Distance $k(n)$ Connectivity require depth $\Omega(\log k)$ for all $k(n) \leq \log\log n$. This matches the upper bound from recursive doubling and improves a previous $\Omega(\log\log k)$ lower bound of Beame, Pitassi and Impagliazzo [BIP98]. (ii) We get a tight lower bound of $s^{\Omega(d)}$ on the size required to simulate size-$s$ depth-$d$ circuits by depth-$d$ formulas for all $s(n) = n^{O(1)}$ and $d(n) \leq \log\log\log n$. No lower bound better than $s^{\Omega(1)}$ was previously known for any $d(n) \nleq O(1)$. Our proof technique is centered on a new notion of pathset complexity, which roughly speaking measures the minimum cost of constructing a set of (partial) paths in a universe of size $n$ via the operations of union and relational join, subject to certain density constraints. Half of our proof shows that bounded-depth formulas solving Distance $k(n)$ Connectivity imply upper bounds on pathset complexity. The other half is a combinatorial lower bound on pathset complexity.
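    The circuit upper bound quoted above is the usual recursive-doubling (repeated-squaring) construction; a minimal sketch of the idea in ordinary Python (standing in for the Boolean circuit, with my own function names) is:

        # Recursive doubling for Distance-k Connectivity: with self-loops added,
        # reach_{<=a} composed with reach_{<=b} equals reach_{<=a+b}, so O(log k)
        # Boolean matrix products decide whether a path of length <= k exists.
        # Each product corresponds to an AND/OR layer of constant depth and size O(n^3).
        def bool_mult(A, B):
            n = len(A)
            return [[any(A[u][w] and B[w][v] for w in range(n)) for v in range(n)]
                    for u in range(n)]

        def distance_k_connected(adj, s, t, k):
            n = len(adj)
            step = [[adj[u][v] or u == v for v in range(n)] for u in range(n)]  # <= 1 step
            result = [[u == v for v in range(n)] for u in range(n)]             # <= 0 steps
            while k:
                if k & 1:
                    result = bool_mult(result, step)
                step = bool_mult(step, step)     # <= 2 steps, <= 4 steps, ...
                k >>= 1
            return result[s][t]

        # Path 0-1-2-3: node 0 reaches node 3 within 3 steps but not within 2.
        adj = [[False] * 4 for _ in range(4)]
        for u, v in [(0, 1), (1, 2), (2, 3)]:
            adj[u][v] = adj[v][u] = True
        print(distance_k_connected(adj, 0, 3, 3), distance_k_connected(adj, 0, 3, 2))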

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.

    Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data

    Constraint Programming (CP) has proved an effective paradigm for modelling and solving difficult combinatorial satisfaction and optimisation problems from disparate domains. Many such problems arising from the commercial world are permeated by data uncertainty. Existing CP approaches that accommodate uncertainty are less suited to uncertainty arising from incomplete or erroneous data, because they do not build reliable models and solutions guaranteed to address the user's genuine problem as she perceives it. Other fields such as reliable computation offer combinations of models and associated methods to handle these types of uncertain data, but lack an expressive framework characterising the resolution methodology independently of the model. We present a unifying framework that extends the CP formalism in both model and solutions, to tackle ill-defined combinatorial problems with incomplete or erroneous data. The certainty closure framework brings together modelling and solving methodologies from different fields into the CP paradigm to provide reliable and efficient approaches to uncertain constraint problems. We demonstrate the applicability of the framework on a case study in network diagnosis. We define resolution forms that give generic templates, with their associated operational semantics, for deriving practical methods that compute reliable solutions.
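    As a toy illustration of the reliable-computation flavour (my own example, not the certainty closure framework itself): an erroneous measurement is kept as an interval rather than a point value, and a constraint is classified by whether it holds for every realisation of the data or only for some.

        # Interval-style handling of an uncertain datum in a single constraint
        # (illustrative only; the paper's framework is far more general).
        from itertools import product

        def interval_add(a, b):
            return (a[0] + b[0], a[1] + b[1])

        def interval_mul(a, b):
            prods = [x * y for x, y in product(a, b)]
            return (min(prods), max(prods))

        def classify(lhs, bound):
            """Classify the constraint lhs <= bound under interval uncertainty."""
            lo, hi = lhs
            if hi <= bound:
                return "certainly satisfied"
            if lo <= bound:
                return "possibly satisfied"
            return "certainly violated"

        # Demand d is measured as 8 +/- 1; check the capacity constraint 2*d + 3 <= 20.
        d = (7, 9)
        lhs = interval_add(interval_mul((2, 2), d), (3, 3))   # = (17, 21)
        print(lhs, classify(lhs, 20))                         # "possibly satisfied"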
