219 research outputs found

    Hardness Against Linear Branching Programs and More

    Algebraic Methods in Computational Complexity

    Computational Complexity is concerned with the resources required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a better-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas rely on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest techniques.
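    One theme above, the power of randomness in polynomial identity testing, can be made concrete with the classical Schwartz-Zippel approach: two polynomials of low total degree that differ as formal polynomials will almost surely disagree at a random point. The sketch below is only illustrative and is not taken from the seminar report; the black-box polynomials, the prime field size, and the trial count are assumptions.

```python
import random

def schwartz_zippel_equal(p, q, num_vars, degree_bound, trials=20, prime=10**9 + 7):
    """Randomized test of whether two black-box polynomials agree as formal polynomials.

    p, q: callables taking a list of field elements and returning a value
    (evaluated mod `prime`).  By the Schwartz-Zippel lemma, if p != q as
    polynomials of total degree <= degree_bound, a uniformly random point
    exposes the difference with probability >= 1 - degree_bound/prime per trial.
    """
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(num_vars)]
        if p(point) % prime != q(point) % prime:
            return False          # witness point found: the polynomials differ
    return True                   # equal with high probability

# Illustrative use: (x + y)^2 versus x^2 + 2xy + y^2 (identical polynomials).
p = lambda v: (v[0] + v[1]) ** 2
q = lambda v: v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2
print(schwartz_zippel_equal(p, q, num_vars=2, degree_bound=2))  # True
```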

    The Space Complexity of Sampling

    A Constant-Space Sequential Model of Computation for First-Order Logic

    We define and justify a natural sequential model of computation with a constant amount of read/write work space, despite unlimited (polynomial) access to read-only input and write-only output. The model is deterministic, uniform, and sequential. The constant work space is modeled by a finite number of destructively read boolean variables, assignable by formulas over the canonical boolean operations. We show that computation in this model is equivalent to expressibility in first-order logic, giving a duality between (read-once) constant-space serial algorithms and constant-time parallel algorithms.
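    As an illustration of the flavor of computation the model captures (the concrete property and variable names below are illustrative assumptions, not drawn from the paper), here is a read-once, constant-work-space pass that decides the first-order property "there exist positions i < j with x_i = 1 and x_j = 0" using only a constant number of boolean flags.

```python
def exists_one_before_zero(bits):
    """Read-once, constant-space check of the first-order property
    'there are positions i < j with bits[i] = 1 and bits[j] = 0'.

    Only two boolean work variables are used; the read-only input is
    scanned once, left to right, mirroring the read-once constant-space
    serial algorithms the paper relates to first-order expressibility.
    """
    seen_one = False      # boolean work variable
    accept = False        # boolean work variable
    for b in bits:        # read-only, read-once access to the input
        if seen_one and b == 0:
            accept = True
        if b == 1:
            seen_one = True
    return accept

print(exists_one_before_zero([0, 1, 1, 0]))  # True
print(exists_one_before_zero([0, 0, 1, 1]))  # False
```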

    Minimum distance of error correcting codes versus encoding complexity, symmetry, and pseudorandomness

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (leaves 207-214). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections.
    We study the minimum distance of binary error correcting codes from the following perspectives:
    * The problem of deriving bounds on the minimum distance of a code given constraints on the computational complexity of its encoder.
    * The minimum distance of linear codes that are symmetric in the sense of being invariant under the action of a group on the bits of the codewords.
    * The derandomization capabilities of probability measures on the Hamming cube based on binary linear codes with good distance properties, and their variations.
    Highlights of our results include:
    * A general theorem asserting that if the encoder uses linear time and sub-linear memory in the general binary branching program model, then the minimum distance of the code cannot grow linearly with the block length when the rate is nonvanishing.
    * New upper bounds on the minimum distance of various types of Turbo-like codes.
    * The first ensemble of asymptotically good Turbo-like codes. We prove that depth-three serially concatenated Turbo codes can be asymptotically good.
    * The first ensemble of asymptotically good codes that are ideals in the group algebra of a group. We argue that, for infinitely many block lengths, a random ideal in the group algebra of the dihedral group is an asymptotically good rate-half code with high probability.
    * An explicit rate-half code whose codewords are in one-to-one correspondence with special hyperelliptic curves over a finite field of prime order, where the number of zeros of a codeword corresponds to the number of rational points.
    * A sharp O(k^{-1/2}) upper bound on the probability that a random binary string generated according to a k-wise independent probability measure has any given weight.
    * An assertion that any sufficiently log-wise independent probability measure looks random to all polynomially small read-once DNF formulas.
    * An elaborate study of the problem of derandomizability of AC₀ by any sufficiently polylog-wise independent probability measure.
    * An elaborate study of the problem of approximability of high-degree parity functions on binary linear codes by low-degree polynomials with coefficients in fields of odd characteristic.
    By Louay M.J. Bazzi. Ph.D.
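    The central quantity above, the minimum distance of a binary linear code, equals the smallest Hamming weight of a nonzero codeword. The brute-force sketch below only makes that definition concrete; the [7,4] Hamming-code generator matrix used as an example is an illustrative assumption and does not come from the thesis.

```python
from itertools import product

def min_distance(G):
    """Minimum distance of the binary linear code generated by the rows of G.

    For a linear code this equals the minimum Hamming weight over all
    nonzero codewords, so we enumerate all 2^k - 1 nonzero message vectors.
    Exponential in k; intended only to make the definition concrete.
    """
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero message (zero codeword)
        codeword = [sum(m * G[i][j] for i, m in enumerate(msg)) % 2 for j in range(n)]
        best = min(best, sum(codeword))
    return best

# Generator matrix of a [7,4] Hamming code (illustrative example); distance 3.
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]
print(min_distance(G))  # 3
```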

    Restricted branching programs and hardware verification

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995. Includes bibliographical references (p. 71-77). By Stephen John Ponzio. Ph.D.

    Size-Treewidth Tradeoffs for Circuits Computing the Element Distinctness Function

    In this work we study the relationship between size and treewidth of circuits computing variants of the element distinctness function. First, we show that for each n, any circuit of treewidth t computing the element distinctness function delta_n:{0,1}^n -> {0,1} must have size at least Omega((n^2)/(2^{O(t)}*log(n))). This result provides a non-trivial generalization of a super-linear lower bound for the size of Boolean formulas (treewidth 1) due to Neciporuk. Subsequently, we turn our attention to read-once circuits, which are circuits where each variable labels at most one input vertex. For each n, we show that any read-once circuit of treewidth t and size s computing a variant tau_n:{0,1}^n -> {0,1} of the element distinctness function must satisfy the inequality t * log(s) >= Omega(n/log(n)). Using this inequality in conjunction with known results in structural graph theory, we show that for each fixed graph H, read-once circuits computing tau_n which exclude H as a minor must have size at least Omega(n^2/log^{4}(n)). For certain well-studied functions, such as the triangle-freeness function, this last lower bound can be improved to Omega(n^2/log^2(n)).
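    For concreteness, one common way to define the element distinctness function is to split the n input bits into equal-width blocks, read each block as a binary number, and accept iff the numbers are pairwise distinct. The sketch below uses a block width of 2*ceil(log2 n), which is a standard convention and may differ from the paper's exact encoding.

```python
from math import ceil, log2

def element_distinctness(bits):
    """One common form of the element distinctness function delta_n.

    The n input bits are split into consecutive blocks of width
    2*ceil(log2 n) (leftover bits ignored), each block is read as a
    binary number, and the output is 1 iff all numbers are distinct.
    The exact encoding is a convention; the paper's may differ slightly.
    """
    n = len(bits)
    w = 2 * ceil(log2(n)) if n > 1 else 1
    blocks = [bits[i:i + w] for i in range(0, n - w + 1, w)]
    numbers = [int("".join(map(str, b)), 2) for b in blocks]
    return 1 if len(numbers) == len(set(numbers)) else 0

# Two equal 8-bit blocks -> not distinct; flipping one bit makes them distinct.
print(element_distinctness([0] * 16))        # 0
print(element_distinctness([0] * 15 + [1]))  # 1
```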

    On Lower Bounds for Parity Branching Programs

    This thesis is concerned with the complexity of parity branching programs. Superpolynomial lower bounds are proved for several variants, namely for well-structured graph-driven parity branching programs, general graph-driven parity branching programs, and sums of graph-driven parity branching programs.
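    To make the model concrete: a parity branching program accepts an input iff the number of source-to-accept paths consistent with that input is odd. The evaluator below follows one common formalization; the node representation and the small XOR example are illustrative assumptions, not taken from the thesis.

```python
from functools import lru_cache

def eval_parity_bp(nodes, source, accept, x):
    """Evaluate a parity branching program on the input bits x.

    nodes: dict mapping node id -> (var_index, succs0, succs1), where
    succs0/succs1 are the lists of successors taken when x[var_index]
    is 0 or 1; sinks have no entry.  The program accepts (returns 1)
    iff the number of consistent source->accept paths is odd, which is
    the defining acceptance mode of parity branching programs.  Path
    counts are propagated modulo 2 with memoized depth-first search.
    """
    @lru_cache(maxsize=None)
    def paths_mod2(v):
        if v == accept:
            return 1
        if v not in nodes:          # a rejecting sink
            return 0
        var, succs0, succs1 = nodes[v]
        succs = succs1 if x[var] else succs0
        return sum(paths_mod2(u) for u in succs) % 2

    return paths_mod2(source)

# Illustrative program computing x[0] XOR x[1] (here each node has a single
# successor per value, but the evaluator also handles the general parity case).
nodes = {
    "a":  (0, ["b0"], ["b1"]),
    "b0": (1, ["rej"], ["acc"]),
    "b1": (1, ["acc"], ["rej"]),
}
print(eval_parity_bp(nodes, "a", "acc", [1, 0]))  # 1
print(eval_parity_bp(nodes, "a", "acc", [1, 1]))  # 0
```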