4 research outputs found

    Arithmetic circuits: the chasm at depth four gets wider

    In their paper on the "chasm at depth four", Agrawal and Vinay have shown that polynomials in m variables of degree O(m) which admit arithmetic circuits of size 2^o(m) also admit arithmetic circuits of depth four and size 2^o(m). This theorem shows that for problems such as arithmetic circuit lower bounds or black-box derandomization of identity testing, the case of depth-four circuits is in a certain sense the general case. In this paper we show that smaller depth-four circuits can be obtained if we start from polynomial-size arithmetic circuits. For instance, we show that if the permanent of n x n matrices has circuits of size polynomial in n, then it also has depth-four circuits of size n^O(sqrt(n)*log(n)). Our depth-four circuits use integer constants of polynomial size. These results have potential applications to lower bounds and deterministic identity testing, in particular for sums of products of sparse univariate polynomials. We also give an application to Boolean circuit complexity, and a simple (but suboptimal) reduction to polylogarithmic depth for arithmetic circuits of polynomial size and polynomially bounded degree.
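    To make explicit why the depth-four bound for the permanent is still subexponential (a routine unfolding of the exponent, not an additional claim from the paper):

        n^{O(\sqrt{n}\,\log n)} \;=\; 2^{O(\sqrt{n}\,\log^2 n)} \;=\; 2^{o(n)}

    Consequently, a depth-four lower bound of n^{\omega(\sqrt{n}\,\log n)} for the permanent would already rule out polynomial-size arithmetic circuits of arbitrary depth.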

    Typically-Correct Derandomization for Small Time and Space

    Suppose a language L can be decided by a bounded-error randomized algorithm that runs in space S and time n * poly(S). We give a randomized algorithm for L that still runs in space O(S) and time n * poly(S) but uses only O(S) random bits; our algorithm has a low failure probability on all but a negligible fraction of inputs of each length. As an immediate corollary, there is a deterministic algorithm for L that runs in space O(S) and succeeds on all but a negligible fraction of inputs of each length. We also give several other complexity-theoretic applications of our technique.
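    The deterministic corollary follows from a standard argument: once the algorithm uses only O(S) random bits, one can enumerate every coin sequence and take a majority vote, paying a 2^O(S) factor in time but only O(S) extra space. A minimal sketch of that generic argument (the randomized_decider interface is a hypothetical stand-in, not the paper's construction):

        from itertools import product

        def derandomize_by_enumeration(randomized_decider, x, num_random_bits):
            """Deterministically simulate a randomized decider that uses only
            `num_random_bits` coins: run it on every coin sequence and take a
            majority vote. `randomized_decider(x, coins)` is a hypothetical
            stand-in for a bounded-error algorithm; the simulation adds only
            O(num_random_bits) space (the current seed and a counter) at the
            cost of a 2**num_random_bits factor in running time."""
            accepting = 0
            total = 2 ** num_random_bits
            for coins in product((0, 1), repeat=num_random_bits):
                if randomized_decider(x, coins):
                    accepting += 1
            return 2 * accepting > total  # majority vote over all coin sequences

    Reducing the number of random bits to O(S), while preserving the time and space bounds on all but a negligible fraction of inputs, is what makes this enumeration feasible in space O(S).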

    Randomness in completeness and space-bounded computations

    The study of computational complexity investigates the role of various computational resources, such as processing time, memory requirements, nondeterminism, randomness, and nonuniformity, in solving different types of computational problems. In this dissertation, we study the role of randomness in two fundamental areas of computational complexity: NP-completeness and space-bounded computations.

    The concept of completeness plays an important role in defining the notion of 'hard' problems in computer science. Intuitively, an NP-complete problem captures the difficulty of solving any problem in NP. Polynomial-time reductions are at the heart of defining completeness. However, there is no single notion of reduction; researchers have identified various polynomial-time reductions such as many-one reduction, truth-table reduction, and Turing reduction. Each such notion of reduction induces a notion of completeness. Finding the relationships among these NP-completeness notions is a significant open problem. Our first result is a separation of two such polynomial-time completeness notions for NP, namely Turing completeness and many-one completeness. This is the first result that separates completeness notions for NP under a worst-case hardness hypothesis.

    Our next result involves a conjecture by Even, Selman, and Yacobi [ESY84, SY82] which states that there do not exist disjoint NP-pairs all of whose separators are NP-hard via Turing reductions. If true, this conjecture implies that a certain kind of probabilistic public-key cryptosystem is not secure. The conjecture has been open for 30 years. We provide evidence in support of a variant of this conjecture: we show that if certain secure one-way functions exist, then the ESY conjecture for bounded-truth-table reductions holds.

    Next we turn our attention to space-bounded computations. We investigate probabilistic space-bounded machines that are allowed to access their random bits multiple times. Our main conceptual contribution here is to establish an interesting connection between the derandomization of such probabilistic space-bounded machines and the derandomization of probabilistic time-bounded machines. In particular, we show that if we can derandomize a multipass machine, even one with a small number of passes over its random tape and only O(log^2 n) random bits, to deterministic polynomial time, then BPTIME(n) ⊆ DTIME(2^{o(n)}). Note that if we restrict the number of random bits to O(log n), then we can trivially derandomize the machine to polynomial time. Furthermore, it can be shown that if we restrict the number of passes to O(1), we can still derandomize the machine to polynomial time. Thus our result implies that any extension beyond these trivialities would lead to an unknown derandomization of BPTIME(n).

    Our final contribution concerns the derandomization of probabilistic time-bounded machines under branching program lower bounds. The standard method of derandomizing time-bounded probabilistic machines depends on various circuit lower bounds, which are notoriously hard to prove. We show that the derandomization of low-degree polynomial identity testing, a well-known problem in co-RP, can be obtained under certain branching program lower bounds. Note that branching programs are considered a weaker model of computation than Boolean circuits.
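    For context, low-degree polynomial identity testing has a classic one-sided-error randomized algorithm (the Schwartz-Zippel test), which is what places it in co-RP. A minimal sketch of that standard test follows; it is not the dissertation's derandomization:

        import random

        def randomized_pit(poly, num_vars, degree, trials=20):
            """Schwartz-Zippel identity test for a black-box multivariate
            polynomial of total degree <= `degree` (here: any callable on
            integer points). If the polynomial is identically zero we always
            answer True; otherwise a random point drawn from a set S with
            |S| > 2*degree is a nonzero witness with probability > 1/2 per
            trial, so the error probability is at most 2**-trials."""
            S = range(2 * degree + 1)
            for _ in range(trials):
                point = [random.choice(S) for _ in range(num_vars)]
                if poly(*point) != 0:
                    return False  # certainly not the zero polynomial
            return True  # probably the zero polynomial

        # Example: (x + y)^2 - (x^2 + 2xy + y^2) is identically zero.
        assert randomized_pit(lambda x, y: (x + y)**2 - (x**2 + 2*x*y + y**2),
                              num_vars=2, degree=2)

    Derandomizing this test means finding the evaluation points deterministically; the dissertation shows this can be done under suitable branching program lower bounds.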

    Delegating computation reliably: paradigms and constructions

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 285-297).

    In an emerging computing paradigm, computational capabilities, from processing power to storage capacity, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for weak devices to delegate expensive tasks, such as storing a large file or running a complex computation, to more powerful entities (say servers) connected to the same network. While the delegation approach seems promising, it raises an immediate concern: when and how can a weak device verify that a computational task was completed correctly? This practically motivated question touches on foundational questions in cryptography and complexity theory.

    The focus of this thesis is verifying the correctness of delegated computations. We construct efficient protocols (interactive proofs) for delegating computational tasks. In particular, we present:

    * A protocol for delegating any computation, where the work needed to verify the correctness of the output is linear in the input length, polynomial in the computation's depth, and only poly-logarithmic in the computation's size. The space needed for verification is only logarithmic in the computation size. Thus, for any computation of polynomial size and poly-logarithmic depth (the rich complexity class NC), the work required to verify the correctness of the output is only quasi-linear in the input length. The work required to prove the output's correctness is only polynomial in the original computation's size. This protocol also has applications to constructing one-round arguments for delegating computation and efficient zero-knowledge proofs.

    * A general transformation, reducing the parallel running time (or computation depth) of the verifier in protocols for delegating computation (interactive proofs) to be constant.

    Next, we explore the power of the delegation paradigm in settings where mutually distrustful parties interact. In particular, we consider the settings of checking the correctness of computer programs and of designing error-correcting codes. We show:

    * A new methodology for checking the correctness of programs (program checking), in which work is delegated from the program checker to the untrusted program being checked. Using this methodology we obtain program checkers for an entire complexity class (the class of NC¹-computations that are WNC-hard), and for a slew of specific functions such as matrix multiplication (see the sketch after this abstract), inversion, determinant, and rank, as well as graph functions such as connectivity, perfect matching, and bounded-degree graph isomorphism.

    * A methodology for designing error-correcting codes with efficient decoding procedures, in which work is delegated from the decoder to the encoder. We use this methodology to obtain constant-depth (AC⁰) locally decodable and locally list-decodable codes. We also show that the parameters of these codes are optimal (up to polynomial factors) for constant-depth decoding.

    by Guy N. Rothblum. Ph.D.
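    As a concrete illustration of the program-checking idea mentioned above, the classic randomized check for matrix multiplication (Freivalds' test) verifies a claimed product far more cheaply than recomputing it. This textbook checker is shown only to convey the flavor of "verification is cheaper than computation"; it is not the thesis's delegated checker:

        import random

        def freivalds_check(A, B, C, trials=20):
            """Randomized check that C == A * B using only matrix-vector
            products, so each trial costs O(n^2) instead of the cost of
            recomputing the n x n product. A wrong C is caught in a single
            trial with probability >= 1/2, so the error probability after
            `trials` independent trials is at most 2**-trials."""
            n = len(A)
            for _ in range(trials):
                r = [random.randint(0, 1) for _ in range(n)]
                Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
                ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
                Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
                if ABr != Cr:
                    return False  # the claimed product is wrong
            return True  # the claimed product is (probably) correct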