
    Short random circuits define good quantum error correcting codes

    We study the encoding complexity for quantum error correcting codes with large rate and distance. We prove that random Clifford circuits with $O(n \log^2 n)$ gates can be used to encode $k$ qubits in $n$ qubits with a distance $d$ provided $\frac{k}{n} < 1 - \frac{d}{n} \log_2 3 - h(\frac{d}{n})$. In addition, we prove that such circuits typically have a depth of $O(\log^3 n)$.
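
    The rate-distance trade-off above is easy to evaluate numerically. Below is a minimal sketch (ours, not from the paper) that computes the right-hand side of the bound; it assumes $h(\cdot)$ is the binary entropy function, as is standard in bounds of this form, and the function names are illustrative.

        import math

        def binary_entropy(x: float) -> float:
            # h(x) = -x log2(x) - (1 - x) log2(1 - x), with h(0) = h(1) = 0
            if x <= 0.0 or x >= 1.0:
                return 0.0
            return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

        def max_rate(delta: float) -> float:
            # Right-hand side of the bound: k/n < 1 - (d/n) log2(3) - h(d/n)
            return 1.0 - delta * math.log2(3) - binary_entropy(delta)

        # Example: at relative distance d/n = 0.05, any rate k/n below ~0.63
        # satisfies the hypothesis of the theorem.
        print(max_rate(0.05))  # ~0.634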

    Algebraic and Combinatorial Methods in Computational Complexity

    At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a better-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The PCP characterization of NP and the Agrawal-Kayal-Saxena polynomial-time primality test are two prominent examples. Recently, there have been some works going in the opposite direction, giving alternative combinatorial proofs for results that were originally proved algebraically. These alternative proofs can yield important improvements because they are closer to the underlying problems and avoid the losses incurred in passing to the algebraic setting. A prominent example is Dinur's proof of the PCP Theorem via gap amplification, which yielded short PCPs with only a polylogarithmic length blowup (a goal that had been the focus of significant research effort up to that point). We see here (and in a number of recent works) an exciting interplay between algebraic and combinatorial techniques. This seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic and combinatorial methods in a variety of settings.

    Tight Bounds on Computing Error-Correcting Codes by Bounded-Depth Circuits with Arbitrary Gates

    We bound the minimum number $w$ of wires needed to compute any (asymptotically good) error-correcting code $C: \{0,1\}^{\Omega(n)} \to \{0,1\}^n$ with minimum distance $\Omega(n)$, using unbounded fan-in circuits of depth $d$ with arbitrary gates. Our main results are: (1) If $d = 2$ then $w = \Theta(n (\lg n / \lg \lg n)^2)$. (2) If $d = 3$ then $w = \Theta(n \lg \lg n)$. (3) If $d = 2k$ or $d = 2k + 1$ for some integer $k \ge 2$ then $w = \Theta(n \lambda_k(n))$, where $\lambda_1(n) = \lceil \lg n \rceil$, $\lambda_{i+1}(n) = \lambda_i^*(n)$, and the $*$ operation gives how many times one has to iterate the function $\lambda_i$ to reach a value at most 1 from the argument $n$. (4) If $d = \lg^* n$ then $w = O(n)$. For depth $d = 2$, our $\Omega(n (\lg n / \lg \lg n)^2)$ lower bound gives the largest known lower bound for computing any linear map. Using a result by Ishai, Kushilevitz, Ostrovsky, and Sahai [17], we also obtain similar bounds for computing pairwise-independent hash functions. Our lower bounds are based on a superconcentrator-like condition that the graphs of circuits computing good codes must satisfy. This condition is provably intermediate between superconcentrators and their weakenings considered before.
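
    To make the very slowly growing functions $\lambda_k$ concrete, here is a small sketch (ours; the names are illustrative) that implements the definition above directly.

        import math

        def star(f, n: int) -> int:
            # f*(n): how many times f must be iterated, starting from n,
            # to reach a value at most 1.
            count = 0
            while n > 1:
                n = f(n)
                count += 1
            return count

        def lam(k: int, n: int) -> int:
            # lambda_1(n) = ceil(lg n); lambda_{i+1}(n) = lambda_i*(n)
            if k == 1:
                return math.ceil(math.log2(n))
            return star(lambda m: lam(k - 1, m), n)

        # lambda_2 is lg*; each level grows far more slowly than the last:
        print(lam(1, 10**6))  # 20
        print(lam(2, 10**6))  # 5   (10^6 -> 20 -> 5 -> 3 -> 2 -> 1)
        print(lam(3, 10**6))  # 4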

    Scrambling speed of random quantum circuits

    Random transformations are typically good at "scrambling" information. Specifically, in the quantum setting, scrambling usually refers to the process of mapping most initial pure product states under a unitary transformation to states which are macroscopically entangled, in the sense of being close to completely mixed on most subsystems containing a fraction $f n$ of all $n$ particles, for some constant $f$. While the term scrambling is used in the context of the black hole information paradox, scrambling is related to problems involving decoupling in general, and to the question of how large isolated many-body systems reach local thermal equilibrium under their own unitary dynamics. Here, we study the speed at which various notions of scrambling/decoupling occur in a simplified but natural model of random two-particle interactions: random quantum circuits. For a circuit representing the dynamics generated by a local Hamiltonian, the depth of the circuit corresponds to time. Thus, we consider the depth of these circuits, and we are typically interested in what can be done in a depth that is sublinear or even logarithmic in the size of the system. We resolve an outstanding conjecture, raised in the context of the black hole information paradox, about the depth at which a typical quantum circuit generates an entanglement-assisted encoding against the erasure channel. In addition, we prove that typical quantum circuits of $\mathrm{poly}(\log n)$ depth satisfy a stronger notion of scrambling and can be used to encode $\alpha n$ qubits into $n$ qubits so that up to $\beta n$ errors can be corrected, for some constants $\alpha, \beta > 0$. (Superseded by http://arxiv.org/abs/1307.063)
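
    The model here is circuits of random two-particle gates, with depth playing the role of time. Purely as an illustration (our sketch, not the authors' construction), a common way to instantiate a "random quantum circuit of depth $d$" is a brickwork of Haar-random two-qubit gates on alternating neighbor pairs; the sketch below assumes SciPy's unitary_group sampler.

        from scipy.stats import unitary_group

        def random_brickwork(n_qubits: int, depth: int):
            # Alternating layers of Haar-random two-qubit gates: even layers
            # act on pairs (0,1),(2,3),...; odd layers on (1,2),(3,4),...
            layers = []
            for t in range(depth):
                offset = t % 2
                layers.append([((q, q + 1), unitary_group.rvs(4))
                               for q in range(offset, n_qubits - 1, 2)])
            return layers

        # n = 16 particles evolved for a logarithmic number of time steps:
        circuit = random_brickwork(16, depth=4)
        print([len(layer) for layer in circuit])  # gates per layer: [8, 7, 8, 7]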

    Limitations of Lower-Bound Methods for the Wire Complexity of Boolean Operators


    The complexity of joint computation

    Thesis (Ph.D.) by Andrew Donald Drucker, Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Includes bibliographical references (p. 253-266).

    Joint computation is the ubiquitous scenario in which a computer is presented with not one, but many computational tasks to perform. A fundamental question arises: when can we cleverly combine computations, to perform them with greater efficiency or reliability than by tackling them separately? This thesis investigates the power and, especially, the limits of efficient joint computation, in several computational models: query algorithms, circuits, and Turing machines. We significantly improve and extend past results on limits to efficient joint computation for multiple independent tasks; identify barriers to progress towards better circuit lower bounds for multiple-output operators; and begin an original line of inquiry into the complexity of joint computation. In more detail, we make contributions in the following areas.

    Improved direct product theorems for randomized query complexity: The "direct product problem" seeks to understand how the difficulty of computing a function on each of $k$ independent inputs scales with $k$. We prove the following direct product theorem (DPT) for query complexity: if every $T$-query algorithm has success probability at most $1 - \epsilon$ in computing the Boolean function $f$ on input distribution $\mu$, then for a sufficiently small constant $\alpha > 0$, the worst-case success probability of any $\alpha R_2(f) k$-query randomized algorithm for $f^k$ falls exponentially with $k$. The best previous statement of this type, due to Klauck, Špalek, and de Wolf, required a query bound of $O(\mathrm{bs}(f)\,k)$. Our proof technique involves defining and analyzing a collection of martingales associated with an algorithm attempting to solve $f^k$. Our method is quite general and yields a new XOR lemma and threshold DPT for the query model, as well as DPTs for the query complexity of learning tasks, search problems, and tasks involving interaction with dynamic entities. We also give a version of our DPT in which decision tree size is the resource of interest.
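
    For intuition (our illustration, not from the thesis): the trivial strategy runs the $T$-query algorithm independently on each of the $k$ copies, using $Tk$ queries and succeeding on all copies with probability $(1 - \epsilon)^k$, which already decays exponentially in $k$. The DPT says this decay cannot be escaped even with a total budget of $\alpha R_2(f) k$ queries.

        # Baseline for the direct product problem: independent runs on k copies.
        eps, T, k = 0.1, 50, 20
        total_queries = T * k                 # naive query budget
        success_all_copies = (1 - eps) ** k   # (1-eps)^k, exponentially small in k
        print(total_queries, round(success_all_copies, 4))  # 1000 0.1216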

    Joint complexity in the Decision Tree Model: We study the diversity of possible behaviors of the joint computational complexity of a collection $f_1, \dots, f_k$ of Boolean functions over a shared input. We focus on the deterministic decision tree model, with depth as the complexity measure; in this model, we prove a result to the effect that the "obvious" constraints on joint computational complexity are essentially the only ones. The proof uses an intriguing new type of cryptographic data structure called a "mystery bin," which we construct using a polynomial separation between deterministic and unambiguous query complexity shown by Savický. We also pose a conjecture in the communication model which, if proved, would extend our result to that model.

    Limitations of Lower-Bound Methods for the Wire Complexity of Boolean Operators: We study the circuit complexity of Boolean operators, i.e., collections of Boolean functions defined over a common input. Our focus is the well-studied model in which arbitrary Boolean functions are allowed as gates, and in which a circuit's complexity is measured by its depth and number of wires. We show sharp limitations of several existing lower-bound methods for this model. First, we study an information-theoretic lower-bound method due to Cherukhin, which gave the first improvement over the lower bounds provided by the well-known superconcentrator technique for constant depths. (The lower bounds are still barely superlinear, however.) Cherukhin's method was formalized by Jukna as a general lower-bound criterion for Boolean operators, the "Strong Multiscale Entropy" (SME) property. It seemed plausible that this property could imply significantly better lower bounds by an improved analysis. However, we show that this is not the case, by exhibiting an explicit operator with the SME property that is computable in constant depth with wire complexity essentially matching the Cherukhin-Jukna lower bound (to within a constant multiplicative factor, for depths $d = 2, 3$ and for even depths $d \ge 6$). Next, we show limitations of two simpler lower-bound criteria given by Jukna: the "entropy method" for general operators, and the "pairwise-distance method" for linear operators. We show that neither method gives super-linear lower bounds for depth 3. In the process, we obtain the first known polynomial separation between the depth-2 and depth-3 wire complexities of an explicit operator. We also continue the study (initiated by Jukna) of the complexity of "representing" a linear operator by bounded-depth circuits, a weaker notion than computing the operator.

    New limits to classical and quantum instance compression: Given an instance of a decision problem that is too difficult to solve outright, we may aim for the more limited goal of compressing that instance into a smaller, equivalent instance of the same or a different problem. As a representative problem, say we are given Boolean formulas $\psi_1, \dots, \psi_t$, each of length $n \ll t$, and we want to determine whether at least one $\psi_j$ is satisfiable. Can we efficiently reduce this "OR-SAT" question to an equivalent problem instance (of SAT or another problem) of size $\mathrm{poly}(n)$, independent of $t$? We call any such reduction a "strong compression" reduction for OR-SAT. This would amount to a major gain from compressing $\psi_1, \dots, \psi_t$ jointly, since we know of no way to reliably compress an individual SAT instance. Harnik and Naor (FOCS '06/SICOMP '10) and Bodlaender, Downey, Fellows, and Hermelin (ICALP '08/JCSS '09) showed that the infeasibility of strong compression for OR-SAT would also imply limits to instance compression schemes for a large number of other natural problems; this is significant because instance compression is a central technique in the design of so-called fixed-parameter tractable algorithms. Bodlaender et al. also showed that the infeasibility of strong compression for the analogous "AND-SAT" problem would establish limits to instance compression for another family of problems. Fortnow and Santhanam (STOC '08) showed that deterministic (or 1-sided error randomized) strong compression for OR-SAT is not possible unless NP $\subseteq$ coNP/poly; the case of AND-SAT remained mysterious. We give new and improved evidence against strong compression schemes for both OR-SAT and AND-SAT; our method applies to probabilistic compression schemes with 2-sided error. We also give versions of these results for an analogous task of quantum instance compression, in which a polynomial-time quantum reduction must output a quantum state that, in an appropriate sense, "preserves the answer" to the input instance.

    We give quantitatively similar evidence against strong compression for AND- and OR-SAT in this setting, albeit under less well-studied hypotheses about the relationship between NP and quantum complexity classes. To prove all of these results, we exploit the information bottleneck of an instance compression scheme, using a new method to "disguise" information being fed into a compressive mapping.
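
    To fix ideas about what "strong compression" demands (a toy sketch of ours, not the thesis's method): the obvious reduction simply ORs the $t$ formulas together, but its output has size roughly $t \cdot n$; a strong compression reduction would have to shrink this to $\mathrm{poly}(n)$, independent of $t$.

        def naive_or_sat_reduction(formulas: list[str]) -> str:
            # OR the t formulas into one SAT instance: the result is
            # satisfiable iff at least one input formula is. Output size
            # grows like t * n, so this is NOT a strong compression,
            # which requires size poly(n) independent of t.
            return "(" + ") | (".join(formulas) + ")"

        psis = ["x1 & ~x2", "y1 | y2", "~z1"]
        print(naive_or_sat_reduction(psis))  # (x1 & ~x2) | (y1 | y2) | (~z1)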