88 research outputs found
Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct ``hard-core predicates'' for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
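The textbook example of a locally decodable code is the Hadamard code (an illustration of the sub-linear-time decoding idea, not necessarily one of the constructions treated in the survey): the codeword lists the inner product of the message with every binary vector, and any message bit can be recovered from just two codeword queries. A minimal sketch:

```python
import random

def hadamard_encode(x):
    """Hadamard code: the codeword lists <x, a> mod 2 for every a in {0,1}^k."""
    k = len(x)
    return [sum(x[j] * ((a >> j) & 1) for j in range(k)) % 2 for a in range(2 ** k)]

def local_decode(codeword, i, k, trials=25):
    """Recover message bit i with only 2 queries per trial.

    Since <x, a> xor <x, a xor e_i> = x_i, each trial is correct whenever both
    queried positions are uncorrupted; a majority vote over trials tolerates a
    small constant fraction of corrupted positions with high probability.
    """
    votes = 0
    for _ in range(trials):
        a = random.randrange(2 ** k)
        votes += codeword[a] ^ codeword[a ^ (1 << i)]
    return 1 if 2 * votes > trials else 0

x = [1, 0, 1, 1]
c = hadamard_encode(x)
decoded = [local_decode(c, i, len(x)) for i in range(len(x))]  # recovers x
```

The price for locality is rate: a k-bit message becomes a 2^k-bit codeword, which is why constructing locally decodable codes with better rate is a central question.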
PCD
Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 87-95). By Alessandro Chiesa.

The security of systems can often be expressed as ensuring that some property is maintained at every step of a distributed computation conducted by untrusted parties. Special cases include integrity of programs running on untrusted platforms, various forms of confidentiality and side-channel resilience, and domain-specific invariants. We propose a new approach, proof-carrying data (PCD), which sidesteps the threat of faults and leakage by reasoning about properties of a computation's output data, regardless of the process that produced it. In PCD, the system designer prescribes the desired properties of a computation's outputs. Corresponding proofs are attached to every message flowing through the system and are mutually verified by the system's components. Each such proof attests that the message's data and all of its history comply with the prescribed properties. We construct a general protocol compiler that generates, propagates, and verifies such proofs of compliance, while preserving the dynamics and efficiency of the original computation. Our main technical tool is the cryptographic construction of short non-interactive arguments (computationally sound proofs) for statements whose truth depends on "hearsay evidence": previous arguments about other statements. To this end, we attain a particularly strong proof-of-knowledge property. We realize the above, under standard cryptographic assumptions, in a model where the prover has black-box access to some simple functionality (essentially, a signature card).
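The inductive "each proof attests to the whole history" structure can be sketched in a drastically simplified form, with the succinct non-interactive arguments replaced by MACs issued by a single trusted "signature card" functionality (the key, function names, and the increment-by-one compliance predicate are all hypothetical stand-ins, not the thesis's construction):

```python
import hmac
import hashlib

CARD_KEY = b"signature-card-secret"  # held only by the trusted functionality

def compliant(value, prev_value):
    """Hypothetical compliance predicate: each hop may only increment by 1."""
    return prev_value is None or value == prev_value + 1

def mac(value):
    return hmac.new(CARD_KEY, str(value).encode(), hashlib.sha256).digest()

def attest(value, prev_value=None, prev_proof=None):
    """Trusted 'signature card': certify `value` only if its entire history
    complied, guaranteed inductively by first checking the previous proof."""
    if prev_value is not None and not hmac.compare_digest(prev_proof, mac(prev_value)):
        raise ValueError("invalid proof for incoming message")
    if not compliant(value, prev_value):
        raise ValueError("compliance predicate violated")
    return mac(value)

def verify(value, proof):
    return hmac.compare_digest(proof, mac(value))

p0 = attest(0)          # source node
p1 = attest(1, 0, p0)   # next hop: proof certifies the full history
```

Unlike real PCD, verification here needs the secret key and the "proofs" are not succinct arguments; the sketch only mirrors the message-flow and inductive-verification shape of the protocol.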
State of the Art Report : Verified Computation
This report describes the state of the art in verifiable computation. The problem being solved is the following.

The Verifiable Computation Problem. Suppose we have two computing agents: a verifier and a prover. The verifier wants the prover to perform a computation, and sends the prover a description of it. Once the prover has completed the task, it returns the output to the verifier together with a proof. The verifier can use this proof to check whether the prover computed the output correctly. The check is not required to verify the algorithm used in the computation; rather, it checks that the prover computed the output using the computation specified by the verifier. The effort required for the check should be much less than that required to perform the computation.

This state-of-the-art report surveys 128 papers from the literature, comprising more than 4,000 pages. Other papers and books were surveyed but omitted. The papers surveyed were overwhelmingly mathematical. We have summarised the major concepts that form the foundations of verifiable computation. The report contains two main sections. The first, larger section covers the theoretical foundations of probabilistically checkable and zero-knowledge proofs. The second describes current practice in verifiable computation. Two further reports will cover (i) military applications of verifiable computation and (ii) a collection of technical demonstrators. The first is intended for readers who want to know what applications the current state of the art enables; the second is for those who want to see practical tools and conduct experiments themselves.
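A concrete instance of a check that is far cheaper than the computation it verifies is Freivalds' classic randomized test for matrix products (a standard example of the phenomenon, not necessarily drawn from the surveyed papers): recomputing A*B costs O(n^3), while each check trial costs only O(n^2).

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify C == A*B over the integers in O(trials * n^2).

    For a wrong C, a random 0/1 vector r satisfies A(Br) == Cr with
    probability at most 1/2, so all trials pass with probability <= 2**-trials.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # C is certainly wrong
    return True  # C is correct with overwhelming probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert freivalds_check(A, B, [[19, 22], [43, 50]])  # the true product
```

Note the asymmetry that defines the field: the verifier never learns *how* the prover computed C, only that the claimed output is (almost certainly) consistent with the requested computation.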
Quantum Multi-Prover Interactive Proof Systems with Limited Prior Entanglement
This paper gives the first formal treatment of a quantum analogue of
multi-prover interactive proof systems. It is proved that the class of
languages having quantum multi-prover interactive proof systems is necessarily
contained in NEXP, under the assumption that provers are allowed to share at
most polynomially many prior-entangled qubits. This implies that, in
particular, if provers do not share any prior entanglement with each other, the
class of languages having quantum multi-prover interactive proof systems is
equal to NEXP. Related to these, it is shown that, in the case a prover does
not have his private qubits, the class of languages having quantum
single-prover interactive proof systems is also equal to NEXP. (Journal version in the Journal of Computer and System Sciences.)
Guidable Local Hamiltonian Problems with Implications to Heuristic Ansätze State Preparation and the Quantum PCP Conjecture
We study 'Merlinized' versions of the recently defined Guided Local
Hamiltonian problem, which we call 'Guidable Local Hamiltonian' problems.
Unlike their guided counterparts, these problems do not have a guiding state
provided as a part of the input, but merely come with the promise that one
exists. We consider in particular two classes of guiding states: those that can
be prepared efficiently by a quantum circuit; and those belonging to a class of
quantum states we call classically evaluatable, for which it is possible to
efficiently compute expectation values of local observables classically. We
show that guidable local Hamiltonian problems for both classes of guiding
states are QCMA-complete in the inverse-polynomial precision
setting, but lie within NP (or NqP) in the constant
precision regime when the guiding state is classically evaluatable.
Our completeness results show that, from a complexity-theoretic perspective,
classical Ansätze selected by classical heuristics are just as powerful as
quantum Ansätze prepared by quantum heuristics, as long as one has access to
quantum phase estimation. In relation to the quantum PCP conjecture, we (i)
define a complexity class capturing quantum-classical probabilistically
checkable proof systems and show that it is contained in
BQP^NP[1] for constant proof queries; (ii) give a no-go
result on 'dequantizing' the known quantum reduction which maps a
QPCP-verification circuit to a local Hamiltonian with constant
promise gap; (iii) give several no-go results for the existence of quantum gap
amplification procedures that preserve certain ground state properties; and
(iv) propose two conjectures that can be viewed as stronger versions of the
NLTS theorem. Finally, we show that many of our results can be directly
modified to obtain similar results for the class MA.
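The simplest case of a "classically evaluatable" guiding state is a product state (a hedged illustration of the idea only; the class defined in the abstract is more general): expectation values of tensor-product local observables factorize qubit by qubit, so they can be computed classically in time linear in the number of qubits, with no 2^n-dimensional vectors.

```python
import math

# Pauli matrices as 2x2 nested lists.
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def expval_1q(state, obs):
    """<psi|O|psi> for a single-qubit state (alpha, beta) and 2x2 observable."""
    a, b = state
    oa = obs[0][0] * a + obs[0][1] * b
    ob = obs[1][0] * a + obs[1][1] * b
    return (a.conjugate() * oa + b.conjugate() * ob).real

def expval_product(states, observables):
    """On a product state, <O_1 x ... x O_n> = prod_q <O_q>_q."""
    result = 1.0
    for s, o in zip(states, observables):
        result *= expval_1q(s, o)
    return result

zero = (1 + 0j, 0j)
one = (0j, 1 + 0j)
plus = (1 / math.sqrt(2) + 0j, 1 / math.sqrt(2) + 0j)
# <01| Z x Z |01> = (+1) * (-1) = -1
energy = expval_product([zero, one], [Z, Z])
```

Summing such terms over the (polynomially many) local terms of a Hamiltonian gives the guiding state's energy estimate classically, which is what allows an NP verifier to exploit it.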
Good approximate quantum LDPC codes from spacetime circuit Hamiltonians
We study approximate quantum low-density parity-check (QLDPC) codes, which are approximate quantum error-correcting codes specified as the ground space of a frustration-free local Hamiltonian, whose terms do not necessarily commute.
Such codes generalize stabilizer QLDPC codes, which are exact quantum error-correcting codes with sparse, low-weight stabilizer generators (i.e. each stabilizer generator acts on a few qubits, and each qubit participates in a few stabilizer generators). Our investigation is motivated by an important question in Hamiltonian complexity and quantum coding theory: do stabilizer QLDPC codes with constant rate, linear distance, and constant-weight stabilizers exist?
We show that obtaining such optimal scaling of parameters (modulo polylogarithmic corrections) is possible if we go beyond stabilizer codes: we prove the existence of a family of [[N,k,d,ε]] approximate QLDPC codes that encode k = Ω(N) logical qubits into N physical qubits with distance d = Ω(N) and approximation infidelity ε = 1/polylog(N). The code space is stabilized by a set of 10-local noncommuting projectors, with each physical qubit participating in only polylog(N) projectors. We prove the existence of an efficient encoding map and show that the spectral gap of the code Hamiltonian scales as Ω(N^(−3.09)). We also show that arbitrary Pauli errors can be locally detected by circuits of polylogarithmic depth.
Our family of approximate QLDPC codes is based on applying a recent connection between circuit Hamiltonians and approximate quantum codes (Nirkhe, et al., ICALP 2018) to a result showing that random Clifford circuits of polylogarithmic depth yield asymptotically good quantum codes (Brown and Fawzi, ISIT 2013). Then, in order to obtain a code with sparse checks and strong detection of local errors, we use a spacetime circuit-to-Hamiltonian construction in order to take advantage of the parallelism of the Brown-Fawzi circuits. Because of this, we call our codes spacetime codes.
The analysis of the spectral gap of the code Hamiltonian is the main technical contribution of this work. We show that for any depth D quantum circuit on n qubits there is an associated spacetime circuit-to-Hamiltonian construction with spectral gap Ω(n^(−3.09)D⁻² log⁻⁶(n)). To lower bound this gap we use a Markov chain decomposition method to divide the state space of partially completed circuit configurations into overlapping subsets corresponding to uniform circuit segments of depth log n, which are based on bitonic sorting circuits. We use the combinatorial properties of these circuit configurations to show rapid mixing between the subsets, and within the subsets we develop a novel isomorphism between the local update Markov chain on bitonic circuit configurations and the edge-flip Markov chain on equal-area dyadic tilings, whose mixing time was recently shown to be polynomial (Cannon, Levin, and Stauffer, RANDOM 2017). Previous lower bounds on the spectral gap of spacetime circuit Hamiltonians have all been based on a connection to exactly solvable quantum spin chains and applied only to 1+1 dimensional nearest-neighbor quantum circuits with at least linear depth.
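The flavor of sparse, low-weight checks can be seen in the smallest stabilizer example, the 3-qubit repetition code (a toy stabilizer code used here purely for illustration, far simpler than the spacetime construction above): each Z_iZ_j check touches only 2 qubits, each qubit sits in at most 2 checks, and the syndrome locates any single bit-flip.

```python
# Stabilizer checks Z_i Z_j of the 3-qubit repetition code: weight-2 checks,
# and each qubit participates in at most 2 checks (the LDPC property).
CHECKS = [(0, 1), (1, 2)]

def syndrome(x_error):
    """A Z_i Z_j check fires iff an odd number of its qubits suffered an X error."""
    return tuple((x_error[i] + x_error[j]) % 2 for i, j in CHECKS)

# Each single-qubit bit flip yields a distinct syndrome, so it can be located.
LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def locate_error(x_error):
    return LOOKUP[syndrome(x_error)]
```

The open question the paper addresses is whether this check sparsity can coexist with constant rate and linear distance; the repetition code, by contrast, encodes only one logical qubit and has distance 3.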
Making Quantum Local Verifiers Simulable with Potential Applications to Zero-Knowledge
Recently Chen and Movassagh proposed the quantum Merkle tree, which is a
quantum analogue of the well-known classical Merkle tree. It gives a succinct
verification protocol for quantum state commitment. Although they only proved
security against semi-honest provers, they conjectured its general security.
Using the proposed quantum Merkle tree, they gave a quantum analogue of
Kilian's succinct argument for NP, which is based on probabilistically
checkable proofs (PCPs). A nice feature of Kilian's argument is that it can be
extended to a zero-knowledge succinct argument for NP, if the underlying PCP is
zero-knowledge. Hence, a natural question is whether one can also make the
quantum succinct argument by Chen and Movassagh zero-knowledge as well.
This work makes progress on this problem. We generalize the recent result of
Broadbent and Grilo to show that any local quantum verifier can be made
simulable with a minor reduction in completeness and soundness. Roughly
speaking, a local quantum verifier is simulable if in the yes case, the local
views of the verifier can be computed without knowing the actual quantum proof;
it can be seen as the quantum analogue of the classical zero-knowledge PCPs.
Hence we conjecture that applying the proposed succinct quantum argument of
Chen and Movassagh to a simulable local verifier is indeed zero-knowledge.
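The classical Merkle tree that the quantum construction is modeled on can be sketched in a few lines (a minimal SHA-256 sketch of the standard classical primitive, not the quantum Merkle tree itself): the root is a short commitment to all leaves, and any single leaf can be opened with a logarithmic-size authentication path, which is what makes Kilian-style arguments succinct.

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Commit to a list of byte-string leaves (length a power of two)."""
    level = [H(x) for x in leaves]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_open(leaves, idx):
    """Authentication path: the sibling hash at every level, leaf to root."""
    level = [H(x) for x in leaves]
    path = []
    while len(level) > 1:
        path.append(level[idx ^ 1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def merkle_verify(root, idx, leaf, path):
    """Recompute the root from one leaf and its path; O(log n) hashes."""
    h = H(leaf)
    for sibling in path:
        h = H(h + sibling) if idx % 2 == 0 else H(sibling + h)
        idx //= 2
    return h == root

leaves = [b"a", b"b", b"c", b"d"]
root = merkle_root(leaves)
path = merkle_open(leaves, 2)  # only log2(4) = 2 sibling hashes
assert merkle_verify(root, 2, b"c", path)
```

In Kilian's argument the prover Merkle-commits to a PCP string and the verifier asks to open only the few positions its PCP queries touch; the quantum analogue must commit to quantum proof states instead, which is where the security subtleties discussed above arise.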
Verifying Quantitative Reliability of Programs That Execute on Unreliable Hardware
Emerging high-performance architectures are anticipated to contain unreliable components that may exhibit soft errors, which silently corrupt the results of computations. Full detection and recovery from soft errors is challenging, expensive, and, for some applications, unnecessary. For example, approximate computing applications (such as multimedia processing, machine learning, and big data analytics) can often naturally tolerate soft errors. In this paper we present Rely, a programming language that enables developers to reason about the quantitative reliability of an application -- namely, the probability that it produces the correct result when executed on unreliable hardware. Rely allows developers to specify the reliability requirements for each value that a function produces. We present a static quantitative reliability analysis that verifies quantitative requirements on the reliability of an application, enabling a developer to perform sound and verified reliability engineering. The analysis takes a Rely program with a reliability specification and a hardware specification that characterizes the reliability of the underlying hardware components, and verifies that the program satisfies its reliability specification when executed on the underlying unreliable hardware platform. We demonstrate the application of quantitative reliability analysis on six computations implemented in Rely.

This research was supported in part by the National Science Foundation (Grants CCF-0905244, CCF-1036241, CCF-1138967, and IIS-0835652), the United States Department of Energy (Grant DE-SC0008923), and DARPA (Grants FA8650-11-C-7192, FA8750-12-2-0110).
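The arithmetic at the core of such an analysis can be sketched as follows (a deliberately simplified model, not Rely's actual semantics; the hardware numbers and operation classes are hypothetical): for a straight-line computation, the probability that every operation executes correctly is lower-bounded by the product of per-operation hardware reliabilities, and the analysis compares that bound against the developer's specification.

```python
# Hypothetical hardware specification: probability that each class of
# operation executes correctly on the unreliable platform.
HARDWARE = {"arith": 1 - 1e-7, "load": 1 - 1e-7, "store": 1 - 1e-7}

def reliability_bound(op_counts):
    """Lower bound on the probability that a straight-line computation is
    fully correct: the product of the per-operation reliabilities."""
    bound = 1.0
    for op, count in op_counts.items():
        bound *= HARDWARE[op] ** count
    return bound

def satisfies(op_counts, required):
    """Check a spec in the spirit of 'reliability at least 0.99', assuming
    fully reliable inputs."""
    return reliability_bound(op_counts) >= required

# A short kernel easily meets a 0.99 spec; a very long one does not.
assert satisfies({"arith": 1000, "load": 200}, 0.99)
assert not satisfies({"arith": 200_000_000}, 0.99)
```

The real analysis additionally propagates these bounds through control flow and composes them with the (possibly imperfect) reliability of each input value, which is what makes it a static program analysis rather than a single multiplication.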
04421 Abstracts Collection -- Algebraic Methods in Computational Complexity
From 10.10.04 to 15.10.04, the Dagstuhl Seminar 04421
``Algebraic Methods in Computational Complexity''
was held in the International Conference and Research Center (IBFI),
Schloss Dagstuhl.
During the seminar, several participants presented their current
research, and ongoing work and open problems were discussed. Abstracts of
the presentations given during the seminar as well as abstracts of
seminar results and ideas are put together in this paper. The first section
describes the seminar topics and goals in general.
Links to extended abstracts or full papers are provided, if available.
Three Puzzles on Mathematics, Computation, and Games
In this lecture I will talk about three mathematical puzzles involving
mathematics and computation that have preoccupied me over the years. The first
puzzle is to understand the amazing success of the simplex algorithm for linear
programming. The second puzzle is about errors made when votes are counted
during elections. The third puzzle is: are quantum computers possible? (ICM 2018 plenary lecture, Rio de Janeiro.)