    On the robustness of bucket brigade quantum RAM

    We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti, Lloyd, and Maccone [Phys. Rev. Lett. 100, 160501 (2008)]. Using a result of Regev and Schiff [ICALP '08 pp. 773], we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o(2^{-n/2}) (where N = 2^n is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion [Phys. Rev. Lett. 103, 150502 (2009)] or quantum machine learning [Phys. Rev. Lett. 113, 130503 (2014)] that require only a polynomial number of queries, the error rate need only be polynomially small, and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the architecture to lose its primary advantage of a small number of "active" gates, since all components have to be actively error corrected.
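
    A back-of-envelope calculation makes this scaling concrete. The sketch below is an illustrative union-bound estimate, not the paper's derivation: the function name and the failure budget delta are assumptions. It divides a constant failure budget across the roughly n * sqrt(N) active routing gates that Grover search drives through the memory.

```python
import math

def max_error_per_gate(n, delta=0.1):
    """Crude estimate of the tolerable per-gate error rate when a
    bucket brigade qRAM with N = 2**n cells serves as the Grover
    oracle (illustrative constants, not the paper's formal bound)."""
    N = 2 ** n
    queries = (math.pi / 4) * math.sqrt(N)   # Grover makes ~ (pi/4) sqrt(N) queries
    gates = n * queries                      # each query traverses ~ n routing levels
    return delta / gates                     # union bound keeps total error below delta

for n in (10, 20, 30, 40):
    print(f"n = {n:2d} (N = 2^{n}): per-gate error < {max_error_per_gate(n):.2e}")
```

    Up to the polynomial factor n, the printed rates fall off as 2^{-n/2}, matching the o(2^{-n/2}) requirement quoted above.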

    Perfect zero knowledge for quantum multiprover interactive proofs

    In this work we consider the interplay between multiprover interactive proofs, quantum entanglement, and zero knowledge proofs, notions that are central pillars of complexity theory, quantum information, and cryptography. In particular, we study the relationship between the complexity class MIP*, the set of languages decidable by multiprover interactive proofs with quantumly entangled provers, and the class PZK-MIP*, the set of languages decidable by MIP* protocols that furthermore possess the perfect zero knowledge property. Our main result is that the two classes are equal, i.e., MIP* = PZK-MIP*. This result provides a quantum analogue of the celebrated result of Ben-Or, Goldwasser, Kilian, and Wigderson (STOC 1988), who showed that MIP = PZK-MIP (in other words, all classical multiprover interactive protocols can be made zero knowledge). We prove our result by showing that every MIP* protocol can be efficiently transformed into an equivalent zero knowledge MIP* protocol in a manner that preserves the completeness-soundness gap. Combining our transformation with previous results by Slofstra (Forum of Mathematics, Pi 2019) and Fitzsimons, Ji, Vidick, and Yuen (STOC 2019), we obtain the corollary that all co-recursively enumerable languages (which include undecidable problems as well as all decidable problems) have zero knowledge MIP* protocols with vanishing promise gap.

    Coverage and Vacuity in Network Formation Games

    The frameworks of coverage and vacuity in formal verification analyze the effect of mutations applied to systems or their specifications. We adapt these notions to network formation games, analyzing the effect of a change in the cost of a resource. We consider two measures that may be affected: the cost of the social optimum, and the extremal costs of Nash equilibria. Our results offer a formal framework for studying the effect of mutations in network formation games and include a complexity analysis of the related decision problems. They also tighten the relation between algorithmic game theory and formal verification, suggesting refined definitions of coverage and vacuity for the latter.
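
    To make the two measures concrete, the following brute-force sketch computes the cost of the social optimum and the best and worst Nash equilibrium costs before and after a change to one resource's cost. The toy game (two players, two parallel edges, fair cost sharing) is an illustrative assumption, not the paper's model.

```python
from itertools import product

def player_cost(profile, costs):
    """Each player's cost: an edge's cost is split equally among its users."""
    usage = {}
    for path in profile:
        for e in path:
            usage[e] = usage.get(e, 0) + 1
    return [sum(costs[e] / usage[e] for e in path) for path in profile]

def analyze(costs, strategies):
    profiles = list(product(strategies, repeat=2))
    # Social cost of a profile is the total cost of the edges in use.
    social_opt = min(sum(costs[e] for e in set(p[0]) | set(p[1])) for p in profiles)
    ne_costs = []
    for p in profiles:
        c = player_cost(p, costs)
        # A profile is a Nash equilibrium if no unilateral deviation helps.
        stable = all(
            c[i] <= player_cost(tuple(alt if j == i else p[j] for j in range(2)), costs)[i]
            for i in range(2) for alt in strategies
        )
        if stable:
            ne_costs.append(sum(c))
    return social_opt, min(ne_costs), max(ne_costs)

strategies = [("cheap",), ("pricey",)]      # two parallel s-t edges
base = {"cheap": 1.0, "pricey": 3.0}
mutated = {"cheap": 1.0, "pricey": 1.5}     # the mutation: one resource gets cheaper

for costs in (base, mutated):
    print(costs, "-> SO, best NE, worst NE:", analyze(costs, strategies))
```

    In this toy instance, lowering the cost of the pricey edge leaves the social optimum unchanged but creates a second, more expensive equilibrium, so the worst-case equilibrium cost rises even though a resource became cheaper.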

    Hardness of decoding quantum stabilizer codes

    In this article we address the computational hardness of optimally decoding a quantum stabilizer code. Much like for classical linear codes, errors are detected by measuring certain check operators, which yield an error syndrome, and the decoding problem consists of determining the most likely recovery given the syndrome. The corresponding classical problem is known to be NP-complete, and a similar decoding problem for quantum codes is also known to be NP-complete. However, this decoding strategy is not optimal in the quantum setting, as it does not take into account error degeneracy, which causes distinct errors to have the same effect on the code. Here, we show that optimal decoding of stabilizer codes is computationally much harder than optimal decoding of classical linear codes: it is #P-complete.
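
    The degeneracy issue is easy to see on a small example. The sketch below uses the 3-qubit bit-flip code under depolarizing noise (illustrative choices, not the paper's construction) and compares, for each syndrome, the most likely single error with the most likely equivalence class of errors. Degenerate (optimal) decoding must sum probabilities over entire stabilizer cosets, and it is this counting over exponentially large classes that underlies the #P-hardness.

```python
from itertools import product

# Paulis on 3 qubits are encoded as (x, z) bit-vectors; the stabilizer
# group of the bit-flip code is {I, Z1Z2, Z2Z3, Z1Z3} (z-parts below).
STABILIZER_Z = [(0, 0, 0), (1, 1, 0), (0, 1, 1), (1, 0, 1)]

def syndrome(x):
    # The checks Z1Z2 and Z2Z3 anticommute with the X-part of an error.
    return (x[0] ^ x[1], x[1] ^ x[2])

def prob(x, z, p):
    # iid depolarizing noise: I with prob 1-p, each of X, Y, Z with p/3.
    q = 1.0
    for xi, zi in zip(x, z):
        q *= (1 - p) if (xi, zi) == (0, 0) else p / 3
    return q

def coset_rep(x, z):
    # Canonical representative of the error modulo the stabilizer group:
    # errors differing by a stabilizer act identically on codewords.
    return (x, min(tuple(a ^ b for a, b in zip(z, s)) for s in STABILIZER_Z))

p = 0.1
classes, singles = {}, {}
for x in product((0, 1), repeat=3):
    for z in product((0, 1), repeat=3):
        s, q = syndrome(x), prob(x, z, p)
        key = (s, coset_rep(x, z))
        classes[key] = classes.get(key, 0.0) + q   # degenerate: sum over the coset
        if q > singles.get(s, (0.0, None))[0]:
            singles[s] = (q, (x, z))               # non-degenerate: single best error

for s in sorted(singles):
    best = max((k for k in classes if k[0] == s), key=classes.get)
    print(f"syndrome {s}: best single error {singles[s][0]:.4f}, best class {classes[best]:.4f}")
```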

    Minimizing finite automata is computationally hard

    It is known that deterministic finite automata (DFAs) can be algorithmically minimized, i.e., a DFA M can be converted into an equivalent DFA M' which has a minimal number of states, and the minimization can be done efficiently [6]. On the other hand, it is known that unambiguous finite automata (UFAs) and nondeterministic finite automata (NFAs) can be algorithmically minimized too, but their minimization problems turn out to be NP-complete and PSPACE-complete, respectively [8]. In this paper, the time complexity of the minimization problem for two restricted types of finite automata is investigated. These automata are nearly deterministic, since they allow only a small amount of nondeterminism. On the one hand, NFAs with a fixed finite branching are studied, i.e., the number of nondeterministic moves within every accepting computation is bounded by a fixed finite number. On the other hand, finite automata are investigated which are essentially deterministic except that there is a fixed number of different initial states which can be chosen nondeterministically. The main result is that the minimization problems for these models are computationally hard, namely NP-complete. Hence, even the slightest extension of the deterministic model towards a nondeterministic one, e.g., allowing at most one nondeterministic move in every accepting computation or allowing two initial states instead of one, results in a computationally intractable minimization problem.
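
    For contrast with these hardness results, the efficient deterministic case fits in a few lines. Below is a minimal sketch of Moore's partition-refinement algorithm, one standard way to realize the polynomial-time DFA minimization mentioned above; the example automaton is made up.

```python
def minimize_dfa(states, alphabet, delta, accepting):
    """Return the equivalence classes of `states`; merging each class
    yields the minimal DFA. `delta` maps (state, symbol) -> state."""
    # Start from the accepting / non-accepting split and refine until stable.
    partition = [b for b in (set(accepting), set(states) - set(accepting)) if b]
    changed = True
    while changed:
        changed = False
        block_of = {q: i for i, block in enumerate(partition) for q in block}
        refined = []
        for block in partition:
            groups = {}
            for q in block:
                # States are separated if some symbol sends them to different blocks.
                sig = tuple(block_of[delta[q, a]] for a in alphabet)
                groups.setdefault(sig, set()).add(q)
            refined.extend(groups.values())
            changed |= len(groups) > 1
        partition = refined
    return partition

# Example: states 1 and 2 are equivalent and end up in the same class.
states, alphabet, accepting = {0, 1, 2, 3}, ["a"], {3}
delta = {(0, "a"): 1, (1, "a"): 3, (2, "a"): 3, (3, "a"): 3}
print(minimize_dfa(states, alphabet, delta, accepting))
```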

    A review of convex approaches for control, observation and safety of linear parameter varying and Takagi-Sugeno systems

    This paper provides a review of the concept of convex systems based on Takagi-Sugeno, linear parameter varying (LPV) and quasi-LPV modeling. These paradigms are capable of hiding the nonlinearities by means of an equivalent description which uses a set of linear models interpolated by appropriately defined weighting functions. Convex systems have become very popular since they allow applying extended linear techniques based on linear matrix inequalities (LMIs) to complex nonlinear systems. This survey aims to provide the reader with a significant overview of the existing LMI-based techniques for convex systems in the fields of control, observation and safety. Firstly, a detailed review of stability, feedback, tracking and model predictive control (MPC) convex controllers is considered. Secondly, the problem of state estimation is addressed through the design of proportional, proportional-integral, unknown input and descriptor observers. Finally, safety of convex systems is discussed by describing popular techniques for fault diagnosis and fault tolerant control (FTC).
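
    As a concrete taste of the LMI machinery reviewed here, the sketch below checks quadratic stability of a two-vertex convex (TS/polytopic) system by searching for a common Lyapunov matrix P with P > 0 and A_i^T P + P A_i < 0 at every vertex. The vertex matrices are made up, and cvxpy with an SDP-capable solver is assumed to be available.

```python
import numpy as np
import cvxpy as cp

# Two made-up stable vertex matrices of a convex system
# x' = (h1(t) A1 + h2(t) A2) x, with h1 + h2 = 1, h_i >= 0.
A1 = np.array([[-2.0, 1.0], [0.0, -1.0]])
A2 = np.array([[-1.5, 0.5], [0.3, -2.0]])

n = A1.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6  # small margin to enforce strict inequalities numerically
constraints = [P >> eps * np.eye(n)]
for A in (A1, A2):
    # Vertex LMIs: feasibility certifies V(x) = x^T P x decays for
    # every convex combination of the vertices.
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print(problem.status)
print(P.value)
```

    Infeasibility of this common-P test does not imply instability; the parameter-dependent Lyapunov functions covered in the survey relax exactly this conservatism.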