2,347 research outputs found

    Distributed Quantum Proofs for Replicated Data

    This paper tackles the issue of checking that all copies of a large data set replicated at several nodes of a network are identical. The fact that the replicas may be located at distant nodes prevents the system from verifying their equality locally, i.e., by having each node consult only nodes in its vicinity. On the other hand, it remains possible to assign certificates to the nodes, so that verifying the consistency of the replicas can be achieved locally. However, we show that, as the replicated data is large, classical certification mechanisms, including distributed Merlin-Arthur protocols, cannot guarantee good completeness and soundness simultaneously, unless they use very large certificates. The main result of this paper is a distributed quantum Merlin-Arthur protocol enabling the nodes to collectively check the consistency of the replicas, based on small certificates, and in a single round of message exchange between neighbors, with short messages. In particular, the certificate size is logarithmic in the size of the data set, which gives an exponential advantage over classical certification mechanisms. We propose yet another use of a fundamental quantum primitive, the SWAP test, in order to show our main result.
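
    The SWAP test referenced above can be illustrated with a minimal state-vector simulation (this is the generic primitive, not the paper's distributed protocol, and the function name and single-qubit registers are my own simplification): an ancilla qubit is Hadamard-ed, a controlled-SWAP compares the two registers, a second Hadamard is applied, and the ancilla is measured; the probability of outcome 0 is (1 + |⟨ψ|φ⟩|²)/2, so identical states always pass while differing states are rejected with positive probability.

    ```python
    import numpy as np

    def swap_test_accept_prob(psi, phi):
        """Probability that a SWAP test on |psi> and |phi> measures the
        ancilla as 0, i.e. (1 + |<psi|phi>|^2) / 2 (single-qubit registers)."""
        # Initial state: ancilla |0> tensor |psi> tensor |phi>.
        state = np.kron(np.array([1.0, 0.0]), np.kron(psi, phi)).astype(complex)

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        H_anc = np.kron(H, np.eye(4))            # Hadamard on the ancilla only

        # Controlled-SWAP: swap the two data qubits when the ancilla is |1>.
        # Basis order |ancilla, q1, q2>; swap amplitudes of |1,0,1> and |1,1,0>.
        cswap = np.eye(8)
        cswap[[5, 6], :] = cswap[[6, 5], :]

        state = H_anc @ cswap @ H_anc @ state    # H, then CSWAP, then H

        # Probability that the ancilla is measured as 0 (first 4 amplitudes).
        return float(np.sum(np.abs(state[:4]) ** 2))

    psi = np.array([1.0, 0.0])                   # |0>
    phi = np.array([1.0, 1.0]) / np.sqrt(2)      # |+>
    print(swap_test_accept_prob(psi, psi))       # identical states  -> 1.0
    print(swap_test_accept_prob(psi, phi))       # overlap 1/sqrt(2) -> 0.75
    ```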

    Brief Announcement: Distributed Quantum Proofs for Replicated Data

    This paper tackles the issue of checking that all copies of a large data set replicated at several nodes of a network are identical. The fact that the replicas may be located at distant nodes prevents the system from verifying their equality locally, i.e., by having each node consult only nodes in its vicinity. On the other hand, it remains possible to assign certificates to the nodes, so that verifying the consistency of the replicas can be achieved locally. However, we show that, as the replicated data is large, classical certification mechanisms, including distributed Merlin-Arthur protocols, cannot guarantee good completeness and soundness simultaneously, unless they use very large certificates. The main result of this paper is a distributed quantum Merlin-Arthur protocol enabling the nodes to collectively check the consistency of the replicas, based on small certificates, and in a single round of message exchange between neighbors, with short messages. In particular, the certificate size is logarithmic in the size of the data set, which gives an exponential advantage over classical certification mechanisms.

    Bethe free-energy approximations for disordered quantum systems

    Given a locally consistent set of reduced density matrices, we construct approximate density matrices which are globally consistent with the local density matrices we started from when the trial density matrix has a tree structure. We employ the cavity method of statistical physics to find the optimal density-matrix representation, either by slowly decreasing the temperature in an annealing algorithm or by minimizing an approximate Bethe free energy depending on the reduced density matrices and some cavity messages originating from the Bethe approximation of the entropy. We obtain the classical Bethe expression for the entropy within a naive (mean-field) approximation of the cavity messages, which is expected to work well at high temperatures. In the next order of the approximation, we obtain another expression for the Bethe entropy depending only on the diagonal elements of the reduced density matrices. In principle, we can improve the entropy approximation by considering more accurate cavity messages in the Bethe approximation of the entropy. We compare the annealing algorithm and the naive approximation of the Bethe entropy with exact and approximate numerical simulations for small and large samples of the random transverse Ising model on random regular graphs.
    Comment: 23 pages, 4 figures, 4 appendices
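
    For orientation, the classical Bethe free energy that the naive (mean-field) limit mentioned above recovers can be written in its standard pairwise form. This is textbook material rather than the paper's quantum expressions: for pairwise energies $E_{ij}$, single-site energies $E_i$, edge beliefs $b_{ij}$ and site beliefs $b_i$ on a graph in which vertex $i$ has degree $d_i$, at temperature $T$,

    ```latex
    \begin{aligned}
    U &= \sum_{(ij)} \sum_{x_i, x_j} b_{ij}(x_i, x_j)\, E_{ij}(x_i, x_j)
         + \sum_{i} \sum_{x_i} b_i(x_i)\, E_i(x_i), \\
    S_{\mathrm{Bethe}} &= -\sum_{(ij)} \sum_{x_i, x_j} b_{ij}(x_i, x_j) \ln b_{ij}(x_i, x_j)
         + \sum_{i} (d_i - 1) \sum_{x_i} b_i(x_i) \ln b_i(x_i), \\
    F_{\mathrm{Bethe}} &= U - T\, S_{\mathrm{Bethe}} ,
    \end{aligned}
    ```

    which is exact when the graph is a tree and the beliefs are the true marginals; the construction described in the abstract works instead with reduced density matrices and cavity messages.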

    Improved Lower Bounds for Locally Decodable Codes and Private Information Retrieval

    We prove new lower bounds for locally decodable codes and private information retrieval. We show that a 2-query LDC encoding n-bit strings over an l-bit alphabet, where the decoder only uses b bits of each queried position of the codeword, needs code length $m = \exp\big(\Omega\big(n / (2^b \sum_{i=0}^{b} \binom{l}{i})\big)\big)$. Similarly, a 2-server PIR scheme with an n-bit database and t-bit queries, where the user only needs b bits from each of the two l-bit answers, unknown to the servers, satisfies $t = \Omega\big(n / (2^b \sum_{i=0}^{b} \binom{l}{i})\big)$. This implies that several known PIR schemes are close to optimal. Our results generalize those of Goldreich et al., who proved roughly the same bounds for linear LDCs and PIRs. Like earlier work by Kerenidis and de Wolf, our classical lower bounds are proved using quantum computational techniques. In particular, we give a tight analysis of how well a 2-input function can be computed from a quantum superposition of both inputs.
    Comment: 12 pages LaTeX, to appear in ICALP '0
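
    To make the scaling concrete, plugging b = 1 (the decoder or user reads a single bit of each l-bit answer) into the bounds stated above gives

    ```latex
    \sum_{i=0}^{1} \binom{l}{i} = 1 + l,
    \qquad
    m = \exp\!\Big(\Omega\Big(\frac{n}{2(l+1)}\Big)\Big),
    \qquad
    t = \Omega\Big(\frac{n}{2(l+1)}\Big),
    ```

    so for single-bit decoding the code length is already exponential in n/(l+1).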

    Better Bell inequalities (passion at a distance)

    I explain so-called quantum nonlocality experiments and discuss how to optimize them. Statistical tools from missing data maximum likelihood are crucial. New results are given on CGLMP, CH and ladder inequalities. Open problems are also discussed.
    Comment: Published at http://dx.doi.org/10.1214/074921707000000328 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
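
    As a point of reference for the CH inequality mentioned above, the short numpy sketch below evaluates the standard Clauser-Horne expression P(a1,b1) + P(a1,b2) + P(a2,b1) - P(a2,b2) - P(a1) - P(b1), which is at most 0 under local hidden variables, on the maximally entangled two-qubit state with textbook measurement angles; the quantum value (sqrt(2)-1)/2 ≈ 0.207 exhibits the violation. This is an illustration of the inequality itself, not of the paper's statistical optimization, and the function names are my own.

    ```python
    import numpy as np

    def plus_projector(theta):
        """Projector onto the +1 eigenstate of cos(theta)*Z + sin(theta)*X."""
        v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        return np.outer(v, v)

    # Maximally entangled state (|00> + |11>)/sqrt(2); amplitudes are real.
    phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    I2 = np.eye(2)

    def expect(op):
        """<phi_plus| op |phi_plus> (real state, so no conjugation needed)."""
        return float(phi_plus @ op @ phi_plus)

    def p_joint(ta, tb):
        """P(+,+): both parties get outcome +1 at angles ta (Alice), tb (Bob)."""
        return expect(np.kron(plus_projector(ta), plus_projector(tb)))

    def p_alice(ta):
        return expect(np.kron(plus_projector(ta), I2))

    def p_bob(tb):
        return expect(np.kron(I2, plus_projector(tb)))

    # Angles giving the maximal quantum violation of the CH inequality.
    a1, a2 = 0.0, np.pi / 2
    b1, b2 = np.pi / 4, -np.pi / 4

    ch = (p_joint(a1, b1) + p_joint(a1, b2) + p_joint(a2, b1)
          - p_joint(a2, b2) - p_alice(a1) - p_bob(b1))

    print(ch)   # ~0.2071 = (sqrt(2)-1)/2 > 0, violating the local bound CH <= 0
    ```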

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers presented together with 13 tool papers and 2 case studies were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    A Unifying Theory of Dark Energy and Dark Matter: Negative Masses and Matter Creation within a Modified ΛCDM Framework

    Dark energy and dark matter constitute 95% of the observable Universe. Yet the physical nature of these two phenomena remains a mystery. Einstein suggested a long-forgotten solution: gravitationally repulsive negative masses, which drive cosmic expansion and cannot coalesce into light-emitting structures. However, contemporary cosmological results are derived under the reasonable assumption that the Universe only contains positive masses. By reconsidering this assumption, I have constructed a toy model which suggests that both dark phenomena can be unified into a single negative-mass fluid. The model is a modified ΛCDM cosmology, and indicates that continuously created negative masses can resemble the cosmological constant and can flatten the rotation curves of galaxies. The model leads to a cyclic universe with a time-variable Hubble parameter, potentially providing compatibility with the current tension that is emerging in cosmological measurements. In the first three-dimensional N-body simulations of negative mass matter in the scientific literature, this exotic material naturally forms haloes around galaxies that extend to several galactic radii. These haloes are not cuspy. The proposed cosmological model is therefore able to predict the observed distribution of dark matter in galaxies from first principles. The model makes several testable predictions and seems to have the potential to be consistent with observational evidence from distant supernovae, the cosmic microwave background, and galaxy clusters. These findings may imply that negative masses are a real and physical aspect of our Universe, or alternatively may imply the existence of a superseding theory that in some limit can be modelled by effective negative masses. Both cases lead to the surprising conclusion that the compelling puzzle of the dark Universe may have been due to a simple sign error.
    Comment: Accepted for publication in Astronomy and Astrophysics (A&A). Videos of the simulations are available online at: https://goo.gl/rZN1P
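
    For intuition about the negative-mass dynamics invoked above, the toy Newtonian sketch below (not the paper's simulation code; the particle counts, masses, and softening are illustrative values of my own choosing) integrates point masses of either sign under the standard assumption of equal inertial and gravitational mass. The key feature is that a particle's acceleration depends only on the source masses: positive masses attract everything, while negative masses repel everything.

    ```python
    import numpy as np

    G = 1.0           # gravitational constant (code units)
    SOFTENING = 0.05  # Plummer-style softening length to tame close encounters

    def accelerations(pos, mass):
        """a_i = sum_j G * m_j * (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2).

        The acceleration of particle i depends only on the *source* masses m_j:
        positive masses attract everything, negative masses repel everything."""
        diff = pos[None, :, :] - pos[:, None, :]      # diff[i, j] = r_j - r_i
        dist2 = np.sum(diff**2, axis=-1) + SOFTENING**2
        np.fill_diagonal(dist2, np.inf)               # no self-force
        return G * np.einsum('j,ijk,ij->ik', mass, diff, dist2**-1.5)

    def leapfrog(pos, vel, mass, dt=1e-3, steps=5000):
        """Kick-drift-kick leapfrog integration of the toy system."""
        acc = accelerations(pos, mass)
        for _ in range(steps):
            vel += 0.5 * dt * acc
            pos += dt * vel
            acc = accelerations(pos, mass)
            vel += 0.5 * dt * acc
        return pos, vel

    rng = np.random.default_rng(0)
    n_pos, n_neg = 50, 200
    pos = np.vstack([0.2 * rng.standard_normal((n_pos, 3)),   # compact positive-mass clump
                     2.0 * rng.standard_normal((n_neg, 3))])  # diffuse negative-mass sea
    vel = np.zeros_like(pos)
    mass = np.concatenate([np.full(n_pos, 1.0), np.full(n_neg, -0.1)])

    pos, vel = leapfrog(pos, vel, mass)   # inspect final pos/vel as desired
    ```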