
    Simpler Proofs of Quantumness

    A proof of quantumness is a method for provably demonstrating (to a classical verifier) that a quantum device can perform computational tasks that a classical device with comparable resources cannot. Providing a proof of quantumness is the first step towards constructing a useful quantum computer. There are currently three approaches to exhibiting proofs of quantumness: (i) inverting a classically-hard one-way function (e.g. using Shor's algorithm), which seems technologically out of reach; (ii) sampling from a classically-hard-to-sample distribution (e.g. BosonSampling), which may be within reach of near-term experiments, but for which all known verification methods require exponential time; (iii) interactive protocols based on cryptographic assumptions. In approach (iii), the use of a trapdoor scheme allows for efficient verification, and implementation seems to require far fewer resources than (i), though still more than (ii). In this work we propose a significant simplification of approach (iii) by employing the random oracle heuristic. (We note that we do not apply the Fiat-Shamir paradigm.) We give a two-message (challenge-response) proof of quantumness based on any trapdoor claw-free function. In contrast to earlier proposals, we do not need an adaptive hard-core bit property. This allows the use of smaller security parameters and more diverse computational assumptions (such as Ring Learning with Errors), significantly reducing the quantum computational effort required for a successful demonstration.
    Comment: TQC 202
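    The trapdoor claw-free structure underlying such challenge-response protocols can be illustrated with a toy example. The sketch below uses Rabin squaring modulo N = pq as a 2-to-1 function whose claws the verifier recovers with the trapdoor (the factorization); the tiny parameters and the factoring-based function are illustrative assumptions only, not the LWE-based construction the abstract refers to.

```python
# Toy 2-to-1 trapdoor claw-free function via Rabin squaring.
# Illustrative only: parameters are tiny, and the claw-free property
# here rests on factoring rather than (Ring-)LWE as in the paper.
p, q = 1019, 1031          # both primes ≡ 3 (mod 4); trapdoor = (p, q)
N = p * q

def f(x):
    return pow(x, 2, N)    # 2-to-1 on {1 .. N//2} with gcd(x, N) = 1

def crt(a, b):
    """Solve z ≡ a (mod p), z ≡ b (mod q) by the Chinese remainder theorem."""
    return (a + p * ((b - a) * pow(p, -1, q) % q)) % N

def claw(y):
    """Verifier's trapdoor operation: recover both preimages of y under f."""
    rp = pow(y, (p + 1) // 4, p)   # square root mod p (valid since p ≡ 3 mod 4)
    rq = pow(y, (q + 1) // 4, q)
    roots = {crt(rp, rq), crt(rp, q - rq), crt(p - rp, rq), crt(p - rp, q - rq)}
    return sorted(r for r in roots if r <= N // 2)   # one representative per ± pair

x = 123456                 # a sample preimage in the restricted domain
y = f(x)
x0, x1 = claw(y)           # the verifier, holding the trapdoor, finds the claw
assert x in (x0, x1) and f(x0) == f(x1) == y
```

    In the protocol sketched by the abstract, the quantum prover's responses to the two-message challenge convince the verifier it holds a superposition over such a claw, while a classical prover, lacking the trapdoor, can commit to at most one preimage.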

    Proofs of Quantumness from Trapdoor Permutations

    Assume that Alice can do only classical probabilistic polynomial-time computing while Bob can do quantum polynomial-time computing. Alice and Bob communicate over only classical channels, and finally Bob gets a state |x_0⟩ + |x_1⟩ for some bit strings x_0 and x_1. Is it possible that Alice can know {x_0, x_1} but Bob cannot? Such a task, called *remote state preparation*, is indeed possible under some complexity assumptions, and is the basis of many quantum cryptographic primitives such as proofs of quantumness, (classical-client) blind quantum computing, (classical) verification of quantum computing, and quantum money. A typical technique to realize remote state preparation is to use 2-to-1 trapdoor collision-resistant hash functions: Alice sends a 2-to-1 trapdoor collision-resistant hash function f to Bob, and Bob evaluates it coherently, i.e., Bob generates Σ_x |x⟩|f(x)⟩. Bob measures the second register to get the measurement result y, and sends y to Alice. Bob's post-measurement state is |x_0⟩ + |x_1⟩, where f(x_0) = f(x_1) = y. With the trapdoor, Alice can learn {x_0, x_1} from y, but due to the collision resistance, Bob cannot. This advantage of Alice's can be leveraged to realize the quantum cryptographic primitives listed above. It seems that collision resistance is essential here. In this paper, surprisingly, we show that collision resistance is not necessary in a restricted case: we show that (non-verifiable) remote state preparation of |x_0⟩ + |x_1⟩ secure against *classical* probabilistic polynomial-time Bob can be constructed from classically-secure (full-domain) trapdoor permutations. Trapdoor permutations are unlikely to imply collision resistance, because black-box reductions from collision-resistant hash functions to trapdoor permutations are known to be impossible. As an application of our result, we construct proofs of quantumness from classically-secure (full-domain) trapdoor permutations.
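    The honest quantum steps of the hash-based protocol described above can be traced classically on a toy instance. The sketch below stands in a trivial 2-to-1 function (x mod 8 on 4-bit inputs) for the trapdoor collision-resistant hash; it has no cryptographic security and only illustrates how measuring the second register of Σ_x |x⟩|f(x)⟩ collapses the first register onto |x_0⟩ + |x_1⟩.

```python
# Classical trace of Bob's honest quantum steps in hash-based
# remote state preparation. The 2-to-1 function is a toy stand-in
# for a trapdoor collision-resistant hash; no security is claimed.
import random

n = 4
f = lambda x: x % 8          # 2-to-1 on {0 .. 15}: x and x + 8 collide

# Bob prepares sum_x |x>|f(x)> (uniform amplitudes; phases omitted)
state = [(x, f(x)) for x in range(2 ** n)]

# Bob measures the second register, obtaining some image y
y = f(random.randrange(2 ** n))

# Post-measurement state: equal superposition over the preimages of y
support = [x for x, fx in state if fx == y]
x0, x1 = support             # Bob now holds |x0> + |x1>
assert f(x0) == f(x1) == y   # Alice, with the trapdoor, would learn {x0, x1}
```

    The paper's point is that for security against a *classical* Bob, the colliding pair need not come from a collision-resistant function at all; a full-domain trapdoor permutation suffices.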

    Incompatible Multiple Consistent Sets of Histories and Measures of Quantumness

    In the consistent histories (CH) approach to quantum theory, probabilities are assigned to histories subject to a consistency condition of negligible interference. The approach has the feature that a given physical situation admits multiple sets of consistent histories that cannot in general be united into a single consistent set, leading to a number of counter-intuitive or contrary properties if propositions from different consistent sets are combined indiscriminately. An alternative viewpoint is proposed in which multiple consistent sets are classified according to whether or not there exists any unifying probability for combinations of incompatible sets that replicates the consistent-histories result when restricted to a single consistent set. A number of examples are exhibited in which this classification can be made, in some cases with the assistance of the Bell, CHSH or Leggett-Garg inequalities together with Fine's theorem. When a unifying probability exists, logical deductions in different consistent sets can in fact be combined, an extension of the "single framework rule". It is argued that this classification coincides with intuitive notions of the boundary between classical and quantum regimes; in particular, the absence of a unifying probability for certain combinations of consistent sets is regarded as a measure of the "quantumness" of the system. The proposed approach and results are closely related to recent work on the classification of quasi-probabilities, and this connection is discussed.
    Comment: 29 pages. Second revised version with discussion of the sample space and non-uniqueness of the unifying probability, and small errors corrected
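    The CHSH inequality invoked above separates exactly these two regimes: any single joint ("unifying") probability distribution forces |S| ≤ 2, while quantum correlations reach 2√2. The sketch below checks this numerically using the singlet-state correlation E(a, b) = −cos(a − b); the measurement angles are the standard illustrative choice, not taken from the paper.

```python
# Numerical check of the CHSH quantity: a unifying (local classical)
# probability bounds |S| by 2, while singlet-state quantum correlations
# E(a, b) = -cos(a - b) reach the Tsirelson bound 2*sqrt(2).
import math

def E(a, b):
    """Singlet-state correlation for measurement angles a, b."""
    return -math.cos(a - b)

# Standard angles achieving the maximal quantum violation
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, -math.pi / 4

S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
# |S| = 2*sqrt(2) ≈ 2.828, exceeding the classical bound of 2
```

    By Fine's theorem, satisfaction of all CHSH inequalities is not just necessary but sufficient for such a joint distribution to exist, which is why these inequalities can decide the classification discussed in the abstract.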

    Entanglement 25 years after Quantum Teleportation: testing joint measurements in quantum networks

    Twenty-five years after the invention of quantum teleportation, the concept of entanglement has gained enormous popularity. This is especially nice for those who remember that entanglement was not even taught at universities until the 1990's. Today, entanglement is often presented as a resource, the resource of quantum information science and technology. However, entanglement is exploited twice in quantum teleportation. First, entanglement is the `quantum teleportation channel', i.e. entanglement between distant systems. Second, entanglement appears in the eigenvectors of the joint measurement that Alice, the sender, has to perform jointly on the quantum state to be teleported and her half of the `quantum teleportation channel', i.e. entanglement enabling entirely new kinds of quantum measurements. I emphasize how poorly this second kind of entanglement is understood. In particular, I use quantum networks in which each party connected to several nodes performs a joint measurement to illustrate that the quantumness of such joint measurements remains elusive, escaping today's available tools to detect and quantify it.
    Comment: Feature paper, Celebrating the Silver Jubilee of Teleportation (7 pages). V2 (March'19): Many typos corrected (sorry) and a few comments added
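    The "second kind" of entanglement mentioned above lives in the measurement basis itself: the Bell-state measurement used in teleportation projects onto four maximally entangled eigenvectors. A minimal sketch, checking that each Bell state's reduced single-qubit state is the maximally mixed I/2:

```python
# The four Bell states (eigenvectors of the joint measurement in
# teleportation) are each maximally entangled: tracing out either
# qubit leaves the maximally mixed state I/2.
import itertools, math

s = 1 / math.sqrt(2)
# Amplitudes in the computational basis order |00>, |01>, |10>, |11>
bell = [
    [s, 0, 0, s],    # Phi+
    [s, 0, 0, -s],   # Phi-
    [0, s, s, 0],    # Psi+
    [0, s, -s, 0],   # Psi-
]

def reduced(v):
    """Partial trace over the second qubit of |v><v| (real amplitudes)."""
    rho = [[0.0, 0.0], [0.0, 0.0]]
    for i, j in itertools.product(range(2), repeat=2):
        for k in range(2):
            rho[i][j] += v[2 * i + k] * v[2 * j + k]
    return rho

for v in bell:
    r = reduced(v)
    assert abs(r[0][0] - 0.5) < 1e-12 and abs(r[1][1] - 0.5) < 1e-12
    assert abs(r[0][1]) < 1e-12      # reduced state is exactly I/2
```

    Detecting this entanglement when the measurement, rather than the state, is the untrusted object is precisely the problem the paper argues remains open.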

    Post-Quantum κ-to-1 Trapdoor Claw-free Functions from Extrapolated Dihedral Cosets

    A *noisy trapdoor claw-free function* (NTCF), as a powerful post-quantum cryptographic tool, can efficiently constrain the actions of untrusted quantum devices. However, the original NTCF is essentially a *2-to-1* one-way function (NTCF^1_2). In this work, we extend NTCF^1_2 to achieve *many-to-one* trapdoor claw-free functions with polynomially bounded preimage size. Specifically, we focus on a significant extrapolation of NTCF^1_2 by drawing on extrapolated dihedral cosets, thereby giving a model of NTCF^1_κ, where κ is a polynomially bounded integer. We then present an efficient construction of NTCF^1_κ assuming *quantum hardness of the learning with errors (LWE)* problem. We point out that NTCF can be used to bridge the LWE problem and the dihedral coset problem (DCP). By leveraging NTCF^1_2 (resp. NTCF^1_κ), our work reveals a new quantum reduction path from the LWE problem to the DCP (resp. extrapolated DCP). Finally, we demonstrate that NTCF^1_κ can naturally be reduced to NTCF^1_2, thereby achieving the same application for proving quantumness.
    Comment: 34 pages, 7 figures
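    The κ-to-1 structure with polynomially bounded preimage size can be pictured with a trivial example. The sketch below uses x mod m on a domain of size κ·m, so every image has exactly κ preimages; this is purely structural and carries none of the trapdoor, claw-free, or noise properties of the paper's LWE-based construction.

```python
# Toy many-to-one function illustrating the kappa-to-1 shape of an
# NTCF^1_kappa: every image has exactly kappa preimages. Structural
# illustration only; no trapdoor or claw-freeness is modeled.
from collections import Counter

kappa, m = 3, 8
f = lambda x: x % m                    # kappa-to-1 on {0 .. kappa*m - 1}

counts = Counter(f(x) for x in range(kappa * m))
assert set(counts.values()) == {kappa}  # each image: exactly kappa preimages

preimages = [x for x in range(kappa * m) if f(x) == 5]
# a "claw" here is any pair drawn from these kappa preimages
```

    In the paper's setting, a quantum device evaluating such a function coherently ends up holding a superposition over all κ preimages of the measured image, which is what powers the reduction to extrapolated dihedral cosets.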

    Self-Testing of a Single Quantum Device Under Computational Assumptions

    Self-testing is a method to characterise an arbitrary quantum system based only on its classical input-output correlations, and plays an important role in device-independent quantum information processing as well as quantum complexity theory. Prior works on self-testing require the assumption that the system's state is shared among multiple parties that only perform local measurements and cannot communicate. Here, we replace the setting of multiple non-communicating parties, which is difficult to enforce in practice, with a single computationally bounded party. Specifically, we construct a protocol that allows a classical verifier to robustly certify that a single computationally bounded quantum device must have prepared a Bell pair and performed single-qubit measurements on it, up to a change of basis applied to both the device's state and measurements. This means that under computational assumptions, the verifier is able to certify the presence of entanglement, a property usually closely associated with two separated subsystems, inside a single quantum device. To achieve this, we build on techniques first introduced by Brakerski et al. (2018) and Mahadev (2018), which allow a classical verifier to constrain the actions of a quantum device assuming the device does not break post-quantum cryptography.
    ISSN:1868-896
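    The correlations an honest device would produce in such a protocol are fixed by the Bell pair: measuring each qubit of |Φ+⟩ in real bases at angles a and b yields agreement probability cos²(a − b). A minimal sketch of that honest-device statistic (the angles and real-rotation convention are illustrative assumptions, not the paper's protocol):

```python
# Honest-device statistics for Bell-pair self-testing: a device that
# really holds |Phi+> and measures each qubit in a basis rotated by a
# real angle reproduces P(agree) = cos^2(a - b). Convention and angles
# are illustrative choices, not the paper's construction.
import math

def p_agree(a, b):
    """Exact agreement probability for |Phi+> with basis angles a and b."""
    # <a0 b0|Phi+> = (cos a * cos b + sin a * sin b) / sqrt(2)
    amp00 = (math.cos(a) * math.cos(b) + math.sin(a) * math.sin(b)) / math.sqrt(2)
    amp11 = amp00            # equal amplitude by the symmetry of |Phi+>
    return amp00 ** 2 + amp11 ** 2

# The statistic matches cos^2(a - b) for arbitrary angle pairs
for a, b in [(0.0, 0.0), (0.3, 1.1), (math.pi / 2, 0.0)]:
    assert abs(p_agree(a, b) - math.cos(a - b) ** 2) < 1e-12
```

    The protocol's achievement is certifying that observed correlations of this shape can only come from a device that actually holds a Bell pair, up to a basis change, even though both "halves" sit inside one device.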