Device-independent Certification of One-shot Distillable Entanglement
Entanglement sources that produce many entangled states are a key
component of applications exploiting quantum physics, such as quantum
communication and cryptography. Realistic sources are inherently noisy, cannot
run for an infinitely long time, and do not necessarily behave in an
independent and identically distributed manner. An important question then
arises -- how can one test, or certify, that a realistic source produces high
amounts of entanglement? Crucially, a meaningful and operational solution
should allow us to certify the entanglement which is available for further
applications after performing the test itself (in contrast to assuming the
availability of an additional source which can produce more entangled states,
identical to those which were tested). To answer the above question and lower
bound the amount of entanglement produced by an uncharacterised source, we
present a protocol that can be run by interacting classically with
uncharacterised (but not entangled with one another) measurement devices used
to measure the states produced by the source. A successful run of the protocol
implies that the remaining quantum state has high amounts of one-shot
distillable entanglement. That is, one can distill many maximally entangled
states out of the single remaining state. Importantly, our protocol can
tolerate noise and, thus, certify entanglement produced by realistic sources.
With the above properties, the protocol acts as the first "operational
device-independent entanglement certification protocol" and allows one to test
and benchmark uncharacterised entanglement sources which may otherwise be
incomparable.
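For intuition about the quantity being certified, the iid counterpart of distillable entanglement is easy to compute. The sketch below is illustrative and not the paper's protocol: it evaluates the well-known hashing lower bound on the distillable entanglement per copy of a noisy isotropic state, and `hashing_rate` is a hypothetical helper name.

```python
import math

def hashing_rate(p):
    """Hashing lower bound on the distillable entanglement per copy of an
    isotropic state rho = p*|Phi+><Phi+| + (1-p)*I/4. The state is
    Bell-diagonal with the eigenvalues below, and the bound reads
    rate >= max(0, 1 - S(rho))."""
    eigs = [p + (1 - p) / 4] + 3 * [(1 - p) / 4]
    entropy = -sum(lam * math.log2(lam) for lam in eigs if lam > 0)
    return max(0.0, 1.0 - entropy)

# A perfect source (p = 1) yields one ebit per copy, while sufficiently
# strong noise pushes the bound to zero -- mirroring the noise tolerance
# a certification protocol must cope with.
```

The one-shot quantity in the abstract plays the analogous role for a single state produced by a source that need not behave in an iid manner.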
Non-Signaling Parallel Repetition Using de Finetti Reductions
In the context of multiplayer games, the parallel repetition problem can be phrased as follows: given a game G with optimal winning probability 1-α and its repeated version G^n (in which n games are played together, in parallel), can the players use strategies that are substantially better than ones in which each game is played independently? This question is relevant in physics for the study of correlations and plays an important role in computer science in the context of complexity and cryptography. In this paper, the case of multiplayer non-signaling games is considered, i.e., the only restriction on the players is that they are not allowed to communicate during the game. For complete-support games (games where all possible combinations of questions have non-zero probability to be asked) with any number of players, we prove a threshold theorem stating that the probability that non-signaling players win more than a fraction 1-α+β of the n games is exponentially small in nβ^2 for every 0 ≤ β ≤ α. For games with incomplete support, we derive a similar statement for a slightly modified form of repetition. The result is proved using a new technique based on a recent de Finetti theorem, which allows us to avoid central technical difficulties that arise in standard proofs of parallel repetition theorems.
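For intuition, the independent-play baseline of the threshold theorem can be checked numerically: when each of the n games is won independently with probability 1-α, Hoeffding's inequality already gives an exp(-O(nβ²)) tail. The function names below are illustrative, not from the paper.

```python
import math
import random

def threshold_prob(n, alpha, beta, trials=20000, seed=0):
    """Monte Carlo estimate of P[players win more than a (1-alpha+beta)
    fraction of n games], when each game is won independently with
    probability 1-alpha (the trivial strategy the theorem compares against)."""
    rng = random.Random(seed)
    thresh = (1 - alpha + beta) * n
    hits = sum(
        sum(rng.random() < 1 - alpha for _ in range(n)) > thresh
        for _ in range(trials)
    )
    return hits / trials

def hoeffding_bound(n, beta):
    """Hoeffding tail bound exp(-2*n*beta^2) for independent play -- the
    iid analogue of the theorem's exp(-O(n*beta^2)) bound, which holds even
    for general non-signaling strategies."""
    return math.exp(-2 * n * beta * beta)
```

The content of the threshold theorem is that correlated non-signaling strategies cannot beat this exponential decay by more than constants in the exponent.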
de Finetti reductions for correlations
When analysing quantum information processing protocols one has to deal with
large entangled systems, each consisting of many subsystems. To make this
analysis feasible, it is often necessary to identify some additional structure.
de Finetti theorems provide such a structure for the case where certain
symmetries hold. More precisely, they relate states that are invariant under
permutations of subsystems to states in which the subsystems are independent of
each other. This relation plays an important role in various areas, e.g., in
quantum cryptography or state tomography, where permutation invariant systems
are ubiquitous. The known de Finetti theorems usually refer to the internal
quantum state of a system and depend on its dimension. Here we prove a
different de Finetti theorem where systems are modelled in terms of their
statistics under measurements. This is necessary for a large class of
applications widely considered today, such as device independent protocols,
where the underlying systems and the dimensions are unknown and the entire
analysis is based on the observed correlations.
Quantum-Proof Multi-Source Randomness Extractors in the Markov Model
Randomness extractors, widely used in classical and quantum cryptography and in other fields of computer science such as derandomization, are functions which generate almost uniform randomness from weak sources of randomness. In the quantum setting one must take into account the quantum side information held by an adversary, which might be used to break the security of the extractor. In the case of seeded extractors the presence of quantum side information has been extensively studied. For multi-source extractors one can easily see that high conditional min-entropy is not sufficient to guarantee security against arbitrary side information, even in the classical case. Hence, the interesting question is under which models of (both quantum and classical) side information multi-source extractors remain secure. In this work we suggest a natural model of side information, which we call the Markov model, and prove that any multi-source extractor remains secure in the presence of quantum side information of this type (albeit with weaker parameters). This improves on previous results, in which more restricted models were considered or the security of only some types of extractors was shown.
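As a concrete (illustrative) instance of the kind of multi-source extractor such results apply to, the classic inner-product (Hadamard) extractor outputs one bit from two independent weak sources; the name `ip_extract` is a hypothetical helper.

```python
def ip_extract(x, y):
    """One-bit inner-product (Hadamard) two-source extractor over GF(2):
    the parity of the bitwise AND of the two source strings."""
    return bin(x & y).count("1") & 1

# For every nonzero x the map y -> ip_extract(x, y) is balanced, so over
# uniform inputs the output bit is close to uniform; real sources only
# need enough min-entropy for the same conclusion to hold approximately.
n = 3
ones = sum(ip_extract(x, y) for x in range(2 ** n) for y in range(2 ** n))
```

The abstract's claim is that security of any such extractor survives quantum side information, provided the sources and the adversary's information form a Markov chain.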
Simple and tight device-independent security proofs
Device-independent security is the gold standard for quantum cryptography: not only is security based entirely on the laws of quantum mechanics, but it holds irrespective of any a priori assumptions on the quantum devices used in a protocol, making it particularly applicable in a quantum-wary environment. While the existence of device-independent protocols for tasks such as randomness expansion and quantum key distribution has recently been established, the underlying proofs of security remain very challenging, yield rather poor key rates, and demand very high quality quantum devices, thus making them all but impossible to implement in practice. We introduce a technique for the analysis of device-independent cryptographic protocols. We provide a flexible protocol and give a security proof that provides quantitative bounds that are asymptotically tight, even in the presence of general quantum adversaries. At a high level our approach amounts to establishing a reduction to the scenario in which the untrusted device operates in an identical and independent way in each round of the protocol. This is achieved by leveraging the sequential nature of the protocol and makes use of a newly developed tool, the "entropy accumulation theorem" of Dupuis, Fawzi, and Renner [Entropy Accumulation, preprint, 2016]. As concrete applications we give simple and modular security proofs for device-independent quantum key distribution and randomness expansion protocols based on the CHSH inequality. For both tasks, we establish essentially optimal asymptotic key rates and noise tolerance. In view of recent experimental progress, which has culminated in loophole-free Bell tests, it is likely that these protocols can be practically implemented in the near future.
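The asymptotic CHSH key rate in this line of work can be evaluated directly. The sketch below assumes the standard formula r = 1 - h(1/2 + 1/2·sqrt((S/2)² - 1)) - h(Q), where S is the observed CHSH value, Q the quantum bit error rate, and h the binary entropy; treat it as an illustration rather than a restatement of the paper's exact bounds.

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0 or p >= 1:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def diqkd_rate(S, Q):
    """Asymptotic device-independent key rate for a CHSH-based protocol
    with CHSH value S (2 < S <= 2*sqrt(2)) and bit error rate Q:
    the adversary's uncertainty grows with the Bell violation S, while
    h(Q) is the cost of error correction."""
    return 1 - h(0.5 + 0.5 * math.sqrt((S / 2) ** 2 - 1)) - h(Q)

# Ideal devices (S = 2*sqrt(2), Q = 0) give one secret bit per round;
# the rate degrades smoothly as noise lowers S and raises Q.
```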