Entanglement cost and quantum channel simulation
This paper proposes a revised definition for the entanglement cost of a
quantum channel N. In particular, it is defined here to be the
smallest rate at which entanglement is required, in addition to free classical
communication, in order to simulate n calls to N, such that the
most general discriminator cannot distinguish the n calls to N
from the simulation. The most general discriminator is one who tests the
channels in a sequential manner, one after the other, and this discriminator is
known as a quantum tester [Chiribella et al., Phys. Rev. Lett., 101, 060401
(2008)] or as one implementing a quantum co-strategy [Gutoski et al., Symp.
Th. Comp., 565 (2007)]. As such, the proposed revised definition of
entanglement cost of a quantum channel leads to a rate that cannot be smaller
than the previous notion of a channel's entanglement cost [Berta et al., IEEE
Trans. Inf. Theory, 59, 6779 (2013)], in which the discriminator is limited to
distinguishing parallel uses of the channel from the simulation. Under this
revised notion, I prove that the entanglement cost of certain
teleportation-simulable channels is equal to the entanglement cost of their
underlying resource states. Then I find single-letter formulas for the
entanglement cost of some fundamental channel models, including dephasing,
erasure, three-dimensional Werner--Holevo channels, epolarizing channels
(complements of depolarizing channels), as well as single-mode pure-loss and
pure-amplifier bosonic Gaussian channels. These examples demonstrate that the
resource theory of entanglement for quantum channels is not reversible.
Finally, I discuss how to generalize the basic notions to arbitrary resource
theories.
Comment: 28 pages, 7 figures
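The ordering between the two definitions can be sketched as follows (notation mine, not taken verbatim from the paper): writing E_C^par for the parallel-discriminator cost of Berta et al. and E_C for the sequential one,

```latex
% Every parallel discrimination of n uses of the channel is a special
% case of a sequential tester, so the sequential simulation requirement
% is more stringent and the cost can only grow:
\[
  E_C^{\mathrm{par}}(\mathcal{N}) \;\le\; E_C(\mathcal{N}).
\]
```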
Postselection threshold against biased noise
The highest current estimates for the amount of noise a quantum computer can
tolerate are based on fault-tolerance schemes relying heavily on postselecting
on no detected errors. However, there has been no proof that these schemes give
even a positive tolerable noise threshold. A technique to prove a positive
threshold, for probabilistic noise models, is presented. The main idea is to
maintain strong control over the distribution of errors in the quantum state at
all times. This distribution has correlations which conceivably could grow out
of control with postselection. But in fact, the error distribution can be
written as a mixture of nearby distributions each satisfying strong
independence properties, so there are no correlations for postselection to
amplify.
Comment: 13 pages, FOCS 2006; conference version
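As a toy illustration (my own, not the paper's construction) of how postselection can amplify correlations in an error distribution: take two independent error bits and a parity check that sometimes detects disagreeing errors; conditioning on "no detection" makes the surviving errors correlated.

```python
import itertools

# Toy model: two error bits, each occurring independently with probability p,
# plus a parity check that flags a detection with probability d whenever
# the two error bits disagree.
p, d = 0.1, 0.9

def joint(postselect):
    """Normalized joint distribution of (e1, e2), optionally conditioned
    on the parity check reporting 'no detection'."""
    dist = {}
    for e1, e2 in itertools.product([0, 1], repeat=2):
        pr = (p if e1 else 1 - p) * (p if e2 else 1 - p)
        if postselect and e1 != e2:
            pr *= 1 - d  # disagreeing errors survive only with prob 1 - d
        dist[(e1, e2)] = pr
    z = sum(dist.values())
    return {k: v / z for k, v in dist.items()}

def covariance(dist):
    m1 = sum(pr for (e1, _), pr in dist.items() if e1)
    m2 = sum(pr for (_, e2), pr in dist.items() if e2)
    return dist[(1, 1)] - m1 * m2

print(covariance(joint(postselect=False)))  # ~0: errors start out independent
print(covariance(joint(postselect=True)))   # > 0: postselection-induced correlation
```

The mixture decomposition in the abstract is precisely what rules out this kind of amplification getting out of control in the actual fault-tolerance analysis.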
Strong and uniform convergence in the teleportation simulation of bosonic Gaussian channels
In the literature on the continuous-variable bosonic teleportation protocol
due to [Braunstein and Kimble, Phys. Rev. Lett., 80(4):869, 1998], it is often
loosely stated that this protocol converges to a perfect teleportation of an
input state in the limit of ideal squeezing and ideal detection, but the exact
form of this convergence is typically not clarified. In this paper, I
explicitly clarify that the convergence is in the strong sense, and not the
uniform sense, and furthermore, that the convergence occurs for any input state
to the protocol, including the infinite-energy Basel states defined and
discussed here. I also prove, in contrast to the above result, that the
teleportation simulations of pure-loss, thermal, pure-amplifier, amplifier, and
additive-noise channels converge both strongly and uniformly to the original
channels, in the limit of ideal squeezing and detection for the simulations.
For these channels, I give explicit uniform bounds on the accuracy of their
teleportation simulations. I then extend these uniform convergence results to
particular multi-mode bosonic Gaussian channels. These convergence statements
have important implications for mathematical proofs that make use of the
teleportation simulation of bosonic Gaussian channels, some of which have to do
with bounding their non-asymptotic secret-key-agreement capacities. As a
byproduct of the discussion given here, I confirm the correctness of the proof
of such bounds from my joint work with Berta and Tomamichel from [Wilde,
Tomamichel, Berta, IEEE Trans. Inf. Theory 63(3):1792, March 2017].
Furthermore, I show that it is not necessary to invoke the energy-constrained
diamond distance in order to confirm the correctness of this proof.
Comment: 19 pages, 3 figures
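For readers unfamiliar with the distinction, the two convergence notions are the standard ones (stated here with sigma the squeezing/detection parameter of the simulating channel):

```latex
% Strong convergence: state-by-state, with a reference system R:
\[
  \lim_{\sigma\to\infty}
  \bigl\| (\mathrm{id}_R \otimes \mathcal{N})(\rho_{RA})
        - (\mathrm{id}_R \otimes \mathcal{N}_\sigma)(\rho_{RA}) \bigr\|_1 = 0
  \quad \text{for every state } \rho_{RA}.
\]
% Uniform convergence: in the diamond norm, i.e., the supremum over all
% input states is taken before the limit:
\[
  \lim_{\sigma\to\infty}
  \| \mathcal{N} - \mathcal{N}_\sigma \|_\diamond = 0,
  \qquad
  \| \mathcal{N} - \mathcal{N}_\sigma \|_\diamond
  := \sup_{\rho_{RA}}
  \bigl\| (\mathrm{id}_R \otimes \mathcal{N})(\rho_{RA})
        - (\mathrm{id}_R \otimes \mathcal{N}_\sigma)(\rho_{RA}) \bigr\|_1.
\]
% Uniform convergence implies strong convergence, but not conversely.
```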
Quantum enigma machines and the locking capacity of a quantum channel
The locking effect is a phenomenon which is unique to quantum information
theory and represents one of the strongest separations between the classical
and quantum theories of information. The Fawzi-Hayden-Sen (FHS) locking
protocol harnesses this effect in a cryptographic context, whereby one party
can encode n bits into n qubits while using only a constant-size secret key.
The encoded message is then secure against any measurement that an eavesdropper
could perform in an attempt to recover the message, but the protocol does not
necessarily meet the composability requirements needed in quantum key
distribution applications. In any case, the locking effect represents an
extreme violation of Shannon's classical theorem, which states that
information-theoretic security holds in the classical case if and only if the
secret key is the same size as the message. Given this intriguing phenomenon,
it is of practical interest to study the effect in the presence of noise, which
can occur in the systems of both the legitimate receiver and the eavesdropper.
This paper formally defines the locking capacity of a quantum channel as the
maximum amount of locked information that can be reliably transmitted to a
legitimate receiver by exploiting many independent uses of a quantum channel
and an amount of secret key sublinear in the number of channel uses. We provide
general operational bounds on the locking capacity in terms of other well-known
capacities from quantum Shannon theory. We also study the important case of
bosonic channels, finding limitations on these channels' locking capacity when
coherent-state encodings are employed and particular locking protocols for
these channels that might be physically implementable.
Comment: 37 pages
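Shannon's classical baseline mentioned above is realized by the one-time pad: perfect secrecy is achievable, but only with a key as long as the message. A minimal sketch:

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # Information-theoretic secrecy requires len(key) == len(message);
    # this is exactly the requirement the quantum locking effect evades.
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

message = b"locking"
key = secrets.token_bytes(len(message))  # fresh uniformly random key
ciphertext = otp_encrypt(message, key)
assert otp_encrypt(ciphertext, key) == message  # XOR is its own inverse
```

The FHS protocol's constant-size key, by contrast, is sublinear in the message length, which is why locking counts as an extreme violation of this bound.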
Simulating Hamiltonians in Quantum Networks: Efficient Schemes and Complexity Bounds
We address the problem of simulating pair-interaction Hamiltonians in n-node
quantum networks where the subsystems have arbitrary, possibly different,
dimensions. We show that any pair-interaction can be used to simulate any other
by applying appropriate sequences of local control operations. Efficient schemes
for decoupling and time reversal can be constructed from orthogonal arrays.
Conditions on time optimal simulation are formulated in terms of spectral
majorization of matrices characterizing the coupling parameters. Moreover, we
consider a specific system of n harmonic oscillators with bilinear interaction.
In this case, decoupling can efficiently be achieved using the combinatorial
concept of difference schemes. For this type of interaction, we present optimal
schemes for inversion.
Comment: 19 pages, LaTeX2e
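A minimal numerical sketch (my own illustration, not the paper's scheme) of why orthogonal sign patterns decouple pair interactions: if qubit i is flipped in time slot k according to the sign H[k, i], a coupling term Z_i Z_j acquires the sign H[k, i] * H[k, j], and orthogonality of the columns makes the time average vanish.

```python
import numpy as np

# Rows of a 4x4 Hadamard matrix used as +/-1 control patterns on 4 qubits:
# flipping qubit i (e.g. by an X pulse) in time slot k multiplies every
# Z_i Z_j coupling term by H[k, i] * H[k, j].
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])

n = H.shape[1]
for i in range(n):
    for j in range(n):
        avg = np.mean(H[:, i] * H[:, j])  # average sign picked up by Z_i Z_j
        # Distinct columns are orthogonal, so every cross coupling averages
        # to zero over the cycle, i.e. the pair interaction is decoupled.
        assert avg == (1.0 if i == j else 0.0)
print("all Z_i Z_j couplings average to zero across the cycle")
```

Orthogonal arrays generalize this Hadamard construction to subsystems of arbitrary dimension, which is the setting the abstract considers.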
Classical Computation in the Quantum World
Quantum computation is by far the most powerful computational model allowed by the laws of physics. By carefully manipulating microscopic systems governed by quantum mechanics, one can efficiently solve computational problems that may be classically intractable; at the same time, such speed-ups are rarely possible without the help of classical computation, since most quantum algorithms rely heavily on subroutines that are purely classical. A better understanding of the relationship between classical and quantum computation is indispensable, particularly in an era when the first quantum device exceeding classical computational power is within reach.
In the first part of the thesis, we study some differences between classical and quantum computation. We first show that quantum cryptographic hashing is maximally resilient against classical leakage, a property beyond reach for any classical hash function. Next, we consider the limitation of strong (amplitude-wise) simulation of quantum computation. We prove an unconditional and explicit complexity lower bound for a category of simulations called monotone strong simulation and further prove conditional complexity lower bounds for general strong simulation techniques. Both results indicate that strong simulation is fundamentally unscalable.
In the second part of the thesis, we propose classical algorithms that facilitate quantum computing. We propose a new classical algorithm for the synthesis of a quantum algorithm paradigm called quantum signal processing. Empirically, our algorithm demonstrates numerical stability and an acceleration of more than one order of magnitude compared to state-of-the-art algorithms. Finally, we propose a randomized algorithm for transversally switching between arbitrary stabilizer quantum error-correcting codes. It has the property of preserving the code distance and thus might prove useful for designing fault-tolerant code-switching schemes.
PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/149943/1/cupjinh_1.pd
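As a hedged illustration of what "strong (amplitude-wise) simulation" means: the brute-force approach stores all 2^n amplitudes and reads off the one requested, and the exponential memory footprint is exactly the kind of cost the thesis's lower bounds show is unavoidable in general.

```python
import numpy as np

def apply_1q(state, gate, qubit, n):
    """Apply a single-qubit gate to the full 2^n statevector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(state, qubit, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.zeros(2**n)
state[0] = 1.0                     # start in |000>
for q in range(n):                 # one layer of Hadamards
    state = apply_1q(state, H, q, n)
amp = state[0]                     # strong simulation: one exact amplitude
print(amp)  # 2**-1.5, about 0.3536
```

The vector already has 2^n entries for n qubits, so this strategy is hopeless at, say, n = 60; the thesis makes that intuition into unconditional and conditional lower bounds.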