
    Quantum Compression and Quantum Learning via Information Theory

    This thesis consists of two parts: quantum compression and quantum learning theory. A common theme between these problems is that we study them through the lens of information theory. We first study the task of visible compression of an ensemble of quantum states with entanglement assistance in the one-shot setting. The protocols achieving the best compression use many more qubits of shared entanglement than the number of qubits in the states in the ensemble. Other compression protocols, with potentially higher communication cost, have entanglement cost bounded by the number of qubits in the given states. This motivates the question as to whether entanglement is truly necessary for compression, and if so, how much of it is needed. We show that an ensemble given by Jain, Radhakrishnan, and Sen (ICALP'03) cannot be compressed by more than a constant number of qubits without shared entanglement, while in the presence of shared entanglement, the communication cost of compression can be arbitrarily smaller than the entanglement cost. Next, we study the task of quantum state redistribution, the most general version of compression of quantum states. We design a protocol for this task with communication cost in terms of a measure of distance from quantum Markov chains. More precisely, the distance is defined in terms of quantum max-relative entropy and quantum hypothesis testing entropy. Our result is the first to connect quantum state redistribution and Markov chains and gives an operational interpretation for a possible one-shot analogue of quantum conditional mutual information. The communication cost of our protocol is lower than all previously known ones and asymptotically achieves the well-known rate of quantum conditional mutual information. In the last part, we focus on quantum algorithms for learning Boolean functions using quantum examples. We consider two commonly studied models of learning, namely, quantum PAC learning and quantum agnostic learning. 
We reproduce the optimal lower bounds of Arunachalam and de Wolf (JMLR’18) for the sample complexity of both of these models using information theory and spectral analysis. Our proofs are simpler than the previous ones, and the techniques can possibly be extended to similar scenarios.
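The asymptotic rate mentioned above, the quantum conditional mutual information, is I(A;B|C) = S(AC) + S(BC) - S(C) - S(ABC) in von Neumann entropies. As an illustration of that formula (our own example, not code from the thesis), evaluating it for a three-qubit GHZ state:

```python
import numpy as np

def vn_entropy(rho):
    # von Neumann entropy in bits
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace(rho, dims, keep):
    # trace out every subsystem whose index is not in `keep`
    n = len(dims)
    T = rho.reshape(dims + dims)
    traced = 0
    for ax in sorted(set(range(n)) - set(keep), reverse=True):
        T = np.trace(T, axis1=ax, axis2=ax + n - traced)
        traced += 1
    d = int(np.prod([dims[i] for i in keep]))
    return T.reshape(d, d)

dims = [2, 2, 2]                                  # qubits A, B, C
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz)

qcmi = (vn_entropy(ptrace(rho, dims, [0, 2]))     # S(AC)
        + vn_entropy(ptrace(rho, dims, [1, 2]))   # S(BC)
        - vn_entropy(ptrace(rho, dims, [2]))      # S(C)
        - vn_entropy(rho))                        # S(ABC)
print(qcmi)  # -> 1.0 for the GHZ state
```

For the GHZ state each two-qubit marginal and the single-qubit marginal are classical mixtures, so the four entropies are 1, 1, 1, 0 bits and the conditional mutual information is 1 bit.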

    Asymptotic Compressibility of Entanglement and Classical Communication in Distributed Quantum Computation

    We consider implementations of a bipartite unitary on many pairs of unknown input states by local operations and classical communication assisted by shared entanglement. We investigate to what extent the entanglement cost and the classical communication cost can be compressed by allowing a nonzero but vanishing error in the asymptotic limit of infinitely many pairs. We show that a lower bound on the minimal entanglement cost, the forward classical communication cost, and the backward classical communication cost per pair is given by the Schmidt strength of the unitary. We also prove that an upper bound on these three kinds of cost is given by the amount of randomness required to partially decouple a tripartite quantum state associated with the unitary. In the proof, we construct a protocol that uses quantum state merging. For generalized Clifford operators, we show that the lower and upper bounds coincide. We then apply our result to the problem of distributed compression of tripartite quantum states, and derive lower and upper bounds on the optimal quantum communication rate required therein.
    Comment: Sections II and VIII added
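The Schmidt strength invoked in the lower bound is the entropy of the unitary's normalized squared operator Schmidt coefficients. A hedged numerical sketch (our own illustration, not the paper's construction): realign the matrix of U and take an SVD; for CNOT this gives 1 ebit:

```python
import numpy as np

def schmidt_strength(U, dA, dB):
    # entropy (in ebits) of the normalized squared operator Schmidt coefficients
    T = U.reshape(dA, dB, dA, dB)            # indices a, b, a', b'
    M = T.transpose(0, 2, 1, 3).reshape(dA * dA, dB * dB)
    s = np.linalg.svd(M, compute_uv=False)   # operator Schmidt coefficients
    p = s ** 2 / np.sum(s ** 2)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
print(schmidt_strength(CNOT, 2, 2))  # -> 1.0 ebit
```

A product unitary such as the identity has a single operator Schmidt term and therefore Schmidt strength 0, consistent with it requiring no entanglement to implement locally.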

    Identifying the Information Gain of a Quantum Measurement

    We show that quantum-to-classical channels, i.e., quantum measurements, can be asymptotically simulated by an amount of classical communication equal to the quantum mutual information of the measurement, provided sufficient shared randomness is available. This result generalizes Winter's measurement compression theorem for fixed independent and identically distributed inputs [Winter, CMP 244 (157), 2004] to arbitrary inputs, and, more importantly, it identifies the quantum mutual information of a measurement as the information gained by performing it, independent of the input state on which it is performed. Our result is a generalization of the classical reverse Shannon theorem to quantum-to-classical channels. In this sense, it can be seen as a quantum reverse Shannon theorem for quantum-to-classical channels, but with the entanglement assistance and quantum communication replaced by shared randomness and classical communication, respectively. The proof is based on a novel one-shot state merging protocol for "classically coherent states" as well as the post-selection technique for quantum channels, and it uses techniques developed for the quantum reverse Shannon theorem [Berta et al., CMP 306 (579), 2011].
    Comment: v2: new result about non-feedback measurement simulation, 45 pages, 4 figures
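To make the identified quantity concrete, the information gain of a measurement applied to an input ρ can be computed as the mutual information I(R;X) between a purifying reference R and the outcome register X. A minimal numpy sketch under our own conventions (not the paper's protocol), for a computational-basis measurement on half a Bell pair:

```python
import numpy as np

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def ptrace(rho, keep):
    # partial trace of a two-qubit state; keep=0 keeps the first qubit
    T = rho.reshape(2, 2, 2, 2)
    return T.trace(axis1=1, axis2=3) if keep == 0 else T.trace(axis1=0, axis2=2)

# Bell state purifying the maximally mixed single-qubit input
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)
rho_RB = np.outer(phi, phi)

# simulate the Z-basis measurement channel on B, writing the outcome into X
rho_RX = np.zeros((4, 4))
for x in range(2):
    k = np.eye(2)[x]
    P = np.kron(np.eye(2), np.outer(k, k))   # project B onto |x>
    branch = P @ rho_RB @ P                  # unnormalized branch for outcome x
    rho_RX += np.kron(ptrace(branch, 0), np.outer(k, k))

info_gain = (vn_entropy(ptrace(rho_RX, 0)) + vn_entropy(ptrace(rho_RX, 1))
             - vn_entropy(rho_RX))
print(info_gain)  # -> 1.0 bit for a projective measurement on half a Bell pair
```

Here the measurement perfectly correlates the reference with the outcome, so the gain saturates at 1 bit; a trivial measurement would give 0.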

    Strong converse theorems using Rényi entropies

    We use a Rényi entropy method to prove strong converse theorems for certain information-theoretic tasks which involve local operations and quantum or classical communication between two parties. These include state redistribution, coherent state merging, quantum state splitting, measurement compression with quantum side information, randomness extraction against quantum side information, and data compression with quantum side information. The method we employ extends ideas developed by Sharma [arXiv:1404.5940], which he used to give a new proof of the strong converse theorem for state merging. For state redistribution, we prove the strong converse property for the boundary of the entire achievable rate region in the (e, q)-plane, where e and q denote the entanglement cost and quantum communication cost, respectively. In the case of measurement compression with quantum side information, we prove a strong converse theorem for the classical communication cost, which is a new result extending the previously known weak converse. For the remaining tasks, we provide new proofs for strong converse theorems previously established using smooth entropies. For each task, we obtain the strong converse theorem from explicit bounds on the figure of merit of the task in terms of a Rényi generalization of the optimal rate. Hence, we identify candidates for the strong converse exponents for each task discussed in this paper. To prove our results, we establish various new entropic inequalities, which might be of independent interest. These involve conditional entropies and mutual information derived from the sandwiched Rényi divergence. In particular, we obtain novel bounds relating these quantities, as well as the Rényi conditional mutual information, to the fidelity of two quantum states.
    Comment: 40 pages, 5 figures; v4: Accepted for publication in Journal of Mathematical Physics
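The sandwiched Rényi divergence underlying these inequalities is D̃_α(ρ‖σ) = (1/(α-1)) log Tr[(σ^{(1-α)/2α} ρ σ^{(1-α)/2α})^α]. A minimal numerical sketch (ours; σ assumed full rank), which on commuting states reduces to the classical Rényi divergence:

```python
import numpy as np

def mat_pow(rho, p):
    # power of a positive semidefinite matrix via eigendecomposition
    w, V = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)
    return (V * w ** p) @ V.conj().T

def sandwiched_renyi(rho, sigma, alpha):
    # sandwiched Renyi divergence D~_alpha(rho || sigma), in bits
    s = mat_pow(sigma, (1 - alpha) / (2 * alpha))
    A = s @ rho @ s
    return float(np.log2(np.trace(mat_pow(A, alpha)).real) / (alpha - 1))

# commuting example: for alpha = 2 this is the classical value log2(sum p^2 / q)
rho = np.diag([0.75, 0.25])
sigma = np.diag([0.5, 0.5])
d2 = sandwiched_renyi(rho, sigma, 2.0)
print(d2)  # log2(1.25), about 0.3219
```

For ρ = σ the divergence vanishes, and as α → 1 it recovers the Umegaki relative entropy; the strong converse exponents discussed above live in the α > 1 regime.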

    One-shot lossy quantum data compression

    We provide a framework for one-shot quantum rate distortion coding, in which the goal is to determine the minimum number of qubits required to compress quantum information as a function of the probability that the distortion incurred upon decompression exceeds some specified level. We obtain a one-shot characterization of the minimum qubit compression size for an entanglement-assisted quantum rate-distortion code in terms of the smooth max-information, a quantity previously employed in the one-shot quantum reverse Shannon theorem. Next, we show how this characterization converges to the known expression for the entanglement-assisted quantum rate distortion function for asymptotically many copies of a memoryless quantum information source. Finally, we give a tight, finite blocklength characterization for the entanglement-assisted minimum qubit compression size of a memoryless isotropic qubit source subject to an average symbol-wise distortion constraint.
    Comment: 36 pages
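The max-information behind that characterization (before smoothing) is I_max(A;B) = min over σ_B of D_max(ρ_AB‖ρ_A⊗σ_B), where D_max(ρ‖σ) = log λ_max(σ^{-1/2} ρ σ^{-1/2}); choosing σ_B = ρ_B gives an easily computed upper bound. A hedged numpy sketch (ours; smoothing and the minimization over σ_B are omitted), for a two-qubit maximally entangled state:

```python
import numpy as np

def mat_pow(rho, p):
    # matrix power of a positive definite matrix via eigendecomposition
    w, V = np.linalg.eigh(rho)
    return (V * w ** p) @ V.conj().T

def d_max(rho, sigma):
    # D_max(rho || sigma): log2 of the largest eigenvalue of sigma^{-1/2} rho sigma^{-1/2}
    s = mat_pow(sigma, -0.5)
    return float(np.log2(np.linalg.eigvalsh(s @ rho @ s).max()))

phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)   # maximally entangled state
rho_AB = np.outer(phi, phi)
rho_A = rho_B = np.eye(2) / 2                          # both marginals maximally mixed
ub = d_max(rho_AB, np.kron(rho_A, rho_B))
print(ub)  # -> 2.0
```

The value 2 = 2 log2(2) reflects that a maximally entangled pair of qubits is maximally correlated; product states give 0.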