
    Finite-Block-Length Analysis in Classical and Quantum Information Theory

    Coding technology is used in many information processing tasks. In particular, when noise during transmission disturbs communications, coding technology is employed to protect the information. However, there are two types of coding technology: coding in classical information theory and coding in quantum information theory. Although the physical media used to transmit information ultimately obey quantum mechanics, we need to choose the type of coding depending on the kind of information device, classical or quantum, that is being used. In both branches of information theory, there are many elegant theoretical results under the ideal assumption that an infinitely large system is available. In a realistic situation, we need to account for finite-size effects. The present paper reviews finite-size effects in classical and quantum information theory with respect to various topics, including applied aspects.
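
    For orientation, a central quantity in finite-blocklength analysis (a standard result from the literature, not quoted from the paper itself) is the second-order expansion of the maximal code size $M^*(n,\varepsilon)$ achievable over $n$ uses of a discrete memoryless channel with error probability at most $\varepsilon$: $$\log M^*(n,\varepsilon)=nC-\sqrt{nV}\,Q^{-1}(\varepsilon)+O(\log n),$$ where $C$ is the capacity, $V$ the channel dispersion, and $Q^{-1}$ the inverse Gaussian tail function; an analogous expansion is known for classical-quantum channels.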

    Divergence radii and the strong converse exponent of classical-quantum channel coding with constant compositions

    There are different inequivalent ways to define the Rényi capacity of a channel for a fixed input distribution $P$. In a 1995 paper Csiszár has shown that for classical discrete memoryless channels there is a distinguished such quantity that has an operational interpretation as a generalized cutoff rate for constant composition channel coding. We show that the analogous notion of Rényi capacity, defined in terms of the sandwiched quantum Rényi divergences, has the same operational interpretation in the strong converse problem of classical-quantum channel coding. Denoting the constant composition strong converse exponent for a memoryless classical-quantum channel $W$ with composition $P$ and rate $R$ as $sc(W,R,P)$, our main result is that $$sc(W,R,P)=\sup_{\alpha>1}\frac{\alpha-1}{\alpha}\left[R-\chi_{\alpha}^*(W,P)\right],$$ where $\chi_{\alpha}^*(W,P)$ is the $P$-weighted sandwiched Rényi divergence radius of the image of the channel.
    Comment: 46 pages. V7: Added the strong converse exponent with cost constraint.
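
    For readers unfamiliar with these quantities, the standard definitions (stated here in one common convention, as an assumption rather than quoted from the paper) are $$D_{\alpha}^*(\rho\|\sigma)=\frac{1}{\alpha-1}\log\operatorname{Tr}\left[\left(\sigma^{\frac{1-\alpha}{2\alpha}}\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\right)^{\alpha}\right],\qquad \chi_{\alpha}^*(W,P)=\min_{\sigma}\sum_{x}P(x)\,D_{\alpha}^*(W_x\|\sigma),$$ where the minimum runs over density operators $\sigma$ on the output system and $W_x$ is the channel output state for input symbol $x$.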

    Min- and Max-Entropy in Infinite Dimensions

    We consider an extension of the conditional min- and max-entropies to infinite-dimensional separable Hilbert spaces. We show that these satisfy characterizing properties known from the finite-dimensional case, and retain information-theoretic operational interpretations, e.g., the min-entropy as maximum achievable quantum correlation, and the max-entropy as decoupling accuracy. We furthermore generalize the smoothed versions of these entropies and prove an infinite-dimensional quantum asymptotic equipartition property. To facilitate these generalizations we show that the min- and max-entropy can be expressed in terms of convergent sequences of finite-dimensional min- and max-entropies, which provides a convenient technique to extend proofs from the finite to the infinite-dimensional setting.
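
    As background, the finite-dimensional definitions being extended read (in one common convention, stated here as an assumption rather than quoted from the paper) $$H_{\min}(A|B)_\rho=-\inf_{\sigma_B}D_{\max}(\rho_{AB}\,\|\,\mathbb{1}_A\otimes\sigma_B),\qquad H_{\max}(A|B)_\rho=-H_{\min}(A|C)_\rho,$$ where $D_{\max}(\rho\|\sigma)=\inf\{\lambda\in\mathbb{R}:\rho\le 2^{\lambda}\sigma\}$, the infimum in $H_{\min}$ runs over states $\sigma_B$, and $C$ is a purifying system for $\rho_{AB}$.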

    A precise bare simulation approach to the minimization of some distances. Foundations

    In information theory -- as well as in the adjacent fields of statistics, machine learning, artificial intelligence, signal processing and pattern recognition -- many flexibilizations of the omnipresent Kullback-Leibler information distance (relative entropy) and of the closely related Shannon entropy have become frequently used tools. The main goal of this paper is to tackle the corresponding constrained minimization (respectively, maximization) problems by a newly developed dimension-free bare (pure) simulation method. Within our discrete setup of arbitrary dimension, almost no assumptions (such as convexity) on the set of constraints are needed, and our method is precise (i.e., it converges in the limit). As a side effect, we also derive an innovative way of constructing new useful distances/divergences. To illustrate the core of our approach, we present numerous examples. The potential for widespread applicability is indicated, too; in particular, we deliver many recent references for uses of the involved distances/divergences and entropies in various research fields (which may also serve as an interdisciplinary interface).
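
    To fix notation (our own summary, not taken verbatim from the paper), the prototypical problem in this discrete setup is the constrained minimization of the Kullback-Leibler distance, $$\inf_{Q\in\Omega}D(Q\|P)=\inf_{Q\in\Omega}\sum_{i}q_i\log\frac{q_i}{p_i},$$ where $P=(p_i)$ is a fixed reference distribution and $\Omega$ is a (not necessarily convex) constraint set of probability distributions $Q=(q_i)$.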

    One-shot information-theoretical approaches to fluctuation theorems

    Traditional thermodynamics governs the behaviour of large systems that evolve between states of thermal equilibrium. For these large systems, the mean values of thermodynamic quantities (such as work, heat and entropy) provide a good characterisation of the process. Conversely, there is ever-increasing interest in the thermal behaviour of systems that evolve quickly and far from equilibrium, and that are too small for their behaviour to be well-described by mean values. Two major fields of modern thermodynamics seek to tackle such systems: non-equilibrium thermodynamics, and the nascent field of one-shot statistical mechanics. The former provides tools such as fluctuation theorems, whereas the latter applies "one-shot" Rényi entropies to thermal contexts. In this chapter of the upcoming book "Thermodynamics in the quantum regime - Recent progress and outlook" (Springer International Publishing), I provide a gentle introduction to recent research that draws from both fields: the application of one-shot information theory to fluctuation theorems.
    Comment: As a chapter of: F. Binder, L. A. Correa, C. Gogolin, J. Anders, and G. Adesso (eds.), "Thermodynamics in the quantum regime - Recent progress and outlook" (Springer International Publishing).
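
    For context (standard definitions, not quoted from the chapter): the one-shot quantities referred to are the Rényi entropies, $$H_{\alpha}(p)=\frac{1}{1-\alpha}\log\sum_{i}p_i^{\alpha},\qquad \alpha\in(0,1)\cup(1,\infty),$$ which recover the Shannon (or von Neumann) entropy in the limit $\alpha\to 1$, while the best-known fluctuation theorem is the Crooks relation $P_F(W)/P_R(-W)=e^{\beta(W-\Delta F)}$ between forward and reverse work distributions.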