    Relations between random coding exponents and the statistical physics of random codes

    The partition function pertaining to finite-temperature decoding of a (typical) randomly chosen code is known to exhibit three types of behavior, corresponding to three phases in the plane of rate vs. temperature: the ferromagnetic phase, corresponding to correct decoding; the paramagnetic phase of complete disorder, dominated by exponentially many incorrect codewords; and the glassy phase (or condensed phase), where the system is frozen at minimum energy and dominated by subexponentially many incorrect codewords. We show that the statistical physics associated with the latter two phases is intimately related to random coding exponents. In particular, the exponent associated with the probability of correct decoding at rates above capacity is directly related to the free energy in the glassy phase, and the exponent associated with the probability of error (the error exponent) at rates below capacity is strongly related to the free energy in the paramagnetic phase. In fact, we derive alternative expressions for these exponents in terms of the corresponding free energies and attempt to draw some insights from these expressions. Finally, as a side result, we compare the phase diagram associated with a simple finite-temperature universal decoder for discrete memoryless channels to that of the finite-temperature decoder that is aware of the channel statistics.
    Comment: 26 pages, 2 figures, submitted to IEEE Transactions on Information Theory
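    For context, a minimal sketch of the standard construction behind these phases (the conventions here are assumed; the paper's normalization may differ): the finite-temperature decoder at inverse temperature $\beta$ is built from the partition function

        Z(\beta \mid y) \;=\; \sum_{x \in \mathcal{C}} P(y \mid x)^{\beta},
        \qquad
        F(\beta) \;=\; -\frac{1}{\beta} \ln Z(\beta \mid y),

    where $\mathcal{C}$ is the code and $y$ the channel output; $\beta = 1$ matches the channel posterior and $\beta \to \infty$ recovers maximum-likelihood decoding. The three phases correspond to $Z$ being dominated, respectively, by the transmitted codeword, by exponentially many incorrect codewords, or by the subexponentially many incorrect codewords at minimum energy.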

    LIBQIF: a quantitative information flow C++ toolkit library

    A fundamental concern in computer security is to control information flow, whether to protect confidential information from being leaked or to protect trusted information from being tainted. A classic approach is to try to enforce non-interference. Unfortunately, achieving non-interference is often not possible, because there is often a correlation between secrets and observables, either by design or due to some physical feature of the computation (side channels). One promising approach to relaxing non-interference is to develop a quantitative theory of information flow that allows us to reason about how much information is being leaked, thus paving the way to the possibility of tolerating small leaks. In this work, we aim at developing a quantitative information flow C++ toolkit library, implementing several algorithms from the area of QIF (more specifically from four theories: Shannon entropy, min-entropy, guessing entropy, and g-leakage) and differential privacy. The library can be used by academics to facilitate research in these areas, as well as by students as a learning tool. A primary use of the library is to compute QIF measures and to generate plots useful for understanding their behavior. Moreover, the library allows users to compute optimal differentially private mechanisms, compare the utility of known mechanisms, compare the leakage of channels, compute gain functions that separate channels, and various other functionalities related to QIF.
    Undergraduate thesis. Sociedad Argentina de Informática e Investigación Operativa (SADIO)
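    As a small illustration of the kind of measure the library computes (a minimal sketch in Python rather than C++, and not the library's actual API; the channel matrix and prior below are made up), min-entropy leakage compares an adversary's one-try guessing probability before and after observing the channel output:

        import numpy as np

        # Min-entropy leakage of a channel C[x, y] = P(y | x) under prior pi.
        # Illustrative only; not the libqif API.
        def min_entropy_leakage(pi, C):
            v_prior = pi.max()                            # one-try guess, no observation
            v_post = (pi[:, None] * C).max(axis=0).sum()  # best guess for each output y
            return np.log2(v_post / v_prior)              # leakage in bits

        # Example: uniform prior over two secrets, noisy 2x2 channel.
        pi = np.array([0.5, 0.5])
        C = np.array([[0.8, 0.2],
                      [0.3, 0.7]])
        print(min_entropy_leakage(pi, C))  # ~0.585 bits leaked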


    Efficient Approximation of Quantum Channel Capacities

    We propose an iterative method for approximating the capacity of classical-quantum channels with a discrete input alphabet and a finite-dimensional output, possibly under additional constraints on the input distribution. Based on duality of convex programming, we derive explicit upper and lower bounds for the capacity. To provide an $\varepsilon$-close estimate of the capacity, the presented algorithm requires $O\!\left(\tfrac{(N \vee M)\, M^3 \log(N)^{1/2}}{\varepsilon}\right)$, where $N$ denotes the input alphabet size and $M$ the output dimension. We then generalize the method to the task of approximating the capacity of classical-quantum channels with a bounded continuous input alphabet and a finite-dimensional output. For channels with a finite-dimensional quantum mechanical input and output, the idea of a universal encoder allows us to approximate the Holevo capacity using the same method. In particular, we show that the problem of approximating the Holevo capacity can be reduced to a multidimensional integration problem. For families of quantum channels fulfilling a certain assumption, we show that the complexity of deriving an $\varepsilon$-close solution to the Holevo capacity is subexponential or even polynomial in the problem size. We provide several examples to illustrate the performance of the approximation scheme in practice.
    Comment: 36 pages, 1 figure
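    The duality-based algorithm itself is in the paper; as a rough point of comparison, here is a minimal Blahut-Arimoto-style iteration for the same classical-quantum capacity (a known alternative approach, sketched in Python; the example states below are made up):

        import numpy as np

        def vn_entropy(rho):
            # von Neumann entropy H(rho) = -Tr rho log rho, in nats.
            w = np.linalg.eigvalsh(rho)
            w = w[w > 1e-12]
            return float(-np.sum(w * np.log(w)))

        def relative_entropy(rho, sigma):
            # D(rho || sigma) = Tr rho log rho - Tr rho log sigma, in nats,
            # via eigendecomposition so rank-deficient rho is handled.
            w, V = np.linalg.eigh(sigma)
            log_sigma = (V * np.log(np.clip(w, 1e-12, None))) @ V.conj().T
            return float((-vn_entropy(rho) - np.trace(rho @ log_sigma)).real)

        def cq_capacity(rhos, iters=300):
            # Multiplicative update p(x) <- p(x) exp(D(rho_x || rho_bar)),
            # the classical-quantum analogue of Blahut-Arimoto.
            p = np.full(len(rhos), 1.0 / len(rhos))
            for _ in range(iters):
                avg = sum(pi * r for pi, r in zip(p, rhos))
                d = np.array([relative_entropy(r, avg) for r in rhos])
                p = p * np.exp(d)
                p /= p.sum()
            avg = sum(pi * r for pi, r in zip(p, rhos))
            return sum(pi * relative_entropy(r, avg) for pi, r in zip(p, rhos))

        # Example: two non-orthogonal pure qubit states; capacity in nats.
        v0 = np.array([1.0, 0.0])
        v1 = np.array([np.cos(0.4), np.sin(0.4)])
        print(cq_capacity([np.outer(v0, v0), np.outer(v1, v1)]))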

    Channels That Die

    Given the possibility of communication systems failing catastrophically, we investigate limits to communicating over channels that fail at random times. These channels are finite-state semi-Markov channels. We show that communication with arbitrarily small probability of error is not possible. Making use of results in finite blocklength channel coding, we determine sequences of blocklengths that optimize the transmission volume communicated at a fixed maximum message error probability. We provide a partial ordering of communication channels. A dynamic programming formulation is used to show the structural result that channel state feedback does not improve performance.
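    As a toy illustration of the blocklength trade-off (not the paper's dynamic program: it assumes the channel dies at each symbol independently with probability delta, i.e. a geometric lifetime, and uses the standard normal approximation R(n, eps) ~ C - sqrt(V/n) * Qinv(eps) with made-up channel parameters):

        import numpy as np
        from scipy.stats import norm

        def block_rate(n, C, V, eps):
            # Finite-blocklength normal approximation to the best coding rate.
            return max(0.0, C - np.sqrt(V / n) * norm.isf(eps))

        def expected_volume(n, C, V, eps, delta):
            # A block counts only if the channel survives all n symbols and
            # the block decodes; surviving-block count is geometric with
            # per-block survival probability s = (1 - delta)^n.
            s = (1.0 - delta) ** n
            expected_blocks = s / (1.0 - s)
            return expected_blocks * n * block_rate(n, C, V, eps) * (1.0 - eps)

        # Assumed example numbers: capacity C = 0.5 bit, dispersion V = 0.2,
        # per-block error eps = 1e-3, per-symbol death probability delta = 1e-3.
        C, V, eps, delta = 0.5, 0.2, 1e-3, 1e-3
        ns = np.arange(50, 3000)
        vols = [expected_volume(n, C, V, eps, delta) for n in ns]
        print("best blocklength:", ns[int(np.argmax(vols))])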

    Optimal information storage : nonsequential sources and neural channels

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. MIT Institute Archives copy: pages 101-163 bound in reverse order. Includes bibliographical references (p. 141-163).
    Information storage and retrieval systems are communication systems from the present to the future and fall naturally into the framework of information theory. The goal of information storage is to preserve as much signal fidelity under resource constraints as possible. The information storage theorem delineates average fidelity and average resource values that are achievable and those that are not. Moreover, observable properties of optimal information storage systems and the robustness of optimal systems to parameter mismatch may be determined. In this thesis, we study the physical properties of a neural information storage channel and also the fundamental bounds on the storage of sources that have nonsequential semantics. Experimental investigations have revealed that synapses in the mammalian brain possess unexpected properties. Adopting the optimization approach to biology, we cast the brain as an optimal information storage system and propose a theoretical framework that accounts for many of these physical properties. Based on previous experimental and theoretical work, we use volume as a limited resource and utilize the empirical relationship between volume and synaptic weight. Our scientific hypotheses are based on maximizing information storage capacity per unit cost. We use properties of the capacity-cost function, ε-capacity-cost approximations, and measure matching to develop optimization principles. We find that capacity-achieving input distributions not only explain existing experimental measurements but also make non-trivial predictions about the physical structure of the brain. Numerous information storage applications have semantics such that the order of source elements is irrelevant, so the source sequence can be treated as a multiset. We formulate fidelity criteria that consider asymptotically large multisets and give conclusive, but trivialized, results in rate-distortion theory. For fidelity criteria that consider fixed-size multisets, we give some conclusive results in high-rate quantization theory, low-rate quantization, and rate-distortion theory. We also provide bounds on the rate-distortion function for other nonsequential fidelity criteria problems. System resource consumption can be significantly reduced by recognizing the correct invariance properties and semantics of the information storage task at hand.
    by Lav R. Varshney. S.M.
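    As a minimal sketch of the capacity-cost machinery invoked above (a toy discrete channel in Python, not the thesis's neural channel model; the channel matrix and cost vector are made up), a Lagrangian form of Blahut-Arimoto traces out the capacity-cost function C(B) = max over p with E[b(X)] <= B of I(X;Y):

        import numpy as np

        def lagrangian_ba(W, b, lam, iters=500):
            # Blahut-Arimoto with input costs b[x] and multiplier lam:
            # p(x) <- p(x) exp(D(W_x || pW) - lam * b[x]), renormalized.
            n = W.shape[0]
            p = np.full(n, 1.0 / n)
            for _ in range(iters):
                q = p @ W                               # output distribution
                d = np.sum(W * np.log(W / q), axis=1)   # D(W_x || q), nats
                p = p * np.exp(d - lam * b)
                p /= p.sum()
            q = p @ W
            mi = np.sum(p * np.sum(W * np.log2(W / q), axis=1))  # I(X;Y), bits
            return p, mi, p @ b                         # distribution, rate, cost

        # Example: 3-input binary-output channel; the third input is cheap
        # but useless, the informative inputs are costly.
        W = np.array([[0.95, 0.05],
                      [0.05, 0.95],
                      [0.50, 0.50]])
        b = np.array([1.0, 1.0, 0.1])
        for lam in (0.0, 0.5, 2.0):   # sweep the multiplier to trace C(B)
            p, rate, cost = lagrangian_ba(W, b, lam)
            print(f"lam={lam}: rate={rate:.3f} bits, cost={cost:.3f}")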