
    Rényi Bounds on Information Combining

    Bounds on information combining are entropic inequalities that determine how the information, or entropy, of a set of random variables can change when they are combined in certain prescribed ways. Such bounds play an important role in information theory, particularly in coding and Shannon theory. The arguably most elementary kind of information combining is the addition of two binary random variables, i.e. a CNOT gate, and the resulting quantities are fundamental when investigating belief propagation and polar coding. In this work we generalize the concept to Rényi entropies. We give optimal bounds on the conditional Rényi entropy after combination, based on a certain convexity or concavity property, and discuss when this property indeed holds. Since there is no generally agreed upon definition of the conditional Rényi entropy, we consider four different versions from the literature. Finally, we discuss the application of these bounds to the polarization of Rényi entropies under polar codes.
    Comment: 14 pages, accepted for presentation at ISIT 202
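    As a concrete illustration of the unconditioned case, the sketch below (our own example, not code from the paper; renyi_entropy and combine_xor are hypothetical helper names) checks numerically that XOR-combining two independent binary variables with biases at most 1/2 can only increase the Rényi entropy.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy (in bits) of a Bernoulli(p) variable; Shannon entropy at alpha = 1."""
    probs = np.array([p, 1.0 - p])
    if np.isclose(alpha, 1.0):
        probs = probs[probs > 0]
        return -np.sum(probs * np.log2(probs))
    return np.log2(np.sum(probs ** alpha)) / (1.0 - alpha)

def combine_xor(p1, p2):
    """Bias of X1 XOR X2 for independent Bernoulli inputs (the CNOT / check-node combination)."""
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)

# For biases p1, p2 <= 1/2 the combined bias lies between max(p1, p2) and 1/2,
# so the entropy of the combined variable is at least max of the input entropies.
p1, p2, alpha = 0.11, 0.2, 2.0
h1 = renyi_entropy(p1, alpha)
h2 = renyi_entropy(p2, alpha)
h_comb = renyi_entropy(combine_xor(p1, p2), alpha)
print(h1, h2, h_comb)  # h_comb >= max(h1, h2)
```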

    An improved rate region for the classical-quantum broadcast channel

    We present a new achievable rate region for the two-user binary-input classical-quantum broadcast channel. The result is a generalization of the classical Marton-Gelfand-Pinsker region and is provably larger than the best previously known rate region for classical-quantum broadcast channels. The proof of achievability is based on the recently introduced polar coding scheme and its generalization to quantum network information theory.
    Comment: 5 pages, double column, 1 figure, based on a result presented in the Master's thesis arXiv:1501.0373
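    Polar coding underlies the achievability proof. As background, here is a minimal classical sketch (our own illustration for the binary erasure channel only, not the quantum construction of the paper) of channel polarization: Arıkan's transform maps erasure probability z to 2z − z² and z², and after a few recursion levels almost all synthetic channels are either near-perfect or near-useless.

```python
def polarize_bec(z, n_levels):
    """Recursively track BEC erasure probabilities under Arikan's polar transform.

    z: erasure probability of the base channel.
    Returns the 2**n_levels synthetic-channel erasure probabilities.
    """
    channels = [z]
    for _ in range(n_levels):
        nxt = []
        for w in channels:
            nxt.append(2 * w - w * w)  # "minus" channel: degraded
            nxt.append(w * w)          # "plus" channel: upgraded
        channels = nxt
    return channels

# Example: most channels polarize to near-perfect or near-useless
chans = polarize_bec(0.5, 10)
good = sum(c < 1e-3 for c in chans)
bad = sum(c > 1 - 1e-3 for c in chans)
print(good / len(chans), bad / len(chans))
```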

    Event-Triggered Estimation of Linear Systems: An Iterative Algorithm and Optimality Properties

    This report investigates the optimal design of event-triggered estimation for first-order linear stochastic systems. The problem is posed as a two-player team problem with a partially nested information pattern. The two players are an estimator and an event-trigger. The event-trigger has full state information and decides whether the estimator shall obtain the current state information by transmitting it through a resource-constrained channel. The objective is to find an optimal trade-off between the mean squared estimation error and the expected transmission rate. The proposed iterative algorithm alternately optimizes one player while fixing the other. It is shown that the solution of the algorithm converges to a linear predictor and a symmetric threshold policy if the densities of the initial state and the noise variables are even and radially decreasing functions. The effectiveness of the approach is illustrated on a numerical example. In the case of a multimodal noise distribution, a significant performance improvement can be achieved compared to a separate design that assumes a linear predictor and a symmetric threshold policy.
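    To make the trigger-estimator interplay concrete, here is a minimal Monte Carlo sketch (our own illustration under simplifying assumptions: a scalar system, Gaussian noise, a fixed threshold; all names and parameters are hypothetical, not the report's algorithm) of the structure the algorithm converges to: a linear predictor at the estimator and a symmetric threshold policy at the event-trigger.

```python
import numpy as np

def simulate_event_triggered(a=0.9, sigma_w=1.0, threshold=1.5, steps=10_000, seed=0):
    """Simulate x[k+1] = a*x[k] + w[k] with a symmetric-threshold event-trigger
    and a linear predictor at the estimator.

    The trigger transmits the true state whenever the prediction error
    magnitude exceeds `threshold`. Returns (mean squared error, transmission rate).
    """
    rng = np.random.default_rng(seed)
    x, x_hat = 0.0, 0.0
    sq_err, transmissions = 0.0, 0
    for _ in range(steps):
        x = a * x + rng.normal(0.0, sigma_w)   # plant update
        x_hat = a * x_hat                      # linear prediction at the estimator
        if abs(x - x_hat) > threshold:         # symmetric threshold event-trigger
            x_hat = x                          # transmit the current state
            transmissions += 1
        sq_err += (x - x_hat) ** 2
    return sq_err / steps, transmissions / steps

mse, rate = simulate_event_triggered()
print(f"MSE={mse:.3f}, transmission rate={rate:.3f}")
```

    Sweeping the threshold trades the mean squared estimation error against the transmission rate, which is exactly the trade-off the report optimizes.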

    Convexity and Operational Interpretation of the Quantum Information Bottleneck Function

    In classical information theory, the information bottleneck method (IBM) can be regarded as a method of lossy data compression which focuses on preserving meaningful (or relevant) information. As such it has recently gained a lot of attention, primarily for its applications in machine learning and neural networks. A quantum analogue of the IBM has recently been defined, and an attempt at providing an operational interpretation of the so-called quantum IB function as an optimal rate of an information-theoretic task has been made by Salek et al. However, the interpretation given in that paper has a couple of drawbacks: firstly, its proof is based on a conjecture that the quantum IB function is convex; secondly, the expression for the rate function involves certain entropic quantities which occur explicitly in the very definition of the underlying information-theoretic task, thus making the latter somewhat contrived. We overcome both of these drawbacks by first proving the convexity of the quantum IB function, and then giving an alternative operational interpretation of it as the optimal rate of a bona fide information-theoretic task, namely quantum source coding with quantum side information at the decoder, and we relate the quantum IB function to the rate region of this task. We similarly show that the related privacy funnel function is convex (both in the classical and the quantum case). However, we note that the quantum privacy funnel function is unlikely to characterize the optimal asymptotic rate of an information-theoretic task, since even its classical version lacks a certain additivity property which turns out to be essential.
    Comment: 17 pages, 7 figures; v2: improved presentation and explanations, one new figure; v3: restructured manuscript. Theorem 2 has been found previously in work by Hsieh and Watanabe; it is now correctly attributed
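    For readers new to the IBM, the classical IB function can be viewed as the lower envelope, over all stochastic encoders p(t|x), of achievable pairs (I(X;T), I(T;Y)). The sketch below (our own illustration; function names are hypothetical) evaluates one such trade-off point for a toy joint distribution, assuming the usual Markov chain T - X - Y.

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits for a joint distribution given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

def ib_point(pxy, pt_given_x):
    """Return (I(X;T), I(T;Y)) for an encoder p(t|x): one point of the IB trade-off."""
    px = pxy.sum(axis=1)               # p(x)
    pxt = px[:, None] * pt_given_x     # joint p(x,t)
    py_given_x = pxy / px[:, None]     # p(y|x)
    pty = pxt.T @ py_given_x           # joint p(t,y) via the Markov chain T - X - Y
    return mutual_information(pxt), mutual_information(pty)

# Toy example: a noisy binary source compressed by a stochastic 2-state encoder
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
encoder = np.array([[0.9, 0.1],
                    [0.2, 0.8]])
print(ib_point(pxy, encoder))  # (compression rate I(X;T), preserved relevance I(T;Y))
```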