    Bit-Interleaved Coded Modulation Revisited: A Mismatched Decoding Perspective

    We revisit the information-theoretic analysis of bit-interleaved coded modulation (BICM) by modeling the BICM decoder as a mismatched decoder. The mismatched decoding model is well defined for finite, yet arbitrary, block lengths, and naturally captures the channel memory among the bits belonging to the same symbol. We give two independent proofs of the achievability of the BICM capacity calculated by Caire et al., where BICM was modeled as a set of independent parallel binary-input channels whose output is the bitwise log-likelihood ratio. Our first achievability proof uses typical sequences and shows that, due to the random coding construction, the interleaver is not required. The second proof is based on random-coding error exponents with mismatched decoding, where the largest achievable rate is the generalized mutual information. We show that the generalized mutual information of the mismatched decoder coincides with the infinite-interleaver BICM capacity. We also show that the error exponent (and hence the cutoff rate) of the BICM mismatched decoder is upper-bounded by that of coded modulation and may thus be lower than that of the infinite-interleaver model. We also consider the mutual information appearing in the analysis of iterative decoding of BICM with EXIT charts. We show that the corresponding symbol metric has knowledge of the transmitted symbol and that the EXIT mutual information admits a representation as a pseudo-generalized mutual information, which is in general not achievable. A different symbol decoding metric, for which the extrinsic side information refers to the hypothesized symbol, induces a generalized mutual information lower than the coded modulation capacity.
    Comment: submitted to the IEEE Transactions on Information Theory. Conference version in the 2008 IEEE International Symposium on Information Theory, Toronto, Canada, July 2008.
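
    For reference, the two central quantities can be written as follows (standard formulations in our notation; the paper's exact definitions may differ in normalization). The generalized mutual information of a mismatched decoder with symbol metric q(x, y) and input distribution P is

        \[ I^{\mathrm{gmi}} = \sup_{s > 0}\, \mathbb{E}\!\left[ \log \frac{q(X,Y)^s}{\sum_{x'} P(x')\, q(x',Y)^s} \right], \]

    while the BICM capacity of Caire et al. for an m-bit labeling with label bits B_1, ..., B_m and channel output Y is

        \[ C_{\mathrm{BICM}} = \sum_{i=1}^{m} I(B_i; Y). \]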

    Efficiency fluctuations and noise induced refrigerator-to-heater transition in information engines

    Understanding noisy information engines is a fundamental problem of non-equilibrium physics, particularly in biomolecular systems agitated by thermal and active fluctuations in the cell. By the generalized second law of thermodynamics, the efficiency of these engines is bounded by the mutual information passing through their noisy feedback loop. Yet, a direct measurement of the interplay between mutual information and energy has so far been elusive. To enable such an examination, we explore the entire phase space of a noisy colloidal information engine and study efficiency fluctuations due to the stochasticity of the mutual information and the extracted work. We find that the average efficiency is maximal at a non-zero noise level, at which the distribution of efficiency switches from bimodal to unimodal, and the stochastic efficiency often exceeds unity. We identify a line of anomalous, noise-driven equilibrium states that defines a refrigerator-to-heater transition, and test the generalized integral fluctuation theorem for continuous engines.
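
    For orientation, the generalized second law referenced here is commonly stated, for an isothermal cyclic engine at temperature T, as (our notation; the paper's precise efficiency convention may differ)

        \[ \langle W_{\mathrm{ext}} \rangle \le k_B T\, \langle I \rangle, \qquad \eta \equiv \frac{W_{\mathrm{ext}}}{k_B T\, I}, \]

    so the trajectory-level efficiency η is a ratio of two fluctuating quantities and can exceed unity on individual realizations even though the bound holds on average.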

    Entanglement and nonextensive statistics

    We present a generalization of the von Neumann mutual information in the context of Tsallis's nonextensive statistics. As an example, entanglement between two (two-level) quantum subsystems is discussed. Important changes occur in the generalized mutual information, which measures the degree of entanglement, depending on the entropic index q.
    Comment: 8 pages, LaTeX, 4 figures.
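
    For context, the Tsallis entropy of a density matrix ρ and the corresponding additive-form generalization of the mutual information can be written as (one common convention; nonextensive statistics also admits nonadditive composition rules, so the paper's definition may differ)

        \[ S_q(\rho) = \frac{1 - \mathrm{Tr}\,\rho^q}{q - 1}, \qquad I_q(A\!:\!B) = S_q(\rho_A) + S_q(\rho_B) - S_q(\rho_{AB}), \]

    which recovers the von Neumann entropy and mutual information in the limit q → 1.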

    Mutual Information-based Generalized Category Discovery

    We introduce an information-maximization approach for the Generalized Category Discovery (GCD) problem. Specifically, we explore a parametric family of loss functions evaluating the mutual information between the features and the labels, and automatically find the one that maximizes predictive performance. Furthermore, we introduce the Elbow Maximum Centroid-Shift (EMaCS) technique, which estimates the number of classes in the unlabeled set. We report comprehensive experiments showing that our mutual information-based approach (MIB) is both versatile and highly competitive under various GCD scenarios. The gap between the proposed approach and existing methods is significant, even more so when dealing with fine-grained classification problems. Our code: https://github.com/fchiaroni/Mutual-Information-Based-GCD
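
    As a rough illustration only, here is a minimal PyTorch sketch of a mutual-information objective between predictions and (soft) labels, using the classical estimate I(X; Y) ≈ H(mean posterior) − mean H(posterior). The function name, the trade-off weight alpha, and the estimator itself are our assumptions for illustration, not the paper's parametric family.

        import torch

        def mutual_information_loss(logits: torch.Tensor,
                                    alpha: float = 1.0,
                                    eps: float = 1e-8) -> torch.Tensor:
            """Negative estimate of I(features; labels).

            I is approximated as H(mean posterior) - alpha * mean(H(posterior)):
            a high marginal entropy spreads predictions across clusters, while a
            low conditional entropy makes each prediction confident.
            """
            p = logits.softmax(dim=1)                     # per-sample class posteriors
            p_bar = p.mean(dim=0)                         # marginal label distribution
            h_marginal = -(p_bar * (p_bar + eps).log()).sum()
            h_conditional = -(p * (p + eps).log()).sum(dim=1).mean()
            return -(h_marginal - alpha * h_conditional)  # minimize => maximize MI

    With alpha = 1 this reduces to the standard information-maximization criterion; a parametric family in this spirit could, for instance, sweep alpha and keep the value with the best validation performance.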

    Mutual information of generalized free fields

    We study generalized free fields (GFF) from the point of view of information measures. We first review conformal GFF, their holographic representation, and the ambiguities in the assignment of algebras to regions that arise in these theories. Then we study the mutual information (MI) in several geometric configurations. The MI displays unusual features in the short-distance limit: a leading volume term rather than an area term, and a logarithmic term in any dimension rather than only in even dimensions as in ordinary conformal field theories. We find the dependence of some subleading terms on the conformal dimension Δ of the GFF. We study the long-distance limit of the MI for regions with boundary on the null cone. The pinching limit of these surfaces shows that the GFF behaves as an interacting model from the MI point of view. The pinching exponents depend on the choice of algebra. The entanglement wedge algebra choice allows these models to "fake" causality, giving results consistent with its role in the description of large N models.
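
    For reference (standard definitions in our notation), the mutual information between two disjoint regions A and B is built from entanglement entropies as

        \[ I(A, B) = S(A) + S(B) - S(A \cup B), \]

    and in an ordinary CFT its short-distance divergence as the regions approach each other is governed by an area law; the volume-law leading term quoted above is what distinguishes the GFF case.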

    Information Theoretic Proofs of Entropy Power Inequalities

    While most useful information-theoretic inequalities can be deduced from the basic properties of entropy or mutual information, up to now Shannon's entropy power inequality (EPI) has been an exception: existing information-theoretic proofs of the EPI hinge on representations of differential entropy using either Fisher information or minimum mean-square error (MMSE), which are derived from de Bruijn's identity. In this paper, we first present a unified view of these proofs, showing that they share two essential ingredients: 1) a data processing argument applied to a covariance-preserving linear transformation; 2) an integration over a path of a continuous Gaussian perturbation. Using these ingredients, we develop a new and brief proof of the EPI through a mutual information inequality, which replaces Stam and Blachman's Fisher information inequality (FII) and an inequality for MMSE by Guo, Shamai and Verdú used in earlier proofs. The result has the advantage of being very simple in that it relies only on the basic properties of mutual information. These ideas are then generalized to various extended versions of the EPI: Zamir and Feder's generalized EPI for linear transformations of the random variables, Takano and Johnson's EPI for dependent variables, Liu and Viswanath's covariance-constrained EPI, and Costa's concavity inequality for the entropy power.
    Comment: submitted for publication in the IEEE Transactions on Information Theory, revised version.
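
    For reference, the EPI in question states that for independent random vectors X and Y in R^n with differential entropies h(X) and h(Y),

        \[ N(X + Y) \ge N(X) + N(Y), \qquad N(X) = \frac{1}{2\pi e}\, e^{2h(X)/n}, \]

    with equality when X and Y are Gaussian with proportional covariances.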

    Generalized Jarzynski Equality under Nonequilibrium Feedback Control

    The Jarzynski equality is generalized to situations in which nonequilibrium systems are subject to feedback control. The new terms that arise as a consequence of the feedback describe the mutual information content obtained by measurement and the efficacy of the feedback control. Our results lead to a generalized fluctuation-dissipation theorem that reflects the readout information, and can be experimentally tested using small thermodynamic systems. We illustrate our general results by introducing an "information ratchet," which can transport a Brownian particle in one direction and extract positive work from the particle.
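
    In the notation common in this literature (our rendering; the paper's precise conventions may differ), the generalized equality and its consequence for the average work read

        \[ \left\langle e^{-\beta (W - \Delta F) - I} \right\rangle = 1 \quad \Longrightarrow \quad \langle W \rangle \ge \Delta F - k_B T\, \langle I \rangle, \]

    where I is the mutual information acquired by the measurement; the companion form ⟨e^{-β(W - ΔF)}⟩ = γ defines the feedback efficacy γ.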