    Generalizations of Fano's Inequality for Conditional Information Measures via Majorization Theory

    Fano's inequality is one of the most elementary, ubiquitous, and important tools in information theory. Using majorization theory, Fano's inequality is generalized to a broad class of information measures, which contains those of Shannon and Rényi. When specialized to these measures, it recovers and generalizes the classical inequalities. Key to the derivation is the construction of an appropriate conditional distribution inducing a desired marginal distribution on a countably infinite alphabet. The construction is based on the infinite-dimensional version of Birkhoff's theorem proven by Révész [Acta Math. Hungar. 1962, 3, 188–198], and the constraint of maintaining a desired marginal distribution is similar to coupling in probability theory. Using our Fano-type inequalities for Shannon's and Rényi's information measures, we also investigate the asymptotic behavior of the sequence of Shannon's and Rényi's equivocations when the error probabilities vanish. This asymptotic behavior provides a novel characterization of the asymptotic equipartition property (AEP) via Fano's inequality.
    Comment: 44 pages, 3 figures
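
    For context, the classical Shannon-entropy form of Fano's inequality that the paper generalizes can be written as follows (a standard statement, not quoted from the paper itself):

```latex
% Classical Fano inequality: X takes values in a finite alphabet \mathcal{X},
% \hat{X}(Y) is any estimator of X from Y, and P_e = \Pr[\hat{X}(Y) \neq X].
H(X \mid Y) \le h_b(P_e) + P_e \log\bigl(\lvert\mathcal{X}\rvert - 1\bigr),
\qquad h_b(p) = -p \log p - (1-p)\log(1-p).
```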

    A Rank-Metric Approach to Error Control in Random Network Coding

    The problem of error control in random linear network coding is addressed from a matrix perspective that is closely related to the subspace perspective of Kötter and Kschischang. A large class of constant-dimension subspace codes is investigated. It is shown that codes in this class can be easily constructed from rank-metric codes, while preserving their distance properties. Moreover, it is shown that minimum distance decoding of such subspace codes can be reformulated as a generalized decoding problem for rank-metric codes where partial information about the error is available. This partial information may be in the form of erasures (knowledge of an error location but not its value) and deviations (knowledge of an error value but not its location). Taking erasures and deviations into account (when they occur) strictly increases the error correction capability of a code: if μ erasures and δ deviations occur, then errors of rank t can always be corrected provided that 2t ≤ d − 1 + μ + δ, where d is the minimum rank distance of the code. For Gabidulin codes, an important family of maximum rank distance codes, an efficient decoding algorithm is proposed that can properly exploit erasures and deviations. In a network coding application where n packets of length M over F_q are transmitted, the complexity of the decoding algorithm is given by O(dM) operations in an extension field F_{q^n}.
    Comment: Minor corrections; 42 pages, to be published in the IEEE Transactions on Information Theory
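
    As a small illustration of the rank metric itself and the correction guarantee quoted in the abstract, here is a toy Python sketch over GF(2) (not the paper's Gabidulin decoder; the `gf2_rank` routine and the encoding of matrices as row bitmasks are my own choices for brevity):

```python
# Toy illustration of the rank metric over GF(2) (not the paper's decoder).
# Matrices are lists of integer row bitmasks; rank is computed by Gaussian
# elimination on the bitmasks.

def gf2_rank(rows):
    """Rank of a binary matrix given as a list of integer row bitmasks."""
    pivots = {}  # leading-bit position -> pivot row
    for row in rows:
        cur = row
        while cur:
            lead = cur.bit_length() - 1
            if lead in pivots:
                cur ^= pivots[lead]  # eliminate the current leading bit
            else:
                pivots[lead] = cur   # new pivot found
                break
    return len(pivots)

def rank_distance(rows_a, rows_b):
    """d_R(A, B) = rank(A - B); over GF(2), subtraction is XOR."""
    return gf2_rank([a ^ b for a, b in zip(rows_a, rows_b)])

def correctable(t, d, mu=0, delta=0):
    """Guarantee quoted in the abstract: errors of rank t are correctable
    given mu erasures and delta deviations when 2t <= d - 1 + mu + delta."""
    return 2 * t <= d - 1 + mu + delta

# A code of minimum rank distance d = 5 corrects rank-2 errors outright, and
# rank-3 errors once an erasure and a deviation supply side information.
assert correctable(2, 5) and not correctable(3, 5) and correctable(3, 5, mu=1, delta=1)
```

    The bitmask encoding keeps the elimination loop to a few integer XORs per row, which is enough to see how erasures and deviations enlarge the correctable-error region.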

    Noise adaptive training for subspace Gaussian mixture models

    Noise adaptive training (NAT) is an effective approach to normalising the environmental distortions in the training data. This paper investigates the model-based NAT scheme using joint uncertainty decoding (JUD) for subspace Gaussian mixture models (SGMMs). A typical SGMM acoustic model has a much larger number of surface Gaussian components, which makes it computationally infeasible to compensate each Gaussian explicitly. JUD tackles the problem by sharing the compensation parameters among the Gaussians, which reduces the computational and memory demands. For noise adaptive training, JUD is reformulated into a generative model, which leads to an efficient expectation-maximisation (EM) based algorithm to update the SGMM acoustic model parameters. We evaluated the SGMMs with NAT on the Aurora 4 database and obtained higher recognition accuracy compared to systems without adaptive training.
    Index Terms: adaptive training, noise robustness, joint uncertainty decoding, subspace Gaussian mixture model
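
    The parameter-sharing idea behind JUD can be caricatured in a few lines of NumPy (a toy sketch with made-up dimensions, class counts, and a simple affine compensation, not the paper's actual compensation formulas): each regression class carries one transform that all of its Gaussians reuse, so only `n_classes` transforms are estimated instead of one per Gaussian.

```python
# Toy sketch of JUD-style parameter sharing (hypothetical numbers, not the
# paper's model): compensate many Gaussian means with a small set of shared
# per-class affine transforms instead of one transform per Gaussian.
import numpy as np

rng = np.random.default_rng(0)
dim, n_gauss, n_classes = 3, 1000, 16  # all three values are assumptions

means = rng.normal(size=(n_gauss, dim))           # surface Gaussian means
class_of = rng.integers(0, n_classes, size=n_gauss)  # regression-class map

# One affine compensation (A_r, b_r) per regression class r, shared by all
# Gaussians in that class; per-Gaussian compensation would need n_gauss
# transforms instead of n_classes.
A = np.stack([np.eye(dim) * s for s in rng.uniform(0.8, 1.2, n_classes)])
b = rng.normal(scale=0.1, size=(n_classes, dim))

# Batched matrix-vector product: each mean is mapped by its class's transform.
compensated = np.einsum('gij,gj->gi', A[class_of], means) + b[class_of]
```

    The memory saving is the point: the transform tensor `A` has `n_classes` entries regardless of how many surface Gaussians the SGMM contains.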

    A hypothesis testing approach for communication over entanglement assisted compound quantum channel

    We study the problem of communication over a compound quantum channel in the presence of entanglement. Classically, such channels are modeled as a collection of conditional probability distributions wherein neither the sender nor the receiver is aware of the channel being used for transmission, except for the fact that it belongs to this collection. We provide near-optimal achievability and converse bounds for this problem in the one-shot quantum setting in terms of the quantum hypothesis testing divergence. We also consider the case of an informed sender, showing a one-shot achievability result that converges appropriately in the asymptotic i.i.d. setting. Our achievability proof is similar in spirit to its classical counterpart. To arrive at our result, we use the technique of position-based decoding along with a new approach for constructing a union of two projectors, which may be of independent interest. We give another application of the union of projectors to the problem of testing composite quantum hypotheses.
    Comment: 21 pages, version 3. Added an application to composite quantum hypothesis testing. Expanded introduction.
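
    For intuition, the classical analogue of the hypothesis testing divergence is easy to compute for finite distributions. The sketch below restricts to deterministic Neyman-Pearson tests (the quantum quantity optimizes over operators 0 ≤ T ≤ I, and ignoring randomized tests can slightly undershoot the exact value); the function name and interface are my own.

```python
# Classical, simplified analogue of the hypothesis testing divergence:
# D_H^eps(P || Q) = -log2 min{ Q(A) : P(A) >= 1 - eps }, minimized here over
# deterministic acceptance sets A only.
import math

def dh_eps(p, q, eps):
    """Hypothesis testing divergence between finite distributions p and q."""
    # Neyman-Pearson: grow A in order of decreasing likelihood ratio p/q.
    order = sorted(range(len(p)),
                   key=lambda i: p[i] / q[i] if q[i] > 0 else math.inf,
                   reverse=True)
    p_mass = q_mass = 0.0
    for i in order:
        if p_mass >= 1 - eps:
            break  # A already captures enough P-probability
        p_mass += p[i]
        q_mass += q[i]
    return math.inf if q_mass == 0 else -math.log2(q_mass)
```

    For example, with `eps = 0` and `p == q` the test must accept all of `p`'s support and the divergence is 0, while a larger `eps` lets the test discard Q-heavy outcomes and drives the divergence up.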