
    Stochastic Interpretation for the Arimoto Algorithm

    The Arimoto algorithm computes the Gallager function $\max_Q E_0(\rho, Q)$ for a given channel $P(y \mid x)$ and parameter $\rho$, by means of alternating maximization. Along the way, it generates a sequence of input distributions $Q_1(x), Q_2(x), \ldots$ that converges to the maximizing input $Q^*(x)$. We propose a stochastic interpretation for the Arimoto algorithm. We show that for a random (i.i.d.) codebook with a distribution $Q_k(x)$, the next distribution $Q_{k+1}(x)$ in the Arimoto algorithm is equal to the type ($Q'$) of the feasible transmitted codeword that maximizes the conditional Gallager exponent (conditioned on a specific transmitted codeword type $Q'$). This interpretation is a first step toward finding a stochastic mechanism for on-line channel input adaptation.
    Comment: 5 pages, 1 figure, accepted for the 2015 IEEE Information Theory Workshop, Jerusalem, Israel
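
    As a concrete illustration of the alternating-maximization iteration described in the abstract, here is a minimal NumPy sketch that approximates $\max_Q E_0(\rho, Q)$ for a generic discrete memoryless channel. The multiplicative step $Q_{k+1}(x) \propto Q_k(x)\big[\sum_y P(y\mid x)^{1/(1+\rho)}\big(\sum_{x'} Q_k(x') P(y\mid x')^{1/(1+\rho)}\big)^{\rho}\big]^{-1/\rho}$ is one standard way of writing Arimoto's update for $\rho > 0$ (it reduces to the Blahut-Arimoto capacity update as $\rho \to 0$); the function names and the BSC example are illustrative, not taken from the paper.

```python
import numpy as np

def gallager_E0(rho, Q, P):
    """E_0(rho, Q) = -log sum_y [ sum_x Q(x) P(y|x)^(1/(1+rho)) ]^(1+rho), in nats."""
    s = Q @ P ** (1.0 / (1.0 + rho))           # s(y) = sum_x Q(x) P(y|x)^(1/(1+rho))
    return -np.log(np.sum(s ** (1.0 + rho)))

def arimoto_E0(P, rho, n_iter=200):
    """Alternating maximization of E_0(rho, Q) over the input distribution Q.

    P is the channel matrix with P[x, y] = P(y|x); rho > 0 is fixed.
    """
    n_x, _ = P.shape
    Q = np.full(n_x, 1.0 / n_x)                # start from the uniform input
    Pt = P ** (1.0 / (1.0 + rho))              # tilted channel P(y|x)^(1/(1+rho))
    for _ in range(n_iter):
        s = Q @ Pt                             # s(y) = sum_x Q(x) P(y|x)^(1/(1+rho))
        beta = Pt @ s ** rho                   # beta(x) = sum_y P(y|x)^(1/(1+rho)) s(y)^rho
        Q = Q * beta ** (-1.0 / rho)           # multiplicative Arimoto-type update
        Q /= Q.sum()
    return Q, gallager_E0(rho, Q, P)

# Example: BSC with crossover 0.1 at rho = 1 gives the cutoff rate, about 0.223 nats.
P_bsc = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
Q_star, E0 = arimoto_E0(P_bsc, rho=1.0)
```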

    Capacity of DNA Data Embedding Under Substitution Mutations

    A number of methods have been proposed over the last decade for encoding information using deoxyribonucleic acid (DNA), giving rise to the emerging area of DNA data embedding. Since a DNA sequence is conceptually equivalent to a sequence of quaternary symbols (bases), DNA data embedding (diversely called DNA watermarking or DNA steganography) can be seen as a digital communications problem where channel errors are tantamount to mutations of DNA bases. Depending on the use of coding or noncoding DNA hosts, which, respectively, denote DNA segments that can or cannot be translated into proteins, DNA data embedding is essentially a problem of communications with or without side information at the encoder. In this paper the Shannon capacity of DNA data embedding is obtained for the case in which DNA sequences are subject to substitution mutations modelled using the Kimura model from molecular evolution studies. Inferences are also drawn with respect to the biological implications of some of the results presented.
    Comment: 22 pages, 13 figures; preliminary versions of this work were presented at the SPIE Media Forensics and Security XII conference (January 2010) and at the IEEE ICASSP conference (March 2010)
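
    The substitution channel referred to above can be made concrete with a small sketch. Assuming the Kimura two-parameter model is treated as a memoryless 4-ary channel in which a transition (A<->G, C<->T) occurs with probability alpha and each transversion with probability beta per base (an illustrative simplification, not the paper's full embedding-capacity analysis, which also covers the side-information case), every row and column of the 4x4 matrix is a permutation of the same values; the channel is therefore symmetric, the uniform input is capacity-achieving, and C = 2 - H(row) bits per base.

```python
import numpy as np

def kimura_channel(alpha, beta):
    """4x4 substitution matrix of a Kimura two-parameter (K80) channel.

    Bases ordered A, G, C, T; alpha is the per-base transition probability
    (A<->G, C<->T) and beta is each per-base transversion probability.
    """
    A, G, C, T = range(4)
    P = np.full((4, 4), beta)
    np.fill_diagonal(P, 1.0 - alpha - 2.0 * beta)   # probability of no substitution
    P[A, G] = P[G, A] = P[C, T] = P[T, C] = alpha   # transitions
    return P

def capacity_symmetric(P):
    """Capacity in bits/symbol of a symmetric DMC: log2|Y| minus the row entropy."""
    row = P[0][P[0] > 0]
    return np.log2(P.shape[1]) + np.sum(row * np.log2(row))

# Example: transition probability 0.02 and transversion probability 0.005 per base.
P = kimura_channel(alpha=0.02, beta=0.005)
print(capacity_symmetric(P))    # close to 2 bits/base for small mutation rates
```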

    On Multiple Decoding Attempts for Reed-Solomon Codes: A Rate-Distortion Approach

    One popular approach to soft-decision decoding of Reed-Solomon (RS) codes is based on using multiple trials of a simple RS decoding algorithm in combination with erasing or flipping a set of symbols or bits in each trial. This paper presents a framework based on rate-distortion (RD) theory to analyze these multiple-decoding algorithms. By defining an appropriate distortion measure between an error pattern and an erasure pattern, the successful decoding condition for a single errors-and-erasures decoding trial becomes equivalent to the distortion being less than a fixed threshold. Finding the best set of erasure patterns then turns into a covering problem which can be solved asymptotically by rate-distortion theory. Thus, the proposed approach can be used to understand the asymptotic performance-versus-complexity trade-off of multiple errors-and-erasures decoding of RS codes. This initial result is also extended in a few directions. The rate-distortion exponent (RDE) is computed to give more precise results for moderate blocklengths. Multiple trials of algebraic soft-decision (ASD) decoding are analyzed using this framework. Analytical and numerical computations of the RD and RDE functions are also presented. Finally, simulation results show that sets of erasure patterns designed using the proposed methods outperform other algorithms with the same number of decoding trials.
    Comment: to appear in the IEEE Transactions on Information Theory (Special Issue on Facets of Coding Theory: from Algorithms to Networks)
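
    To make the successful-decoding condition concrete: for an (n, k) RS code with minimum distance d_min = n - k + 1, an errors-and-erasures trial succeeds when 2e + f < d_min, where e is the number of unerased errors and f the number of erasures. Below is a minimal sketch of the distortion view described above, with a per-symbol cost assignment (0 for a correct unerased symbol, 1 for an erased symbol, 2 for an unerased error) chosen so that "distortion below the threshold d_min" coincides with that condition; the function names and the toy (15, 11) example are illustrative, not taken from the paper.

```python
import numpy as np

def distortion(error_pattern, erasure_pattern):
    """Distortion between a 0/1 error pattern and a 0/1 erasure pattern.

    Per-symbol cost: 0 if the symbol is correct and not erased, 1 if it is
    erased (in error or not), 2 if it is in error but not erased.
    """
    err = np.asarray(error_pattern, dtype=bool)
    ers = np.asarray(erasure_pattern, dtype=bool)
    return int(np.sum(ers) + 2 * np.sum(err & ~ers))

def any_trial_succeeds(error_pattern, erasure_patterns, n, k):
    """An errors-and-erasures trial for an (n, k) RS code succeeds iff
    2*(unerased errors) + (erasures) < n - k + 1, i.e. distortion < d_min."""
    d_min = n - k + 1
    return any(distortion(error_pattern, ers) < d_min for ers in erasure_patterns)

# Toy example for a (15, 11) RS code (d_min = 5): three errors alone are not
# correctable, but a second trial that erases two of the error locations succeeds.
n, k = 15, 11
errors = np.zeros(n, dtype=bool)
errors[[1, 4, 7]] = True
trial_1 = np.zeros(n, dtype=bool)            # no erasures: distortion 6, trial fails
trial_2 = np.zeros(n, dtype=bool)
trial_2[[1, 4]] = True                       # erase two error positions: distortion 4 < 5
print(any_trial_succeeds(errors, [trial_1, trial_2], n, k))   # True
```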

    Geometrical interpretation and improvements of the Blahut-Arimoto's algorithm

    The paper first recalls the Blahut-Arimoto algorithm for computing the capacity of arbitrary discrete memoryless channels, as an example of an iterative algorithm working with probability density estimates. Then, a geometrical interpretation of this algorithm, based on projections onto linear and exponential families of probabilities, is provided. Finally, this understanding makes it possible to rewrite the Blahut-Arimoto algorithm as a true proximal point algorithm. It is shown that the corresponding version has an improved convergence rate compared to the initial algorithm, as well as to other improved versions.
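
    For reference, here is a minimal NumPy sketch of the baseline Blahut-Arimoto capacity iteration recalled in the abstract (the plain multiplicative update, not the proximal-point variant proposed in the paper); the function name and the BSC example are illustrative.

```python
import numpy as np

def blahut_arimoto(P, n_iter=500, tol=1e-12):
    """Blahut-Arimoto iteration for the capacity of a DMC with P[x, y] = P(y|x)."""
    n_x, _ = P.shape
    Q = np.full(n_x, 1.0 / n_x)                 # start from the uniform input
    for _ in range(n_iter):
        q_y = Q @ P                             # output distribution induced by Q
        # D[x] = KL divergence D( P(.|x) || q_y ) in nats, with 0*log0 taken as 0
        with np.errstate(divide="ignore", invalid="ignore"):
            D = np.sum(np.where(P > 0, P * np.log(P / q_y), 0.0), axis=1)
        Q_new = Q * np.exp(D)                   # multiplicative Blahut-Arimoto update
        Q_new /= Q_new.sum()
        if np.max(np.abs(Q_new - Q)) < tol:
            Q = Q_new
            break
        Q = Q_new
    return (Q @ D) / np.log(2), Q               # capacity estimate in bits, optimal input

# Example: binary symmetric channel with crossover 0.1 -> capacity about 0.531 bits.
P_bsc = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
C, Q_star = blahut_arimoto(P_bsc)
```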