    Minimum Description Length codes are critical

    In the Minimum Description Length (MDL) principle, learning from the data is equivalent to an optimal coding problem. We show that the codes that achieve optimal compression in MDL are critical in a very precise sense. First, when they are taken as generative models of samples, they generate samples with broad empirical distributions and with a high value of the relevance, defined as the entropy of the empirical frequencies. These results are derived for different statistical models (the Dirichlet model, independent and pairwise dependent spin models, and restricted Boltzmann machines). Second, MDL codes sit precisely at a second-order phase transition point where the symmetry between the sampled outcomes is spontaneously broken. The order parameter controlling the phase transition is the coding cost of the samples. The phase transition is a manifestation of the optimality of MDL codes, and it arises because codes that achieve a higher compression do not exist. These results suggest a clear interpretation of the widespread occurrence of statistical criticality as a characterization of samples that are maximally informative about the underlying generative process. Comment: 23 pages, 5 figures; Corrected the author name, revised Section 2.2 (Large Deviations of the Universal Codes Exhibit Phase Transitions), corrected Eq. (89).
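    As a concrete handle on the quantity named above, the sketch below computes the relevance of a sample, assuming the definition commonly used in this literature: the entropy of the distribution of empirical frequencies, H[K] = -sum_k (k m_k / M) log(k m_k / M), where m_k is the number of outcomes observed exactly k times in M observations. The function name and the exact normalization are illustrative assumptions, not taken from the paper.

    from collections import Counter
    import numpy as np

    def relevance(sample):
        # Entropy of the empirical frequencies of a sample (a sketch of the
        # "relevance" referred to in the abstract; check the paper for the
        # exact normalization it uses).
        M = len(sample)
        k_s = Counter(sample)            # multiplicity of each observed outcome
        m_k = Counter(k_s.values())      # number of outcomes seen exactly k times
        p = np.array([k * m / M for k, m in m_k.items()])
        return float(-np.sum(p * np.log(p)))

    # example: relevance of a small categorical sample
    # print(relevance(list("aababcbccddde")))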

    Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity

    The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random relative to every contemplated hypothesis, and these hypotheses are in turn random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and that the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized. If we restrict the model class to finite sets, then application of the ideal principle turns into Kolmogorov's minimal sufficient statistic. In general, we show that data compression is almost always the best strategy, both in hypothesis identification and in prediction. Comment: 35 pages, LaTeX. Submitted to IEEE Trans. Inform. Theory.
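    In notation (a paraphrase of this abstract, not the paper's own symbols), the ideal MDL principle described here selects the hypothesis

        H_{\mathrm{MDL}} = \arg\min_{H} \bigl[ -\log \mathbf{m}(H) - \log P(D \mid H) \bigr]
                   \approx \arg\min_{H} \bigl[ K(H) - \log P(D \mid H) \bigr],

    where \mathbf{m} is the algorithmic universal (prior) probability, K(H) the Kolmogorov complexity of the hypothesis, and the approximation holds up to an additive constant by the coding theorem.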

    LDPC Codes Which Can Correct Three Errors Under Iterative Decoding

    In this paper, we provide necessary and sufficient conditions for a column-weight-three LDPC code to correct three errors when decoded using the Gallager A algorithm. We then provide a construction technique which results in a code satisfying the above conditions. We also provide a numerical assessment of code performance via simulation results. Comment: 5 pages, 3 figures, submitted to IEEE Information Theory Workshop (ITW), 200
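    For readers unfamiliar with the decoder named above, here is a minimal sketch of Gallager A hard-decision message passing on a generic binary LDPC code. It illustrates the algorithm only; the paper's column-weight-three constructions and correction guarantees are not reproduced, and the decision and stopping rules chosen are assumptions about one common formulation.

    import numpy as np

    def gallager_a(H, r, max_iter=50):
        # H: binary parity-check matrix (m x n) as a NumPy array,
        # r: received hard-decision word of length n.
        m, n = H.shape
        edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]
        v2c = {(i, j): int(r[j]) for (i, j) in edges}   # variable-to-check messages
        x = np.array(r, dtype=int)
        for _ in range(max_iter):
            # check-node update: parity (XOR) of the other incoming messages
            c2v = {(i, j): sum(v2c[(i, k)] for k in range(n)
                               if H[i, k] and k != j) % 2
                   for (i, j) in edges}
            # variable-node update (Gallager A rule): send the received bit
            # unless every other incoming check message agrees on its complement
            for (i, j) in edges:
                others = [c2v[(k, j)] for k in range(m) if H[k, j] and k != i]
                flip = len(others) > 0 and all(b != r[j] for b in others)
                v2c[(i, j)] = 1 - int(r[j]) if flip else int(r[j])
            # tentative decision: flip a bit only when all of its incoming
            # check messages disagree with the received value
            for j in range(n):
                inc = [c2v[(i, j)] for i in range(m) if H[i, j]]
                x[j] = 1 - int(r[j]) if inc and all(b != r[j] for b in inc) else int(r[j])
            if not (H.dot(x) % 2).any():   # all parity checks satisfied
                break
        return x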

    Mathematical Programming Decoding of Binary Linear Codes: Theory and Algorithms

    Mathematical programming is a branch of applied mathematics and has recently been used to derive new decoding approaches, challenging established but often heuristic algorithms based on iterative message passing. Concepts from mathematical programming used in the context of decoding include linear, integer, and nonlinear programming, network flows, notions of duality, as well as matroid and polyhedral theory. This survey article reviews and categorizes decoding methods based on mathematical programming approaches for binary linear codes over binary-input memoryless symmetric channels. Comment: 17 pages, submitted to the IEEE Transactions on Information Theory. Published July 201
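    One formulation covered by surveys of this kind is linear-programming decoding over the fundamental-polytope relaxation (Feldman et al.); the sketch below sets it up with an off-the-shelf LP solver. Treat it as a toy-scale illustration under that assumption, not as any particular method from the article: the explicit odd-subset constraints grow exponentially in the check degree, so it is only practical for very small codes.

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def lp_decode(H, llr):
        # H: binary parity-check matrix (m x n), llr: per-bit log-likelihood
        # ratios (positive favours bit 0, negative favours bit 1).
        m, n = H.shape
        A_ub, b_ub = [], []
        for i in range(m):
            nbrs = [j for j in range(n) if H[i, j]]
            # one inequality per odd-sized subset S of the check's neighbourhood:
            #   sum_{j in S} x_j - sum_{j in N(i)\S} x_j <= |S| - 1
            for size in range(1, len(nbrs) + 1, 2):
                for S in itertools.combinations(nbrs, size):
                    row = np.zeros(n)
                    for j in nbrs:
                        row[j] = 1.0 if j in S else -1.0
                    A_ub.append(row)
                    b_ub.append(len(S) - 1)
        res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0.0, 1.0)] * n, method="highs")
        return res.x   # fractional coordinates signal an LP pseudocodeword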

    Binary Tree Approach to Scaling in Unimodal Maps

    Ge, Rusjan, and Zweifel (J. Stat. Phys. 59, 1265 (1990)) introduced a binary tree which represents all the periodic windows in the chaotic regime of iterated one-dimensional unimodal maps. We consider the scaling behavior in a modified tree which takes into account the self-similarity of the window structure. A non-universal geometric convergence of the associated superstable parameter values towards a Misiurewicz point is observed for almost all binary sequences with periodic tails. There are, however, an infinite number of exceptional sequences which lead to superexponential scaling. The origin of such sequences is explained. Comment: 25 pages, plain TeX.
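    To make the superstable parameter values whose scaling is discussed above concrete, the sketch below locates them for the logistic map x -> r x (1 - x), taken here as a representative unimodal map (an assumption; the paper treats the general class). A parameter is superstable for period n when the critical point x_c = 1/2 is mapped back to itself after n iterations.

    from scipy.optimize import brentq

    def superstable_r(period, r_lo, r_hi, xc=0.5):
        # Root of g(r) = f_r^period(xc) - xc inside the bracket [r_lo, r_hi];
        # the bracket must isolate a single sign change for brentq to apply.
        def g(r):
            x = xc
            for _ in range(period):
                x = r * x * (1.0 - x)
            return x - xc
        return brentq(g, r_lo, r_hi)

    # example: the superstable period-2 parameter, r = 1 + sqrt(5) ~ 3.23607
    # print(superstable_r(2, 3.1, 3.4))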