
    Abstraction in decision-makers with limited information processing capabilities

    A distinctive property of human and animal intelligence is the ability to form abstractions by neglecting irrelevant information, which makes it possible to separate structure from noise. From an information-theoretic point of view, abstractions are desirable because they allow for very efficient information processing. In artificial systems, abstractions are often implemented through computationally costly formations of groups or clusters. In this work we establish the relation between the free-energy framework for decision making and rate-distortion theory, and demonstrate how applying rate-distortion theory to decision-making leads to the emergence of abstractions. We argue that abstractions are induced by a limit on information processing capacity.
    Comment: Presented at the NIPS 2013 Workshop on Planning with Information Constraints
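
    To make the mechanism concrete, here is a minimal sketch (an illustration for this listing, not code from the paper) of the self-consistent Blahut-Arimoto-style iteration that the free-energy/rate-distortion view leads to: the policy p(a|s) is a softmax of the utility around the action marginal p(a), and as the information budget shrinks (small beta) the policy collapses onto the state-independent marginal, i.e. an abstraction.

```python
import numpy as np

def rate_distortion_policy(p_s, U, beta, iters=500, tol=1e-12):
    """Blahut-Arimoto-style fixed-point iteration for a policy that
    maximizes E[U(s,a)] - (1/beta) * I(S;A). Small beta means a tight
    information-processing budget and hence a coarser abstraction."""
    n_s, n_a = U.shape
    p_a = np.full(n_a, 1.0 / n_a)                    # initial action marginal
    for _ in range(iters):
        # p(a|s) is proportional to p(a) * exp(beta * U(s, a))
        logits = beta * U + np.log(p_a)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p_a_s = np.exp(logits)
        p_a_s /= p_a_s.sum(axis=1, keepdims=True)
        new_p_a = p_s @ p_a_s                        # p(a) = sum_s p(s) p(a|s)
        if np.abs(new_p_a - p_a).max() < tol:
            break
        p_a = new_p_a
    return p_a_s

# Toy example: matching action to state pays off. At beta = 10 the policy
# is nearly deterministic; at beta = 0.1 it is nearly state-independent.
p_s = np.full(4, 0.25)
U = np.eye(4)
for beta in (0.1, 10.0):
    print(beta, np.round(rate_distortion_policy(p_s, U, beta), 2))
```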

    Message passing algorithms for non-linear nodes and data compression

    The use of parity-check gates in information theory has proved to be very efficient. In particular, error-correcting codes based on parity checks over low-density graphs show excellent performance. Another basic issue of information theory, namely data compression, can be addressed in a similar way by a kind of dual approach. The theoretical performance of such a Parity Source Coder can attain the optimal limit predicted by general rate-distortion theory. However, in order to turn this approach into an efficient compression code (with fast encoding/decoding algorithms), one must depart from parity checks and use more general random gates. By taking advantage of analytical approaches from the statistical physics of disordered systems and SP-like message-passing algorithms, we construct a compressor based on low-density non-linear gates with very good theoretical and practical performance.
    Comment: 13 pages, European Conference on Complex Systems, Paris (Nov 2005)
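
    As a concrete (and deliberately brute-force) illustration of the dual approach, the sketch below compresses a binary block to the index of the nearest codeword of a small random linear code; the exhaustive search stands in for the fast message-passing encoder, and real constructions replace the linear (parity) gates with the non-linear gates discussed above.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 5                          # block length, code dimension: rate 1/2
G = rng.integers(0, 2, size=(n, k))   # random generator matrix (mod-2 code)

def compress(x):
    """Lossy encoding: store the k-bit index of the codeword closest to x."""
    best_u, best_d = None, n + 1
    for bits in itertools.product((0, 1), repeat=k):
        u = np.array(bits)
        d = int(np.sum((G @ u % 2) != x))
        if d < best_d:
            best_u, best_d = u, d
    return best_u

def decompress(u):
    """Reconstruction is simply the stored codeword."""
    return G @ u % 2

x = rng.integers(0, 2, size=n)              # unbiased binary source
x_hat = decompress(compress(x))
print("distortion:", np.mean(x != x_hat))   # Shannon limit at rate 1/2: ~0.11
```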

    Minimum Cost Distributed Source Coding Over a Network

    This paper considers the problem of transmitting multiple compressible sources over a network at minimum cost. The aim is to find the optimal rates at which the sources should be compressed and the network flows over which they should be transmitted so that the cost of the transmission is minimal. We consider networks with capacity constraints and linear cost functions. The problem is complicated by the fact that the description of the feasible rate region of distributed source coding problems typically has a number of constraints that is exponential in the number of sources, which renders general-purpose solvers inefficient. We present a framework in which these problems can be solved efficiently by exploiting the structure of the feasible rate regions, coupled with dual decomposition and optimization techniques such as the subgradient method and the proximal bundle method.
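
    A minimal sketch of the dual-decomposition idea under illustrative assumptions (a generic problem min c·x s.t. Ax >= b, 0 <= x <= 1, not the paper's specific rate-region formulation): relaxing the coupling constraints makes the inner minimization separate per coordinate, and the resulting dual function is maximized by a projected subgradient method.

```python
import numpy as np

def dual_subgradient(c, A, b, steps=2000):
    """Projected subgradient ascent on g(lam) = min_x c.x + lam.(b - A x)
    over lam >= 0, the Lagrangian dual of min c.x s.t. A x >= b, 0<=x<=1."""
    lam = np.zeros(A.shape[0])
    best = -np.inf
    for t in range(1, steps + 1):
        reduced = c - A.T @ lam            # per-coordinate reduced costs
        x = (reduced < 0).astype(float)    # inner minimizer over the box
        best = max(best, c @ x + lam @ (b - A @ x))   # dual lower bound
        lam = np.maximum(0.0, lam + (b - A @ x) / t)  # step along subgradient
    return best

# min x1 + 2*x2  s.t.  x1 + x2 >= 1: optimum 1, and the dual bound matches.
print(dual_subgradient(np.array([1.0, 2.0]),
                       np.array([[1.0, 1.0]]),
                       np.array([1.0])))
```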

    Computing the channel capacity of a communication system affected by uncertain transition probabilities

    We study the problem of computing the capacity of a discrete memoryless channel under uncertainty affecting the channel law matrix, and possibly with a constraint on the average cost of the input distribution. The problem has been formulated in the literature as a max-min problem. We use the robust optimization methodology to convert the max-min problem into a standard convex optimization problem. For small-sized problems, and for many types of uncertainty, such a problem can in principle be solved using interior point methods (IPM). However, for large-scale problems, IPM are not practical. Here, we suggest an $\mathcal{O}(1/T)$ first-order algorithm based on Nemirovski (2004) which is applied directly to the max-min problem.
    Comment: 22 pages, 2 figures
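
    For reference, the nominal (uncertainty-free) version of this problem is solved by the classical Blahut-Arimoto algorithm; a minimal sketch of that baseline follows (it is not the paper's robust first-order method).

```python
import numpy as np

def blahut_arimoto(W, iters=1000, tol=1e-12):
    """Capacity (bits/use) of a DMC with law matrix W[x, y] = P(y | x)."""
    p = np.full(W.shape[0], 1.0 / W.shape[0])       # input distribution
    for _ in range(iters):
        q = p @ W                                   # induced output law
        ratio = np.divide(W, q, out=np.ones_like(W), where=W > 0)
        d = np.sum(W * np.log(ratio), axis=1)       # D(W(.|x) || q), nats
        new_p = p * np.exp(d)
        new_p /= new_p.sum()
        if np.abs(new_p - p).max() < tol:
            p = new_p
            break
        p = new_p
    q = p @ W
    ratio = np.divide(W, q, out=np.ones_like(W), where=W > 0)
    return np.sum(p * np.sum(W * np.log(ratio), axis=1)) / np.log(2)

# Binary symmetric channel, crossover 0.1: capacity = 1 - h2(0.1) ~ 0.531.
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(blahut_arimoto(W))
```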

    Approaching the Rate-Distortion Limit with Spatial Coupling, Belief Propagation and Decimation

    We investigate an encoding scheme for lossy compression of a binary symmetric source based on simple spatially coupled Low-Density Generator-Matrix codes. The degree of the check nodes is regular, and the degree of the code-bits is Poisson distributed with an average that depends on the compression rate. The performance of a low-complexity Belief Propagation Guided Decimation algorithm is excellent. The algorithmic rate-distortion curve approaches the optimal curve of the ensemble as the width of the coupling window grows. Moreover, as the check degree grows, both curves approach the ultimate Shannon rate-distortion limit. The Belief Propagation Guided Decimation encoder is based on the posterior measure of a binary symmetric test-channel. This measure can be interpreted as a random Gibbs measure at a "temperature" directly related to the noise level of the test-channel. We investigate the links between the algorithmic performance of the Belief Propagation Guided Decimation encoder and the phase diagram of this Gibbs measure. The phase diagram is investigated using the cavity method of spin glass theory, which predicts a number of phase transition thresholds. In particular, the dynamical and condensation "phase transition temperatures" (equivalently, test-channel noise thresholds) are computed. We observe that: (i) the dynamical temperature of the spatially coupled construction saturates towards the condensation temperature; (ii) for large degrees, the condensation temperature approaches the temperature (i.e. noise level) corresponding to the information-theoretic Shannon test-channel noise parameter of rate-distortion theory. This provides heuristic insight into the excellent performance of the Belief Propagation Guided Decimation algorithm. The paper contains an introduction to the cavity method.
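
    The Shannon test-channel noise parameter referenced in (ii) is, for a binary symmetric source at rate R, the distortion D solving R = 1 - h2(D); a small bisection sketch (an illustration, not code from the paper):

```python
import numpy as np

def h2(x):
    """Binary entropy in bits, with h2(0) = h2(1) = 0."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def shannon_test_channel_noise(R, tol=1e-12):
    """Distortion D in (0, 1/2) solving R = 1 - h2(D): the noise level
    that the coupled ensemble's thresholds approach at large degree."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 1.0 - h2(mid) > R:    # 1 - h2 is decreasing on (0, 1/2)
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(shannon_test_channel_noise(0.5))   # ~ 0.110 at rate 1/2
```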