
    A Relation Between Network Computation and Functional Index Coding Problems

    In contrast to the network coding problem, wherein the sinks in a network demand subsets of the source messages, in a network computation problem the sinks demand functions of the source messages. Similarly, in the functional index coding problem, the side information and demands of the clients include disjoint sets of functions of the information messages held by the transmitter, rather than disjoint subsets of the messages as in the conventional index coding problem. It is known that any network coding problem can be transformed into an index coding problem and vice versa. In this work, we establish a similar relationship between network computation problems and a class of functional index coding problems, viz., those in which only the demands of the clients include functions of messages. We show that any network computation problem can be converted into a functional index coding problem in which some clients demand functions of messages, and vice versa. We prove that a solution for a network computation problem exists if and only if a functional index code (of a specific length determined by the network computation problem) exists for a suitably constructed functional index coding problem, and that a functional index coding problem admits a solution of a specified length if and only if a suitably constructed network computation problem admits a solution.
    Comment: 3 figures, 7 tables and 9 pages
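
    A schematic statement of the equivalence described in the abstract (the notation $\mathcal{N}$, $\mathcal{I}(\mathcal{N})$, and $\ell(\mathcal{N})$ is illustrative shorthand, not taken from the paper):

        \exists \ \text{solution for the network computation problem } \mathcal{N}
        \iff
        \exists \ \text{functional index code of length } \ell(\mathcal{N}) \text{ for } \mathcal{I}(\mathcal{N})

    where $\mathcal{I}(\mathcal{N})$ is the functional index coding problem constructed from $\mathcal{N}$, and the code length $\ell(\mathcal{N})$ is determined by the construction.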

    Bits from Biology for Computational Intelligence

    Computational intelligence is broadly defined as biologically inspired computing. Usually, inspiration is drawn from neural systems. This article shows how to analyze neural systems using information theory to obtain constraints that help identify the algorithms run by such systems and the information they represent. Algorithms and representations identified information-theoretically may then guide the design of biologically inspired computing systems (BICS). The material covered includes the necessary introduction to information theory and the estimation of information-theoretic quantities from neural data. We then show how to analyze the information encoded in a system about its environment, and discuss recent methodological developments on the question of how much information each agent carries about the environment uniquely, redundantly, or synergistically together with others. Last, we introduce the framework of local information dynamics, in which information processing is decomposed into the component processes of information storage, transfer, and modification -- locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
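
    As a concrete illustration of estimating information-theoretic quantities from neural data, here is a minimal plug-in (maximum-likelihood) estimator of the mutual information between a discretized stimulus and a neural response. This is a generic sketch, not code from the article; the function name, variable names, and toy data are invented for illustration, and the naive estimator is known to be biased upward for small samples.

        import numpy as np

        def mutual_information(x, y):
            """Plug-in estimate of I(X;Y) in bits from two discrete sequences.

            x, y: 1-D integer arrays of equal length (e.g. binned spike
            counts and discretized stimulus values).
            """
            x = np.asarray(x)
            y = np.asarray(y)
            # Empirical joint distribution over observed (x, y) symbol pairs.
            xs, x_idx = np.unique(x, return_inverse=True)
            ys, y_idx = np.unique(y, return_inverse=True)
            joint = np.zeros((len(xs), len(ys)))
            np.add.at(joint, (x_idx, y_idx), 1)
            joint /= joint.sum()
            px = joint.sum(axis=1, keepdims=True)   # marginal of X
            py = joint.sum(axis=0, keepdims=True)   # marginal of Y
            nz = joint > 0  # zero-probability cells contribute nothing
            return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

        # Toy usage: information a binary response carries about a binary stimulus.
        rng = np.random.default_rng(0)
        stimulus = rng.integers(0, 2, size=10000)
        # Response copies the stimulus with probability 0.9, else flips it.
        response = np.where(rng.random(10000) < 0.9, stimulus, 1 - stimulus)
        print("I(stimulus; response) =", round(mutual_information(stimulus, response), 3), "bits")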

    Feature detection using spikes: the greedy approach

    A goal of low-level neural processes is to build an efficient code extracting the relevant information from the sensory input. It is believed that this is implemented in cortical areas by elementary inferential computations dynamically extracting the most likely parameters corresponding to the sensory signal. We explore here a neuro-mimetic feed-forward model of the primary visual area (V1) solving this problem in the case where the signal may be described by a robust linear generative model. This model uses an over-complete dictionary of primitives which provides a distributed probabilistic representation of input features. Relying on an efficiency criterion, we derive an algorithm as an approximate solution which uses incremental greedy inference processes. This algorithm is similar to 'Matching Pursuit' and mimics the parallel architecture of neural computations. We propose here a simple implementation using a network of spiking integrate-and-fire neurons which communicate using lateral interactions. Numerical simulations show that this Sparse Spike Coding strategy provides an efficient model for representing visual data from a set of natural images. Even though it is simplistic, this transformation of spatial data into a spatio-temporal pattern of binary events provides an accurate description of some complex neural patterns observed in the spiking activity of biological neural networks.
    Comment: This work links Matching Pursuit with Bayesian inference by providing the underlying hypotheses (linear model, uniform prior, Gaussian noise model). An analogy with the parallel and event-based nature of neural computations is explored, and an application to modelling the primary visual cortex / image processing is shown. http://incm.cnrs-mrs.fr/perrinet/dynn/LaurentPerrinet/Publications/Perrinet04tau
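
    To make the greedy inference step concrete, here is a minimal sketch of plain Matching Pursuit over an over-complete dictionary. It is not the paper's spiking integrate-and-fire implementation: the "events" below stand in for which neuron fires and how strongly, and the dictionary and signal are toy data invented for illustration.

        import numpy as np

        def matching_pursuit(signal, dictionary, n_spikes=10):
            """Greedy sparse coding of `signal` over `dictionary`.

            dictionary: (n_atoms, n_dims) array with unit-norm rows.
            Returns (atom index, coefficient) pairs -- the analogue of which
            neuron spikes, and with what strength, in a spike code.
            """
            residual = signal.astype(float).copy()
            events = []
            for _ in range(n_spikes):
                # Correlate the residual with every atom; the best match "fires".
                correlations = dictionary @ residual
                best = int(np.argmax(np.abs(correlations)))
                coeff = correlations[best]
                # Subtract the matched component (the lateral-interaction analogue).
                residual -= coeff * dictionary[best]
                events.append((best, float(coeff)))
            return events, residual

        # Toy usage: random over-complete dictionary, signal built from two atoms.
        rng = np.random.default_rng(1)
        D = rng.normal(size=(64, 16))
        D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms
        x = 2.0 * D[3] - 1.5 * D[40]
        events, res = matching_pursuit(x, D, n_spikes=4)
        print(events[:2], "residual norm:", np.linalg.norm(res))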

    Network Coding for Computing: Cut-Set Bounds

    The following network computing problem is considered. Source nodes in a directed acyclic network generate independent messages, and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the "computing capacity". The network coding problem for a single-receiver network is a special case of the network computing problem in which all of the source messages must be reproduced at the receiver. For network coding with a single receiver, routing is known to achieve the capacity by achieving the network min-cut upper bound. We extend the definition of the min-cut to the network computing problem and show that the min-cut is still an upper bound on the maximum achievable rate and is tight for computing (using coding) any target function in multi-edge tree networks and for computing linear target functions in any network. We also study the bound's tightness for different classes of target functions. In particular, we give a lower bound on the computing capacity in terms of the Steiner tree packing number and a different bound for symmetric functions. We also show that for certain networks and target functions, the computing capacity can be less than an arbitrarily small fraction of the min-cut bound.
    Comment: Submitted to the IEEE Transactions on Information Theory (Special Issue on Facets of Coding Theory: from Algorithms to Networks); Revised on Aug 9, 201
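
    Since the cut-set bound is the central object here, the following sketch computes an ordinary graph min-cut between the sources and the receiver in a toy network using networkx. Note that this is only the classical min-cut, not the paper's extended min-cut for function computation; the graph, node names, and unit capacities are invented for illustration.

        import networkx as nx

        # Toy directed acyclic network: two sources s1, s2, one receiver t.
        # Unit edge capacities play the role of one symbol per network use.
        G = nx.DiGraph()
        G.add_edge("s1", "a", capacity=1)
        G.add_edge("s2", "a", capacity=1)
        G.add_edge("a", "t", capacity=1)
        G.add_edge("s1", "t", capacity=1)

        # For a multi-source cut, connect a super-source to every source with
        # infinite capacity, then take a standard s-t min-cut to the receiver.
        G.add_edge("S", "s1", capacity=float("inf"))
        G.add_edge("S", "s2", capacity=float("inf"))

        cut_value, (source_side, sink_side) = nx.minimum_cut(G, "S", "t")
        print("min-cut value:", cut_value)  # upper-bounds the achievable rate
        print("cut edges:", [(u, v) for u, v in G.edges
                             if u in source_side and v in sink_side])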