2 research outputs found

    Learning to be attractive: probabilistic computation with dynamic attractor networks

    In the context of sensory or higher-level cognitive processing, we present a recurrent neural network model, similar to the popular dynamic neural field (DNF) model, for performing approximate probabilistic computations. The model is biologically plausible, avoids impractical schemes such as log-encoding and noise assumptions, and is well suited to operating in stacked hierarchies. By Lyapunov analysis, we make it very plausible that the model computes the maximum a posteriori (MAP) estimate given an input that may be corrupted by noise. Key points of the model are its ability to learn the required posterior distributions and represent them in its lateral weights, the interpretation of stable neural activities as MAP estimates, and the interpretation of latency as the probability associated with those estimates. We demonstrate in simple experiments that learning of posterior distributions is feasible and results in correct MAP estimates. Furthermore, a pre-activation of field sites can modify attractor states when the data model is ambiguous, effectively providing an approximate implementation of Bayesian inference.
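    The mechanism described in this abstract (a recurrent field whose lateral interactions settle into a stable activity bump that is read out as a MAP-like estimate) can be illustrated with a minimal Amari-style dynamic neural field. The sketch below is not the paper's implementation: the kernel shape, parameters, and readout are assumed placeholders, and in particular the lateral weights are a fixed difference-of-Gaussians here rather than learned from posterior distributions as in the paper.

```python
# Minimal Amari-style dynamic neural field sketch (illustrative only).
# The lateral kernel is a fixed Mexican-hat profile chosen for demonstration;
# in the described model these weights would be learned.
import numpy as np

N = 100                       # number of field sites (assumed)
x = np.arange(N)
tau, h = 10.0, -2.0           # time constant and resting level (assumed values)

def lateral_kernel(a_exc=2.0, sigma_exc=3.0, a_inh=1.0, sigma_inh=8.0):
    """Difference-of-Gaussians interaction: local excitation, broader inhibition."""
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, N - d)  # circular distance
    return a_exc * np.exp(-d**2 / (2 * sigma_exc**2)) - a_inh * np.exp(-d**2 / (2 * sigma_inh**2))

W = lateral_kernel()

def f(u, beta=4.0):
    """Sigmoid firing-rate function."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def relax(stimulus, steps=300, dt=1.0):
    """Euler-integrate the field to an (approximate) attractor state."""
    u = np.full(N, h)
    for _ in range(steps):
        u += dt / tau * (-u + h + stimulus + W @ f(u))
    return u

# Drive the field with a noisy localized input; the peak of the stable bump
# is read out as the MAP-like estimate of the underlying stimulus position.
rng = np.random.default_rng(0)
stimulus = 3.0 * np.exp(-(x - 40)**2 / (2 * 4.0**2)) + 0.5 * rng.standard_normal(N)
u_stable = relax(stimulus)
print("estimated position (argmax of attractor):", int(np.argmax(u_stable)))
```

    The readout as an argmax of the stable activity is the point of contact with the abstract's MAP interpretation; everything else (Euler integration, sigmoid, parameter values) is a conventional DNF setup chosen for brevity.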

    Latency-Based Probabilistic Information Processing in Recurrent Neural Hierarchies

    In this article, we present an original neural space/latency code, integrated in a multi-layered neural hierarchy, that offers a new perspective on probabilistic inference operations. Our work is based on the dynamic neural field paradigm, in which recurrent lateral interactions lead to the emergence of activity bumps, thus providing a spatial coding of information. We propose that lateral connections represent a data model, i.e., the conditional probability of a "true" stimulus given a noisy input. We propose furthermore that the resulting attractor state encodes the most likely "true" stimulus given the data model, and that its latency expresses the confidence in this interpretation. The main feature of this network is thus its ability to represent, transmit, and integrate probabilistic information at multiple levels, so as to take near-optimal decisions when inputs are contradictory, noisy, or missing. We illustrate these properties on a three-layered neural hierarchy receiving inputs from a simplified robotic object recognition task. We also compare the network dynamics to an explicit probabilistic model of the task, to verify that it indeed reproduces all relevant properties of probabilistic processing.
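    The latency code described in this abstract can likewise be sketched with the same style of field: measure how long the field takes to form a confident bump for a clean versus a degraded input, with longer latency standing in for lower confidence. This is an assumption-laden illustration (threshold, kernel, and parameters are invented for the example), not the authors' network or hierarchy.

```python
# Illustrative latency readout: "latency" is the number of integration steps
# until the peak firing rate crosses a decision threshold. All parameters are
# assumed values for demonstration, not taken from the paper.
import numpy as np

N, tau, h = 100, 10.0, -2.0
x = np.arange(N)
d = np.minimum(np.abs(x[:, None] - x[None, :]), N - np.abs(x[:, None] - x[None, :]))
W = 2.0 * np.exp(-d**2 / 18.0) - 1.0 * np.exp(-d**2 / 128.0)   # lateral kernel
f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))                    # firing rate

def latency(stimulus, threshold=0.9, max_steps=500, dt=1.0):
    """Steps until the field forms a confident bump; larger latency = lower confidence."""
    u = np.full(N, h)
    for t in range(max_steps):
        u += dt / tau * (-u + h + stimulus + W @ f(u))
        if f(u).max() > threshold:
            return t
    return max_steps  # no confident bump formed within the time budget

bump = 3.0 * np.exp(-(x - 50)**2 / 32.0)
rng = np.random.default_rng(1)
print("latency, clean input:", latency(bump))
print("latency, weak noisy input:", latency(0.6 * bump + 0.8 * rng.standard_normal(N)))
```

    In a hierarchy of the kind the abstract describes, such a latency would be what a higher layer receives along with the bump position, so that slower, less confident interpretations weigh less in downstream decisions.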