3 research outputs found
Probabilistic inference of binary Markov random fields in spiking neural networks through mean-field approximation
Recent studies have suggested that the cognitive process of the human brain is realized as probabilistic inference and can be further modeled by probabilistic graphical models like Markov random fields. Nevertheless, it remains unclear how probabilistic inference can be implemented by a network of spiking neurons in the brain. Previous studies have tried to relate the inference equation of binary Markov random fields to the dynamic equation of spiking neural networks through the belief propagation algorithm and reparameterization, but they are valid only for Markov random fields with limited network structure. In this paper, we propose a spiking neural network model that can implement inference of arbitrary binary Markov random fields. Specifically, we design a spiking recurrent neural network and prove that its neuronal dynamics are mathematically equivalent to the inference process of Markov random fields by adopting mean-field theory. Furthermore, our mean-field approach unifies previous works. Theoretical analysis and experimental results, together with an application to image denoising, demonstrate that our proposed spiking neural network achieves results comparable to those of mean-field inference.
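The mean-field inference that the spiking network is shown to be equivalent to can be sketched in a few lines. The following is a minimal illustration for a binary (Ising-style) MRF, not the paper's spiking implementation: each node keeps a mean activation in [-1, 1] and is repeatedly updated from its neighbors' means. The names `J`, `b`, and `mean_field` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mean_field(J, b, iters=100):
    """Mean-field fixed-point iteration for a binary MRF.

    J  : symmetric coupling matrix (J[i, j] = pairwise weight, zero diagonal)
    b  : per-node bias (external field / evidence)
    Returns the vector of mean activations m, with m[i] in [-1, 1].
    """
    m = np.zeros(len(b))
    for _ in range(iters):
        # Synchronous update: each mean is driven by its bias plus the
        # coupling-weighted means of all other nodes.
        m = np.tanh(b + J @ m)
    return m

# Toy example: two positively coupled nodes with conflicting evidence.
J = np.array([[0.0, 0.8],
              [0.8, 0.0]])
b = np.array([1.0, -0.2])
m = mean_field(J, b)
```

In the two-node example the strong positive coupling pulls the weakly negative node toward its neighbor, so both means settle at positive values; image denoising uses the same update with one node per pixel and couplings favoring agreement between neighbors.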
A Spiking Neural Network Learning Markov Chain
In this paper, the question of how a spiking neural network (SNN) learns and
fixes in its internal structures a model of external world dynamics is
explored. This question is important for implementing model-based
reinforcement learning (RL), the realistic RL regime in which the decisions
made by an SNN and their evaluation in terms of reward/punishment signals may
be separated by a significant time interval and a sequence of intermediate,
evaluation-neutral world states. In the present work, I formalize world
dynamics as a Markov chain with a priori unknown state transition
probabilities, which should be learnt by the network. To make this problem
formulation more realistic, I solve it in continuous time, so that the
duration of every state in the Markov chain may differ and is unknown. It is
demonstrated how this task can be accomplished by an SNN with a specially
designed structure and local synaptic plasticity rules. As an example, I show
how this network motif works in a simple but non-trivial world where a ball
moves inside a square box and bounces off its walls with a random new
direction and velocity.
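The learning target described above, recovering a Markov chain's transition probabilities from a continuous-time trajectory with unknown state durations, can be made concrete with a short sketch. This is only the statistical estimation problem, not the paper's SNN mechanism, which learns the equivalent statistics through local synaptic plasticity; the names `trajectory` and `estimate_transitions` are illustrative assumptions.

```python
import numpy as np

def estimate_transitions(trajectory, n_states):
    """Estimate the embedded Markov chain's transition matrix.

    trajectory : list of (state, duration) pairs from a continuous-time
                 run; durations vary and are unknown in advance, so the
                 estimate uses only the order of jumps, not their timing.
    Returns a row-stochastic matrix P with P[i, j] ~ Pr(next=j | current=i).
    """
    counts = np.zeros((n_states, n_states))
    for (s, _dur), (s_next, _dur_next) in zip(trajectory, trajectory[1:]):
        counts[s, s_next] += 1          # count each observed jump
    row_sums = counts.sum(axis=1, keepdims=True)
    # Normalize rows that were visited; leave unvisited rows as zeros.
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Trajectory with variable, a-priori-unknown state durations.
traj = [(0, 1.2), (1, 0.4), (0, 2.0), (1, 0.7), (2, 0.3), (0, 1.1)]
P = estimate_transitions(traj, 3)
```

Here state 0 always jumps to 1, while state 1 jumps to 0 and to 2 once each, so the estimated rows are [0, 1, 0] and [0.5, 0, 0.5]; the separation of jump order from dwell time is what makes the continuous-time formulation tractable.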