Deterministic networks for probabilistic computing
Neural-network models of high-level brain functions such as memory recall and
reasoning often rely on the presence of stochasticity. Most of these
models assume that each neuron in the functional network is equipped with its
own private source of randomness, often in the form of uncorrelated external
noise. However, both in vivo and in silico, the number of noise sources is
limited due to space and bandwidth constraints. Hence, neurons in large
networks usually need to share noise sources. Here, we show that the resulting
shared-noise correlations can significantly impair the performance of
stochastic network models. We demonstrate that this problem can be overcome by
using deterministic recurrent neural networks as sources of uncorrelated noise,
exploiting the decorrelating effect of inhibitory feedback. Consequently, even
a single recurrent network of a few hundred neurons can serve as a natural
noise source for large ensembles of functional networks, each comprising
thousands of units. We successfully apply the proposed framework to a diverse
set of binary-unit networks with different dimensionalities and entropies, as
well as to a network reproducing handwritten digits with distinct predefined
frequencies. Finally, we show that the same design transfers to functional
networks of spiking neurons.
Comment: 22 pages, 11 figures
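To make the shared-noise problem concrete, here is a minimal sketch (not taken from the paper; all sizes, rates, and the sigmoidal unit model are illustrative assumptions) of how routing many stochastic binary units through a few shared noise channels induces pairwise correlations that private, per-unit noise avoids:

```python
# Minimal sketch: shared vs. private noise sources for stochastic binary units.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_sources, T = 100, 10, 20_000

# With shared noise, each unit draws its drive from one of a few channels.
assignment = rng.integers(0, n_sources, size=n_units)

def simulate(shared: bool) -> np.ndarray:
    states = np.zeros((T, n_units))
    for t in range(T):
        if shared:
            noise = rng.normal(size=n_sources)[assignment]  # shared channels
        else:
            noise = rng.normal(size=n_units)  # private noise per unit
        # Stochastic binary unit: fire with sigmoidal probability of its drive.
        p = 1.0 / (1.0 + np.exp(-2.0 * noise))
        states[t] = (rng.random(n_units) < p).astype(float)
    return states

for shared in (False, True):
    s = simulate(shared)
    c = np.corrcoef(s.T)
    off_diag = c[~np.eye(n_units, dtype=bool)]
    print(f"shared={shared}: mean pairwise correlation {off_diag.mean():.3f}")
```

Units fed by the same channel co-fluctuate, so the mean pairwise correlation rises well above the private-noise baseline. The abstract's proposed remedy is to replace such shared channels with the decorrelated activity of a deterministic recurrent network with inhibitory feedback.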
Adaptive self-organization in a realistic neural network model
Information processing in complex systems is often found to be maximally
efficient close to critical states associated with phase transitions. It is
therefore conceivable that neural information processing also operates close to
criticality. This is further supported by the observation of power-law
distributions, which are a hallmark of phase transitions. An important open
question is how neural networks could remain close to a critical point while
undergoing a continual change in the course of development, adaptation,
learning, and more. An influential contribution was made by Bornholdt and
Rohlf, introducing a generic mechanism of robust self-organized criticality in
adaptive networks. Here, we address the question of whether this mechanism is
relevant for real neural networks. We show in a realistic model that
spike-time-dependent synaptic plasticity can self-organize neural networks
robustly toward criticality. Our model reproduces several empirical
observations and makes testable predictions about the distribution of synaptic
strengths, relating them to the critical state of the network. These results
suggest that the interplay between dynamics and topology may be essential for
neural information processing.
Comment: 6 pages, 4 figures
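The plasticity rule at the heart of this result is spike-timing-dependent plasticity (STDP). As a reference point only, here is a minimal sketch of the standard additive pair-based STDP window (the constants are illustrative; the paper's realistic model is more elaborate):

```python
# Minimal sketch: additive pair-based STDP. A synapse is potentiated when the
# presynaptic spike precedes the postsynaptic spike, and depressed otherwise.
import numpy as np

A_plus, A_minus = 0.01, 0.012   # learning-rate amplitudes (illustrative)
tau_plus = tau_minus = 20.0     # STDP time constants in ms (illustrative)

def stdp_dw(dt: float) -> float:
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:      # pre before post: potentiation
        return A_plus * np.exp(-dt / tau_plus)
    else:           # post before (or with) pre: depression
        return -A_minus * np.exp(dt / tau_minus)

# Example: a pre spike 5 ms before a post spike strengthens the synapse,
# while the reverse ordering weakens it.
print(stdp_dw(5.0), stdp_dw(-5.0))
```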
Echo State Queueing Network: a new reservoir computing learning tool
In the last decade, a new computational paradigm was introduced in the field
of Machine Learning, under the name of Reservoir Computing (RC). RC models are
neural networks with a recurrent part (the reservoir) that does not
participate in the learning process; only the remaining, non-recurrent part of
the system is trained. This approach has grown rapidly due to
its success in solving learning tasks and other computational applications.
Some success was also observed with another recently proposed neural network
designed using Queueing Theory, the Random Neural Network (RandNN). Both
approaches have good properties and identified drawbacks. In this paper, we
propose a new RC model called Echo State Queueing Network (ESQN), where we use
ideas coming from RandNNs for the design of the reservoir. ESQNs are ESNs
(Echo State Networks) whose reservoir follows new dynamics inspired by
recurrent RandNNs. The
paper positions ESQNs within the broader Machine Learning landscape and
provides examples of their use and performance. We show on widely used
benchmarks that ESQNs are highly accurate tools, and we illustrate how they
compare with standard
ESNs.
Comment: Proceedings of the 10th IEEE Consumer Communications and Networking
Conference (CCNC), Las Vegas, USA, 2013
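As a rough illustration of the idea, the sketch below replaces the usual tanh reservoir update of an ESN with the Random Neural Network's stationary firing rate, q_i = lambda_plus_i / (r_i + lambda_minus_i), and trains only a linear readout, as in standard reservoir computing. The exact ESQN equations are given in the paper; every weight range, size, and the toy task here are assumptions for demonstration only:

```python
# Hedged sketch: an ESN-style pipeline with a RandNN-inspired reservoir.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res, T = 1, 50, 500

W_in_p = rng.uniform(0, 0.5, (n_res, n_in))  # excitatory input weights
W_in_m = rng.uniform(0, 0.5, (n_res, n_in))  # inhibitory input weights
W_p = rng.uniform(0, 0.1, (n_res, n_res))    # excitatory recurrent weights
W_m = rng.uniform(0, 0.1, (n_res, n_res))    # inhibitory recurrent weights
r = np.full(n_res, 1.0)                      # service (firing) rates

u = rng.uniform(0, 1, (T, n_in))             # toy input sequence
y = np.roll(u[:, 0], 1)                      # toy target: previous input

q = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):
    lam_plus = W_in_p @ u[t] + W_p @ q       # positive-signal arrival rate
    lam_minus = W_in_m @ u[t] + W_m @ q      # negative-signal arrival rate
    # RandNN stationary rate, clipped to [0, 1] as a queue utilization.
    q = np.clip(lam_plus / (r + lam_minus), 0.0, 1.0)
    states[t] = q

# As in standard reservoir computing, only the linear readout is trained.
W_out, *_ = np.linalg.lstsq(states[10:], y[10:], rcond=None)
pred = states[10:] @ W_out
print("train MSE:", np.mean((pred - y[10:]) ** 2))
```

Only W_out is learned; the queueing-inspired reservoir stays fixed, mirroring the RC design the abstract describes.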