Decorrelation of neural-network activity by inhibitory feedback
Correlations in spike-train ensembles can seriously impair the encoding of
information by their spatio-temporal structure. An inevitable source of
correlation in finite neural networks is common presynaptic input to pairs of
neurons. Recent theoretical and experimental studies demonstrate that spike
correlations in recurrent neural networks are considerably smaller than
expected based on the amount of shared presynaptic input. By means of a linear
network model and simulations of networks of leaky integrate-and-fire neurons,
we show that shared-input correlations are efficiently suppressed by inhibitory
feedback. To elucidate the effect of feedback, we compare the responses of the
intact recurrent network and systems where the statistics of the feedback
channel are perturbed. The suppression of spike-train correlations and
population-rate fluctuations by inhibitory feedback can be observed both in
purely inhibitory and in excitatory-inhibitory networks. The effect is fully
captured by a linear theory and is already apparent at the macroscopic
level of the population-averaged activity. At the microscopic level,
shared-input correlations are suppressed by spike-train correlations: In purely
inhibitory networks, they are canceled by negative spike-train correlations. In
excitatory-inhibitory networks, spike-train correlations are typically
positive. Here, the suppression of input correlations is not a result of the
mere existence of correlations between excitatory (E) and inhibitory (I)
neurons, but a consequence of a particular structure of correlations among the
three possible pairings (EE, EI, II).
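
To make the mechanism concrete, the following is a minimal sketch (illustrative, not the paper's exact model) of a linear rate network in which every neuron receives a shared noise source; the population-averaged inhibitory feedback term and all parameter values are assumptions for demonstration only:

```python
# Minimal sketch: a linear rate network in which all neurons share a common
# input; inhibitory feedback of the population mean suppresses the
# shared-input correlations. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, T, dt, tau = 200, 10000, 0.1, 10.0  # neurons, steps, step size, time constant
g = 5.0                                # strength of the (assumed) inhibitory feedback

def simulate(feedback: bool) -> np.ndarray:
    """Euler-integrate tau*dr/dt = -r + shared + private [- g*mean(r)]."""
    r = np.zeros(N)
    out = np.empty((T, N))
    for t in range(T):
        drive = rng.normal() + rng.normal(size=N)  # shared + private noise
        if feedback:
            drive -= g * r.mean()                  # inhibitory population feedback
        r += dt / tau * (-r + drive)
        out[t] = r
    return out

for fb in (False, True):
    c = np.corrcoef(simulate(fb).T)                # pairwise output correlations
    print(f"feedback={fb}: mean pairwise corr = {c[np.triu_indices(N, 1)].mean():.3f}")
```

With feedback enabled, the common-mode fluctuations, and hence the average pairwise correlation, drop substantially, mirroring the suppression described above.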
Product Reservoir Computing: Time-Series Computation with Multiplicative Neurons
Echo state networks (ESNs), a type of reservoir computing (RC) architecture,
are efficient and accurate artificial neural systems for time-series processing
and learning. An ESN consists of a recurrent neural network core, called a
reservoir, with a small number of tunable parameters to generate a
high-dimensional representation of an input, and a readout layer which is
easily trained using regression to produce a desired output from the reservoir
states. Certain computational tasks involve real-time calculation of high-order
time correlations, which requires a nonlinear transformation in either the
reservoir or the readout layer. Traditional ESNs employ reservoirs of
sigmoid or tanh neurons. In contrast, some types of biological neurons
have response curves better described by a product unit than by a sum
and threshold. Inspired by this class of neurons, we introduce an RC
architecture with a reservoir of product nodes for time series computation. We
find that the product RC shows many properties of a standard ESN, such as
short-term memory and nonlinear capacity. On standard benchmarks for chaotic
prediction tasks, the product RC maintains the performance of a standard
nonlinear ESN while being more amenable to mathematical analysis. Our study
provides evidence that such networks are powerful in highly nonlinear tasks
owing to high-order statistics generated by the recurrent product node
reservoir.
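
As an illustration of the idea, here is a minimal sketch of one plausible product-node reservoir, where each node computes a product of its inputs raised to the connection weights (evaluated in log space for numerical stability); the update rule, clipping, and ridge readout are assumptions, not necessarily the paper's exact formulation:

```python
# Sketch of an ESN whose reservoir nodes are product units,
# s_i <- prod_j s_j^{W_ij} * u^{Win_i}, computed in log space. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_res, n_steps, washout = 100, 2000, 200

# sparse random recurrent weights and input weights (illustrative scales)
W = rng.normal(scale=0.1, size=(n_res, n_res)) * (rng.random((n_res, n_res)) < 0.1)
w_in = rng.normal(scale=0.5, size=n_res)

u = 0.5 + 0.25 * np.sin(2 * np.pi * np.arange(n_steps + 1) / 50)  # positive input
target = u[1:]                                                    # one-step prediction

s = np.full(n_res, 0.5)
states = np.empty((n_steps, n_res))
for t in range(n_steps):
    log_s = W @ np.log(s) + w_in * np.log(u[t])   # product unit in log space
    s = np.clip(np.exp(log_s), 1e-6, 1e6)         # keep states positive, bounded
    states[t] = s

# readout trained by ridge regression after a washout period
X, y = states[washout:], target[washout:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
nrmse = np.sqrt(np.mean((X @ w_out - y) ** 2)) / y.std()
print(f"one-step prediction NRMSE: {nrmse:.3f}")
```

The log-space evaluation makes the multiplicative recurrence linear in the logarithms of the states, which is one reason product reservoirs are amenable to analysis.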
On the Inability of Markov Models to Capture Criticality in Human Mobility
We examine the non-Markovian nature of human mobility by exposing the
inability of Markov models to capture its criticality. In
particular, the assumed Markovian nature of mobility was used to establish a
theoretical upper bound on the predictability of human mobility (expressed as a
minimum error probability limit), based on temporally correlated entropy. Since
its inception, this bound has been widely used and empirically validated using
Markov chains. We show that recurrent-neural architectures can achieve
significantly higher predictability, surpassing this widely used upper bound.
To explain this anomaly, we shed light on several underlying
assumptions in previous work that have resulted in this bias. By
evaluating the mobility predictability on real-world datasets, we show that
human mobility exhibits scale-invariant long-range correlations whose decay
resembles a power law, in contrast to the initial assumption
that correlations in human mobility decay exponentially. This assumption of
exponential decay coupled with Lempel-Ziv compression in computing Fano's
inequality has led to an inaccurate estimation of the predictability upper
bound. We show that this approach inflates the entropy, consequently lowering
the upper bound on human mobility predictability. Finally, we highlight that
this approach tends to overlook long-range correlations in human mobility. This
explains why recurrent-neural architectures that are designed to handle
long-range structural correlations surpass the previously computed upper bound
on mobility predictability.
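
For reference, the bound under discussion is typically computed as in the sketch below, which uses the Lempel-Ziv match-length entropy estimator and solves Fano's inequality by bisection; the synthetic location sequence stands in for real mobility data:

```python
# Sketch of the standard predictability-bound pipeline the abstract critiques:
# LZ entropy-rate estimate + Fano's inequality. Demo data are synthetic.
import numpy as np

def lz_entropy(s: str) -> float:
    """Entropy rate (bits/symbol) via the Lempel-Ziv match-length estimator."""
    n, lam_sum = len(s), 0
    for i in range(n):
        k = 1
        # Lambda_i: length of the shortest substring starting at i not seen in s[:i]
        while i + k <= n and s[i:i + k] in s[:i]:
            k += 1
        lam_sum += k
    return n * np.log2(n) / lam_sum

def max_predictability(S: float, N: int) -> float:
    """Solve Fano's inequality S = H(p) + (1 - p) log2(N - 1) for p by bisection."""
    def fano(p):
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p) + (1 - p) * np.log2(N - 1)
    lo, hi = 1.0 / N, 1.0 - 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fano(mid) > S else (lo, mid)
    return lo

rng = np.random.default_rng(2)
locs = "ABCDE"   # hypothetical set of visited locations
seq = "".join(locs[(t // 3) % len(locs)] if rng.random() > 0.2
              else locs[rng.integers(len(locs))] for t in range(1000))
S = lz_entropy(seq)
print(f"LZ entropy rate: {S:.2f} bits/symbol")
print(f"Fano bound Pi_max: {max_predictability(S, len(locs)):.2f}")
```

The abstract's point is that this pipeline overestimates the entropy of long-range-correlated sequences, so a bound computed this way can sit below what recurrent models actually achieve.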
A unified view on weakly correlated recurrent networks
The diversity of neuron models used in contemporary theoretical neuroscience
to investigate specific properties of covariances raises the question of how these
models relate to each other. In particular, it is hard to distinguish between
generic properties and peculiarities due to the abstracted model. Here we
present a unified view on pairwise covariances in recurrent networks in the
irregular regime. We consider the binary neuron model, the leaky
integrate-and-fire model, and the Hawkes process. We show that a linear
approximation maps each of these models to one of two classes of linear rate
models, which include the Ornstein-Uhlenbeck process as a special case. The classes
differ in the location of additive noise in the rate dynamics, which is on the
output side for spiking models and on the input side for the binary model. Both
classes allow closed-form solutions for the covariance. For output noise, the
covariance separates into an echo term and a term due to correlated input. The unified
framework enables us to transfer results between models. For example, we
generalize the binary model and the Hawkes process to the presence of
conduction delays and simplify derivations for established results. Our
approach is applicable to general network structures and suitable for
population averages. The derived averages are exact for fixed out-degree
network architectures and approximate for fixed in-degree. We demonstrate how
taking into account fluctuations in the linearization procedure increases the
accuracy of the effective theory, and we explain the class-dependent differences
between covariances in the time and the frequency domains. Finally, we show that
the oscillatory instability emerging in networks of integrate-and-fire models
with delayed inhibitory feedback is a model-invariant feature: the same
structure of poles in the complex frequency plane determines the population
power spectra.
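
As a concrete instance of the input-noise class, the sketch below computes the stationary covariance of a linear rate network from the continuous Lyapunov equation and checks it against simulation; the network and noise parameters are illustrative assumptions:

```python
# Sketch: stationary covariance of a linear rate network with input noise,
# dx/dt = A x + xi, with A = (W - I)/tau and <xi(t) xi(s)^T> = D delta(t - s).
# The covariance C solves the Lyapunov equation A C + C A^T = -D.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
N, tau = 50, 10.0
W = rng.normal(scale=0.1 / np.sqrt(N), size=(N, N))  # weak random coupling
A = (W - np.eye(N)) / tau
D = np.eye(N) / tau                                  # independent input noise

C_theory = solve_continuous_lyapunov(A, -D)          # closed-form covariance

# Euler-Maruyama check of the closed-form result
dt, steps = 0.1, 100_000
x, acc = np.zeros(N), np.zeros((N, N))
for _ in range(steps):
    x += dt * (A @ x) + np.sqrt(dt / tau) * rng.normal(size=N)
    acc += np.outer(x, x)
rel_err = np.linalg.norm(acc / steps - C_theory) / np.linalg.norm(C_theory)
print(f"relative Frobenius error (sim vs Lyapunov): {rel_err:.2f}")
```

The output-noise class differs only in where the white noise enters, which changes the covariance's echo structure as described above.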
NeuTM: A Neural Network-based Framework for Traffic Matrix Prediction in SDN
This paper presents NeuTM, a framework for network Traffic Matrix (TM)
prediction based on Long Short-Term Memory Recurrent Neural Networks (LSTM
RNNs). TM prediction is defined as the problem of estimating the future network
traffic matrix from previously collected network traffic data. It is
widely used in network planning, resource management, and network security. Long
Short-Term Memory (LSTM) is a specific recurrent neural network (RNN)
architecture that is well suited to learning from data and classifying or predicting
time series with time lags of unknown duration. LSTMs have been shown to model
long-range dependencies more accurately than conventional RNNs. NeuTM is an LSTM
RNN-based framework for predicting TMs in large networks. By validating our
framework on real-world data from the GÉANT network, we show that our model
converges quickly and achieves state-of-the-art TM prediction performance.
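
To illustrate the kind of model involved, here is a minimal PyTorch sketch of LSTM-based TM prediction; the window length, hidden size, node count, and the use of random data in place of real traffic matrices are illustrative assumptions, not NeuTM's actual configuration:

```python
# Sketch of LSTM-based traffic-matrix prediction: each flattened n x n TM is
# one time step; the model predicts the next TM from a sliding window.
import torch
import torch.nn as nn

n_nodes, window, hidden = 23, 10, 128     # illustrative sizes
feat = n_nodes * n_nodes                  # flattened traffic matrix

class TMPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, feat)

    def forward(self, x):                 # x: (batch, window, feat)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict the next matrix

model = TMPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# toy stand-in for a min-max-scaled TM time series (swap in real data)
series = torch.rand(1000, feat)
X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: MSE {loss.item():.4f}")
```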