Dynamics and spike trains statistics in conductance-based Integrate-and-Fire neural networks with chemical and electric synapses
We investigate the effect of electric synapses (gap junctions) on collective
neuronal dynamics and spike statistics in a conductance-based
Integrate-and-Fire neural network, driven by a Brownian noise, where
conductances depend upon spike history. We compute explicitly the time
evolution operator and show that, given the spike-history of the network and
the membrane potentials at a given time, the further dynamical evolution can be
written in a closed form. We show that spike train statistics is described by a
Gibbs distribution whose potential can be approximated with an explicit
formula when the noise is weak. This potential form encompasses existing
models for spike train statistics analysis, such as maximum entropy models or
Generalized Linear Models (GLM). We also discuss the different types of
correlations: those induced by a shared stimulus and those induced by neuronal
interactions.
Comment: 42 pages, 1 figure, submitted
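As a minimal sketch of the kind of noise-driven integrate-and-fire dynamics the abstract studies, the following simulates a single leaky integrate-and-fire neuron driven by Brownian noise using Euler-Maruyama integration. All parameter values are illustrative assumptions, not the paper's, and the full conductance-based, spike-history-dependent model is not reproduced here.

```python
import numpy as np

# Single leaky integrate-and-fire neuron driven by Brownian (white) noise,
# integrated with the Euler-Maruyama scheme. Parameters are illustrative.
def simulate_lif(T=1.0, dt=1e-4, tau=0.02, v_rest=0.0, v_thresh=1.0,
                 v_reset=0.0, mu=60.0, sigma=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    v = v_rest
    spike_times = []
    for i in range(n_steps):
        # dV = (-(V - v_rest)/tau + mu) dt + sigma dW
        dw = rng.normal(0.0, np.sqrt(dt))
        v += (-(v - v_rest) / tau + mu) * dt + sigma * dw
        if v >= v_thresh:            # threshold crossing: emit a spike
            spike_times.append(i * dt)
            v = v_reset              # reset the membrane potential
    return np.array(spike_times)

spikes = simulate_lif()
print(len(spikes), "spikes in 1 s")
```

With the drift term mu chosen suprathreshold (equilibrium potential mu * tau above threshold), the neuron fires tonically and the noise jitters the interspike intervals.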
Statistics of spike trains in conductance-based neural networks: Rigorous results
We consider a conductance based neural network inspired by the generalized
Integrate and Fire model introduced by Rudolph and Destexhe. We show the
existence and uniqueness of a Gibbs distribution characterizing spike
train statistics. The corresponding Gibbs potential is explicitly computed.
These results hold in the presence of a time-dependent stimulus and therefore
apply to non-stationary dynamics.
Comment: 42 pages, 1 figure, to appear in the Journal of Mathematical Neuroscience
Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as a drop-in replacement for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and their stochasticity. Furthermore, such plasticity rules
generally show much higher performance than classical Spike-Timing-Dependent
Plasticity (STDP) rules. This chapter reviews recent developments in learning
with spiking neural network models and their possible implementation in
memristor-based hardware.
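For reference, the classical pair-based STDP rule that the chapter uses as a baseline can be sketched as a weight update depending only on the timing difference between a pre- and a postsynaptic spike. The amplitudes and time constants below are illustrative, not taken from any specific device.

```python
import math

# Classical pair-based STDP: the weight change depends on the timing
# difference dt = t_post - t_pre (seconds). Constants are illustrative.
def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.020, tau_minus=0.020):
    if dt > 0:    # pre fires before post -> potentiation
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:  # post fires before pre -> depression
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

print(stdp_dw(0.010))   # pre 10 ms before post: positive weight change
print(stdp_dw(-0.010))  # post 10 ms before pre: negative weight change
```

Multifactor rules of the kind the chapter advocates add further terms (e.g. neuromodulatory or error signals) on top of this pairwise timing dependence.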
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Recent studies have shown that synaptic unreliability is a robust and
sufficient mechanism for inducing the stochasticity observed in cortex. Here,
we introduce Synaptic Sampling Machines, a class of neural network models that
use synaptic stochasticity as a mechanism for Monte Carlo sampling and unsupervised
learning. Similar to the original formulation of Boltzmann machines, these
models can be viewed as a stochastic counterpart of Hopfield networks, but
where stochasticity is induced by a random mask over the connections. Synaptic
stochasticity plays the dual role of an efficient mechanism for sampling, and a
regularizer during learning akin to DropConnect. A local synaptic plasticity
rule implementing an event-driven form of contrastive divergence enables the
learning of generative models in an on-line fashion. Synaptic sampling machines
perform equally well using discrete-time artificial units (as in Hopfield
networks) or continuous-time leaky integrate-and-fire neurons. The learned
representations are remarkably sparse and robust to reductions in bit precision
and synapse pruning: removal of more than 75% of the weakest connections
followed by cursory re-learning causes a negligible performance loss on
benchmark classification tasks. The spiking neuron-based synaptic sampling
machines outperform existing spike-based unsupervised learners, while
potentially offering substantial advantages in terms of power and complexity,
and are thus promising models for on-line learning in brain-inspired hardware.
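The core idea of a random mask over connections can be sketched as a single stochastic unit update in an RBM-like model, where each synapse is independently blanked on every update, as in DropConnect. This is only an illustration of the mechanism, not the paper's event-driven spiking implementation; all shapes and probabilities are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One stochastic update of hidden units where each synapse is independently
# dropped with probability p_drop on every update (DropConnect-style mask).
# Shapes and probabilities are illustrative.
def sample_hidden(v, W, b, p_drop=0.5):
    mask = rng.random(W.shape) >= p_drop     # random mask over connections
    h_prob = sigmoid((W * mask) @ v + b)     # drive through surviving synapses
    return (rng.random(h_prob.shape) < h_prob).astype(float)

v = rng.integers(0, 2, size=10).astype(float)  # binary visible state
W = rng.normal(0, 0.1, size=(5, 10))           # weights (hidden x visible)
b = np.zeros(5)
h = sample_hidden(v, W, b)
print(h)  # binary hidden sample
```

Because the mask is redrawn on every update, the same synaptic noise that drives the sampling also acts as the DropConnect-like regularizer mentioned in the abstract.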
Phase locking below rate threshold in noisy model neurons
The property of a neuron to phase-lock to an oscillatory stimulus before adapting its spike rate to the stimulus frequency plays an important role in the auditory system. We investigate under which conditions neurons exhibit this phase locking below rate threshold. To this end, we simulate neurons employing the widely used leaky integrate-and-fire (LIF) model. By tuning parameters, we can arrange either an irregular spontaneous or a tonic spiking mode. When the neuron is stimulated in both modes, a significant rise of vector strength prior to a noticeable change of the spike rate can be observed. Combining analytic reasoning with numerical simulations, we trace this observation back to a modulation of interspike intervals, which itself requires spikes to be only loosely coupled. We test the limits of this conception by simulating an LIF model with threshold fatigue, which generates pronounced anticorrelations between subsequent interspike intervals. In addition, we evaluate the LIF response for harmonic stimuli of various frequencies and discuss the extension to more complex stimuli. It seems that phase locking below rate threshold occurs generically for all zero-mean stimuli. Finally, we discuss our findings in the context of stimulus detection.
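The vector strength used here as the measure of phase locking is the magnitude of the circular mean of the spike phases relative to the stimulus: R = |(1/N) * sum_k exp(i * 2*pi * f * t_k)|. A minimal sketch with synthetic spike times:

```python
import numpy as np

# Vector strength of a spike train relative to a harmonic stimulus of
# frequency f: R = |(1/N) * sum_k exp(i * 2*pi * f * t_k)|.
# R -> 1 for perfect phase locking, R -> 0 for phases uniform on the circle.
def vector_strength(spike_times, f):
    phases = 2.0 * np.pi * f * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# Synthetic example: perfectly locked spikes (one per cycle, fixed phase)
# versus spike times drawn uniformly in [0, 2] s.
f = 10.0
locked = np.arange(20) / f
rng = np.random.default_rng(1)
unlocked = rng.uniform(0.0, 2.0, size=1000)
print(vector_strength(locked, f))    # close to 1
print(vector_strength(unlocked, f))  # close to 0
```

A rise of R at fixed spike rate is exactly the "phase locking below rate threshold" the abstract describes.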
Stochastic Spin-Orbit Torque Devices as Elements for Bayesian Inference
Probabilistic inference from real-time input data is becoming increasingly
popular and may be one of the potential pathways toward enabling cognitive
intelligence. Indeed, preliminary research has revealed that stochastic
functionalities also underlie the spiking behavior of neurons in
cortical microcircuits of the human brain. In line with such observations,
neuromorphic and other unconventional computing platforms have recently started
adopting the usage of computational units that generate outputs
probabilistically, depending on the magnitude of the input stimulus. In this
work, we experimentally demonstrate a spintronic device that offers a direct
mapping to the functionality of such a controllable stochastic switching
element. We show that the probabilistic switching of Ta/CoFeB/MgO
heterostructures in the presence of spin-orbit torque and thermal noise can be
harnessed to enable probabilistic inference in a plethora of unconventional
computing scenarios. This work can potentially pave the way for hardware that
directly mimics the computational units of Bayesian inference.
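A behavioral model of such a controllable stochastic switching element is a unit whose switching probability grows with the input stimulus. The sigmoid form and the beta parameter below are illustrative assumptions for the sketch, not measured Ta/CoFeB/MgO device characteristics.

```python
import numpy as np

rng = np.random.default_rng(42)

# Behavioral model of a controllable stochastic switching element: the
# probability of switching increases with the input stimulus. The sigmoid
# form and beta are illustrative assumptions, not device measurements.
def switch_probability(stimulus, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * stimulus))

# Empirical switching rate over n independent trials at a given stimulus.
def sample_switch(stimulus, n=10000):
    return (rng.random(n) < switch_probability(stimulus)).mean()

# Larger stimulus -> higher empirical switching rate.
print(sample_switch(-0.5), sample_switch(0.0), sample_switch(0.5))
```

Chaining or thresholding such units is what lets a platform built from them draw samples for probabilistic inference directly in hardware.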