A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks
Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic, and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons.
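In the paper, the model's coefficients come from the eigenfunction expansion of the Fokker-Planck equation; as a rough illustration of why a single complex mode can reproduce ringing that a one-time-constant model cannot, here is a minimal sketch in which the eigenvalue lam, the baseline rate, the input mapping, and the function name simulate_complex_rate are all assumed placeholders, not the paper's values:

```python
import numpy as np

# Toy illustration of a complex-valued rate unit (not the paper's exact
# model): the state z decays toward its input-driven fixed point while
# rotating at an intrinsic frequency. In the paper, decay and frequency
# come from the leading eigenvalue of the Fokker-Planck operator; here
# lam = -2 + 25j is an assumed placeholder value.

def simulate_complex_rate(inputs, lam=-2.0 + 25.0j, r0=5.0, dt=1e-3):
    """Euler-integrate dz/dt = lam * (z - r_inf(t)); Re(z) is the rate."""
    z = r0 + 0j
    rates = np.empty(len(inputs))
    for t, current in enumerate(inputs):
        r_inf = r0 + current          # steady-state rate set by the input
        z += dt * lam * (z - r_inf)
        rates[t] = z.real             # observable firing rate
    return rates

# A step input produces damped oscillations in the rate, the ringing that
# partial spike synchronization causes and that single-time-constant
# models miss.
rates = simulate_complex_rate(np.r_[np.zeros(500), 10.0 * np.ones(1500)])
```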
Intrinsically-generated fluctuating activity in excitatory-inhibitory networks
Recurrent networks of non-linear units display a variety of dynamical regimes
depending on the structure of their synaptic connectivity. A particularly
remarkable phenomenon is the appearance of strongly fluctuating, chaotic
activity in networks of deterministic but randomly connected rate units. How
this type of intrinsically generated fluctuation appears in more realistic
networks of spiking neurons has been a long-standing question. To ease the
comparison between rate and spiking networks, recent works investigated the
dynamical regimes of randomly connected rate networks with segregated
excitatory and inhibitory populations, and firing rates constrained to be
positive. These works derived general dynamical mean field (DMF) equations
describing the fluctuating dynamics, but solved these equations only in the
case of purely inhibitory networks. Using a simplified excitatory-inhibitory
architecture in which the DMF equations are more tractable, here we show
that the presence of excitation qualitatively modifies the fluctuating activity
compared to purely inhibitory networks. In the presence of excitation,
intrinsically generated fluctuations induce a strong increase in mean firing
rates, a phenomenon that is much weaker in purely inhibitory networks.
Excitation moreover induces two distinct fluctuating regimes: for moderate
overall coupling, recurrent inhibition is sufficient to stabilize fluctuations;
for strong coupling, firing rates are stabilized solely by the upper bound
imposed on activity, even if inhibition is stronger than excitation. These
results extend to more general network architectures, and to rate networks
receiving noisy inputs mimicking spiking activity. Finally, we show that
signatures of the second dynamical regime appear in networks of
integrate-and-fire neurons.
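As a toy sketch of the class of network the abstract studies, the following simulation (every parameter value here is an illustrative assumption, not taken from the paper) integrates a random rate network with segregated excitatory and inhibitory populations and a positive, bounded transfer function; raising coupling moves it from the inhibition-stabilized regime toward the bound-stabilized one:

```python
import numpy as np

# Illustrative random excitatory-inhibitory rate network with positive,
# saturating rates. All parameters are assumptions chosen for demonstration.
rng = np.random.default_rng(0)
N, f_exc = 1000, 0.8                  # network size, fraction excitatory
g, coupling = 5.0, 2.0                # inhibition dominance, overall coupling
phi_max = 10.0                        # upper bound imposed on activity

n_e = int(f_exc * N)
J = coupling * rng.random((N, N)) / np.sqrt(N)
J[:, n_e:] *= -g                      # inhibitory columns carry negative weights

def phi(x):
    return np.clip(x, 0.0, phi_max)   # positive, bounded transfer function

x = rng.standard_normal(N)
dt, T = 0.05, 4000
rates = np.empty((T, N))
for t in range(T):
    x += dt * (-x + J @ phi(x))       # tau dx/dt = -x + J phi(x), tau = 1
    rates[t] = phi(x)

# Intrinsically generated fluctuations appear as temporal variance of the
# single-unit rates together with an elevated population-mean rate.
print("mean rate:", rates[T // 2:].mean(),
      "temporal std:", rates[T // 2:].std(axis=0).mean())
```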
Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection
Over the past decade, deep neural networks (DNNs) have demonstrated
remarkable performance in a variety of applications. As we try to solve more
advanced problems, increasing demands for computing and power resources have
become inevitable. Spiking neural networks (SNNs) have attracted widespread
interest as the third generation of neural networks due to their event-driven
and low-power nature. SNNs, however, are difficult to train, mainly owing to
the complex dynamics of their neurons and their non-differentiable spike
operations.
Furthermore, their applications have been limited to relatively simple tasks
such as image classification. In this study, we investigate the performance
degradation of SNNs in a more challenging regression problem (i.e., object
detection). Through our in-depth analysis, we introduce two novel methods:
channel-wise normalization and signed neuron with imbalanced threshold, both of
which provide fast and accurate information transmission for deep SNNs.
Consequently, we present the first spike-based object detection model, called
Spiking-YOLO. Our experiments show that Spiking-YOLO achieves remarkable
results that are comparable (up to 98%) to those of Tiny YOLO on non-trivial
datasets, PASCAL VOC and MS COCO. Furthermore, Spiking-YOLO on a neuromorphic
chip consumes approximately 280 times less energy than Tiny YOLO and converges
2.3 to 4 times faster than previous SNN conversion methods. Comment: Accepted to AAAI 2020.
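For orientation, data-based normalization in ANN-to-SNN conversion rescales each layer's weights by maximum activations recorded on training data; a channel-wise variant applies one scale per output channel so that weakly activated channels are not starved of spikes. The sketch below is an assumed formulation (the function name, array shapes, and example layer are ours, not the paper's):

```python
import numpy as np

def channel_wise_normalize(weights, bias, lam_prev, lam_cur):
    """Rescale a linear layer for SNN conversion, channel by channel.

    weights  : (out_ch, in_ch) weight matrix
    lam_prev : per-channel max activations of the previous layer
    lam_cur  : per-channel max activations of this layer
    """
    w = weights * lam_prev[None, :] / lam_cur[:, None]
    b = bias / lam_cur
    return w, b

# Hypothetical 4-input / 3-output layer with recorded max activations.
rng = np.random.default_rng(0)
w, b = channel_wise_normalize(rng.standard_normal((3, 4)), np.zeros(3),
                              lam_prev=np.array([1.0, 0.5, 2.0, 0.1]),
                              lam_cur=np.array([3.0, 0.2, 1.0]))
```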
Rhythmic inhibition allows neural networks to search for maximally consistent states
Gamma-band rhythmic inhibition is a ubiquitous phenomenon in neural circuits,
yet its computational role remains elusive. We show that a model of
Gamma-band rhythmic inhibition allows networks of coupled cortical circuit
motifs to search for network configurations that best reconcile external inputs
with an internal consistency model encoded in the network connectivity. We show
that Hebbian plasticity allows the networks to learn the consistency model by
example. The search dynamics driven by rhythmic inhibition enable the described
networks to solve difficult constraint satisfaction problems without making
assumptions about the form of stochastic fluctuations in the network. We show
that the search dynamics are well approximated by a stochastic sampling
process. We use the described networks to reproduce perceptual multi-stability
phenomena with switching times that are a good match to experimental data and
show that they provide a general neural framework which can be used to model
other 'perceptual inference' phenomena.
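One way to picture the proposed search is a toy energy-based network whose global inhibition oscillates: at each trough of the rhythm new candidate units can switch on, so the network hops between near-consistent states instead of freezing in the first local optimum it finds. This sketch is our reading of the abstract, not the authors' circuit model, and every parameter in it is an assumption:

```python
import numpy as np

# Toy search under rhythmic inhibition: Hebbian weights W encode the
# consistency model, h is the external input to reconcile, and a global
# inhibition level oscillates at a gamma-like rhythm.
rng = np.random.default_rng(1)
N, T, period = 100, 2000, 25          # units, steps, oscillation period
W = rng.standard_normal((N, N))
W = (W + W.T) / 2                     # symmetric "consistency" couplings
np.fill_diagonal(W, 0)
h = 0.1 * rng.standard_normal(N)
s = (rng.random(N) < 0.5).astype(float)

best_s, best_E = s.copy(), np.inf
for t in range(T):
    inhibition = 1.5 + 1.5 * np.sin(2 * np.pi * t / period)
    s = (W @ s + h - inhibition > 0).astype(float)   # threshold update
    E = -0.5 * s @ W @ s - h @ s      # lower E = more consistent state
    if E < best_E:
        best_s, best_E = s.copy(), E

print("most consistent state found, energy:", best_E)
```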
Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks
Biological neurons communicate with a sparing exchange of pulses: spikes. It
is an open question how real spiking neurons produce the kind of powerful
neural computation that is possible with deep artificial neural networks while
using so few spikes to communicate. Building on recent insights in
neuroscience, we present an Adapting Spiking Neural Network (ASNN) based on
adaptive spiking neurons. These spiking neurons efficiently encode information
in spike-trains using a form of Asynchronous Pulsed Sigma-Delta coding while
homeostatically optimizing their firing rate. In the proposed paradigm of
spiking neuron computation, neural adaptation is tightly coupled to synaptic
plasticity, to ensure that downstream neurons can correctly decode upstream
spiking neurons. We show that this type of network is inherently able to carry
out asynchronous and event-driven neural computation, while performing
identically to corresponding artificial neural networks (ANNs). In particular,
we show that these adaptive spiking neurons can serve as drop-in replacements
for the ReLU neurons of standard feedforward ANNs. We demonstrate that this
also applies to a ReLU-based deep convolutional neural network classifying the
MNIST dataset. The ASNN thus outperforms current spiking neural network (SNN)
implementations while responding up to an order of magnitude faster and using
an order of magnitude fewer spikes.
Additionally, in a streaming setting where frames are continuously classified,
we show that the ASNN requires substantially fewer network updates as compared
to the corresponding ANN.
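A minimal sketch of an adaptive sigma-delta-style spike encoder in the spirit of the abstract; the threshold update rule, the time constants, and the function name adaptive_spike_encode are illustrative assumptions rather than the published ASN model:

```python
import numpy as np

# Adaptive sigma-delta-style spike encoding (illustrative assumptions): the
# neuron spikes whenever its decaying reconstruction of the ReLU-transformed
# input lags by more than half the threshold; each spike raises the threshold
# (adaptation), which homeostatically caps the firing rate. A downstream
# decoder weights each spike by the threshold at spike time.

def adaptive_spike_encode(signal, dt=1e-3, tau=0.05, theta0=0.1,
                          tau_theta=0.2, m=1.0):
    theta, recon = theta0, 0.0
    spikes = []
    for u in signal:
        recon *= np.exp(-dt / tau)                # reconstruction decay
        theta = theta0 + (theta - theta0) * np.exp(-dt / tau_theta)
        if max(u, 0.0) - recon > 0.5 * theta:     # sigma-delta test
            spikes.append(1)
            recon += theta                        # decoder-visible step
            theta += m * theta0                   # spike-triggered adaptation
        else:
            spikes.append(0)
    return np.array(spikes)

spikes = adaptive_spike_encode(np.linspace(0.0, 2.0, 1000))
print("spikes emitted:", spikes.sum())  # firing stays sparse as input ramps up
```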
Optimizing the energy consumption of spiking neural networks for neuromorphic applications
In the last few years, spiking neural networks have been demonstrated to
perform on par with regular convolutional neural networks. Several works have
proposed methods to convert a pre-trained CNN to a Spiking CNN without a
significant sacrifice of performance. We first demonstrate that
quantization-aware training of CNNs leads to better accuracy in the resulting
SNNs. One of the benefits of converting CNNs to spiking CNNs is leveraging the
sparse computation of SNNs to perform equivalent computation at lower energy
consumption. Here we propose an efficient optimization strategy to train
spiking networks at lower energy consumption, while maintaining similar
accuracy levels. We demonstrate results on the MNIST-DVS and CIFAR-10 datasets
- …
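A sketch combining the abstract's two ingredients under assumed formulations (neither function is from the paper): fake quantization of activations with a straight-through gradient, plus a loss penalty on mean activation as a proxy for the spike count, and hence the energy, of the converted SNN:

```python
import torch

# Assumed formulation, not the paper's exact objective. fake_quantize trains
# the CNN with quantized activations in the forward pass while gradients flow
# through unchanged (straight-through estimator); energy_aware_loss adds a
# penalty on mean activation, which upper-bounds the spikes, and therefore the
# energy, the converted SNN spends per input.

def fake_quantize(x, levels=16, x_max=1.0):
    xq = torch.clamp(x, 0.0, x_max)
    xq = torch.round(xq * (levels - 1) / x_max) * x_max / (levels - 1)
    return x + (xq - x).detach()      # quantized value, identity gradient

def energy_aware_loss(logits, targets, activations, beta=1e-3):
    task = torch.nn.functional.cross_entropy(logits, targets)
    energy_proxy = sum(a.mean() for a in activations)  # ~ spikes per input
    return task + beta * energy_proxy
```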