Closed-form control with spike coding networks
Efficient and robust control using spiking neural networks (SNNs) is still an
open problem. Whilst behaviour of biological agents is produced through sparse
and irregular spiking patterns, which provide both robust and efficient
control, the activity patterns in most artificial spiking neural networks used
for control are dense and regular -- resulting in potentially less efficient
codes. Additionally, for most existing control solutions network training or
optimization is necessary, even for fully identified systems, complicating
their implementation in on-chip low-power solutions. The neuroscience theory of
Spike Coding Networks (SCNs) offers a fully analytical solution for
implementing dynamical systems in recurrent spiking neural networks -- while
maintaining irregular, sparse, and robust spiking activity -- but it is not
clear how to directly apply it to control problems. Here, we extend SCN theory
by incorporating closed-form optimal estimation and control. The resulting
networks work as a spiking equivalent of a linear-quadratic-Gaussian
controller. We demonstrate robust spiking control of simulated
spring-mass-damper and cart-pole systems in the face of several perturbations,
including input and system noise, system disturbances, and neural silencing.
As our approach does not need learning or optimization, it offers opportunities
for deploying fast and efficient task-specific on-chip spiking controllers with
biologically realistic activity.
Comment: Under review in an IEEE journal
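The abstract describes the networks as a spiking equivalent of a linear-quadratic-Gaussian (LQG) controller. For reference, a conventional (non-spiking) LQG design for a spring-mass-damper plant can be sketched with SciPy; the system parameters, cost weights, and noise covariances below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Spring-mass-damper: m*x'' + c*x' + k*x = u (illustrative parameters).
m, c, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])   # state: [position, velocity]
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])         # only position is measured

# LQR feedback u = -K x, with K from the control Riccati equation.
Q = np.eye(2)                      # state cost (assumed weights)
R = np.array([[1.0]])              # input cost (assumed weight)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Kalman gain L from the dual (estimation) Riccati equation.
W = 0.1 * np.eye(2)                # process-noise covariance (assumed)
V = np.array([[0.01]])             # measurement-noise covariance (assumed)
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)

# The LQG closed loop is stable: A - B@K and A - L@C are both Hurwitz.
print(np.linalg.eigvals(A - B @ K).real)
print(np.linalg.eigvals(A - L @ C).real)
```

By separation, combining this state-feedback gain K with the Kalman estimator gain L yields the full LQG controller; the paper's contribution is realizing this closed-form design directly in a recurrent spiking network, without training.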
Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima
Recently, a race towards the simplification of deep networks has begun,
showing that it is effectively possible to reduce the size of these models with
minimal or no performance loss. However, there is a general lack of
understanding of why these pruning strategies are effective. In this work, we
compare and analyze solutions obtained with two different pruning
approaches, one-shot and gradual, showing the higher effectiveness of the
latter. In particular, we find that gradual pruning allows access to narrow,
well-generalizing minima, which are typically ignored when using one-shot
approaches. We also propose PSP-entropy, a measure of how strongly
a given neuron's activity correlates with specific learned classes. Interestingly,
we observe that the features extracted by iteratively-pruned models are less
correlated to specific classes, potentially making these models a better fit in
transfer learning approaches.
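The one-shot versus gradual comparison can be sketched with magnitude pruning on a synthetic weight matrix; the schedule and threshold rule below are illustrative assumptions, and the retraining the paper performs between gradual rounds is elided:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))     # stand-in for a layer's weight matrix

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of entries."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

# One-shot: reach 90% sparsity in a single step.
one_shot = magnitude_prune(w, 0.9)

# Gradual: reach the same 90% sparsity over several rounds, with
# fine-tuning of the surviving weights between rounds (omitted here).
gradual = w.copy()
for target in (0.5, 0.7, 0.8, 0.9):
    gradual = magnitude_prune(gradual, target)
    # ... fine-tune surviving weights here ...

print(np.mean(one_shot == 0), np.mean(gradual == 0))
```

Both schedules end at the same sparsity; the paper's finding is that the intermediate fine-tuning steps of the gradual schedule steer the surviving weights toward better-generalizing minima.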