Deterministic networks for probabilistic computing
Neural-network models of high-level brain functions such as memory recall and
reasoning often rely on the presence of stochasticity. The majority of these
models assumes that each neuron in the functional network is equipped with its
own private source of randomness, often in the form of uncorrelated external
noise. However, both in vivo and in silico, the number of noise sources is
limited due to space and bandwidth constraints. Hence, neurons in large
networks usually need to share noise sources. Here, we show that the resulting
shared-noise correlations can significantly impair the performance of
stochastic network models. We demonstrate that this problem can be overcome by
using deterministic recurrent neural networks as sources of uncorrelated noise,
exploiting the decorrelating effect of inhibitory feedback. Consequently, even
a single recurrent network of a few hundred neurons can serve as a natural
noise source for large ensembles of functional networks, each comprising
thousands of units. We successfully apply the proposed framework to a diverse
set of binary-unit networks with different dimensionalities and entropies, as
well as to a network reproducing handwritten digits with distinct predefined
frequencies. Finally, we show that the same design transfers to functional
networks of spiking neurons.
Comment: 22 pages, 11 figures
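The shared-noise problem can be illustrated with a minimal numpy sketch (a toy, not the paper's model; the unit count, biases and step counts are invented): uncoupled binary units that should sample independent states become strongly correlated once they all draw from a single shared noise stream, even though their marginal statistics stay correct.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_states(n_units, n_steps, shared_noise, rng):
    """Uncoupled binary units with target on-probabilities sigma(b_i)."""
    b = np.linspace(-1.0, 1.0, n_units)           # per-unit biases
    p = 1.0 / (1.0 + np.exp(-b))                  # target P(z_i = 1)
    states = np.empty((n_steps, n_units))
    for t in range(n_steps):
        if shared_noise:
            u = np.full(n_units, rng.random())    # one noise value for all
        else:
            u = rng.random(n_units)               # private noise per unit
        states[t] = u < p
    return states

def mean_pairwise_corr(states):
    c = np.corrcoef(states.T)
    return c[~np.eye(c.shape[0], dtype=bool)].mean()

for shared in (False, True):
    s = sample_states(n_units=10, n_steps=20000, shared_noise=shared, rng=rng)
    print(f"shared={shared}: mean pairwise correlation = "
          f"{mean_pairwise_corr(s):+.3f}")
# private noise -> ~0, shared noise -> strongly positive: the units no
# longer sample independently although their marginals are unchanged
```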
Spiking neurons with short-term synaptic plasticity form superior generative networks
Spiking networks that perform probabilistic inference have been proposed both
as models of cortical computation and as candidates for solving problems in
machine learning. However, the evidence for spike-based computation being in
any way superior to non-spiking alternatives remains scarce. We propose that
short-term plasticity can provide spiking networks with distinct computational
advantages compared to their classical counterparts. In this work, we use
networks of leaky integrate-and-fire neurons that are trained to perform both
discriminative and generative tasks in their forward and backward information
processing paths, respectively. During training, the energy landscape
associated with their dynamics becomes highly diverse, with deep attractor
basins separated by high barriers. Classical algorithms solve this problem by
employing various tempering techniques, which are both computationally
demanding and require global state updates. We demonstrate how similar results
can be achieved in spiking networks endowed with local short-term synaptic
plasticity. Additionally, we discuss how these networks can even outperform
tempering-based approaches when the training data is imbalanced. We thereby
show how biologically inspired, local, spike-triggered synaptic dynamics based
simply on a limited pool of synaptic resources can allow spiking networks to
outperform their non-spiking relatives.
Comment: corrected typo in abstract
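A minimal sketch of the mechanism in question, in the spirit of the Tsodyks-Markram model of short-term depression (the function name and the parameters U and tau_rec are illustrative, not taken from the paper): every presynaptic spike consumes part of a limited resource pool, so synapses inside a persistently active attractor transiently weaken, flattening the energy barrier around it much like tempering would.

```python
import numpy as np

def depressing_synapse(spike_times, w=1.0, U=0.2, tau_rec=200.0):
    """Return the effective weight transmitted at each spike time (ms)."""
    R, last_t = 1.0, None          # R: available fraction of resources
    w_eff = []
    for t in spike_times:
        if last_t is not None:
            # resources recover exponentially towards 1 between spikes
            R = 1.0 - (1.0 - R) * np.exp(-(t - last_t) / tau_rec)
        w_eff.append(w * U * R)    # transmitted efficacy at this spike
        R -= U * R                 # the spike consumes a fraction U of R
        last_t = t
    return np.array(w_eff)

# a 100 Hz burst: efficacy decays towards a depressed steady state,
# so a busy attractor loses its grip and the network can escape it
burst = np.arange(0.0, 100.0, 10.0)
print(depressing_synapse(burst).round(3))
```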
Stochasticity from function -- why the Bayesian brain may need no noise
An increasing body of evidence suggests that the trial-to-trial variability
of spiking activity in the brain is not mere noise, but rather the reflection
of a sampling-based encoding scheme for probabilistic computing. Since the
precise statistical properties of neural activity are important in this
context, many models assume an ad-hoc source of well-behaved, explicit noise,
either on the input or on the output side of single neuron dynamics, most often
assuming an independent Poisson process in either case. However, these
assumptions are somewhat problematic: neighboring neurons tend to share
receptive fields, rendering both their input and their output correlated; at
the same time, neurons are known to behave largely deterministically, as a
function of their membrane potential and conductance. We suggest that spiking
neural networks may, in fact, have no need for noise to perform sampling-based
Bayesian inference. We study analytically the effect of auto- and
cross-correlations in functionally Bayesian spiking networks and demonstrate
how their effect translates to synaptic interaction strengths, rendering them
controllable through synaptic plasticity. This allows even small ensembles of
interconnected deterministic spiking networks to simultaneously and
co-dependently shape their output activity through learning, enabling them to
perform complex Bayesian computation without any need for noise, which we
demonstrate in silico, both in classical simulation and in neuromorphic
emulation. These results close a gap between the abstract models and the
biology of functionally Bayesian spiking networks, effectively reducing the
architectural constraints imposed on physical neural substrates required to
perform probabilistic computing, be they biological or artificial.
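For context, the sampling framework such networks build on can be caricatured in a few lines (loosely following neural sampling in the sense of Buesing et al., 2011; the dynamics below are a simplified approximation and all numbers are invented): a neuron's binary state is 1 for tau steps after a spike, and the network's state distribution then approximates a Boltzmann distribution shaped by the weights W and biases b.

```python
import numpy as np

rng = np.random.default_rng(1)

def neural_sampling(W, b, tau=10, n_steps=50000, rng=rng):
    """Crude neural-sampling caricature: z_k = 1 for tau steps per spike."""
    n = len(b)
    refrac = np.zeros(n, dtype=int)          # remaining refractory steps
    counts = {}
    for _ in range(n_steps):
        z = (refrac > 0).astype(float)       # current binary state
        u = b + W @ z                        # abstract membrane potential
        for k in range(n):
            if refrac[k] > 0:
                refrac[k] -= 1               # refractory countdown
            elif rng.random() < 1.0 / (1.0 + np.exp(-(u[k] - np.log(tau)))):
                refrac[k] = tau              # spike -> z_k = 1 for tau steps
        state = tuple((refrac > 0).astype(int))
        counts[state] = counts.get(state, 0) + 1
    return counts

W = np.array([[0.0, 0.8],
              [0.8, 0.0]])                   # symmetric coupling
b = np.array([-0.4, -0.4])
counts = neural_sampling(W, b)
total = sum(counts.values())
for state in sorted(counts):
    print(state, round(counts[state] / total, 3))
# compare with the Boltzmann target p(z) ~ exp(z.T W z / 2 + b.T z)
```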
Symbolic Partial-Order Execution for Testing Multi-Threaded Programs
We describe a technique for systematic testing of multi-threaded programs. We
combine Quasi-Optimal Partial-Order Reduction, a state-of-the-art technique
that tackles path explosion due to interleaving non-determinism, with symbolic
execution to handle data non-determinism. Our technique iteratively and
exhaustively finds all executions of the program. It represents program
executions using partial orders and finds the next execution using an
underlying unfolding semantics. We avoid the exploration of redundant program
traces using cutoff events. We implemented our technique as an extension of
KLEE and evaluated it on a set of large multi-threaded C programs. Our
experiments found several previously undiscovered bugs and undefined behaviors
in memcached and GNU sort, showing that the new method is capable of finding
bugs in industrial-size benchmarks.
Comment: Extended version of a paper presented at CAV'20
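The payoff of partial-order reduction can be seen in a toy sketch (this counts Mazurkiewicz traces by brute force and is not the paper's unfolding-based algorithm; the events and independence relation are invented): interleavings that differ only in the order of adjacent independent events are equivalent, so one representative per equivalence class suffices.

```python
# two events are independent iff they access different variables
def independent(e1, e2):
    return e1[1] != e2[1]

def canonical(trace):
    """Normal form: bubble adjacent independent events into a fixed order."""
    t = list(trace)
    changed = True
    while changed:
        changed = False
        for i in range(len(t) - 1):
            if independent(t[i], t[i + 1]) and t[i] > t[i + 1]:
                t[i], t[i + 1] = t[i + 1], t[i]
                changed = True
    return tuple(t)

def interleavings(a, b):
    """All interleavings of two threads, preserving program order."""
    if not a:
        yield tuple(b)
        return
    if not b:
        yield tuple(a)
        return
    for rest in interleavings(a[1:], b):
        yield (a[0],) + rest
    for rest in interleavings(a, b[1:]):
        yield (b[0],) + rest

# thread A writes x then y; thread B writes x then z
A = [("A1", "x"), ("A2", "y")]
B = [("B1", "x"), ("B2", "z")]

runs = set(interleavings(A, B))
classes = {canonical(r) for r in runs}
print(len(runs), "interleavings,", len(classes), "partial-order classes")
# 6 interleavings collapse to 2 classes; only A1/B1 (both touch x) conflict
```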
Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms
Advancing the size and complexity of neural network models leads to an
ever-increasing demand for computational resources for their simulation.
Neuromorphic devices offer a number of advantages over conventional computing
architectures, such as high emulation speed or low power consumption, but this
usually comes at the price of reduced configurability and precision. In this
article, we investigate the consequences of several such factors that are
common to neuromorphic devices, more specifically limited hardware resources,
limited parameter configurability and parameter variations. Our final aim is to
provide an array of methods for coping with such inevitable distortion
mechanisms. As a platform for testing our proposed strategies, we use an
executable system specification (ESS) of the BrainScaleS neuromorphic system,
which has been designed as a universal emulation back-end for neuroscientific
modeling. We address the most essential limitations of this device in detail
and study their effects on three prototypical benchmark network models within a
well-defined, systematic workflow. For each network model, we start by defining
quantifiable functionality measures by which we then assess the effects of
typical hardware-specific distortion mechanisms, both in idealized software
simulations and on the ESS. For those effects that cause unacceptable
deviations from the original network dynamics, we suggest generic compensation
mechanisms and demonstrate their effectiveness. Both the suggested workflow and
the investigated compensation mechanisms are largely back-end independent and
do not require additional hardware configurability beyond the one required to
emulate the benchmark networks in the first place. We hereby provide a generic
methodological environment for configurable neuromorphic devices that are
targeted at emulating large-scale, functional neural networks.
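In its simplest form, such a compensation mechanism can be sketched as follows (the linear "hardware" model and all numbers are invented; the actual methods operate on spiking dynamics): a calibration run measures each neuron's distorted response to a known stimulus, and afferent weights are rescaled so that the measured responses match the ideal ones.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 8
gain = rng.normal(1.0, 0.25, n).clip(0.4)    # per-neuron fixed-pattern gain
w_in = np.ones(n)                            # ideal afferent weights
stimulus = 1.0

ideal    = w_in * stimulus                   # target response
measured = gain * w_in * stimulus            # calibration measurement

w_comp = w_in * ideal / measured             # rescale weights per neuron
print("before:", (gain * w_in * stimulus).round(2))
print("after: ", (gain * w_comp * stimulus).round(2))  # ~all 1.0
```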
Demonstrating Advantages of Neuromorphic Computation: A Pilot Study
Neuromorphic devices represent an attempt to mimic aspects of the brain's
architecture and dynamics with the aim of replicating its hallmark functional
capabilities in terms of computational power, robust learning and energy
efficiency. We employ a single-chip prototype of the BrainScaleS 2 neuromorphic
system to implement a proof-of-concept demonstration of reward-modulated
spike-timing-dependent plasticity in a spiking network that learns to play the
Pong video game by smooth pursuit. This system combines an electronic
mixed-signal substrate for emulating neuron and synapse dynamics with an
embedded digital processor for on-chip learning, which in this work also serves
to simulate the virtual environment and learning agent. The analog emulation of
neuronal membrane dynamics enables a 1000-fold acceleration with respect to
biological real-time, with the entire chip operating on a power budget of 57 mW.
Compared to an equivalent simulation using state-of-the-art software, the
on-chip emulation is at least one order of magnitude faster and three orders of
magnitude more energy-efficient. We demonstrate how on-chip learning can
mitigate the effects of fixed-pattern noise, which is unavoidable in analog
substrates, while making use of temporal variability for action exploration.
Learning compensates for imperfections of the physical substrate, as manifested in
neuronal parameter variability, by adapting synaptic weights to match the
respective excitability of individual neurons.
Comment: Added measurements with noise in NEST simulation, added notice about
journal publication. Frontiers in Neuromorphic Engineering (2019)
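The three-factor learning rule at the heart of such experiments can be sketched as follows (all constants are illustrative and the random "reward" stands in for the game score; this is not the chip's exact implementation): coincident pre- and postsynaptic spikes leave an eligibility trace on each synapse, and a scalar reward signal converts that trace into a weight change, so learning stays local apart from one global signal.

```python
import numpy as np

rng = np.random.default_rng(3)

n_syn, lr, tau_e = 4, 0.05, 50.0
w = rng.normal(0.5, 0.1, n_syn)              # initial synaptic weights
r_baseline = 0.0                             # running reward estimate

def run_trial():
    elig = np.zeros(n_syn)
    for t in range(100):
        pre  = rng.random(n_syn) < 0.1       # presynaptic spikes
        post = rng.random() < 0.1            # postsynaptic spike
        elig *= np.exp(-1.0 / tau_e)         # eligibility trace decays
        if post:
            elig += pre                      # coincidence -> eligibility
    reward = rng.random()                    # stand-in for the game score
    return elig, reward

for trial in range(5):
    elig, reward = run_trial()
    w += lr * (reward - r_baseline) * elig   # three-factor weight update
    r_baseline += 0.1 * (reward - r_baseline)
print(w.round(3))
```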
Accelerated physical emulation of Bayesian inference in spiking neural networks
The massively parallel nature of biological information processing plays an
important role for its superiority to human-engineered computing devices. In
particular, it may hold the key to overcoming the von Neumann bottleneck that
limits contemporary computer architectures. Physical-model neuromorphic devices
seek to replicate not only this inherent parallelism, but also aspects of its
microscopic dynamics in analog circuits emulating neurons and synapses.
However, these machines require network models that are not only adept at
solving particular tasks, but that can also cope with the inherent
imperfections of analog substrates. We present a spiking network model that
performs Bayesian inference through sampling on the BrainScaleS neuromorphic
platform, where we use it for generative and discriminative computations on
visual data. By illustrating its functionality on this platform, we implicitly
demonstrate its robustness to various substrate-specific distortive effects, as
well as its accelerated capability for computation. These results showcase the
advantages of brain-inspired physical computation and provide important
building blocks for large-scale neuromorphic applications.
Comment: This preprint was published on November 14, 2019. Please cite as:
Kungl A. F. et al. (2019) Accelerated Physical Emulation of Bayesian
Inference in Spiking Neural Networks. Front. Neurosci. 13:1201. doi:
10.3389/fnins.2019.01201
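The two operating modes mentioned above can be illustrated on a toy Boltzmann machine, with plain Gibbs sampling standing in for the spiking emulation (W, b and the tiny "pixels plus label" layout are invented): clamping the visible units yields a label readout (discriminative), while clamping the label unit yields pattern completion (generative).

```python
import numpy as np

rng = np.random.default_rng(4)

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs(W, b, clamp, n_steps=5000, rng=rng):
    """Mean unit activity under Gibbs sampling with some units clamped."""
    n = len(b)
    z = rng.integers(0, 2, n).astype(float)
    for k, v in clamp.items():
        z[k] = v
    mean = np.zeros(n)
    for _ in range(n_steps):
        for k in range(n):
            if k in clamp:
                continue                       # clamped units stay fixed
            z[k] = float(rng.random() < sigma(b[k] + W[k] @ z))
        mean += z
    return mean / n_steps

# 3 units: two "pixels" and one "label", wired so the label prefers
# both pixels to be on
W = np.array([[0.0, 1.0, 1.5],
              [1.0, 0.0, 1.5],
              [1.5, 1.5, 0.0]])
b = np.array([-1.0, -1.0, -2.0])

print("discriminative:", gibbs(W, b, clamp={0: 1.0, 1: 1.0})[2])  # P(label)
print("generative:   ", gibbs(W, b, clamp={2: 1.0})[:2])          # pixels
```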
Pattern representation and recognition with accelerated analog neuromorphic systems
Despite being originally inspired by the central nervous system, artificial
neural networks have diverged from their biological archetypes as they have
been remodeled to fit particular tasks. In this paper, we review several
possibilities to reverse map these architectures to biologically more realistic
spiking networks with the aim of emulating them on fast, low-power neuromorphic
hardware. Since many of these devices employ analog components, which cannot be
perfectly controlled, finding ways to compensate for the resulting effects
represents a key challenge. Here, we discuss three different strategies to
address this problem: the addition of auxiliary network components for
stabilizing activity, the utilization of inherently robust architectures and a
training method for hardware-emulated networks that functions without perfect
knowledge of the system's dynamics and parameters. For all three scenarios, we
corroborate our theoretical considerations with experimental results on
accelerated analog neuromorphic platforms.
Comment: accepted at ISCAS 2017
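The third strategy, training with the hardware in the loop, can be sketched in scalar form (the distorted forward function stands in for the hardware; gain, offset and learning rate are invented): the substrate evaluates the forward pass with its unknown distortions, and the host computes updates from the measured output, so no precise model of the system's dynamics and parameters is ever required.

```python
import numpy as np

rng = np.random.default_rng(5)

gain   = rng.normal(1.0, 0.3)        # hidden hardware distortions that
offset = rng.normal(0.0, 0.2)        # the training loop never sees

def hardware_forward(w, x):
    return gain * (w * x) + offset   # the host only observes the result

w, target, x, lr = 0.5, 1.0, 1.0, 0.1
for step in range(200):
    y = hardware_forward(w, x)       # measured on the "hardware"
    w -= lr * (y - target) * x       # idealized gradient, measured error
print("final output:", hardware_forward(w, x))  # ~target despite distortion
```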
