A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines
Information in neural networks is represented as weighted connections, or
synapses, between neurons. This poses a problem as the primary computational
bottleneck for neural networks is the vector-matrix multiply when inputs are
multiplied by the neural network weights. Conventional processing architectures
are not well suited for simulating neural networks, often requiring large
amounts of energy and time. Additionally, synapses in biological neural
networks are not binary connections, but exhibit a nonlinear response function
as neurotransmitters are emitted and diffuse between neurons. Inspired by
neuroscience principles, we present a digital neuromorphic architecture, the
Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex
synaptic response functions without requiring additional hardware components.
We consider the paradigm of spiking neurons with temporally coded information
as opposed to non-spiking rate coded neurons used in most neural networks. In
this paradigm we examine liquid state machines applied to speech recognition
and show how a liquid state machine with temporal dynamics maps onto the
STPU, demonstrating the flexibility and efficiency of the STPU for
instantiating neural algorithms. Comment: 8 pages, 4 figures, preprint of 2017 IJCN
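To make the contrast with binary synapses concrete, the sketch below simulates a leaky integrate-and-fire neuron driven by alpha-function synaptic kernels, so each input spike produces a nonlinear rise-and-decay current rather than an instantaneous weighted jump. This is a minimal generic model for exposition only; the function names and parameter values are assumptions and it does not reproduce the STPU hardware design.

```python
import numpy as np

def alpha_kernel(t, tau=5.0):
    """Alpha-function synaptic response: rises then decays, unlike a binary weight."""
    return (t / tau) * np.exp(1.0 - t / tau) * (t >= 0)

def lif_run(spike_times, weight=1.2, dt=0.1, t_max=50.0, tau_m=10.0, v_th=1.0):
    """Leaky integrate-and-fire neuron driven by alpha-shaped synaptic currents."""
    steps = int(t_max / dt)
    v = 0.0
    out_spikes = []
    for i in range(steps):
        t = i * dt
        # Input current: sum of alpha kernels triggered by past presynaptic spikes
        i_syn = weight * sum(alpha_kernel(t - ts) for ts in spike_times if ts <= t)
        v += dt * (-v / tau_m + i_syn)  # leaky integration
        if v >= v_th:
            out_spikes.append(t)
            v = 0.0  # reset after emitting a spike
    return out_spikes

# Three closely spaced input spikes; their overlapping kernels drive the neuron to fire.
out = lif_run(spike_times=[5.0, 6.0, 7.0])
```

Because information here lives in the output spike *times*, not in a rate, this is the temporally coded regime the abstract describes.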
Spiking neural models & machine learning for systems neuroscience: Learning, Cognition and Behavior.
Learning, cognition and the ability to navigate, interact with and manipulate the world around us through appropriate behavior are hallmarks of artificial as well as biological intelligence. To understand how intelligent behavior can emerge from the computations of neural systems, this thesis proposes studying learning, cognition and behavior simultaneously to obtain an integrative understanding. This involves building detailed functional computational models of nervous systems that can cope with sensory processing, learning, memory and motor control to drive appropriate behavior. The work further considers how the biological computational substrate of neurons, dendrites and action potentials can serve as an alternative to current artificial systems for solving machine learning problems. It challenges the simplification of currently used rate-based artificial neurons, in which computational power is sacrificed for mathematical convenience and statistical learning. To this end, the thesis explores single spiking-neuron computations for cognition and machine learning problems, as well as detailed functional networks of such neurons that can reproduce the biologically relevant foraging behavior of flying insects. The results and insights obtained are new and relevant for machine learning, neuroscience and computational systems neuroscience. The thesis concludes with an outlook on how current machine learning methods can be applied to obtain a statistical understanding of larger-scale brain systems, in particular by investigating the functional role of the cerebellar-thalamo-cortical system for motor control in primates.
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, increasing the
interpretability of the results and leading to better recovery
performance. In order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures
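To make the convex-relaxation idea concrete, the sketch below implements the proximal operator of the group-lasso norm, the workhorse of convex group-sparse recovery: it shrinks each group's Euclidean norm and zeroes entire groups at once, which is exactly the "structured" behavior plain soft thresholding lacks. This is a textbook construction, not code from the chapter; the function name and the example groups are illustrative assumptions.

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the group-lasso norm: shrink each group's
    Euclidean norm by lam, zeroing whole groups whose norm falls below lam."""
    out = np.zeros_like(x, dtype=float)
    for idx in groups:
        g = x[idx]
        norm = np.linalg.norm(g)
        if norm > lam:
            # Scale the whole group down; groups with norm <= lam stay zero.
            out[idx] = (1.0 - lam / norm) * g
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
y = group_soft_threshold(x, groups, lam=1.0)
# First group (norm 5) is shrunk to [2.4, 3.2]; second group (norm ~0.14) is zeroed entirely.
```

The discrete counterpart would instead *select* the best groups exactly (a combinatorial projection); the convex proxy above trades that exactness for a cheap closed-form update.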
EXTRACTING NEURONAL DYNAMICS AT HIGH SPATIOTEMPORAL RESOLUTIONS: THEORY, ALGORITHMS, AND APPLICATION
Analyses of neuronal activity have revealed that various types of neurons, both at the single-unit and population level, undergo rapid dynamic changes in their response characteristics and their connectivity patterns in order to adapt to variations in the behavioral context or stimulus condition. In addition, these dynamics often admit parsimonious representations. Despite growing advances in neural modeling and data acquisition technology, a unified signal processing framework capable of capturing the adaptivity, sparsity and statistical characteristics of neural dynamics is lacking. The objective of this dissertation is to develop such a signal processing methodology in order to gain a deeper insight into the dynamics of neuronal ensembles underlying behavior, and consequently a better understanding of how the brain functions.
The first part of this dissertation concerns the dynamics of stimulus-driven neuronal activity at the single-unit level. We develop a sparse adaptive filtering framework for the identification of neuronal response characteristics from spiking activity. We present a rigorous theoretical analysis of our proposed sparse adaptive filtering algorithms and characterize their performance guarantees. Application of our algorithms to experimental data provides new insights into the dynamics of attention-driven neuronal receptive field plasticity, with a substantial increase in temporal resolution.
In the second part, we focus on the network-level properties of neuronal dynamics, with the goal of identifying the causal interactions within neuronal ensembles that underlie behavior. Building on the results of the first part, we introduce a new measure of causality, the Adaptive Granger Causality (AGC), which captures the sparsity and dynamics of the causal influences in a neuronal network in a statistically robust and computationally efficient fashion. We develop a precise statistical inference framework for the estimation of AGC from simultaneous recordings of the activity of neurons in an ensemble.
Finally, in the third part we demonstrate the utility of our proposed methodologies through application to synthetic and real data. We first validate our theoretical results using comprehensive simulations, and assess the performance of the proposed methods in terms of estimation accuracy and tracking capability. These results confirm that our algorithms provide significant gains in comparison to existing techniques. Furthermore, we apply our methodology to various experimentally recorded data from electrophysiology and optical imaging: 1) Application of our methods to simultaneous spike recordings from the ferret auditory and prefrontal cortical areas reveals the dynamics of top-down and bottom-up functional interactions underlying attentive behavior at unprecedented spatiotemporal resolutions; 2) Our analyses of two-photon imaging data from the mouse auditory cortex shed light on the sparse dynamics of functional networks under both spontaneous activity and auditory tone detection tasks; and 3) Application of our methods to whole-brain light-sheet imaging data from larval zebrafish reveals unique insights into the organization of functional networks involved in visuo-motor processing.
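As a generic illustration of the sparse adaptive filtering idea (not the dissertation's actual algorithms), the sketch below combines a least-mean-squares gradient step with a per-step soft threshold, so the filter both tracks a time-varying response and keeps the estimate sparse. All names, the synthetic data, and the parameter values are assumptions for exposition.

```python
import numpy as np

def sparse_lms(X, d, lam=0.002, mu=0.05):
    """LMS adaptive filter with per-step soft thresholding: each new sample
    nudges the weights toward reducing the instantaneous error, then small
    coefficients are shrunk to zero, yielding a sparse, trackable estimate."""
    n, p = X.shape
    w = np.zeros(p)
    for t in range(n):
        err = d[t] - X[t] @ w
        w = w + mu * err * X[t]                             # stochastic gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)   # soft-threshold (sparsify)
    return w

# Synthetic stream: a 20-tap response with only two active coefficients.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))
w_true = np.zeros(20)
w_true[[2, 7]] = [1.0, -0.5]
d = X @ w_true + 0.01 * rng.standard_normal(500)
w_hat = sparse_lms(X, d)
```

The soft threshold introduces a small bias on the active coefficients (roughly `lam / mu`), the usual price of convex sparsity penalties; the dissertation's interest is precisely in characterizing such trade-offs with rigorous guarantees.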
Unsupervised Visual Feature Learning with Spike-timing-dependent Plasticity: How Far are we from Traditional Feature Learning Approaches?
Spiking neural networks (SNNs) equipped with latency coding and
spike-timing-dependent plasticity rules offer an alternative for overcoming
the data and energy bottlenecks of standard computer vision approaches: they can learn visual
features without supervision and can be implemented by ultra-low power hardware
architectures. However, their performance in image classification has never
been evaluated on recent image datasets. In this paper, we compare SNNs to
auto-encoders on three visual recognition datasets, and extend the use of SNNs
to color images. The analysis of the results helps us identify some bottlenecks
of SNNs: the limits of on-center/off-center coding, especially for color
images, and the ineffectiveness of current inhibition mechanisms. These issues
should be addressed to build effective SNNs for image recognition.
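The following sketch shows a generic pair-based STDP weight update of the kind such networks use for unsupervised feature learning: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one and weakened otherwise, with exponential dependence on the spike-timing difference. The function and constants are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0,
                w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when pre precedes post (causal pair),
    depress otherwise; magnitude decays exponentially with the delay."""
    dt = t_post - t_pre
    if dt >= 0:
        w = w + a_plus * np.exp(-dt / tau)    # pre before post: potentiation (LTP)
    else:
        w = w - a_minus * np.exp(dt / tau)    # post before pre: depression (LTD)
    return float(np.clip(w, w_min, w_max))    # keep the weight in its allowed range

w_ltp = stdp_update(0.5, t_pre=10.0, t_post=15.0)   # causal pair: weight increases
w_ltd = stdp_update(0.5, t_pre=15.0, t_post=10.0)   # anti-causal pair: weight decreases
```

Because the update depends only on local spike times, it needs no labels, which is what makes STDP attractive for the unsupervised, low-power setting the paper evaluates.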