A black-box model for neurons
We explore the identification of neuronal voltage traces by artificial neural networks based on wavelets (Wavenet). More precisely, we apply a modification to the representation of dynamical systems by Wavenet that decreases the number of functions used; this approach combines localized and global-scope functions (unlike Wavenet, which uses localized functions only). As a proof of concept, we focus on the identification of voltage traces obtained by simulation of a paradigmatic neuron model, the Morris-Lecar model. We show that, after training our artificial network with biologically plausible input currents, the network identifies the neuron's behaviour with high accuracy, yielding a black box that can then be used for prediction. Interestingly, because the input currents used for training range from stimuli for which the neuron is quiescent to stimuli that elicit spikes, the network proves able to identify abrupt changes in the bifurcation diagram, from almost linear input-output relationships to highly nonlinear ones. These findings open new avenues for identifying other neuron models and for providing heuristic models of real neurons by stimulating them in closed-loop experiments, that is, using dynamic clamp, a well-known electrophysiology technique.
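As a rough illustration of the approach, here is a minimal sketch of a wavelet-network regressor that mixes localized wavelet units with global-scope terms, fitted by least squares to a synthetic input-output curve. The Ricker wavelet, the grid of centers, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: wavelet-network regression mixing localized wavelet units
# with global-scope terms (illustrative; not the authors' implementation).
import numpy as np

def ricker(x, center, scale):
    """Localized 'Mexican hat' wavelet, translated and dilated."""
    z = (x - center) / scale
    return (1.0 - z**2) * np.exp(-0.5 * z**2)

# Synthetic stand-in for an input-current -> voltage relationship.
rng = np.random.default_rng(0)
I = np.linspace(0.0, 1.0, 400)                  # input current (a.u.)
V = np.tanh(8.0 * (I - 0.5)) + 0.05 * rng.standard_normal(I.size)

# Design matrix: a few localized wavelets + global-scope terms (bias, linear).
centers = np.linspace(0.0, 1.0, 8)
Phi = np.column_stack(
    [ricker(I, c, 0.08) for c in centers]       # localized functions
    + [np.ones_like(I), I]                      # global-scope functions
)

# Fit weights by least squares; the trained network is the black-box model.
w, *_ = np.linalg.lstsq(Phi, V, rcond=None)
V_hat = Phi @ w
print("RMSE:", np.sqrt(np.mean((V - V_hat) ** 2)))
```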
Neural system identification for large populations separating "what" and "where"
Neuroscientists classify neurons into different types that perform similar
computations at different locations in the visual field. Traditional methods
for neural system identification do not capitalize on this separation of 'what'
and 'where'. Learning deep convolutional feature spaces that are shared among
many neurons provides an exciting path forward, but the architectural design
needs to account for data limitations: While new experimental techniques enable
recordings from thousands of neurons, experimental time is limited so that one
can sample only a small fraction of each neuron's response space. Here, we show
that a major bottleneck for fitting convolutional neural networks (CNNs) to
neural data is the estimation of the individual receptive field locations, a
problem that has received only cursory attention thus far. We propose a CNN
architecture with a sparse readout layer factorizing the spatial (where) and
feature (what) dimensions. Our network scales well to thousands of neurons and
short recordings and can be trained end-to-end. We evaluate this architecture
on ground-truth data to explore the challenges and limitations of CNN-based
system identification. Moreover, we show that our network model outperforms
current state-of-the-art system identification models of mouse primary visual
cortex.
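A minimal sketch of the factorized readout idea, assuming a PyTorch implementation: a shared convolutional core provides the feature space, and each neuron gets a spatial mask ("where") and a feature-weight vector ("what"). Layer sizes are illustrative, and the sparsity regularization on the readout is omitted here.

```python
# Minimal sketch of a shared convolutional core with a factorized readout
# separating spatial ("where") and feature ("what") weights per neuron.
# Layer sizes are illustrative, not the paper's exact architecture.
import torch
import torch.nn as nn

class FactorizedReadoutCNN(nn.Module):
    def __init__(self, n_neurons, channels=16, fmap=32):
        super().__init__()
        self.core = nn.Sequential(                    # shared feature space
            nn.Conv2d(1, channels, 9, padding=4), nn.ELU(),
            nn.Conv2d(channels, channels, 9, padding=4), nn.ELU(),
        )
        # Per-neuron spatial mask ("where") and feature weights ("what").
        self.spatial = nn.Parameter(torch.randn(n_neurons, fmap, fmap) * 0.01)
        self.features = nn.Parameter(torch.randn(n_neurons, channels) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_neurons))

    def forward(self, x):                             # x: (batch, 1, H, W)
        f = self.core(x)                              # (batch, C, H, W)
        # Pool each channel with each neuron's spatial mask, then mix channels.
        pooled = torch.einsum("bchw,nhw->bnc", f, self.spatial)
        return torch.einsum("bnc,nc->bn", pooled, self.features) + self.bias

model = FactorizedReadoutCNN(n_neurons=100)
rates = model(torch.randn(8, 1, 32, 32))              # predicted responses
print(rates.shape)                                    # torch.Size([8, 100])
```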
Convex Optimization In Identification Of Stable Non-Linear State Space Models
A new framework for nonlinear system identification is presented in terms of
optimal fitting of stable nonlinear state space equations to input/output/state
data, with a performance objective defined as a measure of robustness of the
simulation error with respect to equation errors. Basic definitions and
analytical results are presented. The utility of the method is illustrated on a
simple simulation example as well as experimental recordings from a live
neuron.
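The convex robustness objective itself needs machinery beyond a short snippet, so the sketch below substitutes a plain least-squares equation-error fit of a linear state-space model, followed by a crude spectral-radius projection to enforce stability. This is an illustrative stand-in under those stated simplifications, not the paper's method.

```python
# Illustrative sketch only: the paper minimizes a convex robustness measure
# of simulation error; here we substitute a least-squares equation-error fit
# of x[t+1] = A x[t] + B u[t], followed by a crude stability projection
# (rescaling A so its spectral radius is < 1). All names are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
T, n, m = 500, 2, 1
u = rng.standard_normal((T, m))

# Simulate a stable "true" system to generate input/output/state data.
A_true = np.array([[0.9, 0.2], [-0.1, 0.8]])
B_true = np.array([[1.0], [0.5]])
x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + B_true @ u[t] + 0.01 * rng.standard_normal(n)

# Equation-error least squares: stack [x_t, u_t] -> x_{t+1}.
Z = np.hstack([x[:-1], u[:-1]])
Theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_hat, B_hat = Theta[:n].T, Theta[n:].T

# Crude stability projection: shrink A_hat if its spectral radius >= 1.
rho = max(abs(np.linalg.eigvals(A_hat)))
if rho >= 1.0:
    A_hat *= 0.99 / rho
print("spectral radius of fitted A:", max(abs(np.linalg.eigvals(A_hat))))
```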
Identification of Nonlinear Systems From the Knowledge Around Different Operating Conditions: A Feed-Forward Multi-Layer ANN Based Approach
The paper investigates nonlinear system identification using system output
data at various linearized operating points. A feed-forward multi-layer
Artificial Neural Network (ANN) based approach is used for this purpose and
tested on two target applications, namely nuclear reactor power-level
monitoring and an AC servo position control system. Various ANN configurations
with different activation functions, numbers of hidden layers, and neurons per
layer are trained and tested to find the best configuration. The training
is carried out multiple times to check for consistency and the mean and
standard deviation of the root mean square errors (RMSE) are reported for each
configuration.
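A minimal sketch of such a configuration sweep, using scikit-learn and a synthetic nonlinear system in place of the reactor and servo data; the configurations, repeat count, and data generator are illustrative assumptions.

```python
# Minimal sketch of an ANN configuration sweep with repeated training runs,
# reporting mean and std of test RMSE per configuration (synthetic data
# stands in for the reactor/servo recordings).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(2000, 2))                 # operating-point inputs
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.05 * rng.standard_normal(2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

configs = [
    {"hidden_layer_sizes": (10,), "activation": "tanh"},
    {"hidden_layer_sizes": (20, 10), "activation": "tanh"},
    {"hidden_layer_sizes": (20, 10), "activation": "relu"},
]
for cfg in configs:
    rmses = []
    for seed in range(5):                              # repeat for consistency
        net = MLPRegressor(**cfg, max_iter=2000, random_state=seed)
        net.fit(X_tr, y_tr)
        rmses.append(np.sqrt(np.mean((net.predict(X_te) - y_te) ** 2)))
    print(cfg, "RMSE mean %.4f std %.4f" % (np.mean(rmses), np.std(rmses)))
```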
The Neural Particle Filter
The robust estimation of dynamically changing features, such as the position
of prey, is one of the hallmarks of perception. On an abstract, algorithmic
level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing
signals based on the history of observations, provides a mathematical framework
for dynamic perception in real time. Since the general, nonlinear filtering
problem is analytically intractable, particle filters are considered among the
most powerful approaches to approximating the solution numerically. Yet, these
algorithms predominantly rely on importance weights, and thus it remains an
unresolved question how the brain could implement such an inference strategy
with a neuronal population. Here, we propose the Neural Particle Filter (NPF),
a weightless particle filter that can be interpreted as the neuronal dynamics
of a recurrently connected neural network that receives feed-forward input from
sensory neurons and represents the posterior probability distribution in terms
of samples. Specifically, this algorithm bridges the gap between the
computational task of online state estimation and an implementation that allows
networks of neurons in the brain to perform nonlinear Bayesian filtering. The
model captures not only the properties of temporal and multisensory integration
according to Bayesian statistics, but also allows online learning with a
maximum likelihood approach. With an example from multisensory integration, we
demonstrate that the numerical performance of the model is adequate to account
for both filtering and identification problems. Due to the weightless approach,
our algorithm alleviates the 'curse of dimensionality' and thus outperforms
conventional, weighted particle filters in higher dimensions for a limited
number of particles.
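A minimal sketch of a weightless particle filter in this spirit: every particle follows the prior dynamics plus a feed-forward correction proportional to the prediction error, with no importance weights or resampling. The fixed gain below stands in for the gain the paper learns by maximum likelihood; the drift, observation model, and all constants are illustrative.

```python
# Minimal sketch of a weightless (equal-weight) particle filter: particles
# evolve under the prior drift plus a gain-scaled innovation term driven by
# the observation stream. Gain and models are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(3)
dt, T, n_particles, gain = 0.01, 2000, 100, 1.0
f = lambda x: 4.0 * x * (1.0 - x**2)        # nonlinear (bistable) drift
h = lambda x: x                             # observation function

x_true, obs_noise, sig = 0.5, 0.2, 0.5
particles = rng.standard_normal(n_particles)
err = []
for _ in range(T):
    # Hidden state and noisy observation increment.
    x_true += f(x_true) * dt + sig * np.sqrt(dt) * rng.standard_normal()
    dy = h(x_true) * dt + obs_noise * np.sqrt(dt) * rng.standard_normal()
    # Weightless update: prior drift + gain * innovation + diffusion.
    innovation = dy - h(particles) * dt
    particles += (f(particles) * dt + gain * innovation
                  + sig * np.sqrt(dt) * rng.standard_normal(n_particles))
    err.append((particles.mean() - x_true) ** 2)
print("mean squared filtering error:", np.mean(err))
```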
Locally embedded presages of global network bursts
Spontaneous, synchronous bursting of neural populations is a widely observed
phenomenon in neuronal networks, which is considered important for functions and
dysfunctions of the brain. However, how the global synchrony across a large
number of neurons emerges from an initially non-bursting network state is not
fully understood. In this study, we develop a new state-space reconstruction
method combined with high-resolution recordings of cultured neurons. This
method extracts deterministic signatures of upcoming global bursts in "local"
dynamics of individual neurons during non-bursting periods. We find that local
information within a single-cell time series can match or even outperform the
global mean-field activity in predicting future global bursts.
Moreover, the inter-cell variability in the burst predictability is found to
reflect the network structure realized in the non-bursting periods. These
findings demonstrate the deterministic mechanisms underlying the locally
concentrated early warnings of the global state transition in self-organized
networks.
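A minimal sketch of the general idea, assuming a standard delay-coordinate embedding of a single cell's trace and nearest-neighbor prediction of a future global signal; the authors' reconstruction method and recordings are replaced here by synthetic stand-in signals, and all parameters are illustrative.

```python
# Minimal sketch: delay-coordinate embedding of one cell's activity used to
# predict a future global signal via nearest neighbors (an illustrative
# stand-in for the state-space reconstruction method described above).
import numpy as np

rng = np.random.default_rng(4)
T, dim, tau, horizon = 3000, 3, 5, 20

# Synthetic stand-ins: one cell's activity and a correlated global mean field.
cell = np.sin(0.05 * np.arange(T)) + 0.1 * rng.standard_normal(T)
global_field = np.roll(cell, -3) + 0.1 * rng.standard_normal(T)

# Delay vectors [x(t), x(t-tau), x(t-2*tau)] from the local trace.
start = (dim - 1) * tau
idx = np.arange(start, T - horizon)
embed = np.column_stack([cell[idx - k * tau] for k in range(dim)])
target = global_field[idx + horizon]          # future global signal

# Split into library and query; predict with the nearest library neighbor
# in the reconstructed state space.
split = len(idx) // 2
lib, query = embed[:split], embed[split:]
dists = np.linalg.norm(query[:, None, :] - lib[None, :, :], axis=2)
pred = target[:split][dists.argmin(axis=1)]
r = np.corrcoef(pred, target[split:])[0, 1]
print("prediction skill (correlation):", round(r, 3))
```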