
    Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms

    The growing size and complexity of neural network models leads to an ever-increasing demand for the computational resources needed to simulate them. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond that required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
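
    The abstract names parameter variation as one typical distortion mechanism. A minimal sketch of the general idea of a generic compensation step (not the paper's actual method; the function names and the 20% variation level are invented for illustration): synaptic weights are distorted by multiplicative fixed-pattern noise and then rescaled so the population mean matches its target.

        import numpy as np

        rng = np.random.default_rng(0)

        def apply_fixed_pattern_noise(weights, cv=0.2):
            """Distort weights with multiplicative noise of coefficient of variation cv."""
            return weights * rng.normal(1.0, cv, size=weights.shape)

        def compensate_mean(distorted, target_mean):
            """Generic compensation: rescale so the population mean matches the target."""
            return distorted * (target_mean / distorted.mean())

        target = np.full(1000, 0.5)                # nominal weights (arbitrary units)
        distorted = apply_fixed_pattern_noise(target)
        recovered = compensate_mean(distorted, target.mean())
        print(f"mean before/after compensation: {distorted.mean():.3f} / {recovered.mean():.3f}")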

    Effect of Neuromodulation of Short-Term Plasticity on Information Processing in Hippocampal Interneuron Synapses

    Neurons convey information about the complex dynamic environment in the form of signals. Computational neuroscience provides a theoretical foundation for enhancing our understanding of the nervous system. The aim of this dissertation is to present techniques to study the brain and how it processes information, in particular in neurons of the hippocampus. We begin with a brief review of the history of neuroscience and the biological background of basic neurons. To appreciate the importance of information theory, familiarity with its basics is required; these basics are presented in Chapter 2. In Chapter 3, we use information theory to estimate the amount of information that postsynaptic responses carry about the preceding temporal activity of hippocampal interneuron synapses, and estimate the amount of synaptic memory. In Chapter 4, we infer a parsimonious approximation of the data through analytical expressions for the calcium concentration and the postsynaptic response distribution when the calcium decay time is significantly smaller than the interspike intervals. In Chapter 5, we focus on the study and use of the Causal State Splitting Reconstruction (CSSR) algorithm to capture the structure of the postsynaptic responses. The CSSR algorithm captures patterns in the data by building a machine in the form of a visible Markov model. One of the main advantages of CSSR with respect to Markov models is that it builds states containing more than one history, so the obtained machines are smaller than the equivalent Markov model.
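
    As a concrete illustration of the kind of estimate used in Chapter 3, here is a minimal sketch of a histogram (plug-in) mutual-information estimate between a response amplitude and the preceding inter-spike interval. The synthetic "synapse", the binning, and all parameter values are assumptions made purely for illustration, not the dissertation's data or estimator.

        import numpy as np

        rng = np.random.default_rng(1)

        def mutual_information(x, y, bins=8):
            """Histogram (plug-in) estimate of I(X;Y) in bits."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)    # marginal P(X)
            py = pxy.sum(axis=0, keepdims=True)    # marginal P(Y)
            nz = pxy > 0
            return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

        # Toy synapse: response amplitude depends on the preceding inter-spike interval.
        isi = rng.exponential(50.0, size=5000)                     # ms
        response = 1.0 / (1.0 + isi / 50.0) + rng.normal(0, 0.05, isi.size)
        print(f"I(ISI; response) ≈ {mutual_information(isi, response):.2f} bits")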

    Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers

    This PhD thesis is focused on the central idea that single neurons in the brain should be regarded as temporally precise and highly complex spatio-temporal pattern recognizers. This is opposed to the view, prevalent among neuroscientists today, of biological neurons as simple and mainly spatial pattern recognizers. In this thesis, I will attempt to demonstrate that this is an important distinction, predominantly because the above-mentioned computational properties of single neurons have far-reaching implications for the various brain circuits that neurons compose, and for how information is encoded by neuronal activity in the brain. Namely, these particular "low-level" details at the single-neuron level have substantial system-wide ramifications. In the introduction we highlight the main components that comprise a neural microcircuit capable of useful computations and illustrate the inter-dependence of these components from a system perspective. In chapter 1 we discuss the great complexity of the spatio-temporal input-output relationship of cortical neurons, which results from the morphological structure and biophysical properties of the neuron. In chapter 2 we demonstrate that single neurons can generate temporally precise output patterns in response to specific spatio-temporal input patterns with a very simple, biologically plausible learning rule. In chapter 3, we use the differentiable deep-network analog of a realistic cortical neuron as a tool to approximate the gradient of the output of the neuron with respect to its input, and use this capability in an attempt to teach the neuron to perform the nonlinear XOR operation. In chapter 4 we expand on chapter 3 and describe the extension of our ideas to neuronal networks composed of many realistic biological spiking neurons that represent either small microcircuits or entire brain regions.
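
    In the spirit of chapter 2 (but not the thesis's actual model: the leaky input traces, hard threshold, and perceptron-style update below are generic, invented choices), here is a minimal sketch of a single unit learning to fire at one precise time in response to a fixed spatio-temporal input pattern.

        import numpy as np

        rng = np.random.default_rng(2)
        n_in, T, tau, theta, lr = 100, 200, 20.0, 1.0, 0.01
        spikes = rng.random((n_in, T)) < 0.02      # one fixed spatio-temporal pattern
        decay = np.exp(-1.0 / tau)                 # leaky synaptic trace per time step
        target_t = 150                             # desired, temporally precise output

        w = np.zeros(n_in)
        for epoch in range(200):
            trace = np.zeros(n_in)
            for t in range(T):
                trace = trace * decay + spikes[:, t]
                fired = (w @ trace) >= theta
                desired = (t == target_t)
                if fired != desired:               # perceptron-like supervised update
                    w += lr * (1.0 if desired else -1.0) * trace

        # After training, report when the unit crosses threshold.
        trace, out = np.zeros(n_in), []
        for t in range(T):
            trace = trace * decay + spikes[:, t]
            if (w @ trace) >= theta:
                out.append(t)
        print("output spike times:", out)          # ideally [150]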

    Statistical approaches for synaptic characterization

    Synapses are fascinatingly complex transmission units. One of the fundamental features of synaptic transmission is its stochasticity, as neurotransmitter release exhibits variability and possible failures. It is also quantised: postsynaptic responses to presynaptic stimulations are built up of several similar quanta of current, each of them arising from the release of one presynaptic vesicle. Moreover, synapses are dynamic transmission units, as their activity depends on the history of previous spikes and stimulations, a phenomenon known as synaptic plasticity. Finally, synapses exhibit a very broad range of dynamics, features, and connection strengths, depending on neuromodulator concentration [5], the age of the subject [6], their localization in the CNS or the PNS, and the type of neurons involved [7]. Addressing the complexity of synaptic transmission is a relevant problem for both biologists and theoretical neuroscientists. From a biological perspective, a finer understanding of transmission mechanisms would make it possible to study possibly synapse-related diseases, or to determine the locus of plasticity and homeostasis. From a theoretical perspective, different normative explanations for synaptic stochasticity have been proposed, including its possible role in uncertainty encoding, energy-efficient computation, or generalization during learning. A precise description of synaptic transmission will be critical for the validation of these theories and for understanding the functional relevance of this probabilistic and dynamical release. A central issue, common to all these areas of research, is the problem of synaptic characterization. Synaptic characterization (also called synaptic interrogation [8]) refers to a set of methods for exploring synaptic functions, inferring the value of synaptic parameters, and assessing features such as plasticity and modes of release. This doctoral work sits at the crossroads of experimental and theoretical neuroscience: its main aim is to develop statistical tools and methods to improve synaptic characterization, and hence to bring quantitative solutions to biological questions. In this thesis, we focus on model-based approaches to quantifying synaptic transmission, for which different methods are reviewed in Chapter 3. By fitting a generative model of postsynaptic currents to experimental data, it is possible to infer the value of the synapse's parameters. By performing model selection, we can compare different models of a synapse and thus quantify its features. The main goal of this thesis is thus to develop theoretical and statistical tools to improve the efficiency of both model fitting and model selection. A first question that often arises when recording synaptic currents is how to precisely observe and measure quantal transmission. As mentioned above, synaptic transmission has been observed to be quantised: the opening of a single presynaptic vesicle (and the release of the neurotransmitters it contains) creates a stereotypical postsynaptic current of amplitude q, called the quantal amplitude. As the number of activated presynaptic vesicles increases, the total postsynaptic current increases in step-like increments of amplitude q. Hence, at chemical synapses, the postsynaptic responses to presynaptic stimulations are built up of k quanta of current, where k is a random variable corresponding to the number of open vesicles.
    The excitatory postsynaptic current (EPSC) thus follows a multimodal distribution, in which each component has its mean located at a multiple kq, with k ∈ ℕ, and a width corresponding to the recording noise σ. If σ is large with respect to q, these components fuse into a unimodal distribution, precluding the identification of quantal transmission and the computation of q. How can we characterize the regime of parameters in which quantal transmission can be identified? This question led us to define a practical identifiability criterion for statistical models, which is presented in Chapter 4. In doing so, we also derive a mean-field approach for fast likelihood computation (Appendix A) and discuss the possibility of using the Bayesian Information Criterion (a classically used model selection criterion) with correlated observations (Appendix B). A second question, especially relevant for experimentalists, is how to optimally stimulate the presynaptic cell in order to maximize the informativeness of the recordings. The parameters of a chemical synapse (namely, the number of presynaptic vesicles N, their release probability p, the quantal amplitude q, the short-term depression time constant τD, etc.) cannot be measured directly, but can be estimated from the synapse's postsynaptic responses to evoked stimuli. However, these estimates critically depend on the stimulation protocol being used. For instance, if inter-spike intervals are too large, no short-term plasticity will appear in the recordings; conversely, too high a stimulation frequency will lead to a depletion of the presynaptic vesicles and to poorly informative postsynaptic currents. How can we perform Optimal Experiment Design (OED) for synaptic characterization? We developed an Efficient Sampling-Based Bayesian Active Learning (ESB-BAL) framework, which is efficient enough to be used in real-time biological experiments (Chapter 5), and propose a link between our definition of practical identifiability and Optimal Experiment Design for model selection (Chapter 6). Finally, a third biological question to which we aim to bring a theoretical answer is how to make sense of the observed organization of synaptic proteins. Microscopy observations have shown that presynaptic release sites and postsynaptic receptors are organized in ring-like patterns, which are disrupted upon genetic mutations. In Chapter 7, we propose a normative approach to this protein organization, and suggest that it might optimize a certain biological cost function (e.g. the mean current or the SNR after vesicle release). The different theoretical tools and methods developed in this thesis are general enough to be applicable not only to synaptic characterization, but also to other experimental settings and systems studied in physiology. Overall, we expect to democratize and simplify the use of quantitative and normative approaches in biology, thus reducing the cost of experimentation in physiology and paving the way for more systematic and automated experimental designs.
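
    To make the model concrete, here is a minimal sketch of the binomial-quantal mixture described above: an EPSC amplitude is distributed as a Gaussian mixture with components centred at kq, k = 0..N, weighted by a Binomial(N, p) vesicle count. The function name and all parameter values are illustrative assumptions, not the thesis's code.

        import numpy as np
        from scipy.stats import binom, norm

        def epsc_log_likelihood(epsc, N, p, q, sigma):
            """Log-likelihood of observed EPSC amplitudes under the quantal model."""
            k = np.arange(N + 1)
            weights = binom.pmf(k, N, p)           # P(k vesicles released)
            # mixture density: sum_k P(k) * Normal(epsc; k*q, sigma)
            dens = (weights * norm.pdf(epsc[:, None], k * q, sigma)).sum(axis=1)
            return np.log(dens).sum()

        rng = np.random.default_rng(3)
        N, p, q, sigma = 10, 0.4, 1.0, 0.15        # sigma << q: modes resolvable
        k = rng.binomial(N, p, size=500)           # simulated vesicle counts
        epsc = k * q + rng.normal(0.0, sigma, size=500)
        print(f"log-likelihood at true parameters: {epsc_log_likelihood(epsc, N, p, q, sigma):.1f}")

    Scanning this likelihood while increasing σ/q makes the identifiability question above quantitative: as σ approaches q, the mixture's modes merge and distinct (N, p, q) settings become practically indistinguishable.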

    Transient Information Flow in a Network of Excitatory and Inhibitory Model Neurons: Role of Noise and Signal Autocorrelation

    We investigate the performance of sparsely connected networks of integrate-and-fire neurons for ultra-short-term information processing. We exploit the fact that the population activity of networks with balanced excitation and inhibition can switch from an oscillatory firing regime to a state of asynchronous irregular firing or quiescence, depending on the rate of external background spikes. We find that, in terms of information buffering, the network performs best for a moderate, non-zero amount of noise. Analogous to the phenomenon of stochastic resonance, the performance decreases for higher and lower noise levels. The optimal amount of noise corresponds to the transition zone between a quiescent state and a regime of stochastic dynamics. This provides a potential explanation of the role of non-oscillatory population activity in a simplified model of cortical micro-circuits.
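
    The stochastic-resonance analogy in miniature (a toy threshold detector standing in for the network; all numbers are invented for illustration): a subthreshold signal is recovered from threshold crossings best at an intermediate noise level, and worse when the noise is weaker or stronger.

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 10.0, 5000)
        signal = 0.8 * np.sin(2 * np.pi * t)       # subthreshold: peak < threshold
        threshold = 1.0

        for noise_sd in (0.1, 0.4, 3.0):           # too little, moderate, too much
            out = (signal + rng.normal(0, noise_sd, t.size) > threshold).astype(float)
            r = np.corrcoef(signal, out)[0, 1]     # how well the output tracks the signal
            print(f"noise sd {noise_sd:3.1f}: signal-output correlation r = {r:.2f}")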

    The location of the axon initial segment affects the bandwidth of spike initiation dynamics

    The dynamics and the sharp onset of action potential (AP) generation have recently been the subject of intense experimental and theoretical investigations. According to the resistive coupling theory, an electrotonic interplay between the site of AP initiation in the axon and the somato-dendritic load determines the AP waveform. This phenomenon not only alters the shape of the AP recorded at the soma, but also determines the dynamics of excitability across a variety of time scales. In support of this statement, we generalize a previous numerical study and extend it to the quantification of the input-output gain of the neuronal dynamical response. We consider three classes of multicompartmental mathematical models, ranging from ball-and-stick simplified descriptions of neuronal excitability to 3D-reconstructed biophysical models of excitatory neurons of rodent and human cortical tissue. For each model, we demonstrate that increasing the distance between the axonal site of AP initiation and the soma markedly increases the bandwidth of neuronal response properties. We finally consider the Liquid State Machine paradigm, exploring the impact of altering the site of AP initiation at the level of a neuronal population, and demonstrate that an optimal distance exists that boosts the computational performance of the network in a simple classification task.
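
    A minimal sketch of the bandwidth measurement itself (not the paper's multicompartmental models): an exponential integrate-and-fire neuron receives noisy input with a weak sinusoidal modulation, and the phase locking (vector strength) of its spikes at each modulation frequency probes the response bandwidth. A sharper spike onset, i.e. a smaller delta_T, crudely stands in here for a more distal initiation site; every parameter below is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(5)

        def vector_strength(delta_T, f_mod, T=2000.0, dt=0.01):
            """Phase locking of EIF spikes to a weak sinusoidal drive (f_mod in Hz)."""
            tau, v_rest, v_T, v_reset, v_spike = 10.0, -65.0, -50.0, -65.0, -40.0
            v, phases = v_rest, []
            for i in range(int(T / dt)):
                t_ms = i * dt
                I = 14.0 + 1.5 * np.sin(2e-3 * np.pi * f_mod * t_ms) \
                    + 8.0 * rng.normal() / np.sqrt(dt)     # mean + signal + white noise
                v += dt * (-(v - v_rest) + delta_T * np.exp((v - v_T) / delta_T) + I) / tau
                if v >= v_spike:                           # spike: log phase, then reset
                    phases.append((2e-3 * np.pi * f_mod * t_ms) % (2 * np.pi))
                    v = v_reset
            return abs(np.exp(1j * np.array(phases)).mean()) if phases else 0.0

        for dT in (3.0, 0.5):                              # blunt vs sharp spike onset
            vs = [vector_strength(dT, f) for f in (10, 100, 500)]
            print(f"delta_T = {dT}: VS at 10/100/500 Hz =",
                  " ".join(f"{v:.2f}" for v in vs))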

    Towards a unified approach

    "Decision-making in the presence of uncertainty is a pervasive computation. Latent variable decoding—inferring hidden causes underlying visible effects—is commonly observed in nature, and it is an unsolved challenge in modern machine learning. On many occasions, animals need to base their choices on uncertain evidence; for instance, when deciding whether to approach or avoid an obfuscated visual stimulus that could be either a prey or a predator. Yet, their strategies are, in general, poorly understood. In simple cases, these problems admit an optimal, explicit solution. However, in more complex real-life scenarios, it is difficult to determine the best possible behavior. The most common approach in modern machine learning relies on artificial neural networks—black boxes that map each input to an output. This input-output mapping depends on a large number of parameters, the weights of the synaptic connections, which are optimized during learning.(...)