17 research outputs found
Dynamical systems techniques in the analysis of neural systems
As we strive to understand the mechanisms underlying neural computation, mathematical models are increasingly being used as a counterpart to biological experimentation. Alongside building such models, there is a need for mathematical techniques to be developed to examine the often complex behaviour that can arise from even the simplest models.
There are now a plethora of mathematical models to describe activity at the single neuron level, ranging from one-dimensional, phenomenological ones, to complex biophysical models with large numbers of state variables. Network models present even more of a challenge, as rich patterns of behaviour can arise due to the coupling alone.
We first analyse a planar integrate-and-fire model in a piecewise-linear regime. We advocate piecewise-linear models as caricatures of nonlinear models because explicit solutions can be found for them. Using these explicit solutions, we categorise the model in terms of its bifurcation structure, noting that the non-smooth dynamics arising from the reset mechanism give rise to mathematically interesting behaviour. We highlight the pitfalls of applying techniques for smooth dynamical systems to non-smooth models, and show how these can be overcome using non-smooth analysis.
Following this, we shift our focus to the use of phase reduction techniques in the analysis of neural oscillators. We begin by presenting concrete examples in which these techniques fail to capture the dynamics of the full system, for both deterministic and stochastic forcing. To overcome these failures, we derive new coordinate systems that include a notion of distance from the underlying limit cycle. With these coordinates we are able to capture the effect of phase space structures away from the limit cycle, and we go on to show how they can be used to explain complex behaviour in typical oscillatory neuron models.
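As an illustration of the kind of model analysed here, the following is a minimal sketch (not the thesis model itself) of a planar integrate-and-fire system with a piecewise-linear nonlinearity, here the absolute model f(v) = |v|, together with the non-smooth reset rule; all parameter values are hypothetical.

```python
import numpy as np

def simulate_pwl_if(I=0.1, a=0.1, b=1.0, v_th=1.0, v_r=0.2, w_jump=0.1,
                    dt=1e-3, t_end=200.0):
    """Euler simulation of a planar integrate-and-fire model with a
    piecewise-linear nonlinearity f(v) = |v| (hypothetical parameters).

    Subthreshold flow:  dv/dt = |v| - w + I,  dw/dt = a*(b*v - w).
    Non-smooth reset:   when v reaches v_th, set v -> v_r, w -> w + w_jump.
    """
    v, w, spikes = v_r, 0.0, []
    for k in range(int(t_end / dt)):
        dv = abs(v) - w + I            # piecewise-linear vector field in each half-plane
        dw = a * (b * v - w)
        v += dt * dv
        w += dt * dw
        if v >= v_th:                  # the reset mechanism: source of the non-smooth dynamics
            spikes.append(k * dt)
            v, w = v_r, w + w_jump
    return np.array(spikes)

spikes = simulate_pwl_if()
```

Because the flow is linear in each half-plane, the trajectory between resets can also be written in closed form, which is what makes the kind of bifurcation analysis described above tractable.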
Understanding spiking and bursting electrical activity through piece-wise linear systems
In recent years there has been increased interest in working with piecewise-linear caricatures of nonlinear models. Such models are often preferred over more detailed conductance-based models for their small number of parameters and low computational overhead. Moreover, their piecewise-linear (PWL) form allows the construction of action potential shapes in closed form, as well as the calculation of phase response curves (PRCs). With the inclusion of PWL adaptive currents they can also support bursting behaviour, yet remain amenable to mathematical analysis at both the single-neuron and network level. Indeed, PWL caricatures of conductance-based models such as those of Morris-Lecar and McKean have been studied for some time and are known to be mathematically tractable at the network level.
In this work we proceed to analyse PWL neuron models of conductance type. In particular we focus on PWL models of the FitzHugh-Nagumo type and describe in detail the mechanism for a canard explosion. This model is further explored at the network level in the presence of gap junction coupling.
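A minimal sketch of a PWL FitzHugh-Nagumo caricature of McKean type illustrates the relaxation oscillations such models support (the canard explosion itself requires the detailed analysis described above); the parameter values below are illustrative assumptions, not those of the study.

```python
import numpy as np

def mckean_f(v, a=0.25):
    """Three-piece McKean caricature of the cubic FitzHugh-Nagumo nullcline."""
    if v <= a / 2:
        return -v
    if v < (1 + a) / 2:
        return v - a          # unstable middle branch (slope +1)
    return 1.0 - v

def simulate_mckean(I=0.5, eps=0.05, gamma=0.5, dt=0.01, t_end=300.0):
    """Euler integration of  v' = f(v) - w + I,  w' = eps*(v - gamma*w)."""
    n = int(t_end / dt)
    v = np.empty(n)
    v[0], w = 0.0, 0.0
    for k in range(1, n):
        dv = mckean_f(v[k - 1]) - w + I
        dw = eps * (v[k - 1] - gamma * w)
        v[k] = v[k - 1] + dt * dv
        w += dt * dw
    return v

v = simulate_mckean()
# upward crossings of v = 0.5 serve as a proxy for completed oscillation cycles
crossings = int(np.sum((v[:-1] < 0.5) & (v[1:] >= 0.5)))
```

With these parameters the fixed point sits on the unstable middle branch, so the trajectory alternates slow drift along the two outer (stable) branches with fast jumps between them, producing a relaxation oscillation.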
The study then moves to a different area, in which excitable cells (pancreatic beta-cells) are used to explain insulin secretion phenomena. Here, Ca2+ signals obtained from pancreatic beta-cells of mice are extracted from image data and analysed using signal processing techniques; both synchrony and functional connectivity analyses are performed. Returning to PWL bursting models, we focus on a variant of the adaptive absolute integrate-and-fire (IF) model that can support bursting, and investigate the bursting electrical activity of such models with an emphasis on pancreatic beta-cells.
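The synchrony and functional connectivity analyses referred to above can be illustrated with a toy pipeline on synthetic traces; this is a generic sketch (pairwise Pearson correlation thresholded into a graph), not the processing actually applied to the mouse imaging data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 0.1)              # 60 s of "recording" at 10 Hz sampling
common = np.sin(2 * np.pi * 0.05 * t)      # shared slow Ca2+-like oscillation

# cells 0-4 follow the common signal (plus noise); cells 5-9 are independent noise
traces = np.vstack(
    [common + 0.3 * rng.standard_normal(t.size) for _ in range(5)] +
    [rng.standard_normal(t.size) for _ in range(5)]
)

corr = np.corrcoef(traces)                 # pairwise Pearson correlation matrix
fc = (np.abs(corr) > 0.5) & ~np.eye(10, dtype=bool)   # functional-connectivity graph
degree = fc.sum(axis=1)                    # node degree in that graph
```

Cells sharing the slow component end up strongly correlated and form a densely connected module, while the independent cells remain isolated.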
The Dynamics of Adapting Neurons
How do neurons dynamically encode and process information? Each neuron communicates in its own distinctive language, made of long silences interrupted by occasional spikes. The spikes are prompted by the pooled effect of a population of pre-synaptic neurons. To understand the operations performed by single neurons is to create a quantitative description of their dynamics. The results presented in this thesis describe the necessary elements for such a description. Almost all chapters can be unified under the theme of adaptation. Neuronal adaptation plays an important role in the transduction of a given stimulation into a spike train. The work described here shows how adaptation is brought about by every spike in a stereotypical fashion. The spike-triggered adaptation is then measured in three main types of cortical neurons. I analyze in detail how the different adaptation profiles can reproduce the diversity of firing patterns observed in real neurons. I also summarize the most recent results concerning spike-time prediction in real neurons, resulting in a well-founded single-neuron model. This model is then analyzed to understand how populations can encode time-dependent signals and how time-dependent signals can be decoded from the activity of populations. Finally, two lines of investigation in progress are described: the first expands the study of spike-triggered adaptation to longer time scales, and the second extends the quantitative neuron models to models with active dendrites.
29th Annual Computational Neuroscience Meeting: CNS*2020
Meeting abstracts
This publication was funded by OCNS. The Supplement Editors declare that they have no competing interests.
Virtual | 18-22 July 2020
Identification of Dendritic Processing in Spiking Neural Circuits
A large body of experimental evidence points to sophisticated signal processing taking place at the level of dendritic trees and dendritic branches of neurons. This evidence suggests that, in addition to inferring the connectivity between neurons, identifying analog dendritic processing in individual cells is fundamentally important to understanding the underlying principles of neural computation. In this thesis, we develop a novel theoretical framework for the identification of dendritic processing directly from spike times produced by spiking neurons. The problem setting of spiking neurons is necessary since such neurons make up the majority of electrically excitable cells in most nervous systems and it is often hard or even impossible to directly monitor the activity within dendrites. Thus, action potentials produced by neurons often constitute the only causal and observable correlate of dendritic processing. In order to remain true to the underlying biophysics of electrically excitable cells, we employ well-established mechanistic models of action potential generation to describe the nonlinear mapping of the aggregate current produced by the tree into an asynchronous sequence of spikes. Specific models of spike generation considered include conductance-based models such as Hodgkin-Huxley, Morris-Lecar, FitzHugh-Nagumo, as well as simpler models of the integrate-and-fire and threshold-and-fire type. The aggregate time-varying current driving the spike generator is taken to be produced by a dendritic stimulus processor, which is a nonlinear dynamical system capable of describing arbitrary linear and nonlinear transformations performed on one or more input stimuli. In the case of multiple stimuli, it can also describe the cross-coupling, or interaction, between various stimulus features.
The behavior of the dendritic stimulus processor is fully captured by one or more kernels, which provide a characterization of the signal processing that is consistent with the broader cable theory description of dendritic trees. We prove that the neural identification problem, stated in terms of identifying the kernels of the dendritic stimulus processor, is mathematically dual to the neural population encoding problem. Specifically, we show that the collection of spikes produced by a single neuron in multiple experimental trials can be treated as a single multidimensional spike train of a population of neurons encoding the parameters of the dendritic stimulus processor. Using the theory of sampling in reproducing kernel Hilbert spaces, we then derive precise results demonstrating that, during any experiment, the entire neural circuit is projected onto the space of input stimuli and parameters of this projection are faithfully encoded in the spike train. Spike times are shown to correspond to generalized samples, or measurements, of this projection in a system of coordinates that is not fixed but is both neuron- and stimulus-dependent. We examine the theoretical conditions under which it may be possible to reconstruct the dendritic stimulus processor from these samples and derive corresponding experimental conditions for the minimum number of spikes and stimuli that need to be used. We also provide explicit algorithms for reconstructing the kernel projection and demonstrate that, under natural conditions, this projection converges to the true kernel. The developed methodology is quite general and can be applied to a number of neural circuits. In particular, the methods discussed span all sensory modalities, including vision, audition and olfaction, in which external stimuli are typically continuous functions of time and space. 
The results can also be applied to circuits in higher brain centers that receive multi-dimensional spike trains as input stimuli instead of continuous signals. In addition, the modularity of the approach allows one to extend it to mixed-signal circuits processing both continuous and spiking stimuli, to circuits with extensive lateral connections and feedback, as well as to multisensory circuits concurrently processing multiple stimuli of different dimensions, such as audio and video. Another important extension of the approach can be used to estimate the phase response curves of a neuron. All of the theoretical results are accompanied by detailed examples demonstrating the performance of the proposed identification algorithms. We employ both synthetic and naturalistic stimuli such as natural video and audio to highlight the power of the approach. Finally, we consider the implications of our work for problems pertaining to neural encoding and decoding and discuss promising directions for future research.
Emergent Phenomena From Dynamic Network Models: Mathematical Analysis of EEG From People With IGE
In this thesis mathematical techniques and models are applied to electroencephalographic (EEG) recordings to study mechanisms of idiopathic generalised epilepsy (IGE). First, we compare network structures derived from resting-state EEG from people with IGE, their unaffected relatives, and healthy controls. Next, these static networks are combined with a dynamical model describing the activity of a cortical region as a population of phase-oscillators. We then examine the potential of the differences found in the static networks and the emergent properties of the dynamic network as individual biomarkers of IGE. The emphasis of this approach is on discerning the potential of these markers at the level of an individual subject rather than their ability to identify differences at a group level. Finally, we extend a dynamic model of seizure onset to investigate how epileptiform discharges vary over the course of the day in ambulatory EEG recordings from people with IGE. By perturbing the dynamics describing the excitability of the system, we demonstrate that the model can reproduce discharge distributions on an individual level which are shown to express a circadian tone. The emphasis of the model approach is on understanding how changes in excitability within brain regions, modulated by sleep, metabolism, endocrine axes, or anti-epileptic drugs (AEDs), can drive the emergence of epileptiform activity in large-scale brain networks.
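The dynamic network model described above, a population of phase oscillators coupled through a static network, can be sketched as a Kuramoto-type simulation; the parameter values and the all-to-all "network" below are illustrative assumptions, not those derived from EEG.

```python
import numpy as np

def kuramoto_order(A, omega, K=2.0, dt=0.01, t_end=50.0, seed=1):
    """Euler simulation of Kuramoto phase oscillators coupled on a static
    network A; returns the time-averaged order parameter R (global synchrony)."""
    rng = np.random.default_rng(seed)
    n = len(omega)
    theta = rng.uniform(0.0, 2 * np.pi, n)
    steps = int(t_end / dt)
    R_vals = []
    for s in range(steps):
        # pairwise sine coupling restricted to the network's edges
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + (K / n) * coupling)
        if s > steps // 2:                      # discard the transient
            R_vals.append(abs(np.exp(1j * theta).mean()))
    return float(np.mean(R_vals))

n = 20
omega = np.random.default_rng(2).normal(0.0, 0.1, n)   # near-identical natural frequencies
A = np.ones((n, n)) - np.eye(n)                        # fully connected control network
R = kuramoto_order(A, omega)
```

Replacing A with an adjacency matrix estimated from resting-state EEG, and comparing emergent synchrony across groups, is the spirit of the biomarker analysis described above.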
Our results demonstrate that studying EEG recordings from people with IGE can lead to new mechanistic insight into the idiopathic nature of IGE, and may eventually lead to clinical applications. We show that biomarkers derived from dynamic network models perform significantly better as classifiers than biomarkers based on static network properties. Hence, our results provide additional evidence that the interplay between the dynamics of specific brain regions, and the network topology governing the interactions between these regions, is crucial in the generation of emergent epileptiform activity. Pathological activity may emerge due to abnormalities in either of those factors, or a combination of both, and hence it is essential to develop new techniques to characterise this interplay theoretically and to validate predictions experimentally.
Complexity Science in Human Change
This reprint encompasses fourteen contributions that offer avenues towards a better understanding of complex systems in human behavior. The phenomena studied here are generally pattern formation processes that originate in social interaction and psychotherapy. Several accounts are also given of the coordination in body movements and in physiological, neuronal and linguistic processes. A common denominator of such pattern formation is that the complexity and entropy of the respective systems become reduced spontaneously, which is the hallmark of self-organization. The various methodological approaches to modelling such processes are presented in some detail, and results from the various methods are systematically compared and discussed. Among these approaches are algorithms for the quantification of synchrony by cross-correlational statistics, surrogate control procedures, recurrence mapping, and network models. This volume offers an informative and sophisticated resource for scholars of human change, as well as for students at advanced levels, from graduate to post-doctoral. The reprint is multidisciplinary in nature, binding together the fields of medicine, psychology, physics, and neuroscience.