128 research outputs found

    Noise-induced synchronization and anti-resonance in excitable systems; Implications for information processing in Parkinson's Disease and Deep Brain Stimulation

    We study the statistical physics of a surprising phenomenon arising in large networks of excitable elements in response to noise: while at low noise solutions remain in the vicinity of the resting state, and at large noise they show asynchronous activity, at intermediate noise levels the network displays orderly, perfectly synchronized periodic responses. We show that this phenomenon is fundamentally stochastic and collective in nature. Indeed, for noise and coupling within specific ranges, an asymmetry in the transition rates between a resting and an excited regime progressively builds up, leading to an increase in the fraction of excited neurons that eventually triggers a chain reaction associated with a macroscopic synchronized excursion and a collective return to rest, where the process starts afresh, thus yielding the observed periodic synchronized oscillations. We further uncover a novel anti-resonance phenomenon: noise-induced synchronized oscillations disappear when the system is driven by periodic stimulation with frequency within a specific range. In that anti-resonance regime, the system is optimal with respect to measures of information capacity. This observation provides a new hypothesis accounting for the efficiency of Deep Brain Stimulation therapies in Parkinson's disease, a neurodegenerative disease characterized by increased synchronization of brain motor circuits. We further discuss the universality of these phenomena in the class of stochastic networks of excitable elements with confining coupling, and illustrate this universality by analyzing various classical models of neuronal networks. Altogether, these results uncover universal mechanisms supporting a regularizing impact of noise in excitable systems, reveal a novel anti-resonance phenomenon in these systems, and propose a new hypothesis for the efficiency of high-frequency stimulation in Parkinson's disease.
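The noise-facilitated excitations at the heart of this mechanism can be sketched with a single stochastic FitzHugh-Nagumo unit in its excitable regime (a toy single-unit reduction, not the paper's network model; all parameter values below are illustrative choices): at zero noise the unit sits at rest, while moderate noise drives repeated spiking excursions.

```python
import math, random

def fhn_spike_count(sigma, T=500.0, dt=0.05, seed=1):
    """Count spikes of a noisy FitzHugh-Nagumo unit in its excitable regime.

    dv = (v - v^3/3 - w) dt + sigma dW,   dw = 0.08 (v + 0.7 - 0.8 w) dt
    A 'spike' is an upward crossing of v = 1.0.  Euler-Maruyama scheme;
    parameters are illustrative, not taken from the paper.
    """
    rng = random.Random(seed)
    v, w = -1.2, -0.62          # near the stable resting point
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        dv = (v - v**3 / 3.0 - w) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        dw = 0.08 * (v + 0.7 - 0.8 * w) * dt
        v, w = v + dv, w + dw
        if v > 1.0 and not above:       # upward threshold crossing
            spikes, above = spikes + 1, True
        elif v < 0.0:                   # reset spike detector
            above = False
    return spikes
```

In the collective setting described above, confining coupling correlates such noise-triggered excursions across the network, which is what produces the macroscopic synchronized oscillations.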

    Optimal Control of Weakly Forced Nonlinear Oscillators

    Optimal control of nonlinear oscillatory systems poses numerous theoretical and computational challenges. Motivated by applications in neuroscience, we develop tools and methods to synthesize optimal controls for nonlinear oscillators described by reduced-order dynamical systems. Control of neural oscillations by external stimuli has a broad range of applications, ranging from oscillatory neurocomputers to deep brain stimulation for Parkinson's disease. In this dissertation, we investigate fundamental limits on how neuron spiking behavior can be altered by the use of an external stimulus (control). Pontryagin's maximum principle is employed to derive optimal controls that lead to desired spiking times of a neuron oscillator, including minimum-power and time-optimal controls. In particular, we consider practical constraints in such optimal control designs, including a bound on the control amplitude and the charge-balance constraint. The latter is important in neural stimulation to avoid the undesirable effects caused by accumulation of electric charge due to external stimuli. Furthermore, we extend the results from controlling a single neuron to a neuron ensemble. Specifically, we derive and synthesize time-optimal controls that elicit simultaneous spikes from two neuron oscillators. Robust computational methods based on homotopy perturbation techniques and pseudospectral approximations are developed and implemented to construct optimal controls for spiking and synchronizing a neuron ensemble, for which analytical solutions are intractable. We finally validate the optimal control strategies, derived using phase-reduced models, by applying them to the corresponding original full state-space models; this validation is largely missing in the literature. Moreover, the derived optimal controls have been experimentally applied to control the synchronization of electrochemical oscillators. The methodology developed in this dissertation is not limited to the control of neural oscillators and can be applied to a broad class of nonlinear oscillatory systems with smooth dynamics.
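As a minimal sketch of the phase-reduction viewpoint used in this line of work: for a phase model dθ/dt = ω + Z(θ)u, the unconstrained minimum-power control from the maximum principle is proportional to the PRC, u ∝ Z(θ). With an assumed sinusoidal PRC Z(θ) = sin θ (an illustrative choice, not a result from the dissertation), such an input advances the spike time:

```python
import math

def spike_time(omega=1.0, gain=0.0, dt=1e-4):
    """Time for a phase oscillator dtheta/dt = omega + Z(theta)*u to reach
    2*pi (a 'spike'), with the PRC-proportional input u = gain * Z(theta),
    the form of the unconstrained minimum-power control.  Z = sin(theta)
    is an illustrative PRC, not one derived in the dissertation."""
    theta, t = 0.0, 0.0
    while theta < 2.0 * math.pi:
        Z = math.sin(theta)
        theta += (omega + Z * gain * Z) * dt   # dtheta = (omega + Z*u) dt
        t += dt
    return t
```

Since u = gain*Z makes the drift ω + gain*Z² ≥ ω, the controlled spike always arrives earlier than the natural period 2π/ω; the charge-balance and amplitude constraints studied in the dissertation modify this simple picture.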

    Phase response function for oscillators with strong forcing or coupling

    The phase response curve (PRC) is an extremely useful tool for studying the response of oscillatory systems, e.g. neurons, to sparse or weak stimulation. Here we develop a framework for studying the response to a series of pulses which are frequent and/or strong, so that the standard PRC fails. We show that in this case the phase shift caused by each pulse depends on the history of several previous pulses. We call the corresponding function which measures this shift the phase response function (PRF). With the introduction of the PRF, a variety of oscillatory systems with pulse interaction, such as neural systems, can be reduced to phase systems. The main assumption of the classical PRC model, i.e. that the effect of a stimulus vanishes before the next one arrives, is no longer a restriction in our approach. However, as a result of the phase reduction, the system acquires memory, which is not just a technical nuisance but an intrinsic property relevant to strong stimulation. We illustrate the PRF approach by applying it to various systems, such as the Morris-Lecar and Hodgkin-Huxley neuron models, among others. We show that the PRF allows the dynamics of forced and coupled oscillators to be predicted even when the PRC fails.
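The standard PRC that this work generalizes can be measured numerically by kicking a limit-cycle oscillator at a known phase and recording the asymptotic phase shift. The sketch below does this for a Stuart-Landau oscillator (an illustrative system, not one from the paper), whose weak-kick PRC for a horizontal kick is approximately -eps*sin(phi):

```python
import math

def measure_prc(phi, eps=0.1, dt=1e-3, T=30.0):
    """Asymptotic phase shift of the Stuart-Landau oscillator
    dx = ((1-r^2)x - y) dt,  dy = ((1-r^2)y + x) dt
    after a kick x += eps applied at phase phi, measured by comparing a
    kicked trajectory against an unkicked reference after both have
    relaxed back to the limit cycle r = 1."""
    def run(kick):
        x, y = math.cos(phi), math.sin(phi)   # start on the cycle at phase phi
        if kick:
            x += eps
        for _ in range(int(T / dt)):
            r2 = x * x + y * y
            x, y = x + ((1 - r2) * x - y) * dt, y + ((1 - r2) * y + x) * dt
        return math.atan2(y, x)
    d = run(True) - run(False)
    return (d + math.pi) % (2 * math.pi) - math.pi   # wrap to (-pi, pi]
```

The PRF of the paper replaces this single-pulse measurement by one that also conditions on the timing of several preceding pulses, which is exactly what is needed when the oscillator has not relaxed back to the cycle before the next pulse arrives.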

    Recurrence-Based Synchronization Analysis of Weakly Coupled Bursting Neurons under External ELF Fields

    We investigate the response characteristics of a two-dimensional neuron model exposed to an externally applied extremely low frequency (ELF) sinusoidal electric field, and the synchronization of neurons weakly coupled by gap junctions. We find, by numerical simulations, that neurons can exhibit different spiking patterns, which are well observed in the structure of the recurrence plot (RP). We further study the synchronization between weakly coupled neurons in chaotic regimes under the influence of a weak ELF electric field. In general, detecting the phases of chaotic spiky signals is not easy using standard methods. Recurrence analysis provides a reliable tool for defining phases even for noncoherent regimes or spiky signals. Recurrence-based synchronization analysis reveals that, even in the range of weak coupling, phase synchronization of the coupled neurons occurs and, by adding an ELF electric field, this synchronization increases depending on the amplitude of the externally applied field. We further suggest a novel measure for RP-based phase synchronization analysis, which better takes into account the probabilities of recurrences.
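The recurrence plot underlying this analysis is simple to construct: threshold the pairwise distances between points of the time series. A minimal sketch (1-D signal, no delay embedding; the paper's analysis is more elaborate):

```python
import math

def recurrence_plot(signal, eps):
    """Binary recurrence matrix: R[i][j] = 1 iff |x_i - x_j| < eps."""
    n = len(signal)
    return [[1 if abs(signal[i] - signal[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

# A periodic test signal with one period = 100 samples: recurrences then
# appear as diagonal lines offset by multiples of 100 samples.
sig = [math.sin(2 * math.pi * i / 100) for i in range(300)]
R = recurrence_plot(sig, eps=0.05)
```

Note that even for this sine wave, mirror-symmetric points of the cycle also recur in the 1-D plot, which illustrates why defining phases from recurrences of spiky or noncoherent signals requires the more careful treatment the paper develops.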

    Loss of synchrony in an inhibitory network of type-I oscillators

    Synchronization of excitable cells coupled by reciprocal inhibition is a topic of significant interest due to the important role that inhibitory synaptic interaction plays in the generation and regulation of coherent rhythmic activity in a variety of neural systems. While recent work revealed the synchronizing influence of inhibitory coupling on the dynamics of many networks, it is known that strong coupling can destabilize phase-locked firing. Here we examine the loss of synchrony caused by an increase in inhibitory coupling in networks of type-I Morris-Lecar model oscillators, which is characterized by a period-doubling cascade and leads to mode-locked states with alternation in the firing order of the two cells, as reported recently by Maran and Canavier (2007) for a network of Wang-Buzsáki model neurons. Although alternating-order firing has been previously reported as a near-synchronous state, we show that the stable phase difference between the spikes of the two Morris-Lecar cells can constitute as much as 70% of the unperturbed oscillation period. Further, we examine the generality of this phenomenon for a class of type-I oscillators that are close to their excitation thresholds, and provide an intuitive geometric description of such leap-frog dynamics. In the Morris-Lecar network, the alternation in firing order arises under the condition of fast closing of K+ channels at hyperpolarized potentials, which leads to slow dynamics of the membrane potential upon synaptic inhibition, allowing the presynaptic cell to advance past the postsynaptic cell in each cycle of the oscillation. Further, we show that a non-zero synaptic decay time is crucial for the existence of leap-frog firing in networks of phase oscillators. However, we demonstrate that leap-frog spiking can also be obtained in pulse-coupled inhibitory networks of one-dimensional oscillators with a multi-branched phase domain, for instance in a network of quadratic integrate-and-fire model cells. Also, we show that the entire bifurcation structure of the network can be explained by a simple scaling of the amplitude of the STRC (spike-time response curve), using a simplified quadratic STRC as an example, and derive general conditions on the shape of the STRC function that lead to leap-frog firing. Further, for the case of a homogeneous network, we establish quantitative conditions on the phase resetting properties of each cell necessary for stable alternating-order spiking, complementing the analysis by Goel and Ermentrout (2002) of the order-preserving phase transition map. We show that extending the STRC to negative values of phase is necessary to predict the response of a model cell to several close non-weak perturbations. This allows us, for instance, to accurately describe the dynamics of a non-weakly coupled network of three model cells. Finally, the phase return map is also extended to the heterogeneous network, and is used to analyze both the order-alternating firing and the order-preserving non-zero phase-locked state in this case.
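The STRC at the center of this analysis can be illustrated with a single quadratic integrate-and-fire cell: an inhibitory pulse delays the next spike, and the size of that delay as a function of pulse timing is the STRC. A hedged sketch (illustrative parameters and pulse shape, not those of the thesis):

```python
import math

def qif_spike_time(pulse_time=None, g=2.0, I=1.0, dt=1e-4,
                   v0=-10.0, v_th=10.0):
    """First spike time of a quadratic integrate-and-fire cell
    dv/dt = I + v^2, starting from the reset value v0 and spiking at
    v_th, with an optional instantaneous inhibitory pulse v -= g applied
    at pulse_time.  Forward-Euler integration; the exact unperturbed
    period is atan(v_th) - atan(v0) for I = 1."""
    v, t = v0, 0.0
    kicked = pulse_time is None
    while v < v_th:
        v += (I + v * v) * dt
        t += dt
        if not kicked and t >= pulse_time:
            v -= g            # inhibitory pulse hyperpolarizes the cell
            kicked = True
    return t
```

The delay t_inh - t_free as a function of pulse_time traces out the STRC; in the thesis, scaling the amplitude of such a curve is shown to organize the whole bifurcation structure of the two-cell network, including the leap-frog states.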

    Data assimilation for conductance-based neuronal models

    This dissertation illustrates the use of data assimilation algorithms to estimate unobserved variables and unknown parameters of conductance-based neuronal models. Modern data assimilation (DA) techniques are widely used in climate science and weather prediction, but have only recently begun to be applied in neuroscience. The two main classes of DA techniques are sequential methods and variational methods. Throughout this work, twin experiments, where the data is synthetically generated from output of the model, are used to validate the use of these techniques for conductance-based models observing only the voltage trace. In Chapter 1, these techniques are described in detail and the estimation problem for conductance-based neuron models is derived. In Chapter 2, these techniques are applied to a minimal conductance-based model, the Morris-Lecar model. This model exhibits qualitatively different types of neuronal excitability due to changes in the underlying bifurcation structure, and it is shown that the DA methods can identify parameter sets that produce the correct bifurcation structure even with initial parameter guesses that correspond to a different excitability regime. This demonstrates the ability of DA techniques to perform nonlinear state and parameter estimation, and introduces the geometric structure of inferred models as a novel qualitative measure of estimation success. Chapter 3 extends the ideas of variational data assimilation to include a control term that relaxes the problem further, a process referred to as nudging in the geoscience community. The nudged 4D-Var is applied to twin experiments from a more complex, Hodgkin-Huxley-type two-compartment model for various time-sampling strategies. This controlled 4D-Var with nonuniform time-sampling is then applied to voltage traces from current-clamp recordings of suprachiasmatic nucleus neurons in diurnal rodents to improve our understanding of the driving forces in circadian (~24 h) rhythms of electrical activity. In Chapter 4, the complementary strengths of 4D-Var and the unscented Kalman filter (UKF) are leveraged to create a two-stage algorithm that uses 4D-Var to estimate fast-timescale parameters and the UKF for slow-timescale parameters. This coupled approach is applied to data from a conductance-based model of neuronal bursting with distinct slow and fast timescales in its dynamics. In Chapter 5, the ideas of identifiability and sensitivity are introduced. The Morris-Lecar model and a subset of its parameters are shown to be identifiable through the use of numerical techniques. Chapter 6 frames the selection of stimulus waveforms to inject into neurons during patch-clamp recordings as an optimal experimental design problem. Results on the optimal stimulus waveforms for improving the identifiability of parameters of a Hodgkin-Huxley-type model are presented. Chapter 7 shows a preliminary application of data assimilation to voltage-clamp, rather than current-clamp, data and expands on voltage-clamp principles to formulate a reduced assimilation problem driven by the observed voltage. Concluding thoughts are given in Chapter 8.
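The nudging idea can be sketched on a toy scalar system: add a control term k(x - y) driving a model copy toward the observations, and adapt the unknown parameter so that a Lyapunov function of the joint error decreases. This is a simplified adaptive-observer analogue of the nudged 4D-Var above, not the dissertation's actual algorithm, and every parameter value below is an illustrative assumption:

```python
import math

def nudged_estimate(a_true=2.0, k=5.0, gamma=5.0, dt=1e-3, T=200.0):
    """Estimate the decay rate a of dx/dt = -a*x + sin(t) from observing x.

    The model copy y is nudged toward the data with gain k, and a_hat is
    adapted by the gradient-like law d(a_hat)/dt = -gamma * y * (x - y),
    under which V = e^2/2 + (a_hat - a)^2/(2*gamma) is nonincreasing."""
    x, y, a_hat, t = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = x - y
        x += (-a_true * x + math.sin(t)) * dt          # "truth" / data
        y += (-a_hat * y + math.sin(t) + k * e) * dt   # nudged model copy
        a_hat += -gamma * y * e * dt                   # adaptation law
        t += dt
    return a_hat
```

The sinusoidal forcing provides the persistent excitation needed for the parameter, and not just the state, to converge; in the thesis the analogous role is played by the injected stimulus waveform, which Chapter 6 optimizes for identifiability.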

    Synchronization in dynamic neural networks

    This thesis is concerned with the function and implementation of synchronization in networks of oscillators. Evidence for the existence of synchronization in cortex is reviewed and a suitable architecture for exhibiting synchronization is defined. A number of factors which affect the performance of synchronization in networks of laterally coupled oscillators are investigated. It is shown that altering the strength of the lateral connections between nodes and altering the connective scope of a network can be used to improve synchronization performance. It is also shown that complete connective scope is not required for global synchrony to occur. The effects of noise on synchronization performance are also investigated, and it is shown that where an oscillator network is able to synchronize effectively, it will also be robust to a moderate level of noise in the lateral connections. Where a particular oscillator model shows poor synchronization performance, it is shown that noise in the lateral connections is capable of improving synchronization performance. A number of applications of synchronizing oscillator networks are investigated. The use of synchronized oscillations to encode global binding information is investigated and the relationship between the form of grouping obtained and connective scope is discussed. The potential for using learning in synchronizing oscillator networks is illustrated, and an investigation is made into the possibility of maintaining multiple phases in a network of synchronizing oscillators. It is concluded from these investigations that it is difficult to maintain multiple phases in the network architecture used throughout this thesis, and a modified architecture capable of producing the required behaviour is demonstrated.
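The dependence of synchrony on lateral coupling strength can be illustrated with a Kuramoto model of globally coupled phase oscillators (a generic stand-in with assumed parameter values; the thesis uses its own oscillator models and connective scopes):

```python
import math, random

def order_parameter(K, N=50, T=50.0, dt=0.05, seed=7):
    """Final Kuramoto order parameter r for N globally coupled phase
    oscillators d(theta_i)/dt = omega_i + K*r*sin(psi - theta_i), where
    r*exp(i*psi) is the population mean of exp(i*theta).  r near 1 means
    synchrony; for uncoupled drifting phases r stays of order 1/sqrt(N)."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 0.5) for _ in range(N)]           # natural freqs
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    for _ in range(int(T / dt)):
        cx = sum(math.cos(t) for t in theta) / N
        sx = sum(math.sin(t) for t in theta) / N
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)       # mean field
        theta = [t + (w + K * r * math.sin(psi - t)) * dt
                 for t, w in zip(theta, omega)]
    cx = sum(math.cos(t) for t in theta) / N
    sx = sum(math.sin(t) for t in theta) / N
    return math.hypot(cx, sx)
```

Restricting the coupling to a local neighborhood instead of the global mean field is the analogue of reducing connective scope, and as the thesis reports, substantial global synchrony can survive well short of complete scope.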

    Dynamics and Synchronization in Neuronal Models

    This thesis is mainly devoted to the modeling and simulation of neuronal systems. Among other aspects, it investigates the role of noise acting on neurons. The phenomenon of stochastic resonance is characterized theoretically and reported experimentally in a set of neurons of the motor system. The role played by heterogeneity in a set of coupled neurons is also studied, showing that heterogeneity in some neuronal parameters can improve the response of the system to an external periodic modulation. We also study the effect of topology and connection delays in a neuronal network, exploring how the topological properties and conduction delays of different classes of networks affect the ability of neurons to establish a well-defined temporal relationship through their action potentials. In particular, the concept of consistency is introduced and studied in a neuronal network when neuronal plasticity is taken into account in the network's connections.