
    Regulation of Irregular Neuronal Firing by Autaptic Transmission

    The importance of self-feedback autaptic transmission in modulating spike-time irregularity is still poorly understood. Using a biophysical model that incorporates autaptic coupling, we show here that self-innervation of neurons participates in the modulation of irregular neuronal firing, primarily by regulating the occurrence frequency of burst firing. In particular, we find that both excitatory and electrical autapses increase the occurrence of burst firing, thus reducing neuronal firing regularity. In contrast, inhibitory autapses suppress burst firing and therefore tend to improve the regularity of neuronal firing. Importantly, we show that these findings are independent of the firing properties of individual neurons, and as such can be observed for neurons operating in different modes. Our results provide an insightful mechanistic understanding of how different types of autapses shape irregular firing at the single-neuron level, and they highlight the functional importance of autaptic self-innervation in taming and modulating neurodynamics. Comment: 27 pages, 8 figures
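    The direction of the effect described above can be illustrated with a deliberately simplified sketch (not the paper's biophysical model): a leaky integrate-and-fire neuron whose own spikes feed back through a delayed autaptic current. All parameter values and names here are illustrative.

```python
import numpy as np

def simulate_lif_autapse(w_auto, delay_ms=5.0, T_ms=2000.0, dt=0.1,
                         tau_m=20.0, tau_s=5.0, v_th=1.0, I_ext=1.05):
    """Leaky integrate-and-fire neuron with a delayed autaptic current.

    w_auto > 0 mimics an excitatory autapse, w_auto < 0 an inhibitory one.
    Returns the spike times in ms."""
    n_steps = int(T_ms / dt)
    d = int(delay_ms / dt)
    deliver = np.zeros(n_steps + d + 1)   # autaptic kicks scheduled per step
    v, s, spikes = 0.0, 0.0, []
    for t in range(n_steps):
        s += deliver[t] - s * (dt / tau_s)    # synaptic current: kick, then decay
        v += (-v + I_ext + s) * (dt / tau_m)  # leaky membrane integration
        if v >= v_th:                         # threshold crossing = spike
            spikes.append(t * dt)
            v = 0.0                           # reset
            deliver[t + d] += w_auto          # the spike feeds back onto the neuron
    return spikes
```

    Comparing spike counts for positive, negative, and zero `w_auto` reproduces the qualitative direction of the result (excitatory self-feedback speeds firing, inhibitory self-feedback slows it), though burst firing itself requires richer dynamics than this sketch contains.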

    Computational study of resting state network dynamics

    The aim of this thesis is to demonstrate, through a simulation with The Virtual Brain software, the most important properties of brain dynamics during the resting state, i.e., when one is not engaged in any specific task and is not exposed to any particular stimulus. We begin by explaining what the resting state is through a brief historical review of its discovery, then survey some experimental methods used in the analysis of brain activity, and highlight the difference between structural and functional connectivity. Next, we briefly summarize the concepts of dynamical systems, a theory indispensable for understanding a complex system such as the brain. In the following chapter, through a 'bottom-up' approach, the main structures of the nervous system are described from a biological perspective, from the neuron up to the cerebral cortex. All of this is also explained from the dynamical-systems point of view, presenting the pioneering Hodgkin-Huxley model and then the concept of population dynamics. After this preliminary part, we turn to the details of the simulation. First, more information is given about The Virtual Brain software, the resting-state network model used in the simulation is defined, and the 'connectome' employed is described. The results of the analysis carried out on the resulting data are then presented, showing how criticality and noise play a key role in the emergence of this background activity of the brain. These results are then compared with the most important recent research in this field, which confirms the findings of our work. Finally, we briefly outline the implications that a full understanding of the resting-state phenomenon and the possibility of virtualizing brain activity would have for medicine and clinical practice

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA

    Computational Study of the Mechanisms Underlying Oscillation in Neuronal Locomotor Circuits

    In this thesis we model two very different movement-related neuronal circuits, both of which produce oscillatory patterns of activity. In one case we study oscillatory activity in the basal ganglia under both normal and Parkinsonian conditions. First, we used a detailed Hodgkin-Huxley type spiking model to investigate the activity patterns that arise when oscillatory cortical input is transmitted to the globus pallidus via the subthalamic nucleus. Our model reproduced a result from rodent studies which shows that two anti-phase oscillatory groups of pallidal neurons appear under Parkinsonian conditions. Second, we used a population model of the basal ganglia to study whether oscillations could be locally generated. The basal ganglia are thought to be organised into multiple parallel channels. In our model, isolated channels could not generate oscillations, but if the lateral inhibition between channels is sufficiently strong then the network can act as a rhythm-generating "pacemaker" circuit. This was particularly true when we used a set of connection strength parameters that represent the basal ganglia under Parkinsonian conditions. Since much remains unknown about the anatomy and electrophysiology of the basal ganglia, we also studied oscillatory activity in another, much simpler, movement-related neuronal system: the spinal cord of the Xenopus tadpole. We built a computational model of the spinal cord containing approximately 1,500 biologically realistic Hodgkin-Huxley neurons, with synaptic connectivity derived from a computational model of axon growth. The model produced physiological swimming behaviour and was used to investigate which aspects of axon growth and neuron dynamics are behaviourally important. We found that the oscillatory attractor associated with swimming was remarkably stable, which suggests that, surprisingly, many features of axonal growth and synapse formation are not necessary for swimming to emerge.
We also studied how the same spinal cord network can generate a different oscillatory pattern in which neurons on both sides of the body fire synchronously. Our results here suggest that under normal conditions the synchronous state is unstable or weakly stable, but that even small increases in spike transmission delays act to stabilise it. Finally, we found that although the basal ganglia and the tadpole spinal cord are very different systems, the underlying mechanism by which they can produce oscillations may be remarkably similar. Insights from the tadpole model allow us to predict how the basal ganglia model may be capable of producing multiple patterns of oscillatory activity
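    As a concrete reference point, here is a minimal single-compartment Hodgkin-Huxley neuron of the kind such spiking models are built from, using the standard squid-axon parameters rather than anything specific to the basal ganglia or tadpole models of the thesis:

```python
import numpy as np

# Standard squid-axon Hodgkin-Huxley parameters
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2 and mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387          # reversal potentials, mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate_hh(I_ext, T_ms=100.0, dt=0.01):
    """Forward-Euler integration of one HH neuron; returns spike times (ms)."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32        # near-resting initial state
    spikes, above = [], False
    for step in range(int(T_ms / dt)):
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        if V > 0.0 and not above:              # upward zero crossing = spike
            spikes.append(step * dt)
        above = V > 0.0
    return spikes
```

    With a suprathreshold current (e.g. 10 uA/cm^2) the model fires repetitively; with zero input it sits at rest. Network models of the sort described above couple many such units through synaptic currents.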

    Dynamics and precursor signs for phase transitions in neural systems

    This thesis investigates neural state transitions associated with sleep, seizure and anaesthesia. The aim is to address the question: How does a brain traverse the critical threshold between distinct cortical states, both healthy and pathological? Specifically we are interested in sub-threshold neural behaviour immediately prior to state transition. We use theoretical neural modelling (single spiking neurons, a network of these, and a mean-field continuum limit) and in vitro experiments to address this question. Dynamically realistic equations of motion for thalamic relay neurons, reticular nucleus neurons, cortical pyramidal neurons and cortical interneurons in different vigilance states are developed, based on the Izhikevich spiking neuron model. A network of cortical neurons is assembled to examine the behaviour of the gamma-producing cortical network and its transition to lower frequencies under the effect of anaesthesia. Then a three-neuron model of the thalamocortical loop underlying sleep spindles is presented. Numerical simulations of these networks confirm spiking consistent with reported in vivo measurement results, and provide supporting evidence for precursor indicators of imminent phase transition associated with the occurrence of individual spindles. To complement the spiking neuron networks, we study the Wilson–Cowan neural mass equations describing homogeneous cortical columns and a 1D spatial cluster of such columns. The abstract representation of cortical tissue by a pair of coupled integro-differential equations permits thorough linear stability, phase plane and bifurcation analyses. This model shows a rich set of spatial and temporal bifurcations marking the boundary to state transitions: saddle-node, Hopf, Turing, and mixed Hopf–Turing. Close to state transition, white-noise-induced subthreshold fluctuations show clear signs of critical slowing down with prolongation and strengthening of autocorrelations, both in time and space, irrespective of bifurcation type.
    Attempts at in vitro capture of these predicted leading indicators form the last part of the thesis. We recorded local field potentials (LFPs) from cortical and hippocampal slices of mouse brain. State transition is marked by the emergence and cessation of spontaneous seizure-like events (SLEs) induced by bathing the slices in an artificial cerebrospinal fluid containing no magnesium ions. Phase-plane analysis of the LFP time-series suggests that distinct bifurcation classes can be responsible for state change to seizure. Increased variance and growth of spectral power at low frequencies (f < 15 Hz) were observed in LFP recordings prior to initiation of some SLEs. In addition we demonstrated prolongation of electrically evoked potentials in cortical tissue while driving the slice toward a seizing regime. The results offer the possibility of capturing leading temporal indicators prior to seizure generation, with potential consequences for understanding epileptogenesis. Guided by dynamical systems theory, this thesis captures evidence for precursor signs of phase transitions in neural systems using mathematical and computer-based modelling as well as in vitro experiments
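    The space-free Wilson–Cowan equations mentioned above are compact enough to sketch directly. The toy implementation below integrates one excitatory-inhibitory pair to a fixed point and evaluates the Jacobian there, which is the first step of the linear stability analysis; all parameter values are illustrative, not those used in the thesis.

```python
import numpy as np

def S(x, a=1.0, theta=4.0):
    """Sigmoid firing-rate function."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan(E0=0.1, I0=0.1, P=2.0, Q=1.0, dt=0.01, T=200.0,
                 w_ee=2.0, w_ei=2.5, w_ie=2.0, w_ii=1.0,
                 tau_e=1.0, tau_i=2.0):
    """Integrate the (space-free) Wilson-Cowan equations by forward Euler."""
    E, I = E0, I0
    for _ in range(int(T / dt)):
        dE = (-E + S(w_ee * E - w_ei * I + P)) / tau_e
        dI = (-I + S(w_ie * E - w_ii * I + Q)) / tau_i
        E += dt * dE
        I += dt * dI
    return E, I

def jacobian(E, I, P=2.0, Q=1.0, w_ee=2.0, w_ei=2.5, w_ie=2.0, w_ii=1.0,
             tau_e=1.0, tau_i=2.0, a=1.0):
    """Jacobian at (E, I): eigenvalues with negative real part => stable.

    A Hopf bifurcation would show up as a conjugate pair crossing the
    imaginary axis as a parameter (e.g. P) is varied."""
    xe = w_ee * E - w_ei * I + P
    xi = w_ie * E - w_ii * I + Q
    dSe = a * S(xe) * (1.0 - S(xe))   # sigmoid derivative at the fixed point
    dSi = a * S(xi) * (1.0 - S(xi))
    return np.array([[(-1 + w_ee * dSe) / tau_e, -w_ei * dSe / tau_e],
                     [w_ie * dSi / tau_i, (-1 - w_ii * dSi) / tau_i]])
```

    With these (deliberately weak) couplings the fixed point is stable; sweeping the drive and coupling parameters and watching the Jacobian's eigenvalues is precisely how the bifurcation boundaries described in the abstract are located.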

    Locomotor patterns and persistent activity in self-organizing neural models

    The thesis investigates principles of self-organization that may account for the observed structure and behaviour of neural networks that generate locomotor behaviour and complex spatiotemporal patterns such as spiral waves, metastable states and persistent activity. This relates to the general neuroscience problem of finding the correspondence between the structure of neural networks and their function. This question is both extremely important and difficult to answer because the structure of a neural network defines a specific type of neural dynamics, which underpins some function of the neural system; that function, in turn, influences the structure and parameters of the network, including connection strengths. This loop of influences results in stable and reliable neural dynamics that realise a neural function. In order to study the relationship between neural network structure and spatiotemporal dynamics, several computational models of plastic neural networks with different architectures are developed. Plasticity includes both modification of synaptic connection strengths and adaptation of neuronal thresholds. This approach is based on a consideration of general modelling concepts and focuses on a relatively simple neural network which is still complex enough to generate a broad spectrum of spatiotemporal patterns of neural activity such as spiral waves, persistent activity, metastability and phase transitions. Having considered the dynamics of networks with fixed architectures, we go on to consider the question of how a neural circuit which realises some particular function establishes its architecture of connections. The approach adopted here is to model the developmental process which results in a particular neural network structure which is relevant to some particular functionality; specifically we develop a biologically realistic model of the tadpole spinal cord.
This model describes the self-organized process through which the anatomical structure of the full spinal cord of the tadpole develops. Electrophysiological modelling shows that this architecture can generate electrical activity corresponding to the experimentally observed swimming behaviour
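    The two plasticity mechanisms named in the abstract, Hebbian modification of connection strengths and adaptation of neuronal thresholds, can be combined in a toy rate network like the one below. This is an illustrative sketch, not the thesis's model; all names and values are assumptions.

```python
import numpy as np

def self_organize(n=20, steps=2000, eta_w=0.01, eta_th=0.05,
                  target=0.2, seed=0):
    """Toy self-organizing rate network: Hebbian weight growth balanced by
    homeostatic threshold adaptation steering each unit toward a target
    mean activity (all values illustrative)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.1, (n, n))
    np.fill_diagonal(W, 0.0)              # no self-connections
    theta = np.zeros(n)                   # adaptive firing thresholds
    x = rng.random(n)
    activity = []
    for _ in range(steps):
        drive = rng.random(n)             # random external input
        x = 1.0 / (1.0 + np.exp(-(W @ x + drive - theta)))
        W += eta_w * np.outer(x, x)       # Hebbian: co-active pairs strengthen
        np.fill_diagonal(W, 0.0)
        W /= max(1.0, np.abs(W).sum(axis=1).max())  # keep weights bounded
        theta += eta_th * (x - target)    # threshold rises when a unit is too active
        activity.append(x.mean())
    return W, theta, np.array(activity)
```

    The threshold rule acts as an integral controller: runaway Hebbian excitation raises activity, which raises thresholds until mean activity settles near the target, a simple instance of the stabilizing "loop of influences" the abstract describes.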

    Modeling and Simulation Methods of Neuronal Populations and Neuronal Networks

    This thesis presents numerical methods and modeling related to simulating neurons. Two approaches to the simulation are taken: a population density approach and a neuronal network approach. The first two chapters present the results from the population density approach and its applications. The population density approach assumes that each neuron can be identified by its states (e.g., membrane potential, conductance of ion channels). Additionally, it assumes the population is large enough that it can be approximated by a continuous population density distribution in the state space. By updating this population density, we can learn the macroscopic behavior of the population, such as the average firing rate and average membrane potential. The population density approach avoids the need to simulate every single neuron when the population is large. While many previous population-density methods, such as the mean-field method, make further simplifications to the models, we developed the Asymmetric Particle Population Density (APPD) method to simulate the population density directly without the need to simplify the dynamics of the model. This enables us to simulate the macroscopic properties of coupled neuronal populations as accurately as a direct simulation. The APPD method tracks multiple asymmetric Gaussians as they advance in time due to a convection-diffusion equation, and our main theoretical innovation is deriving this update algorithm by tracking a level set. Tracking a single Gaussian is also applicable to Bayesian filtering for continuous-discrete systems. By adding a measurement-update step, we reformulated our tracking method as the Level Set Kalman Filter (LSKF) method and found that it offers greater accuracy than state-of-the-art methods. Chapter IV presents the methods for direct simulation of a neuronal network. For this approach, the aim is to build a high-performance and expandable framework that can be used to simulate various neuronal networks.
    The implementation is done on GPUs using CUDA, and this framework enables simulation of millions of neurons on a high-performance desktop computer. Additionally, real-time visualization of neuron activities is implemented. Paired with the simulation framework, a detailed mouse cortex model is generated, with experiment-determined morphology from the CUBIC-Atlas and neuron connectome information from the Allen Brain Atlas. PhD thesis, Applied and Interdisciplinary Mathematics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169840/1/nywang_1.pd
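    The filtering idea behind the LSKF can be illustrated with the ordinary linear Kalman filter it generalizes: the predict step below is exactly the advancement of a Gaussian density under one linear drift-diffusion step, and the update step conditions that Gaussian on a noisy measurement. (Per the abstract, the LSKF itself derives these updates by tracking a level set for nonlinear continuous-discrete systems; this sketch shows only the standard linear case.)

```python
import numpy as np

def kalman_step(m, P, A, Q, H, R, y):
    """One predict/update cycle of a standard Kalman filter.

    Predict: propagate the Gaussian (m, P) through linear dynamics A with
    process-noise covariance Q -- advancing a Gaussian density in time.
    Update: condition on the measurement y = H x + noise with covariance R."""
    # Predict
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new
```

    Repeatedly feeding measurements of a static state drives the mean to the true value while the covariance contracts, the basic behaviour any measurement-update scheme, including the LSKF, must reproduce.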

    Capture of fixation by rotational flow; a deterministic hypothesis regarding scaling and stochasticity in fixational eye movements.

    Visual scan paths exhibit complex, stochastic dynamics. Even during visual fixation, the eye is in constant motion. Fixational drift and tremor are thought to reflect fluctuations in the persistent neural activity of neural integrators in the oculomotor brainstem, which integrate sequences of transient saccadic velocity signals into a short-term memory of eye position. Despite intensive research and much progress, the precise mechanisms by which oculomotor posture is maintained remain elusive. Drift exhibits a stochastic statistical profile which has been modeled using random walk formalisms. Tremor is widely dismissed as noise. Here we focus on the dynamical profile of fixational tremor, and argue that tremor may be a signal which usefully reflects the workings of oculomotor postural control. We identify signatures reminiscent of a certain flavor of transient neurodynamics: toric traveling waves that rotate around a central phase singularity. Spiral waves play an organizational role in dynamical systems at many scales throughout nature, though their potential functional role in brain activity remains a matter of educated speculation. Spiral waves have a repertoire of functionally interesting dynamical properties, including persistence, which suggest that they could in theory contribute to persistent neural activity in the oculomotor postural control system. Whilst speculative, the singularity hypothesis of oculomotor postural control implies testable predictions, and could provide the beginnings of an integrated dynamical framework for eye movements across scales
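    The random-walk formalism mentioned for fixational drift is simple to sketch: a 2D Gaussian random walk yields a mean squared displacement (MSD) that grows linearly in time, i.e. a scaling exponent near 1, which is the baseline against which anomalous (sub- or super-diffusive) drift statistics are judged. The values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def drift_msd(n_trials=2000, n_steps=100, step_sd=1.0):
    """Mean squared displacement of a 2D Gaussian random walk, the
    simplest formalism used to model fixational drift."""
    steps = rng.normal(0.0, step_sd, (n_trials, n_steps, 2))
    paths = steps.cumsum(axis=1)                  # eye position over time
    return (paths ** 2).sum(axis=2).mean(axis=0)  # MSD(t) averaged over trials

msd = drift_msd()
# For normal diffusion MSD(t) ~ t, so the log-log slope is ~1.
exponent = np.log(msd[80] / msd[8]) / np.log(81 / 9)
```

    Empirical fixational drift often departs from exponent 1 over some timescales, which is exactly what makes the random-walk fit, and its deterministic alternatives such as the one proposed here, an open question.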