24 research outputs found

    Synchronization in dynamical networks: synchronizability, neural network models and EEG analysis

    Complex dynamical networks are ubiquitous in many fields of science, from engineering to biology, physics, and sociology. Collective behavior, and in particular synchronization, is one of the most interesting consequences of the interaction of dynamical systems over complex networks. In this thesis we study several aspects of synchronization in dynamical networks. The first section of the study discusses the problem of synchronizability in dynamical networks. Although synchronizability, i.e. the ease with which interacting dynamical systems can synchronize their activity, has been used frequently in research studies, it has no single accepted interpretation. Here we give several possible interpretations of synchronizability and investigate to what extent they coincide. We show that in unweighted dynamical networks the different interpretations of synchronizability do not, in general, agree. However, in networks with strong synchronization properties, such as networks with properly assigned link weights or with well-chosen link rewirings, the different interpretations of synchronizability go hand in hand. We also show that networks with nonidentical diffusive connections whose weights are assigned using the connection-graph-stability method are more synchronizable than networks with identical diffusive couplings. Furthermore, we give an algorithm based on node and edge betweenness centrality measures to enhance the synchronizability of dynamical networks. The algorithm is tested on artificially constructed dynamical networks as well as on real-world networks from different disciplines. In the second section we study the synchronization phenomenon in networks of Hindmarsh-Rose neurons. First, the complete synchronization of Hindmarsh-Rose neurons over Newman-Watts networks is investigated.
By numerically solving the differential equations of the dynamical network, as well as by using the master-stability-function method, we determine the synchronizing coupling strength for diffusively coupled Hindmarsh-Rose neurons. We also consider clustered networks with dense intra-cluster connections and sparse inter-cluster links. In such networks, synchronizability is influenced more by the inter-cluster links than by the intra-cluster connections. We further consider the case where the neurons are coupled through both electrical and chemical connections and obtain the synchronizing coupling strength numerically. We then investigate the behavior of interacting locally synchronized gamma oscillations. We construct a network of the minimal number of neurons producing synchronized gamma oscillations. By simulating giant networks built from this minimal module, we study the dependence of spike synchrony on network parameters such as the probability and strength of excitatory/inhibitory couplings, parameter mismatch, the correlation of thalamic input, and transmission time-delay. In the third section of the thesis we study the interdependencies within time series obtained through electroencephalography (EEG) and derive EEG maps specific to patients suffering from schizophrenia or Alzheimer's disease. Capturing the collective coherent spatiotemporal activity of neuronal populations measured by high-density EEG is addressed using measures that estimate the synchronization within multivariate time series. Our EEG power analysis of schizophrenic patients, based on a new parametrization of the multichannel EEG, shows a relative increase of power in the alpha rhythm over the anterior brain regions alongside its reduction over posterior regions.
The correlation of these patterns with the clinical picture of schizophrenia, as well as their ability to discriminate schizophrenia patients from normal control subjects, supports the concept of hypofrontality in schizophrenia and renders the alpha rhythm a sensitive marker of it. By applying a multivariate synchronization estimator, called the S-estimator, we reveal the whole-head synchronization topography in schizophrenia. Our findings show bilaterally increased synchronization over temporal brain regions and decreased synchronization over the postcentral/parietal brain regions. This topography is stable over the course of several months as well as across all conventional EEG frequency bands. Moreover, it correlates with the severity of the illness as characterized by the positive and negative syndrome scales. We also reveal the EEG features specific to early Alzheimer's disease by applying a multivariate phase synchronization method. Our analyses yield a specific map characterized by a decrease in phase synchronization values over the fronto-temporal region and an increase over the temporo-parieto-occipital region, predominantly of the left hemisphere. These abnormalities in the synchronization maps correlate with the clinical scores associated with the patients and are able to discriminate patients from normal control subjects with high precision.
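One standard formalization of synchronizability, arising from master-stability-function analysis, is the eigenratio lambda_N / lambda_2 of the network's Laplacian matrix: the smaller the ratio, the more easily diffusively coupled identical systems synchronize. A minimal numpy sketch with toy graphs (illustrative examples, not the networks analyzed in the thesis):

```python
import numpy as np

def laplacian_eigenratio(A):
    """Synchronizability proxy: lambda_N / lambda_2 of the graph
    Laplacian L = D - A (smaller ratio => easier to synchronize)."""
    L = np.diag(A.sum(axis=1)) - A
    eig = np.sort(np.linalg.eigvalsh(L))
    return eig[-1] / eig[1]

# Ring of 6 nodes vs. the all-to-all (complete) graph on 6 nodes
ring = np.zeros((6, 6))
for i in range(6):
    ring[i, (i + 1) % 6] = ring[(i + 1) % 6, i] = 1.0
complete = np.ones((6, 6)) - np.eye(6)

print(laplacian_eigenratio(ring))      # 4.0 for the 6-cycle
print(laplacian_eigenratio(complete))  # 1.0: the best possible ratio
```

Weighting and rewiring schemes such as those discussed above aim to improve spectral measures of this kind without changing the number of links.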

    Simulation and Theory of Large-Scale Cortical Networks

    Cerebral cortex is composed of intricate networks of neurons. These neuronal networks are strongly interconnected: every neuron receives, on average, input from thousands or more presynaptic neurons. In fact, to support such a number of connections, a majority of the volume in the cortical gray matter is filled by axons and dendrites. Besides the networks, neurons themselves are also highly complex. They possess an elaborate spatial structure and support various types of active processes and nonlinearities. In the face of such complexity, it seems necessary to abstract away some of the details and to investigate simplified models. In this thesis, such simplified models of neuronal networks are examined on varying levels of abstraction. Neurons are modeled as point neurons, both rate-based and spike-based, and networks are modeled as block-structured random networks. Crucially, on this level of abstraction, the models are still amenable to analytical treatment using the framework of dynamical mean-field theory. The main focus of this thesis is to leverage the analytical tractability of random networks of point neurons in order to relate the network structure, and the neuron parameters, to the dynamics of the neurons—in physics parlance, to bridge across the scales from neurons to networks. More concretely, four different models are investigated: 1) fully connected feedforward networks and vanilla recurrent networks of rate neurons; 2) block-structured networks of rate neurons in continuous time; 3) block-structured networks of spiking neurons; and 4) a multi-scale, data-based network of spiking neurons. We consider the first class of models in the light of Bayesian supervised learning and compute their kernel in the infinite-size limit. In the second class of models, we connect dynamical mean-field theory with large-deviation theory, calculate beyond mean-field fluctuations, and perform parameter inference. 
For the third class of models, we develop a theory for the autocorrelation time of the neurons. Lastly, we consolidate data across multiple modalities into a layer- and population-resolved model of human cortex and compare its activity with cortical recordings. In two detours from the investigation of these four network models, we examine the distribution of neuron densities in cerebral cortex and present a software toolbox for mean-field analyses of spiking networks.
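Random rate networks of the kind treated by dynamical mean-field theory typically take the form dx/dt = -x + g J phi(x), with a random coupling matrix J and gain g. A minimal Euler-integration sketch (parameters are illustrative, not those of the thesis) showing the classic transition from decaying activity to sustained chaotic fluctuations as g crosses 1:

```python
import numpy as np

def simulate_rate_network(g, n=200, t_steps=2000, dt=0.05, seed=0):
    """Euler-integrate dx/dt = -x + g * J @ tanh(x) for a random
    coupling matrix J with i.i.d. entries of variance 1/n."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    x = rng.normal(0.0, 1.0, size=n)
    for _ in range(t_steps):
        x = x + dt * (-x + g * J @ np.tanh(x))
    return x

# Below the transition (g < 1) activity decays to the fixed point x = 0;
# above it (g > 1) the network sustains ongoing chaotic fluctuations.
print(np.abs(simulate_rate_network(0.5)).mean())
print(np.abs(simulate_rate_network(1.5)).mean())
```

Dynamical mean-field theory replaces the interaction term g J phi(x) by an effective Gaussian field, which is what makes networks like this analytically tractable in the large-n limit.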

    Synaptic Plasticity and Hebbian Cell Assemblies

    Synaptic dynamics are critical to the function of neuronal circuits on multiple timescales. In the first part of this dissertation, I tested the roles of action potential timing and NMDA receptor composition in long-term modifications to synaptic efficacy. In a computational model I showed that the dynamics of the postsynaptic [Ca2+] time course can be used to map the timing of pre- and postsynaptic action potentials onto experimentally observed changes in synaptic strength. Using dual patch-clamp recordings from cultured hippocampal neurons, I found that NMDAR subtypes can map combinations of pre- and postsynaptic action potentials onto either long-term potentiation (LTP) or depression (LTD). LTP and LTD could even be evoked by the same stimuli, and in such cases the plasticity outcome was determined by the availability of NMDAR subtypes. The expression of LTD was increasingly presynaptic as synaptic connections became more developed. Finally, I found that spike-timing-dependent potentiability is history-dependent, with a non-linear relationship to the number of pre- and postsynaptic action potentials. After LTP induction, subsequent potentiability recovered on a timescale of minutes, and was dependent on the duration of the previous induction. While activity-dependent plasticity is putatively involved in circuit development, I found that it was not required to produce small networks capable of exhibiting rhythmic persistent activity patterns called reverberations. However, positive synaptic scaling produced by network inactivity yielded increased quantal synaptic amplitudes, connectivity, and potentiability, all favoring reverberation. These data suggest that chronic inactivity upregulates synaptic efficacy by both quantal amplification and by the addition of silent synapses, the latter of which are rapidly activated by reverberation. 
Reverberation in previously inactivated networks also resulted in activity-dependent outbreaks of spontaneous network activity. Applying a model of short-term synaptic dynamics at the network level, I argue that these experimental observations can be explained by the interaction between presynaptic calcium dynamics and short-term synaptic depression on multiple timescales. Together, the experiments and modeling indicate that ongoing activity, synaptic scaling and metaplasticity are required to endow networks with a level of synaptic connectivity and potentiability that supports stimulus-evoked persistent activity patterns but avoids spontaneous activity.
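For readers unfamiliar with spike-timing-dependent plasticity, a generic pair-based rule (a simplified stand-in for the calcium- and NMDAR-based mechanisms studied in this dissertation; the constants are illustrative) maps the pre/post spike-time difference onto potentiation or depression:

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt_ms = t_post - t_pre.
    Pre-before-post (dt > 0) -> potentiation (LTP);
    post-before-pre (dt < 0) -> depression (LTD)."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_plus)
    return -a_minus * np.exp(dt_ms / tau_minus)

print(stdp_dw(+10.0))  # LTP: positive weight change
print(stdp_dw(-10.0))  # LTD: negative weight change
```

The experiments summarized above go beyond this caricature: the same spike pairing can yield LTP or LTD depending on NMDAR subtype availability and on the history of prior inductions, which a fixed pair-based kernel cannot capture.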

    Modelling Structure and Dynamics of Complex Systems: Applications to Neuronal Networks

    Complex systems theory is a mathematical framework for studying interconnected dynamical objects. These objects are usually simple by construction, and their temporal behavior in isolation is easily predictable, but the way they are interconnected into a network allows complex, non-obvious phenomena to emerge. The emergent phenomena and their stability depend on the intrinsic dynamics of the objects, the types of interactions between them, and the connectivity patterns among them. This work focuses on the third aspect, i.e., the structure of the network, although the other two aspects are inherently present in the study as well. Tools from graph theory are applied to generate and analyze the network structure, and the effect of the structure on the network dynamics is analyzed by various methods. The objects of interest are biological and physical systems, with special attention given to spiking neuronal networks, i.e., networks of nerve cells that communicate by transmitting and receiving action potentials. In this thesis, methods for modelling spiking neuronal networks are introduced. Different point neuron models, including the integrate-and-fire model, are presented and applied to study the collective behaviour of the neurons. Special focus is placed on the emergence of network bursts, i.e., short periods of network-wide high-frequency firing. The occurrence of this behaviour is stable in certain regimes of connection strengths. This work shows that network bursting is more frequent in locally connected networks than in non-local networks, such as randomly connected networks. To gain deeper insight, the aspects of structure that promote bursting behaviour are analyzed by graph-theoretic means.
The clustering coefficient and the maximal eigenvalue of the connectivity matrix are found to be the most important structural measures in this respect, each proving relevant under different structural conditions. A range of different network structures is applied to confirm this result. A special class of connectivity is studied in more detail, namely the connectivity patterns produced by simulations of growing and interconnecting neurons placed on a two-dimensional array. Two growth simulators are applied for this purpose. In addition, a more abstract class of dynamical systems, Boolean networks, is considered. These systems were originally introduced as a model for genetic regulatory networks, but have since been used extensively for more general studies of complex systems. In this work, measures of information diversity and complexity are applied to several types of systems that obey Boolean dynamics. Random Boolean networks are shown to possess high temporal complexity prior to reaching an attractor. Similarly, high values of complexity are found at a transition stage of another dynamical system, the lattice gas automaton, which can also be formulated in the Boolean network framework. The temporal maximization of complexity near transitions between different dynamical regimes could therefore be a more general phenomenon in complex networks. The applicability of the information-theoretic framework is also confirmed in a study of bursting neuronal networks, where different types of networks are shown to be separable by the intrinsic information distance distributions they produce. The connectivities of the networks studied in this thesis are analyzed using graph-theoretic tools. Graph theory provides a mathematical framework for studying the structure of complex systems and how it affects the system dynamics.
In studies of the nervous system, detailed maps of the connections between neurons have been collected, although such data are still scarce and laborious to obtain experimentally. This work shows which aspects of the structure are relevant for the dynamics of spontaneously bursting neuronal networks. Such information could be useful in directing experiments to measure only the relevant aspects of the structure instead of assessing the whole connectome. In addition, the framework of generating the network structure by simulating the growth of the neurons, as presented in this thesis, could serve in simulations of the nervous system as a reliable alternative to importing an experimentally obtained connectome.
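The two structural measures highlighted above are straightforward to compute from a connectivity matrix. A minimal sketch, assuming an undirected, unweighted graph (the triangle graph here is just a toy example):

```python
import numpy as np

def clustering_coefficient(A):
    """Global clustering coefficient of an undirected, unweighted graph:
    3 * (number of triangles) / (number of connected triples)."""
    A = np.asarray(A, dtype=float)
    triangles = np.trace(A @ A @ A) / 6.0   # each triangle closes 6 walks
    deg = A.sum(axis=1)
    triples = (deg * (deg - 1)).sum() / 2.0
    return 3.0 * triangles / triples if triples else 0.0

def spectral_radius(A):
    """Largest eigenvalue of the (symmetric) connectivity matrix."""
    return np.max(np.linalg.eigvalsh(np.asarray(A, dtype=float)))

# Triangle graph: fully clustered, adjacency eigenvalues {2, -1, -1}
K3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
print(clustering_coefficient(K3))  # 1.0
print(spectral_radius(K3))         # 2.0
```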

    Cerebellar Codings for Control of Compensatory Eye Movements

    This thesis focuses on the control that the cerebellum exerts over motor behaviour, and more specifically on the role of cerebellar Purkinje cells in exerting this control. As the cerebellum is an online control system, we look at both motor performance and motor learning, trying to identify the components involved at the molecular, cellular and network levels. To study the cerebellum we used the vestibulocerebellum, with visual and vestibular stimulation as input and eye movements as the recorded output. The advantage of the vestibulocerebellum over other parts of the cerebellum is that the input is highly controllable while the output can be reliably measured, so performance and learning can be studied with ease. In addition, we conducted electrophysiological recordings from the vestibulocerebellum, in particular of Purkinje cells in the flocculus. Combining the spiking behaviour of Purkinje cells with the visual input and eye movement output allowed us to study how the cerebellum functions, and using genetically modified animals we could determine the role of different elements in this system. To provide some insight into the techniques used and the theory behind them, this introduction discusses the following topics: compensatory eye movements; the anatomy of the pathways to, within and out of the flocculus; the cellular physiology of Purkinje cells in relation to performance; and the plasticity mechanisms related to motor learning.

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One


    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks – akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to leverage a wide variety of dynamics in digital hardware far more effectively, and to exploit device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms built on an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks—a state-of-the-art deep recurrent architecture—in accuracy and training time on a continuous-time memory task and on a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. The implementation approaches the energy-efficiency of the human brain in the former case, and the precision of conventional computation in the latter.
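The NEF's representation principle, which underlies the compilation recipe described above, amounts to encoding a variable in the heterogeneous tuning curves of a neural population and decoding it with least-squares-optimal linear weights. A toy sketch of the idea using rectified-linear "neurons" (all parameter choices here are illustrative, not Nengo's defaults):

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_points = 50, 200

# Random encoders (+1/-1), gains, and biases define the tuning curves
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

# Rectified-linear rates: a_i(x) = max(0, gain_i * e_i * x + bias_i)
xs = np.linspace(-1.0, 1.0, n_points)
rates = np.maximum(0.0, gains[:, None] * encoders[:, None] * xs[None, :]
                        + biases[:, None])

# Least-squares decoders: minimize || d @ rates - xs ||^2, the linear
# readout at the heart of the NEF's representation principle
d = np.linalg.lstsq(rates.T, xs, rcond=None)[0]

x_hat = d @ rates
print(np.max(np.abs(x_hat - xs)))  # reconstruction error across [-1, 1]
```

The full framework goes further: recurrent weights are solved for so that the decoded state obeys a desired differential equation, which is how dynamical systems are compiled onto spiking networks.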