55 research outputs found

    Parameter identification in networks of dynamical systems

    Mathematical models of real systems make it possible to simulate their behavior under conditions that are not easily or affordably reproducible in real life. Defining accurate models, however, is far from trivial, and there is no one-size-fits-all solution. This thesis focuses on parameter identification in models of networks of dynamical systems, considering three case studies that fall under this umbrella: two of them are related to neural networks and one to power grids. The first case study is concerned with central pattern generators, i.e., small neural networks involved in animal locomotion. In this case, a design strategy for the optimal tuning of biologically plausible model parameters is developed, resulting in network models able to reproduce key characteristics of animal locomotion. The second case study is in the context of brain networks. In this case, a method to derive the weights of the connections between brain areas is proposed, utilizing both imaging data and nonlinear dynamics principles. The third and last case study deals with a method for the estimation of the inertia constant, a key parameter in determining frequency stability in power grids. In this case, the method is customized to different challenging scenarios involving renewable energy sources, resulting in accurate estimates of this parameter.
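    For the power-grid case, the inertia constant H is commonly linked to frequency dynamics through the per-unit swing equation, 2H d(Δf)/dt = ΔP (damping neglected). As a hedged illustration only (a generic textbook approach, not the specific estimation method developed in the thesis), a least-squares fit of H from sampled frequency and power-imbalance data might look like:

```python
# Generic swing-equation sketch (not the thesis's method): estimate the
# inertia constant H from the per-unit relation  2H * d(f)/dt = dP,
# given frequency samples recorded after a step power imbalance.
import numpy as np

def estimate_inertia(t, freq_pu, delta_p_pu):
    """Least-squares fit of H from 2H * df/dt = dP (all per unit)."""
    dfdt = np.gradient(freq_pu, t)          # numerical RoCoF
    # 2H * dfdt ~= dP  =>  H = sum(dP*dfdt) / (2 * sum(dfdt^2))
    return float(np.dot(delta_p_pu, dfdt) / (2.0 * np.dot(dfdt, dfdt)))

# Synthetic check: H = 5 s, constant imbalance dP = -0.1 p.u.
H_true, dp = 5.0, -0.1
t = np.linspace(0.0, 1.0, 201)
freq = 1.0 + dp / (2.0 * H_true) * t        # linear frequency decline
H_hat = estimate_inertia(t, freq, np.full_like(t, dp))
print(round(H_hat, 2))
```

    On exactly linear synthetic data the fit recovers H; real measurements would need filtering of the frequency derivative, which is where methods such as the thesis's come in.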

    The numerical solution of neural field models posed on realistic cortical domains

    The mathematical modelling of neural activity is a hugely complex and prominent area of exploration that has been the focus of many researchers since the mid-1900s. Although many advancements and scientific breakthroughs have been made, there is still a great deal that is not yet understood about the brain. A considerable number of studies in mathematical neuroscience consider the brain as a simple one-dimensional or two-dimensional domain; however, this is not biologically realistic and is primarily selected as the domain of choice to aid analytical progress. The primary aim of this thesis is to develop and provide a novel suite of codes to facilitate the computationally efficient numerical solution of large-scale delay differential equations, and to utilise this to explore both neural mass and neural field models with space-dependent delays. Through this, we seek to widen the scope of models of neural activity by posing them on realistic cortical domains and incorporating real brain data to describe non-local cortical connections. The suite is validated using a selection of examples that compare numerical and analytical results, along with recreating existing results from the literature. The relationship between structural connectivity and functional connectivity is then analysed as we use an eigenmode fitting approach to inform the desired stability regimes of a selection of neural mass models with delays. Here, we explore a next-generation neural mass model developed by Coombes and Byrne [36], and compare results to the more traditional Wilson-Cowan formulation [180, 181]. Finally, we examine a variety of solutions to three different neural field models that incorporate real structural connectivity, path length, and geometric surface data, using our NFESOLVE library to efficiently compute the numerical solutions. We demonstrate how the field version of the next-generation model can yield intricate and detailed solutions which push us closer to recreating observed brain dynamics.
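    The computational core of such work is integrating delay differential equations, which requires carrying the solution's history. As a minimal sketch only (NFESOLVE itself and its API are not reproduced here), a fixed-step Euler scheme for a single delayed neural mass equation, du/dt = -u(t) + S(w u(t - tau)) with sigmoid S, can be written with a ring-buffer history:

```python
# Toy DDE integrator (illustrative, not the NFESOLVE library): scalar
# delayed neural mass equation  du/dt = -u(t) + S(w * u(t - tau)),
# S(x) = 1/(1 + exp(-x)), with a constant initial history.
import math

def simulate_delayed_node(w=4.0, tau=1.0, dt=0.01, t_end=50.0, u0=0.1):
    S = lambda x: 1.0 / (1.0 + math.exp(-x))
    lag = int(round(tau / dt))              # delay measured in steps
    history = [u0] * (lag + 1)              # constant initial history
    u = u0
    for _ in range(int(t_end / dt)):
        u_delayed = history[0]              # oldest entry = u(t - tau)
        u += dt * (-u + S(w * u_delayed))
        history.append(u)
        history.pop(0)                      # keep buffer length fixed
    return u

u_final = simulate_delayed_node()
print(u_final)  # settles near the stable fixed point of u = S(w*u)
```

    For thousands of coupled equations with space-dependent delays, each pair of nodes carries its own lag, which is exactly the bookkeeping a dedicated solver suite must handle efficiently.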

    A Dynamical System Approach to modeling Mental Exploration

    The hippocampal-entorhinal complex plays an essential role within the brain in spatial navigation, mapping a spatial path onto a sequence of cells that fire action potentials. During rest or sleep, these sequences are replayed in either reverse or forward temporal order; in some cases, novel sequences occur that may represent paths not yet taken, but connecting contiguous spatial locations. These sequences potentially play a role in the planning of future paths. In particular, mental exploration is needed to discover short-cuts or plan alternative routes. Hopfield proposed a two-dimensional planar attractor network as a substrate for mental exploration. He extended the concept of a line attractor, used for the oculomotor apparatus, to a planar attractor that can memorize any spatial path and then recall this path from memory. Such a planar attractor contains an infinite number of fixed points for the dynamics, each fixed point corresponding to a spatial location. For symmetric connections in the network, the dynamics generally admits a Lyapunov energy function L. Movement through different fixed points is possible because of the continuous attractor structure. In this model, a key role is played by the evolution of a localized activation of the network, a "bump", that moves across this neural sheet that topographically represents space. For this to occur, the history of paths already taken is imprinted on the synaptic couplings between the neurons. Yet attractor dynamics would seem to preclude the bump from moving; hence, a mechanism that destabilizes the bump is required. The mechanism to destabilize such an activity bump and move it to other locations of the network involves an adaptation current that provides a form of delayed inhibition. Both a spin-glass and a graded-response approach are applied to investigating the dynamics of mental exploration mathematically.
    Simplifying the neural network proposed by Hopfield to a spin glass, I study the problem of recalling temporal sequences and explore an alternative proposal that relies on storing the correlation of network activity across time, adding a sequence transition term to the classical instantaneous correlation term during the learning of the synaptic couplings. The "adaptation current" is interpreted as a local field that can destabilize the equilibrium, causing the bump to move. We can also combine the adaptation and transition terms to show how the dynamics of exploration is affected. To obtain goal-directed searching, I introduce a weak external field associated with a rewarded location. We show how the bump trajectory then follows a suitable path to get to the target. For networks of graded-response neurons with weak external stimulation, amplitude equations known from pattern formation studies in bio-chemico-physical systems are developed. This allows me to predict the modes of network activity that can be selected by an external stimulus and how these modes evolve. Using perturbation theory and coarse graining, the dynamical equations for the evolution of the system are reduced from many sets of nonlinear integro-differential equations, one for each neuron, to a single macroscopic equation. This equation, in particular close to the transition to pattern formation, takes the form of the Landau-Ginzburg equation. The parameters for the connections between the neurons are shown to be related to the parameters of the Landau-Ginzburg equation that governs the bump of activity. The role of adaptation within this approximation is studied, which leads to the discovery that the macroscopic dynamical equation for the system has the same structure as the coupled equations used to describe the propagation of electrical activity within a single neuron, as given by the FitzHugh-Nagumo equations.
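    The structural analogy described above can be sketched generically (illustrative symbols and coefficients only, not the thesis's exact equations): a Landau-Ginzburg-type amplitude equation for the pattern amplitude A, coupled to a slow adaptation field w, already displays the two-variable fast-slow structure of the FitzHugh-Nagumo system:

```latex
% Generic Landau-Ginzburg amplitude equation with slow adaptation
\begin{aligned}
\partial_t A &= \mu A + \nabla^{2} A - |A|^{2} A - w, \\
\tau \, \partial_t w &= A - \gamma w,
\end{aligned}
```

    where \mu measures the distance from the pattern-forming threshold and \tau sets the slow adaptation timescale; the negative feedback of w on A is what can destabilize a stationary bump and set it in motion.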

    Theoretical and experimental investigation of anaesthetic effects in the brain

    The main motivation of this study is to develop a better understanding of anaesthetic drug effects on brain dynamics, including the paradoxical enhancement of seizure activity by some anaesthetic drugs. This thesis investigates two mean-field descriptions for the effect of general anaesthetic agents on brain activity: the extended Waikato cortical model (WM) and the Hindriks and van Putten (HvP) thalamocortical model. In the standard Waikato model, the population-average neuron voltage is determined by incoming activity at both electrical (gap-junction) and chemical synapses, the latter mediated by AMPA (excitatory) and GABA-A (inhibitory) receptors. Here we extend the standard WM by including NMDA (excitatory) and GABA-B (inhibitory) synapses. GABAergic anaesthetics, such as propofol, boost cortical inhibition by prolonging the tail of the unitary IPSP (inhibitory postsynaptic potential) at GABA-A receptors, while increasing the synaptic gain at the slower-acting GABA-B receptors. Dissociative anaesthetics act on NMDA receptors to give a voltage-dependent alteration of excitatory synaptic gain. We find that increasing the GABA-B or NMDA effect can alter the spatiotemporal dynamics of the standard WM, tending to suppress spatial (Turing) patterns in favour of temporal (Hopf) oscillations. The extended WM predicts increased susceptibility to seizure when the GABA-B effect is increased, particularly if the GABAergic agent reduces gap-junction diffusion. We tested these WM predictions with two biological experiments. We found that potentiation of GABA-B receptors in slices of mouse cortical tissue tended to enhance seizure-like activity. However, our in vivo investigation of the effect of closure of gap junctions did not reveal any seizure patterns in mouse EEG signals. In the second part of this thesis, we present a detailed analysis of the HvP thalamocortical mean-field model for propofol anaesthesia.
    While we were able to confirm the Hindriks and van Putten predictions of increases in delta and alpha power at low levels of anaesthetic sedation, we find that for deeper anaesthetic effect the model jumps from the low-firing state to an extremely high-firing stable state (~250 spikes/s), and remains stuck there even at GABA-A prolongations as high as 300%, which would be expected to induce full comatose suppression of all firing activity. To overcome this pathological behaviour, we tested two possible modifications: first, eliminating the population-dependent anaesthetic sensitivity (efficacy) of the HvP model; second, incorporating reversal potentials and tuning the excitatory sigmoid parameters defining the mapping from voltage to firing rate. The first modification removes the pathological state, but predicts decreasing alpha and delta power as drug concentration increases. The second modification predicts induction-emergence hysteresis (drug concentration is higher at induction than at emergence), but the alpha rhythm is lost, being replaced by a dominant delta-band oscillation.
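    Two of the ingredients mentioned above, the sigmoid voltage-to-firing-rate mapping and reversal-potential weighting, have standard textbook forms; the sketch below uses generic illustrative parameter values, not the actual HvP or Waikato equations:

```python
# Hedged sketch of two standard mean-field ingredients (generic forms and
# illustrative parameter values; not the exact HvP or Waikato model).
import math

def firing_rate(V, Q_max=60.0, theta=-58.5, sigma=3.0):
    """Sigmoid mapping from mean soma voltage V (mV) to firing rate (1/s)."""
    return Q_max / (1.0 + math.exp(-(V - theta) / sigma))

def reversal_weight(V, V_rev, V_rest=-64.0):
    """Dimensionless synaptic weighting; vanishes as V approaches V_rev.
    Normalized to 1 at the resting voltage."""
    return (V_rev - V) / (V_rev - V_rest)

# At rest the excitatory weighting (V_rev = 0 mV) is 1 by construction,
# and the sigmoid gives a low but nonzero background firing rate.
print(round(firing_rate(-64.0), 3), round(reversal_weight(-64.0, 0.0), 3))
```

    Tuning theta and sigma reshapes the voltage-to-rate mapping, which is precisely the second modification tested in the thesis; the reversal-potential factor caps runaway excitation because drive shrinks as the voltage approaches V_rev.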

    Work Toward a Theory of Brain Function

    This dissertation reports research from 1971 to the present, performed in three parts. The first part arose from unilateral electrical stimulation of motivational/reward pathways in the lateral hypothalamus and brain stem of “split-brain” cats, in which the great cerebral commissures were surgically divided. This showed that motivation systems in split-brain animals exert joint influence upon learning in both of the divided cerebral hemispheres, in contrast to the separation of cognitive functions produced by commissurotomy. However, attempts to identify separate signatures of electrocortical activity associated with the diffuse motivational/alerting effects and those of the cortically lateralised processes failed to achieve this goal, and showed that an adequate model of cerebral information processing was lacking. The second part describes how this recognition of inadequacy led into computer simulations of large populations of cortical neurons – work which slowly led my colleagues and me to successful explanations of mechanisms for cortical synchrony and oscillation, and of evoked potentials and the global EEG. These results complemented the work of overseas groups led by Nunez, by Freeman, by Lopes da Silva and others, but also differed from the directions taken by these workers in certain important respects. It became possible to conceive of information transfer in the active cortex as a series of punctuated synchronous equilibria of signal exchange among cortical neurons – equilibria reached repeatedly, with sequential perturbations of the neural activity away from equilibrium caused by exogenous inputs and endogenous pulse-bursting, thus forming a basis for cognitive sequences. The third part reports how the explanation of synchrony gave rise to a new theory of the regulation of embryonic cortical growth and the emergence of mature functional connections. 
    This work was based upon very different assumptions, and reaches very different conclusions, from those of pioneers of the field such as Hubel and Wiesel, whose ideas have dominated cortical physiology for more than fifty years. In conclusion, findings from all the stages of this research are linked together to show that they provide a sketch of the working brain, fitting within and helping to unify wider contemporary concepts of brain function.

    5th EUROMECH nonlinear dynamics conference, August 7-12, 2005 Eindhoven : book of abstracts


    Machine Learning Augmentation Micro-Sensors for Smart Device Applications

    Novel smart technologies such as wearable devices and unconventional robotics have been enabled by advancements in semiconductor technologies, which have miniaturized the sizes of transistors and sensors. These technologies promise great improvements to public health. However, current computational paradigms are ill-suited for use in novel smart technologies, as they fail to meet their strict power and size requirements. In this dissertation, we present two bio-inspired colocalized sensing-and-computing schemes performed at the sensor level: continuous-time recurrent neural networks (CTRNNs) and reservoir computers (RCs). These schemes arise from the nonlinear dynamics of micro-electro-mechanical systems (MEMS), which facilitate computing, and from the inherent sensing ability of MEMS devices. Furthermore, this dissertation addresses the high-voltage requirements of electrostatically actuated MEMS devices using a passive amplification scheme. The CTRNN architecture is emulated using a network of bistable MEMS devices. This bistable behavior is shown in the pull-in, snap-through, and feedback regimes when the devices are excited around the electrical resonance frequency. In these regimes, MEMS devices exhibit key behaviors found in biological neuronal populations. When coupled, networks of MEMS devices are shown to be successful at classification and control tasks. Moreover, MEMS accelerometers are shown to be successful at acceleration waveform classification without the need for external processors. MEMS devices are additionally shown to perform computing by utilizing the RC architecture. Here, a delay-based RC scheme is studied, which uses one MEMS device to simulate the behavior of a large neural network through input modulation. We introduce a modulation scheme that enables colocalized sensing-and-computing by modulating the bias signal.
    The MEMS RC is tested to successfully perform pure computation and colocalized sensing-and-computing for both classification and regression tasks, even in noisy environments. Finally, we address the high-voltage requirements of electrostatically actuated MEMS devices by proposing a passive amplification scheme that utilizes the mechanical and electrical resonances of MEMS devices simultaneously. Using this scheme, an order of magnitude of amplification is reported. Moreover, when only electrical resonance is used, we show that the MEMS device exhibits a computationally useful bistable response. Adviser: Dr. Fadi Alsalee
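    The delay-based RC idea, one physical node time-multiplexed into many "virtual nodes" via an input mask, can be sketched entirely in software (a hedged illustration: the tanh map below is a generic stand-in for the MEMS response, and the memory task is illustrative, not a result from the dissertation):

```python
# Software sketch of a delay-based reservoir computer: one nonlinear node
# with a feedback delay line is time-multiplexed into N virtual nodes via
# a fixed random input mask; a linear readout is trained by ridge regression.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                  # virtual nodes per delay period
mask = rng.uniform(-1.0, 1.0, N)        # fixed random input mask

def reservoir_states(u, feedback=0.7):
    """Drive the virtual nodes with input sequence u; one state row per step."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, ut in enumerate(u):
        x = np.tanh(feedback * x + mask * ut)   # one pass of the delay loop
        states[t] = x
    return states

# Illustrative task: reconstruct u(t-1), i.e. one step of linear memory.
u = rng.uniform(-0.5, 0.5, 600)
y = np.roll(u, 1)
y[0] = 0.0
X = reservoir_states(u)
Xtr, ytr = X[:500], y[:500]             # train on the first 500 samples
W = np.linalg.solve(Xtr.T @ Xtr + 1e-6 * np.eye(N), Xtr.T @ ytr)
nrmse = np.sqrt(np.mean((X[500:] @ W - y[500:]) ** 2)) / np.std(y[500:])
print(f"test NRMSE: {nrmse:.3f}")       # 1.0 would match predicting the mean
```

    In the hardware version described above, the tanh update is replaced by the physical MEMS response and the sensed quantity enters through the bias signal, so only the cheap linear readout needs external computation.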

    26th Annual Computational Neuroscience Meeting (CNS*2017): Part 3 - Meeting Abstracts - Antwerp, Belgium. 15–20 July 2017

    This work was produced as part of the activities of the FAPESP Research, Dissemination and Innovation Center for Neuromathematics (grant 2013/07699-0, S. Paulo Research Foundation). NLK is supported by a FAPESP postdoctoral fellowship (grant 2016/03855-5). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0).