12 research outputs found

    Model Order Reduction for Modeling the Brain

    Get PDF
    In this thesis, we study the use of Model Order Reduction (MOR) methods for accelerating and reducing the computational burden of brain simulations. Mathematical modeling and numerical simulations are the primary tools of computational neuroscience, a field that strives to understand the brain by combining data and theory. Due to the complexity of brain cells and the neuronal networks they form, computer simulations cannot capture neuronal networks in biologically realistic detail. We apply MOR methods to derive lightweight reduced-order models and show that they can approximate models of neuronal networks. Reduced-order models may thus enable more detailed and larger-scale simulations of neuronal systems. We selected several mathematical models used in neuronal network simulations, ranging from synaptic signaling to neuronal population models, as reduction targets for this thesis. We implemented the models and determined the mathematical requirements for applying MOR to each model. We then identified suitable MOR algorithms for each model and established efficient implementations of the selected methods. Finally, we evaluated the accuracy and speed of our reduced-order models. Our studies apply MOR to model types that had not previously been reduced with these methods, widening the possibilities for the use of MOR in computational neuroscience and deep learning. In summary, the results of this thesis show that MOR can be an effective acceleration strategy for neuronal network models, making it a valuable tool for building large-scale simulations of the brain.
MOR methods have the advantage that the reduced model can be used to reconstruct the original detailed model; the reduction process therefore does not discard variables or decrease morphological resolution. We identified Proper Orthogonal Decomposition (POD) combined with the Discrete Empirical Interpolation Method (DEIM) as the most suitable tool for reducing our selected models. Additionally, we implemented several recent advanced variants of these methods. The primary obstacle to applying MOR in neuroscience is the nonlinearity of neuronal models, and POD-DEIM can account for that complexity. Extensions of the Balanced Truncation and Iterative Rational Krylov Approximation methods for nonlinear systems also show promise, but they place stricter requirements on the structure of the original model than POD-DEIM does. Excellent accuracy and acceleration were obtained when reducing a high-dimensional mean-field model of a neuronal network and a model of the chemical reactions in the synapse with the POD-DEIM method. We also found that a biophysical network model, which describes action potentials through ionic currents, benefits from adaptive MOR methods that update the reduced model during the simulation phase. We further show that MOR can be integrated into deep learning networks and that MOR is an effective reduction strategy for convolutional networks, used for example in vision research. Our results validate MOR as a powerful tool for accelerating simulations of nonlinear neuronal networks. Based on the original publications of this thesis, we conclude that several models and model types of neuronal phenomena that had not previously been reduced can be successfully accelerated using MOR methods. In the future, integrating MOR into brain simulation tools will enable faster model development and the extraction of new knowledge from numerical studies through improved model efficiency, resolution, and scale.
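The POD-DEIM combination named above can be sketched in a few lines. Below is a minimal, self-contained illustration (synthetic snapshot data standing in for simulation output; not code from the thesis): POD extracts a low-rank basis from snapshots via the SVD, and DEIM greedily selects interpolation indices so that a nonlinear term only needs to be evaluated at a few entries of the state.

```python
import numpy as np

# Snapshot matrix: columns are solution states at different times.
# A synthetic low-rank field stands in for actual simulation output.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 2.0, 50)
S = np.array([np.sin(np.pi * x) * np.exp(-ti) + 0.1 * np.cos(3 * np.pi * x) * ti
              for ti in t]).T                  # shape (n_state, n_snapshots)

# POD: truncated SVD of the snapshots gives the projection basis V.
U, sv, _ = np.linalg.svd(S, full_matrices=False)
k = 2
V = U[:, :k]                                   # state ≈ V @ z with z in R^k

def deim_indices(Un):
    # Greedy DEIM index selection on a basis Un of nonlinear-term snapshots:
    # each new index is where the current interpolation residual is largest.
    p = [int(np.argmax(np.abs(Un[:, 0])))]
    for l in range(1, Un.shape[1]):
        c = np.linalg.solve(Un[np.ix_(p, range(l))], Un[p, l])
        r = Un[:, l] - Un[:, :l] @ c           # residual after interpolating at p
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

idx = deim_indices(V)                          # here we reuse the POD modes as the basis
err = np.linalg.norm(S - V @ (V.T @ S)) / np.linalg.norm(S)
```

In a full reduced-order model, the nonlinear right-hand side would then be evaluated only at the rows in `idx` and lifted back through the basis, which is where the speedup comes from.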

    Model reduction of large spiking neurons

    Get PDF
    This thesis introduces and applies model reduction techniques to problems associated with simulation of realistic single neurons. Neurons have complicated dendritic structures and spatially-distributed ionic kinetics that give rise to highly nonlinear dynamics. However, existing model reduction methods compromise the geometry, and thus sacrifice the original input-output relationship. I demonstrate that linear and nonlinear model reduction techniques yield systems that capture the salient dynamics of morphologically accurate neuronal models and preserve the input-output maps while using significantly fewer variables than the full systems. Two main dynamic regimes characterize the voltage response of a neuron, and I demonstrate that different model reduction techniques are well-suited to each regime. Small perturbations from the neuron's rest state fall into the subthreshold regime, which can be accurately described by a linear system. By applying Balanced Truncation (BT), a model reduction technique for general linear systems, I recover subthreshold voltage dynamics, and I provide an efficient Iterative Rational Krylov Algorithm (IRKA), which makes large problems of interest tractable. However, these approximations are not valid once the input to the neuron is sufficient to drive the voltage into the spiking regime, which is characterized by highly nonlinear behavior. To reproduce spiking dynamics, I use a proper orthogonal decomposition (POD) to reduce the number of state variables and a discrete empirical interpolation method (DEIM) to reduce the complexity of the nonlinear terms. The techniques described above are successful, but they inherently assume that the whole neuron is either passive (linear) or active (nonlinear). However, in realistic cells the voltage response at distal locations is nearly linear, while at proximal locations it is very nonlinear. 
With this observation, I fuse the aforementioned models together to create a reduced coupled model in which each reduction technique is used where it is most advantageous, thereby making it possible to more accurately simulate a larger class of cortical neurons.
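The subthreshold (linear) reduction step described above can be illustrated with a square-root Balanced Truncation sketch. The system below is a random stable toy model standing in for a discretized passive cable, and the implementation is a generic textbook version under those assumptions, not the author's code:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n, r = 30, 4                               # full and reduced dimensions

# Toy stable linear system x' = A x + B u, y = C x (a stand-in for a
# discretized passive cable; not an actual neuron model).
A0 = rng.standard_normal((n, n))
A = A0 - (np.abs(np.linalg.eigvals(A0)).max() + 1.0) * np.eye(n)  # shift to stability
B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, n))

# Gramians: A P + P A^T + B B^T = 0  and  A^T Q + Q A + C^T C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

def sqrt_factor(M):
    # Symmetric square-root factor Z with Z @ Z.T ≈ M (eigenvalues clipped
    # so the factorization stays well defined despite rounding).
    w, vecs = np.linalg.eigh((M + M.T) / 2.0)
    return vecs * np.sqrt(np.clip(w, 1e-14, None))

Zp, Zq = sqrt_factor(P), sqrt_factor(Q)
U, hsv, Vt = np.linalg.svd(Zq.T @ Zp)      # hsv = Hankel singular values
S = np.diag(hsv[:r] ** -0.5)
W = Zq @ U[:, :r] @ S                      # left projection basis
V = Zp @ Vt[:r].T @ S                      # right projection basis
Ar, Br, Cr = W.T @ A @ V, W.T @ B, C @ V   # reduced system, Ar is r x r

def tf(Am, Bm, Cm, s):
    # Transfer function G(s) = C (sI - A)^{-1} B evaluated at a point s.
    return Cm @ np.linalg.solve(s * np.eye(Am.shape[0]) - Am, Bm)
```

The truncated Hankel singular values give the classical error bound: on the imaginary axis the reduced transfer function deviates by at most twice their sum, which is what makes the truncation order r a principled choice.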

    Topics in Applied Stochastic Dynamics.

    Full text link
    Randomness in natural systems comes from various sources, for example from the discrete nature of the underlying dynamical process when viewed on a small scale. In this thesis we study the effect of stochasticity on the dynamics in three applications, each with different sources and effects of randomness. In the first application we study the Hodgkin-Huxley model of the neuron with a random ion channel mechanism via numerical simulation. Randomness affects the nonlinear mechanism of a neuron's firing behavior through spike induction as well as spike suppression. The sensitivity to different types of channel noise is explored, and the robustness of the dynamical properties is studied using two distinct stochastic models. In the second application we compare and contrast the effectiveness of mixing of a passive scalar by stirring, using different notions of mixing efficiency. We explore the non-commutativity of the limits of large Péclet numbers and large spatial scale separation between the flow and the sources and sinks, and propose and examine a conceptual approach that captures some compatible features of the different models and measures of mixing. In the last application we design a stochastic dynamical system that mimics the properties of so-called homogeneous Rayleigh-Bénard convection and show that arbitrarily small noise changes the dynamical properties of the model. The system's properties are further examined using the first exit time problem. The three applications show that randomness of small magnitude may play important and counterintuitive roles in determining a system's properties.
    Ph.D. in Applied and Interdisciplinary Mathematics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/64775/1/kbodova_1.pd
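The first application, channel noise in the Hodgkin-Huxley neuron, can be sketched with a Langevin (Fox-Lu style) approximation, one common way to model a random ion channel mechanism. The parameters are standard textbook values and the channel counts are illustrative assumptions, not the thesis's exact stochastic models:

```python
import numpy as np

# Euler-Maruyama simulation of Hodgkin-Huxley with subunit noise on the
# gating variables; noise strength scales like 1/sqrt(channel count).
C_m, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3        # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4                # reversal potentials, mV
N_Na, N_K = 6000, 1800                          # assumed channel counts
I_app = 10.0                                    # suprathreshold drive, uA/cm^2
dt, T = 0.01, 60.0                              # time step and horizon, ms
rng = np.random.default_rng(0)

def rates(V):
    # Standard Hodgkin-Huxley opening/closing rates (ms^-1).
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def gate_step(x, a, b, N):
    # Kinetic drift plus channel noise; clip to keep open fractions in [0, 1].
    drift = a * (1.0 - x) - b * x
    sigma = np.sqrt(max((a * (1.0 - x) + b * x) / N, 0.0))
    return float(np.clip(x + dt * drift + np.sqrt(dt) * sigma * rng.standard_normal(), 0.0, 1.0))

V, m, h, n = -65.0, 0.05, 0.6, 0.32             # rest state
Vtrace = []
for _ in range(int(T / dt)):
    am, bm, ah, bh, an, bn = rates(V)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V += dt * (I_app - I_ion) / C_m
    m = gate_step(m, am, bm, N_Na)
    h = gate_step(h, ah, bh, N_Na)
    n = gate_step(n, an, bn, N_K)
    Vtrace.append(V)
Vtrace = np.array(Vtrace)
```

Lowering the channel counts strengthens the noise term, which is exactly the regime in which spike induction and spike suppression by randomness become visible.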

    Brain Dynamics From Mathematical Perspectives: A Study of Neural Patterning

    Get PDF
    The brain is the central hub regulating thought, memory, vision, and many other processes occurring within the body. Neural information transmission occurs through the firing of billions of connected neurons, giving rise to a rich variety of complex patterning. Mathematical models are used alongside direct experimental approaches in understanding the underlying mechanisms at play which drive neural activity, and ultimately, in understanding how the brain works. This thesis focuses on network and continuum models of neural activity, and computational methods used in understanding the rich patterning that arises due to the interplay between non-local coupling and local dynamics. It advances the understanding of patterning in both cortical and sub-cortical domains by utilising the neural field framework in the modelling and analysis of thalamic tissue – where cellular currents are important in shaping the tissue firing response through the post-inhibitory rebound phenomenon – and of cortical tissue. The rich variety of patterning exhibited by different neural field models is demonstrated through a mixture of direct numerical simulation, as well as via a numerical continuation approach and an analytical study of patterned states such as synchrony, spatially extended periodic orbits, bumps, and travelling waves. Linear instability theory about these patterns is developed and used to predict the points at which solutions destabilise and alternative emergent patterns arise. Models of thalamic tissue often exhibit lurching waves, where activity travels across the domain in a saltatory manner. Here, a direct mechanism, showing the birth of lurching waves at a Neimark-Sacker-type instability of the spatially synchronous periodic orbit, is presented. The construction and stability analyses carried out in this thesis employ techniques from non-smooth dynamical systems (such as saltation methods) to treat the Heaviside nature of models. 
This is often coupled with an Evans function approach to determine the linear stability of patterned states. With the ever-increasing complexity of the neural models being studied, there is a need for ways of systematically studying the non-trivial patterns they exhibit. Computational continuation methods are developed, allowing for such a study of periodic solutions and their stability across different parameter regimes through the use of Newton-Krylov solvers. These techniques are complementary to those outlined above. Using these methods, the relationship between the speed of synaptic transmission and the emergent properties of periodic and travelling periodic patterns, such as standing waves and travelling breathers, is studied. Many different dynamical systems models of physical phenomena are amenable to analysis using these general computational methods (provided they are sufficiently smooth), and as such, their domain of applicability extends beyond the realm of mathematical neuroscience.
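A minimal example of the direct numerical simulation route is an Amari-style neural field with a Heaviside firing rate, the non-smooth setting that motivates the saltation techniques mentioned above. The kernel and parameters below are assumed for illustration, not taken from the thesis:

```python
import numpy as np

# 1-D neural field u_t = -u + w * H(u - theta) on a ring, with a
# Mexican-hat kernel; a localized bump of activity forms and persists.
L, n = 10.0, 256
x = np.linspace(-L, L, n, endpoint=False)
dx = x[1] - x[0]
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 2.0 * L - d)                            # periodic distance
W = (np.exp(-d**2) - 0.5 * np.exp(-(d / 2.0) ** 2)) * dx  # Mexican-hat weights
theta = 0.2                                               # Heaviside threshold
u = np.exp(-x**2)                                         # localized initial data
dt = 0.1
for _ in range(500):                                      # forward-Euler time stepping
    u += dt * (-u + W @ (u > theta).astype(float))
```

The Heaviside nonlinearity makes the right-hand side discontinuous in u, which is precisely why the stability of such bumps is analysed with non-smooth (saltation or Evans function) machinery rather than standard linearisation.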

    Patient-specific modelling of cortical spreading depression applied to migraine studies.

    Get PDF
    254 p.
    Migraine is a common neurological disorder, and one-third of migraine patients suffer from migraine aura, a perceptual disturbance preceding the typically unilateral headache. Cortical spreading depression (CSD), a depolarisation wave that originates in the visual cortex and propagates across the cortex to the peripheral areas, has been suggested as a correlate of visual aura by several studies. The complex and highly individual-specific characteristics of the brain cortex suggest that the geometry might have a significant impact on CSD propagation. In this thesis, we combine two existing models: a detailed neurological model for the electrophysiological component of CSD and a reaction-diffusion model accounting for the diffusion of potassium, the driving force of CSD propagation. In the process, we integrate two aspects of CSD that occur at different time scales: the electrophysiological dynamics evolves on a scale of milliseconds, while the extracellular potassium dynamics that drives CSD propagation evolves on a scale of minutes. As a result, we obtain a multi-scale PDE-ODE model. In addition, we incorporate patient-specific data into the CSD model: (i) a patient-specific brain geometry obtained from magnetic resonance imaging, and (ii) personalised conductivity tensors derived from diffusion tensor imaging data.
To study the role of the geometry in CSD propagation, we define geometric and CSD-dependent quantities of interest (QoI) that we evaluate in two case studies. Even though the geometry does not seem to have a major impact on CSD propagation, some QoI are promising candidates to aid in the classification of healthy individuals and migraine patients. Finally, to account for the lack of experimental data for validation and selection of the model parameters, we apply different techniques of uncertainty quantification to the CSD model and analyse the impact of various parameter choices on the model outcome.
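The reaction-diffusion component that drives CSD propagation can be caricatured in one dimension: bistable kinetics for a normalized extracellular potassium variable plus diffusion produces a travelling depolarisation front. This sketch uses assumed parameters and omits the electrophysiological and patient-specific components of the thesis's model:

```python
import numpy as np

# Explicit finite-difference scheme for K_t = D K_xx + K (K - a)(1 - K),
# a bistable caricature of potassium-driven CSD; K = 0 is the healthy
# state, K = 1 the depolarised state, a the ignition threshold.
D, a = 1.0, 0.25
nx, dx, dt = 200, 0.5, 0.05          # grid and step; D*dt/dx**2 = 0.2 < 0.5
x = np.arange(nx) * dx               # domain [0, 100)
K = np.where(x < 5.0, 1.0, 0.0)      # ignite the wave at one end
for _ in range(2000):                # integrate to t = 100
    lap = (np.roll(K, 1) - 2.0 * K + np.roll(K, -1)) / dx**2
    lap[0] = 2.0 * (K[1] - K[0]) / dx**2       # no-flux boundaries
    lap[-1] = 2.0 * (K[-2] - K[-1]) / dx**2
    K = K + dt * (D * lap + K * (K - a) * (1.0 - K))
```

For this cubic nonlinearity the front travels at the known speed (1 - 2a) * sqrt(D / 2), so with a < 1/2 the depolarised state invades the healthy one, which is the qualitative behaviour of CSD; the full model replaces this caricature with anisotropic diffusion on a patient-specific cortical geometry.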

    Generalized averaged Gaussian quadrature and applications

    Get PDF
    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas is presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not, and they can be used as an adequate alternative for estimating the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal.
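For context, the classical averaged Gaussian rule of Laurie, which the optimal generalized rules extend, pairs the n-point Gauss rule with an (n+1)-point anti-Gauss rule whose leading error term has the opposite sign, so their average both raises the degree of exactness and yields an error estimate. The sketch below is for the Legendre weight on [-1, 1] and illustrates the classical construction only, not the paper's optimal generalized rules:

```python
import numpy as np

def legendre_jacobi(m):
    # Three-term recurrence coefficients for Legendre polynomials:
    # alpha_k = 0, beta_k = k^2 / (4 k^2 - 1), with first moment mu0 = 2.
    k = np.arange(1, m)
    return np.zeros(m), k**2 / (4.0 * k**2 - 1.0)

def rule_from_jacobi(alpha, sqrt_beta, mu0=2.0):
    # Golub-Welsch: nodes are eigenvalues of the Jacobi matrix, weights
    # come from the first components of the eigenvectors.
    J = np.diag(alpha) + np.diag(sqrt_beta, 1) + np.diag(sqrt_beta, -1)
    nodes, vecs = np.linalg.eigh(J)
    return nodes, mu0 * vecs[0] ** 2

def gauss(n):
    a, b = legendre_jacobi(n)
    return rule_from_jacobi(a, np.sqrt(b))

def anti_gauss(n):
    # Laurie's anti-Gauss rule: the (n+1)-point Jacobi matrix with the
    # last recurrence coefficient beta_n doubled.
    a, b = legendre_jacobi(n + 1)
    b[-1] *= 2.0
    return rule_from_jacobi(a, np.sqrt(b))

def averaged(n, f):
    # Averaged rule: mean of the Gauss and anti-Gauss approximations.
    xg, wg = gauss(n)
    xa, wa = anti_gauss(n)
    return 0.5 * (wg @ f(xg) + wa @ f(xa))

approx = averaged(5, np.exp)        # ≈ ∫_{-1}^{1} e^x dx = e - 1/e
```

The gap between the Gauss value and the averaged value then serves as a practical error estimate for the Gaussian rule, playing the role that Gauss-Kronrod extensions play when they exist.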

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Get PDF
    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. It is the aim of the seminar to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described.