9 research outputs found

    Two-compartment neuronal spiking model expressing brain-state specific apical-amplification, -isolation and -drive regimes

    Full text link
    There is mounting experimental evidence that brain-state-specific neural mechanisms, supported by connectomic architectures, serve to combine past and contextual knowledge with the current, incoming flow of evidence (e.g. from sensory systems). Such mechanisms are distributed across multiple spatial and temporal scales and require dedicated support at the level of individual neurons and synapses. A prominent feature of the neocortex is the structure of large, deep pyramidal neurons, which show a peculiar separation between an apical dendritic compartment and a basal dendritic/peri-somatic compartment, with distinctive patterns of incoming connections and brain-state-specific activation mechanisms, namely apical-amplification, -isolation and -drive, associated with wakefulness, deeper NREM sleep stages and REM sleep, respectively. The cognitive roles of apical mechanisms have been demonstrated in behaving animals. In contrast, classical models of learning in spiking networks are based on single-compartment neurons that lack a description of mechanisms for combining apical and basal/somatic information. This work aims to provide the computational community with a two-compartment spiking neuron model that includes features essential for supporting brain-state-specific learning, together with a piecewise-linear transfer function (ThetaPlanes) at the highest abstraction level, to be used in large-scale bio-inspired artificial intelligence systems. A machine learning algorithm, constrained by a set of fitness functions, selected the parameters defining neurons expressing the desired apical mechanisms. Comment: 19 pages, 38 figures.
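    The abstract names a piecewise-linear transfer function (ThetaPlanes) without giving its form. As a hedged illustration only, the sketch below shows a generic piecewise-linear transfer combining basal and apical drive; the function names, thresholds, and the state-dependent apical gain are assumptions for illustration, not the paper's actual equations.

```python
import numpy as np

def piecewise_linear_transfer(v, v_theta=0.0, v_sat=1.0, gain=1.0):
    """Generic piecewise-linear transfer: zero below a threshold,
    a linear ramp between threshold and saturation, flat above."""
    out = gain * (v - v_theta) / (v_sat - v_theta)
    return np.clip(out, 0.0, gain)

def somatic_rate(basal, apical, apical_gain=1.5):
    """Hypothetical combination of the two compartments: a brain-state
    dependent gain on the apical input stands in for apical amplification
    (gain > 1), isolation (gain = 0) or drive (gain >> 1)."""
    return piecewise_linear_transfer(basal + apical_gain * apical)
```

    With `apical_gain = 0` the somatic output depends on basal input alone, mimicking apical isolation; in the paper the regime-specific parameters are selected by the fitness-constrained search, not hand-set.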

    The Neural Computations of Spatial Memory from Single Cells to Networks

    Get PDF
    Studies of spatial memory provide valuable insight into more general mnemonic functions, for, by observing the activity of cells such as place cells, one can follow a subject’s dynamic representation of a changing environment. I investigate how place cells resolve conflicting neuronal input signals by developing computational models that integrate synaptic inputs on two scales. First, I construct reduced models of morphologically accurate neurons that preserve neuronal structure and the spatial specificity of inputs. Second, I use a parallel implementation to examine the dynamics of a network of interconnected place cells. Both models elucidate possible roles for the inputs and mechanisms involved in spatial memory.
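    As background to the abstract above: place cells are commonly idealized with a Gaussian spatial tuning curve. The sketch below is a minimal, generic version of that idea; the field centre, width, and peak rate are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def place_cell_rate(pos, centre, width=0.1, peak_rate=20.0):
    """Firing rate (Hz) of an idealized place cell as a Gaussian
    function of the animal's position on a linear track."""
    return peak_rate * np.exp(-((pos - centre) ** 2) / (2.0 * width ** 2))

# Rate map over a 1 m linear track for a field centred at 0.5 m.
track = np.linspace(0.0, 1.0, 101)
rates = place_cell_rate(track, centre=0.5)
```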

    Analysis of network models with neuron-astrocyte interactions

    Get PDF

    Modelling and analysis of cortico-hippocampal interactions and dynamics during sleep and anaesthesia

    Get PDF
    The standard memory consolidation model assumes that new memories are temporarily stored in the hippocampus and later transferred to the neocortex, during deep sleep, for long-term storage, signifying the importance of studying functional and structural cortico-hippocampal interactions. Our work offers a thorough analysis of such interactions between neocortex and hippocampus, along with a detailed study of their intrinsic dynamics, from two complementary perspectives: statistical data analysis and computational modelling. The first part of this study reviews mathematical tools for assessing directional interactions in multivariate time series. We focus on the notion of Granger Causality and the related measure of generalised Partial Directed Coherence (gPDC), which we then apply, through a custom-built numerical package, to electrophysiological data from the medial prefrontal cortex (mPFC) and hippocampus of anaesthetized rats. Our gPDC analysis reveals a clear lateral-to-medial hippocampus connectivity and suggests a reciprocal information flow between mPFC and hippocampus, altered during cortical activity. The second part deals with modelling the sleep-related intrinsic rhythmic dynamics of the two areas and examining their coupling. We first reproduce a computational model of the cortical slow oscillation, a periodic alternation between activated (UP) states and neuronal silence. We then develop a new spiking network model of hippocampal areas CA3 and CA1, reproducing many of their intrinsic dynamics and exhibiting sharp wave-ripple complexes, and suggesting a novel mechanism for their generation based on CA1 interneuronal activity and recurrent inhibition. We finally couple the two models to study interactions between the slow oscillation and hippocampal activity. Our simulations suggest that the correlation between UP states and hippocampal spiking depends on the excitation-to-inhibition ratio induced by the mossy fibre input to CA3 and by a combination of the Schaffer collateral and temporoammonic inputs to CA1. These inputs are shown to affect reported correlations between UP states and ripples.
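    As a rough illustration of the Granger-causality idea underlying the analysis above (the thesis uses gPDC, a frequency-domain relative of this time-domain measure), here is a minimal pairwise sketch using ordinary least squares; the function names and model order are illustrative assumptions, not the thesis's custom package.

```python
import numpy as np

def lagged(series, order):
    """Matrix of lag-1..lag-order values, aligned with series[order:]."""
    n = len(series)
    return np.column_stack([series[order - k : n - k] for k in range(1, order + 1)])

def granger_causality(x, y, order=2):
    """Log-ratio Granger causality from y to x: compare the residual sum
    of squares of an AR fit on x's past alone against one that also
    includes y's past."""
    target = x[order:]
    Xr = lagged(x, order)                   # restricted model: past of x only
    Xf = np.hstack([Xr, lagged(y, order)])  # full model: past of x and y
    def rss(X):
        D = np.column_stack([np.ones(len(X)), X])
        beta, *_ = np.linalg.lstsq(D, target, rcond=None)
        resid = target - D @ beta
        return resid @ resid
    return np.log(rss(Xr) / rss(Xf))

# Synthetic check: 'drive' is white noise that feeds 'driven' with one lag,
# so causality should be detected in one direction only.
rng = np.random.default_rng(0)
n = 2000
drive = rng.standard_normal(n)
driven = np.zeros(n)
for t in range(1, n):
    driven[t] = 0.5 * driven[t - 1] + 0.8 * drive[t - 1] + 0.1 * rng.standard_normal()
```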

    Model Order Reduction for Modeling the Brain

    Get PDF
    In this thesis, we study the use of Model Order Reduction (MOR) methods for accelerating and reducing the computational burden of brain simulations. Mathematical modeling and numerical simulations are the primary tools of computational neuroscience, a field that strives to understand the brain by combining data and theories. Due to the complexity of brain cells and the neuronal networks they form, computer simulations cannot consider neuronal networks in biologically realistic detail. We apply MOR methods to derive lightweight reduced order models and show that they can approximate models of neuronal networks. Reduced order models may thus enable more detailed and large-scale simulations of neuronal systems. We selected several mathematical models that are used in neuronal network simulations, ranging from synaptic signaling to neuronal population models, to use as reduction targets in this thesis. We implemented the models and determined the mathematical requirements for applying MOR to each model. We then identified suitable MOR algorithms for each model and established efficient implementations of our selected methods. Finally, we evaluated the accuracy and speed of our reduced order models. Our studies apply MOR to model types that were not previously reduced using these methods, widening the possibilities for the use of MOR in computational neuroscience and deep learning. In summary, the results of this thesis show that MOR can be an effective acceleration strategy for neuronal network models, making it a valuable tool for building large-scale simulations of the brain.
MOR methods have the advantage that the reduced model can be used to reconstruct the original detailed model; the reduction process therefore does not discard variables or decrease morphological resolution. We identified Proper Orthogonal Decomposition (POD) combined with the Discrete Empirical Interpolation Method (DEIM) as the most suitable tool for reducing our selected models. Additionally, we implemented several recent advanced variants of these methods. The primary obstacle to applying MOR in neuroscience is the nonlinearity of neuronal models, and POD-DEIM can account for that complexity. Extensions of the Balanced Truncation and Iterative Rational Krylov Approximation methods for nonlinear systems also show promise, but have stricter requirements than POD-DEIM with regard to the structure of the original model. Excellent accuracy and acceleration were found when reducing a high-dimensional mean-field model of a neuronal network and chemical reactions in the synapse using the POD-DEIM method. We also found that a biophysical network, which models action potentials through ionic currents, benefits from the use of adaptive MOR methods that update the reduced model during the simulation phase. We further show that MOR can be integrated into deep learning networks and that MOR is an effective reduction strategy for convolutional networks, used for example in vision research. Our results validate MOR as a powerful tool for accelerating simulations of nonlinear neuronal networks. Based on the original publications of this thesis, we can conclude that several models and model types of neuronal phenomena that were not previously reduced can be successfully accelerated using MOR methods. In the future, integrating MOR into brain simulation tools will enable faster development of models and the extraction of new knowledge from numerical studies through improved model efficiency, resolution and scale.
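    A minimal sketch of the POD step in POD-DEIM: the reduced basis consists of the leading left singular vectors of a snapshot matrix, and a linear system is then Galerkin-projected onto it. The toy system, sizes, and integrator below are assumptions for illustration only, not the thesis's neuronal models.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper Orthogonal Decomposition: leading r left singular vectors
    of the snapshot matrix (states stored as columns)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

# Toy stable linear system x' = A x, reduced by Galerkin projection:
# A_r = V^T A V acts on r coordinates instead of n.
n, r = 200, 5
rng = np.random.default_rng(1)
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
x0 = rng.standard_normal(n)

# Collect snapshots along a forward-Euler trajectory.
dt, steps = 0.01, 100
X = np.empty((n, steps))
x = x0.copy()
for k in range(steps):
    X[:, k] = x
    x = x + dt * (A @ x)

V = pod_basis(X, r)
A_r = V.T @ A @ V   # reduced operator (r x r)
x_r = V.T @ x0      # reduced initial state
```

    Nonlinear terms break this simple projection, which is exactly where DEIM comes in: it interpolates the nonlinearity at a few selected rows instead of evaluating it in full dimension.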

    Deciphering the brainstem, hippocampal and whole-brain dynamics by neuronal-ensemble event signatures

    Get PDF
    Intracortically recorded brain signals display a rich variety of transient activities: brief, recurring episodes of deflection or oscillatory activity that reflect cooperative neural circuit mechanisms. These network patterns of activity, also called neural events, span multiple spatio-temporal scales and are believed to be basic computing elements during cognitive processes such as learning and off-line memory consolidation. However, both the large-scale and the microscopic-scale cooperative mechanisms associated with these episodes remain poorly understood. This knowledge gap arises partly from the methodological limitations of existing experimental approaches, specifically in measuring simultaneous micro- and macroscopic aspects of neuronal activity in the brain. Therefore, this dissertation sought to study the relationship between ongoing spontaneous neural events in the hippocampus, brainstem and thalamic structures at micro-, meso- and macroscopic scales by combining data from intracortical recordings, multi-compartmental network models, and functional magnetic resonance imaging (fMRI).

    Digital Implementation of the Two-Compartmental Pinsky-Rinzel Pyramidal Neuron Model

    No full text
    It is believed that brain-like computing systems can be achieved by the fusion of electronics and neuroscience. Optimized digital hardware implementations of neurons, the primary units of the nervous system, therefore play a vital role in neuromorphic applications. Moreover, one of the main features of pyramidal neurons in cortical areas is bursting activity, which has a critical role in synaptic plasticity. The Pinsky-Rinzel model is a nonlinear two-compartment model of the CA3 pyramidal cell that is widely used in neuroscience. In this paper, a modified Pinsky-Rinzel pyramidal model is proposed in which the complex nonlinear equations are replaced with piecewise-linear approximations. Next, a digital circuit is designed for the simplified model so that it can be implemented on low-cost digital hardware, such as a field-programmable gate array (FPGA). Both the original and the proposed models are simulated in MATLAB, and the digital circuit is then simulated in Vivado; the results are compared and shown to be in good agreement. Finally, the results of physical implementation on an FPGA are also presented. The presented circuit advances preceding designs in its ability to replicate essential characteristics of different firing responses, including bursting and spiking, in the compartmental model. This new circuit has various applications in neuromorphic engineering, such as the development of new neuro-inspired chips.
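    The paper's core idea, replacing nonlinear equations with piecewise-linear approximations suitable for FPGA arithmetic, can be illustrated on a generic sigmoidal gating curve. The curve, breakpoints, and grid below are illustrative assumptions, not the actual Pinsky-Rinzel rate functions.

```python
import numpy as np

def gating_curve(v):
    """Example nonlinear steady-state activation curve (illustrative,
    not the exact Pinsky-Rinzel equations)."""
    return 1.0 / (1.0 + np.exp(-(v + 40.0) / 5.0))

# Piecewise-linear approximation on a coarse grid of breakpoints,
# as one would tabulate for fixed-point hardware arithmetic.
breakpoints = np.linspace(-80.0, 0.0, 9)
values = gating_curve(breakpoints)

def pwl(v):
    """Evaluate the piecewise-linear approximation by interpolation."""
    return np.interp(v, breakpoints, values)

# Worst-case approximation error over the voltage range of interest.
v_grid = np.linspace(-80.0, 0.0, 1000)
max_err = np.max(np.abs(pwl(v_grid) - gating_curve(v_grid)))
```

    In hardware, `breakpoints` and `values` become a small lookup table and the interpolation a multiply-add, trading a few percent of accuracy for the removal of the exponential.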