593 research outputs found

    Improving Associative Memory in a Network of Spiking Neurons

    In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network due to its neurophysiological characteristics, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, first developed over 40 years ago. Within these models we can store information via changes in the strength of connections between simplified two-state model neurons. These memories can be recalled when a noisy or partial cue is presented to the network. The type of information such models can store is limited by the simplicity of their hard-limiting nodes, which typically use a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and realistic synaptic properties between cells. This model is based upon some of the many details we now know of the neuronal circuitry of the CA3 region. We implemented the model in computer software using NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons.
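The two-state associative nets described above can be sketched with Willshaw-style clipped Hebbian storage and a hard activation threshold. The network size, pattern count, and threshold below are illustrative choices, not parameters taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100   # neurons (matches the smaller network size used in the thesis)
M = 20    # number of stored patterns (illustrative)
K = 10    # active units per pattern (sparse coding, illustrative)

# Sparse binary patterns: K active units out of N.
patterns = np.zeros((M, N), dtype=int)
for p in patterns:
    p[rng.choice(N, K, replace=False)] = 1

# Clipped (binary) Hebbian storage: a synapse is 1 if its pre- and
# post-synaptic units were ever coactive in any stored pattern.
W = (patterns.T @ patterns > 0).astype(int)
np.fill_diagonal(W, 0)   # no self-connections in the recurrent net

def recall(cue, theta):
    """One synchronous recall step with a hard activation threshold."""
    return (cue @ W >= theta).astype(int)

# Partial cue: keep half of the active units of pattern 0.
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[K // 2:]] = 0

# Threshold: one less than the cue activity, since units lack self-inputs.
out = recall(cue, theta=int(cue.sum()) - 1)
```

Because every pair of units coactive in pattern 0 is connected, all of that pattern's units clear the threshold from the partial cue; recall errors in such nets come from crosstalk between overlapping stored patterns.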
    A principal cell type within the cortex is the pyramidal cell, the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, which are proportionally far fewer in number and form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity, and of incorporating spatially dependent connectivity, on the network during recall of previously stored information. In particular we implement a spiking neural network proposed by Sommer and Wennekers (2001). We consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve its recall quality. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity, a partial cue instantiated, and a global pseudo-inhibition. We investigate three methods. First, we apply localised disynaptic inhibition, which scales the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should reduce the variability in signal propagation between cells and provide further inhibition to help synchronise the network activity. Second, we add a persistent sodium channel to the cell body, which makes the activation threshold nonlinear: beyond a given membrane potential the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely high units) over the firing threshold.
    Finally, we incorporate spatial characteristics of the dendritic tree, which increase the probability that a modified synapse exists after 10% random connectivity has been applied throughout the network. We apply these spatial characteristics by scaling the conductance weights of excitatory synapses, simulating the loss of potential at synapses in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations of a global inhibitory circuit. The networks are configured with: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, where 10 pyramidal cells connect to each basket cell; and, finally, 100% basket cells providing feedback inhibition. These networks are compared and contrasted for their effect on recall quality and network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggest that inhibition and cellular dynamics play a pivotal role in learning and memory.

    Inhomogeneous sparseness leads to dynamic instability during sequence memory recall in a recurrent neural network model.

    Theoretical models of associative memory generally assume most of their parameters to be homogeneous across the network. Conversely, biological neural networks exhibit high variability of structural as well as activity parameters. In this paper, we extend the classical clipped learning rule of Willshaw to networks with inhomogeneous sparseness, i.e., where the number of active neurons may vary across memory items. We evaluate this learning rule for sequence memory networks with instantaneous feedback inhibition and show that, unsurprisingly, memory capacity degrades with increased variability in sparseness. The loss of capacity, however, is very small for short sequences of fewer than about 10 associations. Most interestingly, we further show that, owing to feedback inhibition, overly large patterns are much less detrimental to memory capacity than overly small patterns.
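A minimal sketch of the clipped hetero-associative rule with variable pattern sizes, using a winners-take-all threshold as a crude stand-in for instantaneous feedback inhibition; all parameter values and the overlap measure are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 200                  # network size (illustrative)
T = 8                    # sequence length, below the ~10-association regime
mean_k, jitter = 12, 4   # mean pattern size and its variability

# Sequence of patterns with inhomogeneous sparseness: the number of
# active neurons varies from item to item.
sizes = rng.integers(mean_k - jitter, mean_k + jitter + 1, size=T)
seq = np.zeros((T, N), dtype=int)
for t in range(T):
    seq[t, rng.choice(N, sizes[t], replace=False)] = 1

# Clipped hetero-associative storage: W[i, j] = 1 if neuron j was active
# at step t and neuron i at step t + 1, for any stored transition.
W = (seq[1:].T @ seq[:-1] > 0).astype(int)

def step(x):
    """Recall one transition; instantaneous feedback inhibition is modelled
    by letting only the most strongly driven neurons fire."""
    drive = W @ x
    if drive.max() == 0:
        return np.zeros_like(x)
    return (drive == drive.max()).astype(int)

# Replay the sequence from its first item and track recall quality.
x = seq[0].copy()
overlaps = []
for t in range(1, T):
    x = step(x)
    overlaps.append((x * seq[t]).sum() / seq[t].sum())
```

Varying `jitter` shows the effect the paper quantifies: the more pattern sizes fluctuate, the sooner spurious units reach the inhibition-set threshold and degrade replay.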

    Towards a continuous dynamic model of the Hopfield theory on neuronal interaction and memory storage

    The purpose of this work is to study the Hopfield model of neuronal interaction and memory storage, in particular the convergence to the stored patterns. Since the hypothesis of symmetric synapses does not hold in the brain, we study how the model can be extended to the case of asymmetric synapses using a probabilistic approach. We then focus on the description of another feature of the memory process and the brain: oscillations. Using the Kuramoto model we are able to describe them completely, capturing the synchronization that emerges between neurons. Our aim is therefore to understand how and why neurons can be seen as oscillators, and to establish a strong link between this model and the Hopfield approach.
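The Kuramoto description invoked above can be sketched directly: phase oscillators with heterogeneous natural frequencies synchronize once the coupling is strong enough. The coupling strength, frequency spread, and integration settings below are illustrative, not the thesis's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Kuramoto model: n all-to-all coupled phase oscillators.
n, K, dt, steps = 50, 2.0, 0.01, 2000
omega = rng.normal(0.0, 0.5, n)        # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases

def order_parameter(theta):
    """r in [0, 1] measures phase synchronization across the population."""
    return np.abs(np.exp(1j * theta).mean())

r0 = order_parameter(theta)
for _ in range(steps):
    # Euler step of dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + (K / n) * coupling)

r1 = order_parameter(theta)
```

With this coupling well above the synchronization threshold, the order parameter rises from its small random-phase value toward 1, which is the synchronized regime the thesis links to the Hopfield picture of neurons as oscillators.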

    Linear and nonlinear approaches to unravel dynamics and connectivity in neuronal cultures

    [eng] In the present thesis, we propose to explore neuronal circuits at the mesoscale, an approach in which one monitors small populations of a few thousand neurons and concentrates on the emergence of collective behavior. In our case, we carried out such an exploration both experimentally and numerically, adopting an analysis perspective centered on time series analysis and dynamical systems. Experimentally, we used neuronal cultures and prepared more than 200 of them, which were monitored using fluorescence calcium imaging. By adjusting the experimental conditions, we could set two basic arrangements of neurons, namely homogeneous and aggregated. In the experiments, we carried out two major explorations, namely development and disintegration. In the former we investigated changes in network behavior as it matured; in the latter we applied a drug that reduced neuronal interconnectivity. All the subsequent analyses and modeling along the thesis are based on these experimental data. Numerically, the thesis comprised two aspects. The first one was oriented towards a simulation of neuronal connectivity and dynamics. The second one was oriented towards the development of linear and nonlinear analysis tools to unravel dynamic and connectivity aspects of the measured experimental networks. For the first aspect, we developed a sophisticated software package to simulate single neuronal dynamics using a quadratic integrate-and-fire model with adaptation and depression. This model was plugged into a synthetic graph in which the nodes are neurons and the edges are connections. The graph was created using spatial embedding and realistic biology. We carried out hundreds of simulations in which we tuned the density of neurons, their spatial arrangement and the characteristics of the fluorescence signal.
As a key result, we observed that homogeneous networks required a substantial number of neurons to fire and exhibit collective dynamics, and that the presence of aggregation significantly reduced the number of required neurons. For the second aspect, data analysis, we analyzed experiments and simulations to tackle three major aspects: network dynamics reconstruction using linear descriptions, dynamics reconstruction using nonlinear descriptors, and the assessment of neuronal connectivity from activity data alone. For the linear study, we analyzed all experiments using the power spectral density (PSD), and observed that it sufficed to describe the development of the network or its disintegration. The PSD also allowed us to distinguish between healthy and unhealthy networks, and revealed dynamical heterogeneities across the network. For the nonlinear study, we used techniques in the context of recurrence plots. We first characterized the embedding dimension m and the time delay δ for each experiment, built the respective recurrence plots, and extracted key information about the dynamics of the system through different descriptors. Experimental results were contrasted with numerical simulations. After analyzing about 400 time series, we concluded that the degree of dynamical complexity in neuronal cultures changes both during development and disintegration. We also observed that the healthier the culture, the higher its dynamical complexity. Finally, for the reconstruction study, we first used numerical simulations to determine the best measure of 'statistical interdependence' between any two neurons, and selected Generalized Transfer Entropy. We then analyzed the experimental data. We concluded that young cultures have a weak connectivity that increases along maturation. Aggregation increases average connectivity and, more interestingly, also the assortativity, i.e., the tendency of highly connected nodes to connect with other highly connected nodes.
In turn, this assortativity may delineate important aspects of the dynamics of the network. Overall, the results show that spatial arrangement and neuronal dynamics are able to shape a very rich repertoire of dynamical states of varying complexity. [cat] The ability of neuronal tissue to process and transmit information efficiently depends on the intrinsic dynamical properties of the neurons and on the connectivity between them. This thesis proposes to explore different experimental and simulation techniques to analyse the dynamics and connectivity of embryonic rat cortical neural networks. Experimentally, recording the spontaneous activity of a population of cultured neurons with a fast camera and fluorescence techniques makes it possible to monitor the individual activity of each neuron in a controlled way, as well as to modify its connectivity. Together, these tools allow the study of the emergent collective behaviour of the neuronal population. To simulate the patterns observed in the laboratory, we implemented a random metric model of neuronal growth to simulate the physical network of connections between neurons, and a quadratic integrate-and-fire model with adaptation and depression to model the wide spectrum of neuronal dynamics at reduced computational cost. We characterised the global and individual dynamics of the neurons and correlated them with the underlying structure using linear and nonlinear time-series techniques. Spectral analysis allowed us to describe the development of, and the changes in connectivity within, the cultures, as well as to distinguish healthy cultures from pathological ones.
Reconstructing the underlying dynamics using embedding methods and recurrence plots allowed us to detect different dynamical transitions, with the corresponding gain or loss of dynamical complexity and richness of the culture, across the different experimental studies. Finally, in order to reconstruct the internal connectivity, we used simulations to test different quantifiers of the statistical dependence between pairs of neurons, ultimately selecting the generalised transfer entropy method. We then characterised the networks with different parameters. Despite showing certain traits of 'small-world' networks, our cultures exhibit an 'exponential' or 'skewed' degree distribution for young and mature cultures, respectively. Additionally, we observed that homogeneous networks are disassortative, whereas networks with an increasing level of spatial aggregation are assortative. This property strongly impacts the transmission, resilience, and synchronisation of the network.

    Identifying Network Correlates of Memory Consolidation

    Neuronal spiking activity carries information about our experiences in the waking world, but exactly how the brain can quickly and efficiently encode sensory information into a useful neural code and then consolidate that information into memory remains a mystery. While neuronal networks are known to play a vital role in these processes, disentangling the properties of network activity from the complex spiking dynamics observed is a formidable challenge, requiring collaborations across scientific disciplines. In this work, I outline my contributions in computational modeling and data analysis toward understanding how network dynamics facilitate memory consolidation. For experimental perspective, I investigate hippocampal recordings of mice that are subjected to contextual fear conditioning and subsequently undergo sleep-dependent fear memory consolidation. First, I outline the development of a functional connectivity algorithm which rapidly and robustly assesses network structure based on neuronal spike timing. I show that the relative stability of these functional networks can be used to identify global network dynamics, revealing that an increase in functional network stability correlates with successful fear memory consolidation in vivo. Using an attractor-based model to simulate memory encoding and consolidation, I go on to show that dynamics associated with a second-order phase transition, at a critical point in phase-space, are necessary for recruiting additional neurons into network dynamics associated with memory consolidation. I show that successful consolidation subsequently shifts dynamics away from the critical point and towards sub-critical dynamics. Investigations of in vivo spiking dynamics likewise revealed that hippocampal dynamics during non-rapid-eye-movement (NREM) sleep show features of being near a critical point, and that fear memory consolidation leads to a shift in dynamics.
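The functional-connectivity idea (inferring a network from spike timing and then tracking its stability) can be illustrated with a simple correlation-based sketch. The thesis's actual algorithm is more sophisticated; the synthetic spike trains, threshold, and stability measure here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

n_neurons, n_bins = 20, 5000
# Synthetic binned spike trains; neurons 0 and 1 are artificially
# correlated to mimic a functional connection.
spikes = (rng.random((n_neurons, n_bins)) < 0.02).astype(float)
spikes[1] = np.clip(spikes[0] + (rng.random(n_bins) < 0.01), 0, 1)

def functional_network(spikes, threshold):
    """Pairwise Pearson correlation of binned spike trains, thresholded
    into a binary functional-connectivity graph."""
    c = np.corrcoef(spikes)
    np.fill_diagonal(c, 0.0)
    return (c > threshold).astype(int)

A = functional_network(spikes, threshold=0.3)

def network_stability(A1, A2):
    """Edge overlap (shared edges / union) between two functional networks,
    a simple proxy for the stability measure described above."""
    inter = np.logical_and(A1, A2).sum()
    union = np.logical_or(A1, A2).sum()
    return inter / union if union else 1.0
```

Comparing `functional_network` outputs from successive recording epochs with `network_stability` gives a scalar trace over time, which is the kind of quantity the work correlates with consolidation success.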
Finally, I investigate the role of NREM sleep in facilitating memory consolidation using a conductance-based model of neuronal activity that can easily switch between modes of activity loosely representing waking and NREM sleep. Analysis of model simulations revealed that oscillations associated with NREM sleep promote a phase-based coding of information; neurons with high firing rates during periods of wake lead spiking activity during NREM oscillations. I show that when phase-coding is active, both in simulations and in vivo, synaptic plasticity selectively strengthens the input to neurons firing late in the oscillation while simultaneously reducing input to neurons firing early in the oscillation. The effect is a net homogenization of firing rates, observed in multiple other studies, which subsequently leads to recruitment of new neurons into a memory engram and information transfer from fast-firing neurons to slow-firing neurons. Taken together, my work outlines important, newly discovered features of neuronal network dynamics related to memory encoding and consolidation: networks near criticality promote recruitment of additional neurons into stable firing patterns through NREM-associated oscillations and subsequently consolidate information into memories through phase-based coding.
    PhD, Biophysics, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/162991/1/qmskill_1.pd

    Introducing numerical bounds to improve event-based neural network simulation

    Although the spike trains in neural networks are mainly constrained by the neural dynamics itself, global temporal constraints (refractoriness, time precision, propagation delays, ...) also have to be taken into account. These constraints are revisited in this paper in order to use them in event-based simulation paradigms. We first review the constraints and discuss their consequences at the simulation level, showing how event-based simulation of time-constrained networks can be simplified in this context: the underlying data structures are greatly simplified, while event-based and clock-based mechanisms can easily be mixed. These ideas are applied to the event-based simulation of punctual conductance-based generalized integrate-and-fire neural networks, and spike-response model simulations are also revisited within this framework. As an outcome, we implement and evaluate a fast, minimal alternative that complements existing event-based simulation methods and can simulate a range of interesting neuron models.
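The event-based paradigm discussed above can be illustrated with a toy event-queue simulation of leaky integrate-and-fire neurons with propagation delays and refractoriness. The network, parameters, and units below are illustrative, not the paper's model.

```python
import heapq
import math

# Event-driven simulation of a tiny LIF network with propagation delays.
# Between spikes, each membrane potential decays analytically, so the
# simulation only advances at event times (all parameters illustrative).

TAU, V_THRESH, V_RESET = 20.0, 1.0, 0.0
WEIGHT, DELAY, REFRACTORY = 1.2, 1.5, 2.0

class Neuron:
    def __init__(self):
        self.v = 0.0
        self.last_update = 0.0
        self.refractory_until = 0.0

    def potential_at(self, t):
        # Analytic exponential decay since the last event on this neuron.
        return self.v * math.exp(-(t - self.last_update) / TAU)

    def receive(self, t, w):
        if t < self.refractory_until:
            return False                 # spike arrives during refractoriness
        self.v = self.potential_at(t) + w
        self.last_update = t
        if self.v >= V_THRESH:
            self.v = V_RESET
            self.refractory_until = t + REFRACTORY
            return True                  # neuron fires
        return False

# Toy feed-forward chain 0 -> 1 -> 2, driven by two external inputs to 0.
connections = {0: [1], 1: [2], 2: []}
neurons = [Neuron() for _ in range(3)]

events = [(0.0, 0, WEIGHT), (0.5, 0, WEIGHT)]   # (time, target, weight)
heapq.heapify(events)
spikes = []

while events:
    t, target, w = heapq.heappop(events)
    if neurons[target].receive(t, w):
        spikes.append((t, target))
        for post in connections[target]:
            heapq.heappush(events, (t + DELAY, post, WEIGHT))
```

Between events the membrane potential is never stepped on a clock; it is reconstructed analytically when the next spike arrives, which is what makes the event-driven approach cheap for sparsely firing networks. Here the second input at t = 0.5 falls inside neuron 0's refractory period and is correctly discarded.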

    Two photon interrogation of hippocampal subregions CA1 and CA3 during spatial behaviour

    The hippocampus is crucial for spatial navigation and episodic memory formation. Hippocampal place cells exhibit spatially selective activity within an environment and form the neural basis of a cognitive map of space which supports these mnemonic functions. Hebb's (1949) postulate regarding the creation of cell assemblies is seen as the pre-eminent model of learning in neural systems. Investigating changes to the hippocampal representation of space during an animal's exploration of its environment provides an opportunity to observe Hebbian learning at the population and single-cell level. When exploring new environments, animals form spatial memories that are updated with experience and retrieved upon re-exposure to the same environment, but how this is achieved by different subnetworks in hippocampal CA1 and CA3, and how these circuits encode distinct memories of similar objects and events, remains unclear. To test these ideas, we developed an experimental strategy and detailed protocols for simultaneously recording from CA1 and CA3 populations with two-photon (2P) imaging. We also developed a novel all-optical protocol to simultaneously activate and record from ensembles of CA3 neurons. We used these approaches to show that targeted activation of CA3 neurons results in an increasing excitatory amplification seen only in CA3 cells when stimulating other CA3 cells, and not in CA1, perhaps reflecting the greater number of recurrent connections in CA3. To probe hippocampal spatial representations, we titrated input to the network by morphing VR environments during spatial navigation and assessed the local CA3 as well as downstream CA1 responses. We found that CA1 and CA3 neural population responses behave nonlinearly, consistent with attractor dynamics associated with the two stored representations.
We interpret our findings as supporting classic theories of Hebbian learning and as a first step toward uncovering the relationship between hippocampal neural circuit activity and the computations implemented by its dynamics. Establishing this relationship is paramount to demystifying the neural underpinnings of cognition.