
    Identifying Network Correlates of Memory Consolidation

    Neuronal spiking activity carries information about our experiences in the waking world, but exactly how the brain quickly and efficiently encodes sensory information into a useful neural code and then consolidates that information into memory remains a mystery. While neuronal networks are known to play a vital role in these processes, disentangling the properties of network activity from the complex spiking dynamics observed is a formidable challenge, requiring collaborations across scientific disciplines. In this work, I outline my contributions in computational modeling and data analysis toward understanding how network dynamics facilitate memory consolidation. For an experimental perspective, I investigate hippocampal recordings of mice that are subjected to contextual fear conditioning and subsequently undergo sleep-dependent fear memory consolidation. First, I outline the development of a functional connectivity algorithm which rapidly and robustly assesses network structure based on neuronal spike timing. I show that the relative stability of these functional networks can be used to identify global network dynamics, revealing that an increase in functional network stability correlates with successful fear memory consolidation in vivo. Using an attractor-based model to simulate memory encoding and consolidation, I go on to show that dynamics associated with a second-order phase transition, at a critical point in phase space, are necessary for recruiting additional neurons into network dynamics associated with memory consolidation. I show that successful consolidation subsequently shifts dynamics away from the critical point and towards sub-critical dynamics. Investigations of in vivo spiking dynamics likewise revealed that hippocampal dynamics during non-rapid-eye-movement (NREM) sleep show features of being near a critical point and that fear memory consolidation leads to a shift in dynamics. Finally, I investigate the role of NREM sleep in facilitating memory consolidation using a conductance-based model of neuronal activity that can easily switch between modes of activity loosely representing waking and NREM sleep. Analysis of model simulations revealed that oscillations associated with NREM sleep promote a phase-based coding of information: neurons with high firing rates during periods of wake lead spiking activity during NREM oscillations. I show that when phase coding is active, both in simulations and in vivo, synaptic plasticity selectively strengthens the input to neurons firing late in the oscillation while simultaneously reducing input to neurons firing early in the oscillation. The effect is a net homogenization of firing rates, observed in multiple other studies, which subsequently leads to recruitment of new neurons into a memory engram and to information transfer from fast-firing neurons to slow-firing neurons. Taken together, my work outlines important, newly discovered features of neuronal network dynamics related to memory encoding and consolidation: networks near criticality promote recruitment of additional neurons into stable firing patterns through NREM-associated oscillations and subsequently consolidate information into memories through phase-based coding.
    PhD, Biophysics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162991/1/qmskill_1.pd
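    To make the functional-connectivity step above concrete, the sketch below builds a thresholded correlation network from spike times and measures its stability across successive windows. It is a generic illustration rather than the thesis's actual algorithm: the correlation-and-threshold construction, the cosine-similarity stability measure, and all parameter values (bin size, threshold, window length) are assumptions made for the example.

```python
# Minimal sketch: functional network from spike timing, plus a stability
# measure across windows. Synthetic data and parameters are illustrative.
import numpy as np

def functional_network(spike_times, n_neurons, t_start, t_stop,
                       bin_size=0.1, threshold=0.2):
    """Bin spikes in a window and link neuron pairs whose binned
    activity is strongly correlated (thresholded correlation matrix)."""
    bins = np.arange(t_start, t_stop + bin_size, bin_size)
    counts = np.array([np.histogram(spike_times[i], bins)[0]
                       for i in range(n_neurons)])
    corr = np.corrcoef(counts)
    np.fill_diagonal(corr, 0.0)
    return (np.nan_to_num(corr) > threshold).astype(int)

def network_stability(net_a, net_b):
    """Cosine similarity between adjacency matrices: 1 means the
    functional network is unchanged between consecutive windows."""
    a, b = net_a.flatten(), net_b.flatten()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# Example with synthetic Poisson spike trains (20 neurons, 20 s)
rng = np.random.default_rng(0)
spikes = [np.sort(rng.uniform(0, 20, rng.poisson(100))) for _ in range(20)]
nets = [functional_network(spikes, 20, t, t + 5) for t in (0, 5, 10, 15)]
print([round(network_stability(nets[i], nets[i + 1]), 2) for i in range(3)])
```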

    Contributions to models of single neuron computation in striatum and cortex

    A deeper understanding is required of how a single neuron utilizes its nonlinear subcellular devices to generate complex neuronal dynamics. Two compartmental models, of cortex and striatum, are accurately formulated and firmly grounded in the experimental reality of electrophysiology to address two questions: how striatal projection neurons implement location-dependent dendritic integration to carry out association-based computation, and how cortical pyramidal neurons strategically exploit the type and location of synaptic contacts to enrich their computational capacities.
    Neuronal cells transform continuous signals into discrete time series of action potentials and thereby encode percepts and internal states. Compartmental models of nerve cells in cortex and striatum, grounded in electrophysiology, are formulated to address specific questions: i) To what extent do striatal projection neurons implement location-dependent dendritic integration in order to realize association-based computations? ii) To what extent do cortical cells exploit the type and location of synaptic contacts to optimize the computations they carry out?
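    As a drastically simplified illustration of the compartmental approach described above, the sketch below integrates a two-compartment soma-dendrite neuron with a spike-and-reset soma. The structure and all parameter values are generic assumptions, not the models formulated in the thesis.

```python
# Minimal sketch: two coupled leaky compartments (soma + dendrite) with a
# somatic spike threshold, integrated with forward Euler. Illustrative only.
import numpy as np

def simulate_two_compartment(T=200.0, dt=0.1, I_dend=3.0,
                             g_L=0.05, g_c=0.1, C=1.0, E_L=-70.0,
                             V_th=-50.0, V_reset=-65.0):
    """Dendritic current I_dend spreads through the coupling conductance
    g_c and drives somatic spiking (spike-and-reset at V_th)."""
    n = int(T / dt)
    V_s = np.full(n, E_L)   # somatic voltage (mV)
    V_d = np.full(n, E_L)   # dendritic voltage (mV)
    spikes = []
    for t in range(1, n):
        dV_s = (-g_L * (V_s[t-1] - E_L) + g_c * (V_d[t-1] - V_s[t-1])) / C
        dV_d = (-g_L * (V_d[t-1] - E_L) + g_c * (V_s[t-1] - V_d[t-1]) + I_dend) / C
        V_s[t] = V_s[t-1] + dt * dV_s
        V_d[t] = V_d[t-1] + dt * dV_d
        if V_s[t] >= V_th:          # somatic spike, then reset
            spikes.append(t * dt)
            V_s[t] = V_reset
    return V_s, V_d, spikes

_, _, spike_times = simulate_two_compartment()
print(f"somatic spikes in 200 ms: {len(spike_times)}")
```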

    The influence of dopamine on prediction, action and learning

    In this thesis I explore functions of the neuromodulator dopamine in the context of autonomous learning and behaviour. I first investigate dopaminergic influence within a simulated agent-based model, demonstrating how modulation of synaptic plasticity can enable reward-mediated learning that is both adaptive and self-limiting. I describe how this mechanism is driven by the dynamics of agent-environment interaction and consequently suggest roles for both complex spontaneous neuronal activity and specific neuroanatomy in the expression of early, exploratory behaviour. I then show how the observed response of dopamine neurons in the mammalian basal ganglia may also be modelled by similar processes involving dopaminergic neuromodulation and cortical spike-pattern representation within an architecture of counteracting excitatory and inhibitory neural pathways, reflecting gross mammalian neuroanatomy. Significantly, I demonstrate how combined modulation of synaptic plasticity and neuronal excitability enables specific (timely) spike-patterns to be recognised and selectively responded to by efferent neural populations, thereby providing a novel spike-timing-based implementation of the hypothetical ‘serial-compound’ representation suggested by temporal difference learning. I subsequently discuss more recent work, focused upon modelling the complex spike-patterns observed in cortex. Here, I describe neural features likely to contribute to the expression of such activity and present novel simulation software allowing for interactive exploration of these factors in a more comprehensive neural model that implements both dynamical synapses and dopaminergic neuromodulation. I conclude by describing how the work presented ultimately suggests an integrated theory of autonomous learning, in which direct coupling of agent and environment supports a predictive-coding mechanism, bootstrapped in early development by a more fundamental process of trial-and-error learning.
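    Dopaminergic modulation of synaptic plasticity of the kind described above is commonly modelled with a three-factor ("reward-modulated STDP") rule, in which pairwise spike-timing changes are stored in an eligibility trace and only committed to the weights when a dopamine signal arrives. The sketch below shows that generic rule; the rates, time constants, and the stand-in postsynaptic spike train are assumptions for the example, not the thesis's model.

```python
# Minimal sketch of reward-modulated (three-factor) STDP with an
# eligibility trace gated by a phasic dopamine signal. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_pre, T, dt = 50, 1000, 1.0          # 50 inputs, 1000 ms, 1 ms steps
tau_elig, tau_pre, tau_post = 200.0, 20.0, 20.0
A_plus, A_minus, lr = 0.01, 0.012, 0.5

w = rng.uniform(0.0, 0.5, n_pre)      # synaptic weights
elig = np.zeros(n_pre)                # eligibility traces
x_pre = np.zeros(n_pre)               # presynaptic STDP traces
y_post = 0.0                          # postsynaptic STDP trace

for t in range(T):
    pre = rng.random(n_pre) < 0.02              # ~20 Hz Poisson inputs
    post = rng.random() < 0.01                  # stand-in postsynaptic spike
    x_pre = x_pre * np.exp(-dt / tau_pre) + pre
    y_post = y_post * np.exp(-dt / tau_post) + post
    # pair-based STDP is deposited into the eligibility trace, not the weight
    elig *= np.exp(-dt / tau_elig)
    elig += A_plus * x_pre * post - A_minus * y_post * pre
    dopamine = 1.0 if t == 800 else 0.0         # phasic reward at t = 800 ms
    w = np.clip(w + lr * dopamine * elig, 0.0, 1.0)

print("mean weight after reward:", round(float(w.mean()), 3))
```

    The design point the rule illustrates is that plasticity is only expressed when the reward signal coincides with a non-zero eligibility trace, which is one way reward-mediated learning can remain both adaptive and self-limiting.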

    Self Organisation and Hierarchical Concept Representation in Networks of Spiking Neurons

    The aim of this work is to introduce modular processing mechanisms for cortical functions implemented in networks of spiking neurons. Neural maps are a feature of cortical processing found to be generic throughout sensory cortical areas, and self-organisation to the fundamental properties of input spike trains has been shown to be an important property of cortical organisation. Additionally, oscillatory behaviour, temporal coding of information, and learning through spike-timing-dependent plasticity (STDP) are all frequently observed in the cortex. The traditional self-organising map (SOM) algorithm attempts to capture the computational properties of this cortical self-organisation in a neural network. As such, a cognitive module for a spiking SOM using oscillations, phasic coding, and STDP has been implemented. This model is capable of mapping to distributions of input data in a manner consistent with the traditional SOM algorithm, and of categorising generic input data sets. Higher-level cortical processing areas appear to feature a hierarchical category structure that is founded on a feature-based object representation. The spiking SOM model is therefore extended to accept input patterns in the form of sets of binary feature-object relations, such as those seen in the field of formal concept analysis. It is demonstrated that this extended model is capable of learning to represent the hierarchical conceptual structure of an input data set using the existing learning scheme. Furthermore, manipulations of network parameters allow the level of hierarchy used for either learning or recall to be adjusted, and the network is capable of learning comparable representations when trained with incomplete input patterns. Together these two modules provide related approaches to the generation of both topographic mapping and hierarchical representation of input spaces that can potentially be combined and used as the basis for advanced spiking-neuron models of the learning of complex representations.
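    For reference, the sketch below implements the traditional SOM update that the spiking model is said to reproduce: find the best-matching unit for each input and pull a shrinking Gaussian neighbourhood of map units toward it. The grid size, decay schedules, and toy two-dimensional data are illustrative assumptions; the spiking, oscillation-based implementation of the thesis is not reproduced here.

```python
# Minimal sketch of the classic self-organising map (SOM) update rule.
import numpy as np

rng = np.random.default_rng(2)
grid = 8                                   # 8x8 map
W = rng.random((grid, grid, 2))            # one weight vector per map unit
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

def train_som(data, W, epochs=2000, lr0=0.5, sigma0=3.0):
    for t, x in enumerate(data[rng.integers(0, len(data), epochs)]):
        lr = lr0 * np.exp(-t / epochs)         # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)   # shrinking neighbourhood
        # best-matching unit: map unit whose weights are closest to the input
        bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (grid, grid))
        # Gaussian neighbourhood around the BMU on the map grid
        d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        W += lr * h * (x - W)                  # pull neighbourhood toward input
    return W

data = rng.random((500, 2))                # uniform 2-D input distribution
W = train_som(data, W)
print("weight range after training:", W.min().round(2), W.max().round(2))
```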

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One


    Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers

    This PhD thesis is focused on the central idea that single neurons in the brain should be regarded as temporally precise and highly complex spatio-temporal pattern recognizers. This is opposed to the view, prevalent among most neuroscientists today, of biological neurons as simple and mainly spatial pattern recognizers. In this thesis, I will attempt to demonstrate that this is an important distinction, predominantly because the above-mentioned computational properties of single neurons have far-reaching implications with respect to the various brain circuits that neurons compose, and for how information is encoded by neuronal activity in the brain. Namely, these particular "low-level" details at the single-neuron level have substantial system-wide ramifications. In the introduction we will highlight the main components that comprise a neural microcircuit that can perform useful computations and illustrate the inter-dependence of these components from a system perspective. In chapter 1 we discuss the great complexity of the spatio-temporal input-output relationship of cortical neurons, which results from the morphological structure and biophysical properties of the neuron. In chapter 2 we demonstrate that single neurons can generate temporally precise output patterns in response to specific spatio-temporal input patterns with a very simple, biologically plausible learning rule. In chapter 3, we use the differentiable deep-network analog of a realistic cortical neuron as a tool to approximate the gradient of the neuron's output with respect to its input, and use this capability in an attempt to teach the neuron to perform a nonlinear XOR operation. In chapter 4 we expand on chapter 3 and describe the extension of our ideas to neuronal networks composed of many realistic biological spiking neurons that represent either small microcircuits or entire brain regions.
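    The claim in chapter 2, that a simple learning rule can teach a neuron to respond selectively to a spatio-temporal spike pattern, can be illustrated with a tempotron-style toy model: a leaky integrator whose weights are nudged, in proportion to each synapse's recent activity at the time of peak voltage, whenever the neuron's decision is wrong. The sketch below is that generic stand-in, with assumed time constants and random patterns, not the model or learning rule used in the thesis.

```python
# Minimal sketch: train a leaky integrate-and-fire-like unit to fire for one
# spatio-temporal spike pattern and stay silent for another. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n_in, T = 100, 200                          # 100 inputs, 200 ms at 1 ms steps
tau_m, tau_s, V_th = 20.0, 5.0, 1.0

def make_pattern():
    return rng.random((T, n_in)) < 0.005    # sparse random spike raster

target, distractor = make_pattern(), make_pattern()

def run(pattern, w):
    """Return whether peak voltage crosses threshold, plus the per-synapse
    activity trace recorded at the time of the voltage peak."""
    V, trace = 0.0, np.zeros(n_in)
    V_max, trace_at_max = -np.inf, np.zeros(n_in)
    for t in range(T):
        trace = trace * np.exp(-1.0 / tau_s) + pattern[t]
        V = V * np.exp(-1.0 / tau_m) + w @ pattern[t]
        if V > V_max:
            V_max, trace_at_max = V, trace.copy()
    return V_max >= V_th, trace_at_max

w = rng.normal(0.0, 0.02, n_in)
for epoch in range(200):                    # error-driven (tempotron-style) updates
    for pat, should_fire in ((target, True), (distractor, False)):
        fired, trace = run(pat, w)
        if fired != should_fire:
            w += (0.01 if should_fire else -0.01) * trace

print("fires on target:", run(target, w)[0],
      "| fires on distractor:", run(distractor, w)[0])
```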