
    Course 13: On the evolution of the brain


    Embodying a Computational Model of Hippocampal Replay for Robotic Reinforcement Learning

    Hippocampal reverse replay has been speculated to play an important role in biological reinforcement learning since its discovery over a decade ago. Whilst a number of computational models have recently emerged in an attempt to understand the dynamics of hippocampal replay, there has been little progress in testing and implementing these models in real-world robotic settings. The first part of this work therefore presents a bio-inspired hippocampal CA3 network model that runs in real time to produce reverse replays of recent spatio-temporal sequences, represented as place cell activities, in a robotic spatial navigation task. The model is based on two very recent computational models of hippocampal reverse replay. An analysis of these models shows that, in their original forms, they are each insufficient for effective performance when applied to a robot; combining particular elements from each, however, yields a computational model that is sufficient for application in a robotic task. Having a model of reverse replay applied successfully in a robot provides the groundwork necessary for testing the ways in which reverse replay contributes to reinforcement learning. The second part of the work builds on a previous reinforcement learning neural network model of a basic hippocampal-striatal circuit using a three-factor learning rule. By integrating reverse replays into this reinforcement learning model, results show that reverse replay, with its ability to replay the recent trajectory both in the hippocampal circuit and the striatal circuit, can speed up the learning process. In addition, in situations where the original reinforcement learning model performs poorly, such as when its time dynamics do not store enough of the robot's behavioural history for effective learning, the reverse replay model can compensate by replaying the recent history. These results are in line with experimental findings showing that disruption of awake hippocampal replay events severely diminishes, but does not entirely eliminate, reinforcement learning. This work provides possible insights into the important role that reverse replay could play in mnemonic function, and in reinforcement learning in particular; insights that could benefit the robotics, AI, and neuroscience communities. However, there is still much to be done. How reverse replays are initiated, for instance, remains an open research problem. Furthermore, the model presented here generates place cells heuristically, whereas there are computational models tackling the question of how hippocampal cells such as place cells, grid cells, and head direction cells emerge. This raises the pertinent question of how those models, which make their own assumptions about network architectures and dynamics, could be integrated with computational models of hippocampal replay, which likewise make assumptions about network architectures and dynamics.
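    As a concrete illustration of the mechanism this abstract describes, the sketch below shows a three-factor update (activity eligibility trace gated by reward) on a toy hippocampal-to-striatal weight vector, with and without a reverse replay pass. It is a minimal assumption-laden toy, not the paper's model: the track, trace time constant, and learning rate are all invented for illustration.

```python
import numpy as np

n_place, tau_e, eta = 20, 5.0, 0.1   # place cells, trace time constant, learning rate

def value_weights(reverse_replay: bool) -> np.ndarray:
    """Hippocampal -> striatal weights after one rewarded run down a track."""
    w = np.zeros(n_place)
    eligibility = np.zeros(n_place)
    trajectory = list(range(n_place))          # visit place cells 0..19 in order
    for cell in trajectory:                    # outward journey, no reward yet
        eligibility *= np.exp(-1.0 / tau_e)    # traces decay with each step
        eligibility[cell] += 1.0               # activity tags the active synapse
    reward = 1.0                               # reward found at the track's end
    w += eta * reward * eligibility            # third factor gates the update
    if reverse_replay:
        # Replaying the sequence backwards reactivates earlier place cells
        # while the reward signal is still present, so credit reaches
        # synapses whose traces had already decayed away.
        for cell in reversed(trajectory):
            w[cell] += eta * reward
    return w

print("no replay:  ", np.round(value_weights(False), 3))
print("with replay:", np.round(value_weights(True), 3))
```

    Without replay, only the places visited shortly before the reward retain appreciable credit; the replay pass compensates exactly where the trace dynamics fall short, which is the effect the abstract reports.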

    Spike-based computational models of bio-inspired memories in the hippocampal CA3 region on SpiNNaker

    The human brain is the most powerful and efficient machine in existence today, surpassing in many ways the capabilities of modern computers. Lines of research in neuromorphic engineering are currently trying to develop hardware that mimics the functioning of the brain in order to acquire these superior capabilities. One of the areas still under development is the design of bio-inspired memories, in which the hippocampus plays an important role. This region of the brain acts as a short-term memory with the ability to store associations of information from different sensory streams and recall them later. This is possible thanks to the recurrent collateral network architecture that constitutes CA3, the main sub-region of the hippocampus. In this work, we developed two spike-based computational models of fully functional hippocampal bio-inspired memories for the storage and recall of complex patterns, implemented with spiking neural networks on the SpiNNaker hardware platform. These models present different levels of biological abstraction: the first has a constant oscillatory activity closer to the biological model, while the second has an energy-efficient regulated activity which, although still bio-inspired, opts for a more functional approach. Different experiments were performed for each of the models in order to test their learning/recalling capabilities. A comprehensive comparison between the functionality and the biological plausibility of the presented models was carried out, showing their strengths and weaknesses. The two models, which are publicly available for researchers, could pave the way for future spike-based implementations and applications. Agencia Estatal de Investigación PID2019-105556GB-C33/AEI/10.13039/501100011033 (MINDROB).
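    For readers unfamiliar with the storage/recall scheme that CA3's recurrent collaterals support, here is a minimal non-spiking sketch of an auto-associative memory with clipped Hebbian weights and a k-winners-take-all readout. The actual models in the paper are spiking networks built for SpiNNaker; everything below (network size, sparsity, thresholding rule) is an illustrative simplification, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_active, n_patterns = 100, 10, 5
patterns = np.zeros((n_patterns, n), dtype=int)
for p in patterns:                               # sparse binary patterns
    p[rng.choice(n, n_active, replace=False)] = 1

# Hebbian (clipped) storage on the recurrent weight matrix
W = np.clip(sum(np.outer(p, p) for p in patterns), 0, 1)
np.fill_diagonal(W, 0)                           # no self-connections

def recall(cue: np.ndarray, steps: int = 5) -> np.ndarray:
    """Iterate the recurrent dynamics; keep the n_active most-driven cells."""
    x = cue.copy()
    for _ in range(steps):
        drive = W @ x
        x = np.zeros(n, dtype=int)
        x[np.argsort(drive)[-n_active:]] = 1     # k-winners-take-all threshold
    return x

cue = patterns[0].copy()
cue[np.flatnonzero(cue)[:5]] = 0                 # degrade half of the pattern
print("overlap with stored pattern:", recall(cue) @ patterns[0])
```

    The recurrent iteration completes the degraded cue back toward the stored pattern; the paper's contribution is doing this with spiking dynamics and biologically grounded regulation of activity.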

    Inventing episodic memory: a theory of dorsal and ventral hippocampus


    Constraining the function of CA1 in associative memory models of the hippocampus

    CA1 is the main source of afferents from the hippocampus, but the function of CA1 and its perforant path (PP) input remains unclear. In this thesis, Marr's model of the hippocampus is used to investigate previously hypothesized functions, and also to investigate some of Marr's unexplored theoretical ideas. The last part of the thesis explains the excitatory responses to PP activity in vivo, despite inhibitory responses in vitro. Quantitative support for the idea of CA1 as a relay of information from CA3 to the neocortex and subiculum is provided by constraining Marr's model to experimental data. Using the same approach, the much smaller capacity of the PP input by comparison implies that it is not a one-shot learning network. In turn, it is argued that the entorhinal-CA1 connections cannot operate as a short-term memory network through reverberating activity. The PP input to CA1 has been hypothesized to control the activity of CA1 pyramidal cells. Marr suggested an algorithm for self-organising the output activity during pattern storage. Analytic calculations show a greater capacity for self-organised patterns than for random patterns at low connectivities and high loads, confirmed in simulations over a broader parameter range. This superior performance is maintained in the absence of the complex thresholding mechanisms normally required to maintain performance levels in sparsely connected networks. These results provide computational motivation for CA3 to establish patterns of CA1 activity without involvement from the PP input. The recent report of CA1 place cell activity with CA3 lesioned (Brun et al., 2002, Science, 296(5576):2243-6) is investigated using an integrate-and-fire neuron model of the entorhinal-CA1 network. CA1 place field activity is learnt despite a completely inhibitory response to the stimulation of entorhinal afferents. In the model, this is achieved using N-methyl-D-aspartate receptors to mediate a significant proportion of the excitatory response. Place field learning occurs over a broad parameter space. It is proposed that differences between similar contexts are slowly learnt in the PP and as a result are amplified in CA1. This would provide improved spatial memory in similar but different contexts.
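    The capacity arguments here follow the Marr/Willshaw tradition of constraining a one-shot, clipped-synapse network by anatomy. The sketch below shows the general shape of such a back-of-envelope estimate; the formula is the standard Willshaw-style calculation and all numbers are placeholders, not the thesis's actual values or derivation.

```python
import math

def willshaw_capacity(n_pre: int, n_post: int, k_pre: int, k_post: int) -> float:
    """Patterns storable in a clipped-Hebbian one-shot net before a quiet
    output unit's chance of a spurious full dendritic sum reaches ~1/n_post."""
    a = (k_pre / n_pre) * (k_post / n_post)   # per-pattern potentiation prob.
    # Spurious recall needs all k_pre relevant synapses potentiated:
    # require p^k_pre <= 1/n_post, where p = 1 - (1 - a)^R; solve for R.
    p_max = (1.0 / n_post) ** (1.0 / k_pre)   # tolerated potentiation density
    return math.log(1.0 - p_max) / math.log(1.0 - a)

# Illustrative numbers only (10,000 cells per layer, 5% activity):
print(f"~{willshaw_capacity(10000, 10000, 500, 500):.0f} patterns")
```

    Plugging each pathway's anatomy (cell counts, activity levels, connectivity) into estimates of this kind is what lets the thesis compare the capacities of the CA3 and PP inputs to CA1.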

    Improving Associative Memory in a Network of Spiking Neurons

    In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network due to its neurophysiological characteristics, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, first developed over 40 years ago. Within these models we can store information via changes in the strength of connections between simplified (two-state) model neurons. These memories can be recalled when a cue (noisy or partial) is instantiated upon the net. The type of information they can store is quite limited, owing to restrictions caused by the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and realistic synaptic properties between cells, based upon some of the many details now known of the neuronal circuitry of the CA3 region. We implemented the model in computer software using NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work. The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type is the pyramidal cell of the cortex, the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, proportionally smaller in number, which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity, and of incorporating spatially dependent connectivity, on the network during recall of previously stored information. In particular we implement the spiking neural network proposed by Sommer and Wennekers (2001). We consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve its recall quality. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity, a partial cue instantiated, and a global pseudo-inhibition. We investigate three methods. Firstly, applying localised disynaptic inhibition, which proportionalises the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should help to reduce the variability in signal propagation between cells and provide further inhibition to help synchronise the network activity. Secondly, adding a persistent sodium channel to the cell body, which acts to non-linearise the activation threshold: beyond a given membrane potential the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely high units) over the firing threshold. Finally, implementing spatial characteristics of the dendritic tree, which allows a greater probability that a modified synapse exists after 10% random connectivity has been applied throughout the network. We apply spatial characteristics by scaling the conductance weights of excitatory synapses to simulate the loss of potential in synapses in the outer dendritic regions due to increased resistance. To further increase the biological plausibility of the network, we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations for a global inhibitory circuit. The networks are configured with: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, where 10 pyramidal cells connect to each basket cell; and finally 100% basket cells providing feedback inhibition. These networks are compared and contrasted for efficacy of recall quality and for their effect on network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests that the role of inhibition and cellular dynamics is pivotal in learning and memory.
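    As a rough illustration of the kind of recall transform taken from Graham and Willshaw (1995), the sketch below compares raw and normalised dendritic-sum thresholding in a 10%-connected binary associative net: dividing each cell's dendritic sum by the number of active cue units actually connected to it corrects for connectivity variance. It is an abstracted, non-spiking stand-in for the thesis's NEURON simulations, and all parameters are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, n_pat, connectivity = 1000, 100, 10, 0.10
C = rng.random((n, n)) < connectivity            # 10% random connectivity mask
patterns = np.zeros((n_pat, n), dtype=int)
for p in patterns:
    p[rng.choice(n, k, replace=False)] = 1
W = (np.clip(sum(np.outer(p, p) for p in patterns), 0, 1) * C).astype(float)

cue = patterns[0] * (rng.random(n) < 0.5)        # partial cue: ~half the units

dendritic_sum = W @ cue                          # raw weighted input per cell
input_activity = (C * cue).sum(axis=1)           # active cue units connected to each cell
normalised = np.divide(dendritic_sum, input_activity,
                       out=np.zeros(n), where=input_activity > 0)

def winners(score):                              # k-winners-take-all readout
    out = np.zeros(n, dtype=int)
    out[np.argsort(score)[-k:]] = 1
    return out

for name, score in [("raw", dendritic_sum), ("normalised", normalised)]:
    print(name, "overlap:", winners(score) @ patterns[0])
```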

    Spatial Representations in the Entorhino-Hippocampal Circuit

    After a general introduction and a brief review of the available experimental data on spatial representations (chapter 2), this thesis is divided into two main parts. The first part, comprising chapters 3 to 6, is dedicated to grid cells. In chapter 3 we present and discuss the various models proposed to explain grid cell formation. In chapters 4 and 5 we study our model of grid cell generation, based on adaptation, in the case of non-planar environments, namely a spherical environment and three-dimensional space. In chapter 6 we propose a variant of the model in which the alignment of the grid axes is induced through reciprocal inhibition, and we suggest that the inhibitory connections obtained during this learning process can be used to implement a continuous attractor in mEC. The second part, comprising chapters 7 to 10, focuses instead on place cell representations. In chapter 7 we analyze the differences between place cells and grid cells in terms of information content; in chapter 8 we describe the properties of attractor dynamics in our model of the CA3 network; and in the following chapter we study the effects of theta oscillations on network dynamics. Finally, in chapter 10 we analyze to what extent the learning of a new representation can preserve the topology and the exact metric of physical space.
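    A toy version of the adaptation mechanism behind the grid-cell chapters may help: when a cell with firing-rate fatigue is driven by near-uniform input while the animal runs at constant speed, fields recur at regular spatial intervals, because each new field must wait for the fatigue variable to decay. The sketch below is deliberately one-dimensional and all constants are invented; the thesis's full model additionally learns feedforward weights from spatial inputs and, in chapter 6, aligns the grid axes through reciprocal inhibition.

```python
import math

dt, speed = 0.01, 0.2                  # time step (s), running speed (m/s)
tau_a, jump, threshold = 1.0, 1.0, 0.5 # fatigue decay, per-field fatigue, firing gate
a, x, fields = 0.0, 0.0, []            # fatigue variable, position, field positions
for step in range(1500):
    x += speed * dt                    # constant-speed run along a 1-D track
    a *= math.exp(-dt / tau_a)         # fatigue decays between fields
    if a < threshold:                  # uniform drive exceeds fatigue: a field fires
        fields.append(round(x, 2))
        a += jump                      # firing recruits fresh adaptation

spacings = [round(b - c, 2) for b, c in zip(fields[1:], fields[:-1])]
print("field spacing (m):", spacings)  # regular spacing set by tau_a and speed
```

    The regular spacing that emerges from fatigue alone is the one-dimensional seed of the periodicity that, in the full model, self-organises into a triangular grid.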

    Learning to Discriminate Through Long-Term Changes of Dynamical Synaptic Transmission

    Short-term synaptic plasticity is modulated by long-term synaptic changes. There is, however, no general agreement on the computational role of this interaction. Here, we derive a learning rule for the release probability and the maximal synaptic conductance in a circuit model with combined recurrent and feedforward connections that allows learning to discriminate among natural inputs. Short-term synaptic plasticity thereby provides a nonlinear expansion of the input space of a linear classifier, whereas the random recurrent network serves to decorrelate the expanded input space. Computer simulations reveal that the twofold increase in the number of input dimensions through short-term synaptic plasticity improves the performance of a standard perceptron by up to 100%. The distributions of release probabilities and maximal synaptic conductances at the capacity limit depend strongly on the balance between excitation and inhibition. The model also suggests a new computational interpretation of spikes evoked by stimuli outside the classical receptive field: these neuronal activities may reflect decorrelation of the expanded stimulus space by intracortical synaptic connections.
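    The "nonlinear expansion" the abstract refers to can be seen in a minimal Tsodyks-Markram-style depressing synapse: two spike trains with the same spike count yield different total release, so a classifier reading both the static feature (count) and the dynamic one (release) gains a dimension a fixed weight cannot provide. The sketch below is an illustration under assumed parameters, not the paper's derived learning rule.

```python
import numpy as np

def depressing_synapse(spike_times, U=0.5, tau_rec=2.0):
    """Per-spike release U*x, with resources x recovering between spikes
    (Tsodyks-Markram-style depression only; no facilitation)."""
    x, t_last, releases = 1.0, None, []
    for t in spike_times:
        if t_last is not None:                               # recovery since last spike
            x = 1.0 - (1.0 - x) * np.exp(-(t - t_last) / tau_rec)
        releases.append(U * x)
        x -= U * x                                           # each spike consumes resources
        t_last = t
    return np.array(releases)

regular = np.arange(0.0, 1.0, 0.2)   # 5 spikes spread over 0.8 s
burst = np.arange(0.0, 0.1, 0.02)    # 5 spikes packed into 80 ms
for name, train in [("regular", regular), ("burst", burst)]:
    total = depressing_synapse(train).sum()
    print(f"{name}: {len(train)} spikes, total release {total:.3f}")
```

    Depression penalises the burst, so the (count, release) pair separates temporal patterns that are identical to a static synapse; learning U per synapse, as in the paper, shapes this expansion for discrimination.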

    Brain Computations and Connectivity [2nd edition]

    This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations. Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems, and how the computations are performed. The aim of this book is to elucidate what is computed in different brain systems, and to describe current biologically plausible computational approaches and models of how each of these brain systems computes. Understanding the brain in this way has enormous potential for understanding ourselves better, in health and in disease. Potential applications of this understanding are to the treatment of brain disease, and to artificial intelligence, which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions. This book is pioneering in taking this approach to brain function: considering what is computed by many of our brain systems, and how it is computed. It updates, with much new evidence including the connectivity of the human brain, the earlier book, Rolls (2021) Brain Computations: What and How, Oxford University Press. Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they come from neuroscience, from medical sciences including neurology and psychiatry, from computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.