
    Model architecture for associative memory in a neural network of spiking neurons

    A synaptic connectivity model is assembled on a spiking neuron network with the aim of building a dynamic pattern recognition system. The connection architecture includes gap junctions and both inhibitory and excitatory chemical synapses based on Hebb's hypothesis. The network evolution resulting from external stimuli is sampled in a suitably defined frequency space. Neurons' responses to different current injections are mapped onto a subspace using Principal Component Analysis. Departing from the base attractor, which corresponds to a quiescent state, different external stimuli drive the network to different fixed points along specific trajectories in this subspace.
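    The projection step is easy to make concrete. Below is a minimal sketch, assuming the network's state is sampled as firing-rate vectors (the data here are random placeholders, not the paper's simulations); stimulus-specific trajectories in the resulting subspace would run from the quiescent "base" attractor toward stimulus-dependent fixed points.

```python
# Sketch: project sampled firing-rate vectors onto a 2-D principal-component
# subspace. Sizes and data are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_samples, n_neurons = 200, 50                  # hypothetical sampling grid
rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)

pca = PCA(n_components=2)
trajectory = pca.fit_transform(rates)           # rows: network states in PC space
print(pca.explained_variance_ratio_)            # variance captured by each PC
```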

    Self-organization in the olfactory system: one-shot odor recognition in insects

    We show in a model of spiking neurons that synaptic plasticity in the mushroom bodies, in combination with the general fan-in, fan-out properties of the early processing layers of the olfactory system, might be sufficient to account for its efficient recognition of odors. For a large variety of initial conditions the model system consistently finds a working solution without any fine-tuning, and is, therefore, inherently robust. We demonstrate that gain control through the known feedforward inhibition of lateral horn interneurons increases the capacity of the system but is not essential for its general function. We also predict an upper limit for the number of odor classes Drosophila can discriminate based on the number and connectivity of its olfactory neurons.
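    The fan-in/fan-out motif the abstract relies on is easy to sketch: a small input layer projects sparsely and divergently onto a much larger Kenyon-cell-like layer, whose sparse code is then associated with a readout in one shot. All sizes and thresholds below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of a fan-out / sparse-code / Hebbian-readout circuit (all parameters
# are illustrative assumptions, not the paper's model).
import numpy as np

rng = np.random.default_rng(1)
n_pn, n_kc = 50, 2000                     # few projection neurons, many KCs
W_in = (rng.random((n_kc, n_pn)) < 0.1).astype(float)   # sparse random fan-in

def kc_response(odor, sparseness=0.05):
    """Threshold so only ~5% of KCs fire: a sparse, high-dimensional code."""
    drive = W_in @ odor
    return (drive >= np.quantile(drive, 1.0 - sparseness)).astype(float)

w_out = np.zeros(n_kc)
odor = rng.random(n_pn)
w_out += kc_response(odor)                # one-shot synaptic potentiation

noisy = odor + 0.1 * rng.standard_normal(n_pn)
print("recall overlap:", w_out @ kc_response(noisy) / w_out.sum())
```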

    Clustering Predicts Memory Performance in Networks of Spiking and Non-Spiking Neurons

    The problem we address in this paper is that of finding effective and parsimonious patterns of connectivity in sparse associative memories. Real neuronal systems must also solve this problem, so results in artificial systems could throw light on their biological counterparts. We show that there are efficient patterns of connectivity and that these patterns are effective in models with either spiking or non-spiking neurons. This suggests that there may be some underlying general principles governing good connectivity in such networks. We also show that the clustering of the network, measured by the clustering coefficient, has a strong negative linear correlation with the performance of the associative memory. This result is important because a purely static measure of network connectivity appears to determine an important dynamic property of the network.
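    Because the predictor is a purely static graph measure, it can be computed without simulating the network at all. A minimal sketch, using Watts-Strogatz graphs as stand-ins for the connectivity patterns (an assumption; the paper's networks differ):

```python
# Clustering coefficient of lattice-like versus randomly rewired networks.
import networkx as nx

n, k = 1000, 10                           # 1000 nodes, ~1% connectivity
for p in (0.0, 0.1, 1.0):                 # rewiring probability: lattice -> random
    G = nx.watts_strogatz_graph(n, k, p, seed=2)
    print(f"rewiring p={p}: clustering = {nx.average_clustering(G):.3f}")
# The paper reports that higher clustering predicts worse recall performance.
```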

    Statistical physics of neural systems with non-additive dendritic coupling

    How neurons process their inputs crucially determines the dynamics of biological and artificial neural networks. In such neural and neural-like systems, synaptic input is typically considered to be merely transmitted linearly or sublinearly by the dendritic compartments. Yet, single-neuron experiments report pronounced supralinear dendritic summation of sufficiently synchronous and spatially nearby inputs. Here, we provide a statistical physics approach to study the impact of such non-additive dendritic processing on single neuron responses and the performance of associative memory tasks in artificial neural networks. First, we compute the effect of random input to a neuron incorporating nonlinear dendrites. This approach is independent of the details of the neuronal dynamics. Second, we use those results to study the impact of dendritic nonlinearities on the network dynamics in a paradigmatic model for associative memory, both numerically and analytically. We find that dendritic nonlinearities maintain network convergence and increase the robustness of memory performance against noise. Interestingly, an intermediate number of dendritic branches is optimal for memory functionality.
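    A toy version of the mechanism, assuming a Hopfield-style associative memory in which each neuron splits its recurrent input across dendritic branches and each branch passes through a supralinear function before the somatic sum (the specific nonlinearity and all sizes are illustrative, not the paper's model):

```python
# Hopfield-style recall with non-additive (supralinear) dendritic branches.
import numpy as np

rng = np.random.default_rng(3)
n, n_patterns, n_branches = 200, 10, 5
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n))
W = (patterns.T @ patterns) / n                     # Hebbian weights
np.fill_diagonal(W, 0.0)

def update(state, supralinear=True):
    contrib = W * state                             # contrib[i, j] = W_ij * s_j
    branch_in = np.stack([c.sum(axis=1) for c in
                          np.array_split(contrib, n_branches, axis=1)])
    if supralinear:                                 # boost coactive branch input
        branch_in = branch_in + 2.0 * np.maximum(branch_in, 0.0) ** 2
    return np.where(branch_in.sum(axis=0) >= 0.0, 1.0, -1.0)

state = patterns[0] * np.where(rng.random(n) < 0.15, -1.0, 1.0)  # noisy cue
for _ in range(10):
    state = update(state)
print("overlap with stored pattern:", state @ patterns[0] / n)
```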

    Performance- and Stimulus-Dependent Oscillations in Monkey Prefrontal Cortex During Short-Term Memory

    Short-term memory requires the coordination of sub-processes like encoding, retention, retrieval and comparison of stored material to subsequent input. Neuronal oscillations have an inherent time structure, can effectively coordinate synaptic integration of large neuron populations and could therefore organize and integrate distributed sub-processes in time and space. We observed field potential oscillations (14–95 Hz) in ventral prefrontal cortex of monkeys performing a visual memory task. Stimulus-selective and performance-dependent oscillations occurred simultaneously at 65–95 Hz and 14–50 Hz, the latter being phase-locked throughout memory maintenance. We propose that prefrontal oscillatory activity may be instrumental for the dynamical integration of local and global neuronal processes underlying short-term memory.
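    A minimal sketch of how such band-limited, phase-locked activity is typically quantified: band-pass the field potential in the 14–50 Hz range and compute a phase-locking value across trials. The method is standard and the data synthetic; neither is taken from the paper itself.

```python
# Band-pass filtering and phase-locking value (PLV) on synthetic "LFP" trials.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                               # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(4)
# 50 trials: a consistent-phase 25 Hz component buried in noise
trials = np.sin(2 * np.pi * 25 * t) + rng.standard_normal((50, t.size))

b, a = butter(4, [14 / (fs / 2), 50 / (fs / 2)], btype="band")
phase = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))

plv = np.abs(np.exp(1j * phase).mean(axis=0))   # phase locking across trials
print("mean 14-50 Hz phase-locking value:", plv.mean())
```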

    Non-invasive Brain-actuated control of a mobile robot by Human EEG

    Brain activity recorded non-invasively is sufficient to control a mobile robot if advanced robotics is used in combination with asynchronous EEG analysis and machine learning techniques. Until now, brain-actuated control has mainly relied on implanted electrodes, since EEG-based systems have been considered too slow for controlling rapid and complex sequences of movements. We show that two human subjects successfully moved a robot between several rooms by mental control alone, using an EEG-based brain-machine interface that recognized three mental states. Mental control was comparable to manual control on the same task, with a performance ratio of 0.74.
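    The statistical core of such an interface is a multi-class classifier running continuously on EEG features. A minimal sketch with synthetic data; the features and classifier below are common choices in this literature, not necessarily the ones used in the paper.

```python
# Three-class "mental state" classification from band-power-like features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_features = 300, 16            # e.g. band power per channel
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 3, n_trials)          # three mental states
X += y[:, None] * 0.5                     # inject class-dependent structure

clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

    In the full system, decisions like these were produced asynchronously from the EEG stream and combined with the robot's onboard control to execute movement sequences.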

    Biological constraints on neural network models of cognitive function

    Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative and hetero-associative, deep and whole-brain networks, and identify aspects under which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning, to implementation of inhibition and control, along with neuroanatomical properties including area structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, based on these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling.

    Improving Associative Memory in a Network of Spiking Neurons

    In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network, given its neurophysiological characteristics: recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells.

    Associative memory models have been developed using various configurations of mathematical artificial neural networks, first developed over 40 years ago. Within these models, information is stored via changes in the strength of connections between simplified two-state model neurons, and memories are recalled when a noisy or partial cue is presented to the network. The type of information such models can store is quite limited, owing to the simplicity of hard-limiting nodes with a binary activation threshold. We build a much more biologically plausible model, with complex spiking cell models and realistic synaptic properties between cells, based upon many of the details now known about the neuronal circuitry of the CA3 region. We implemented the model in software using NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work.

    The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type is the cortical pyramidal cell, the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, proportionally far fewer in number, which form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry, we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of activity-dependent cellular modification, and of spatially dependent connectivity, on the network during recall of previously stored information. In particular, we implement the spiking neural network proposed by Sommer and Wennekers (2001) and consider methods for improving associative memory recall inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve recall quality. The networks tested contain either 100 or 1000 pyramidal cells with 10% connectivity, a partial cue instantiated, and global pseudo-inhibition.

    We investigate three methods. First, we apply localised disynaptic inhibition, which proportionalises the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should reduce the variability in signal propagation between cells and provide further inhibition to help synchronise network activity. Second, we add a persistent sodium channel to the cell body, which non-linearises the activation threshold: beyond a given membrane potential, the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely high units) over the firing threshold. Finally, we implement spatial characteristics of the dendritic tree, which increases the probability that a modified synapse exists after the 10% random connectivity has been applied throughout the network; spatial characteristics are applied by scaling the conductance weights of excitatory synapses, simulating the attenuation of potential at synapses in the outer dendritic regions due to increased resistance.

    To further increase the biological plausibility of the network, we remove the pseudo-inhibition and apply realistic basket cell models in differing configurations for a global inhibitory circuit: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, with 10 pyramidal cells connecting to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared for their effect on recall quality and network behaviour. We found promising results from applying biologically plausible recall strategies and network configurations, suggesting that inhibition and cellular dynamics play a pivotal role in learning and memory.
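    As a compact illustration of the associative memory principle underlying these spiking models, here is a sketch in the spirit of a Willshaw-style binary network: sparse binary patterns are stored by clipped Hebbian learning and recalled from a partial cue, with a firing threshold standing in for the global inhibition discussed above. All sizes and the sparsity level are illustrative assumptions.

```python
# Clipped-Hebbian (Willshaw-style) associative memory with partial-cue recall.
import numpy as np

rng = np.random.default_rng(6)
n, active, n_patterns = 1000, 100, 20     # 1000 cells, 10% active per pattern

def random_pattern():
    p = np.zeros(n)
    p[rng.choice(n, active, replace=False)] = 1.0
    return p

patterns = [random_pattern() for _ in range(n_patterns)]
W = np.zeros((n, n))
for p in patterns:
    W = np.maximum(W, np.outer(p, p))     # clipped (0/1) Hebbian storage
np.fill_diagonal(W, 0.0)                  # no self-connections

cue = patterns[0].copy()
cue[np.flatnonzero(cue)[active // 2:]] = 0.0    # keep half the active cells

theta = cue.sum() - 1.0                   # winner criterion: all cue inputs
                                          # present (minus the zeroed autapse)
recall = (W @ cue >= theta).astype(float)
print("recall quality:", recall @ patterns[0] / np.sqrt(recall.sum() * active))
```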

    Love or hate: differential expression of synaptic receptors following odor preference versus aversive learning in rat pups?

    Odor plus 0.5 mA shock conditioning paradoxically induces odor preference in rat pups (≤PD10), while a strong 1.2 mA shock results in odor aversion (Sullivan, 2001). Previous research showed that the anterior piriform cortex (aPC) is activated following odor preference training with a 0.5 mA shock, while the posterior piriform cortex (pPC) is activated following odor aversive learning with a 1.2 mA shock. The olfactory bulb (OB) is activated by both and serves as a common structure (Raineki, Shionoya, Sander, & Sullivan, 2009a). As a first step towards delineating the synapses involved in preference and avoidance learning, we measured expression of glutamatergic AMPA GluR1 and NMDA NR1 receptors in the OB, aPC, and pPC using a synaptoneurosome preparation following odor+shock conditioning. Our results show that shocks of 0.5 mA and 0.1 mA produced preference 24 hours after learning, indicating that aversive experiences can produce preference in neonatal rodents, as previously reported by Sullivan (2001). Moreover, this shock resulted in down-regulation of NMDARs, but not AMPARs, in the OB 3 hours after training. Behaviourally, the strong-shock training did not produce odor aversion, and likewise there was no change in the pPC, suggesting no change in the total number of NMDARs and AMPARs, perhaps due to an absence of odor aversion learning. Future experiments can delineate whether different paradigms produce odor aversion and corresponding synaptic expression in the pPC. Additional experimental protocols can also assess whether each region engages in synaptic trafficking of NMDARs and AMPARs between extrasynaptic and synaptic sites.