    Emulating long-term synaptic dynamics with memristive devices

    The potential of memristive devices is often seen in implementing neuromorphic architectures for achieving brain-like computation. However, unlike CMOS technology, the design procedures do not allow for extensive manipulation of the material; instead, the properties of the memristive material itself should be harnessed for such computation, under the view that biological synapses are memristors. Here we demonstrate that single solid-state TiO2 memristors can exhibit associative plasticity phenomena observed in biological cortical synapses, which are captured by a phenomenological plasticity model called the triplet rule. This rule comprises a spike-timing-dependent plasticity regime and a classical Hebbian associative regime, and is compatible with a large body of electrophysiology data. Via a set of experiments with our artificial memristive synapses we show that, contrary to conventional uses of solid-state memory, the co-existence of field- and thermally-driven switching mechanisms, which can render bipolar and/or unipolar programming modes, is a salient feature for capturing long-term potentiation and depression synaptic dynamics. We further demonstrate that the non-linear accumulating nature of memristors promotes long-term potentiating or depressing memory transitions.
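
    The triplet rule referred to above is a published phenomenological STDP model (Pfister and Gerstner, 2006). As a rough illustration of how such a rule can be simulated, the sketch below implements its trace-based update equations; the class name, time constants and amplitudes are illustrative placeholders, not values fitted to the memristive devices.

```python
import numpy as np

# Minimal sketch of the triplet STDP rule: each synapse keeps exponentially
# decaying traces of pre- and postsynaptic spiking; depression is driven by
# spike pairs, potentiation by pairs plus a postsynaptic triplet term.
# All parameter values below are illustrative placeholders.

class TripletSTDP:
    def __init__(self, dt=1e-3,
                 tau_plus=20e-3, tau_x=100e-3,    # presynaptic trace time constants
                 tau_minus=20e-3, tau_y=100e-3,   # postsynaptic trace time constants
                 a2_plus=5e-3, a3_plus=6e-3,      # pair / triplet potentiation amplitudes
                 a2_minus=7e-3, a3_minus=2e-3):   # pair / triplet depression amplitudes
        self.dt = dt
        self.r1 = self.r2 = 0.0                   # presynaptic traces
        self.o1 = self.o2 = 0.0                   # postsynaptic traces
        self.p = dict(tp=tau_plus, tx=tau_x, tm=tau_minus, ty=tau_y,
                      a2p=a2_plus, a3p=a3_plus, a2m=a2_minus, a3m=a3_minus)

    def step(self, w, pre_spike=False, post_spike=False):
        p = self.p
        # decay all traces by one time step
        self.r1 -= self.dt * self.r1 / p['tp']
        self.r2 -= self.dt * self.r2 / p['tx']
        self.o1 -= self.dt * self.o1 / p['tm']
        self.o2 -= self.dt * self.o2 / p['ty']
        if pre_spike:
            # depression: proportional to the postsynaptic pair trace o1
            w -= self.o1 * (p['a2m'] + p['a3m'] * self.r2)
            self.r1 += 1.0
            self.r2 += 1.0
        if post_spike:
            # potentiation: proportional to the presynaptic trace r1,
            # boosted by the postsynaptic triplet trace o2
            w += self.r1 * (p['a2p'] + p['a3p'] * self.o2)
            self.o1 += 1.0
            self.o2 += 1.0
        return float(np.clip(w, 0.0, 1.0))

# Example: a single pre-then-post pairing about 10 ms apart slightly potentiates.
syn, w = TripletSTDP(), 0.5
w = syn.step(w, pre_spike=True)
for _ in range(10):
    w = syn.step(w)
w = syn.step(w, post_spike=True)
print(w)
```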

    Emergence of Connectivity Motifs in Networks of Model Neurons with Short- and Long-term Plastic Synapses

    Recent evidence in rodent cerebral cortex and olfactory bulb suggests that the short-term dynamics of excitatory synaptic transmission are correlated with stereotypical connectivity motifs. It was observed that neurons with short-term facilitating synapses form predominantly reciprocal pairwise connections, while neurons with short-term depressing synapses form unidirectional pairwise connections. The cause of these structural differences in synaptic microcircuits is unknown. We propose that these connectivity motifs emerge from the interactions between short-term synaptic dynamics (SD) and long-term spike-timing-dependent plasticity (STDP). While the impact of STDP on SD has been shown in vitro, the mutual interactions between STDP and SD in large networks are still the subject of intense research. We formulate a computational model by combining SD and STDP, which faithfully captures short- and long-term dependence on both spike times and frequency. As a proof of concept, we simulate recurrent networks of spiking neurons with random initial connection efficacies in which synapses are either all short-term facilitating or all depressing. For identical background inputs, and as a direct consequence of internally generated activity, we find that networks with depressing synapses evolve unidirectional connectivity motifs, while networks with facilitating synapses evolve reciprocal connectivity motifs. This also holds for heterogeneous networks including both facilitating and depressing synapses. Our study highlights the conditions under which SD-STDP interactions might account for the correlation between facilitation and reciprocal connectivity motifs, as well as between depression and unidirectional motifs. We further suggest experiments for the validation of the proposed mechanism.
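
    The "SD" component in models of this kind is commonly described with Tsodyks-Markram style resource and utilisation variables, where facilitating and depressing synapses differ only in their parameters. The sketch below shows that standard component with illustrative parameter values; it is not claimed to be the exact formulation used in this study.

```python
import numpy as np

# Tsodyks-Markram style short-term synaptic dynamics (illustrative parameters).
# x: fraction of available resources, u: utilisation (release probability proxy).
# Facilitating synapses typically have small U and tau_facil > tau_rec;
# depressing synapses have larger U and tau_rec > tau_facil.

def tm_efficacies(spike_times, U=0.2, tau_rec=0.2, tau_facil=1.5):
    """Return the effective efficacy (released resource fraction) at each spike."""
    x, u = 1.0, 0.0
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover towards 1
            u = u * np.exp(-dt / tau_facil)               # facilitation decays towards 0
        u += U * (1.0 - u)        # utilisation jumps at each spike (facilitation)
        efficacies.append(u * x)  # fraction of resources released by this spike
        x -= u * x                # released resources are depleted (depression)
        last_t = t
    return efficacies

# Example: a 20 Hz train facilitates with these parameters.
print(tm_efficacies([0.05 * i for i in range(5)]))
```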

    Measuring symmetry, asymmetry and randomness in neural network connectivity

    Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. pairs of neurons in which only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity-dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single-parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs towards more symmetric configurations than a uniform distribution, and that introducing random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs towards more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists who investigate the symmetry of network connectivity.
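
    The abstract does not spell out the measure's closed form here. Purely as an illustration of a threshold-free, single-parameter symmetry statistic for a weight matrix, one can compare the energy in the symmetric and antisymmetric parts of the off-diagonal weights; this is an assumed stand-in, not necessarily the measure defined in the paper.

```python
import numpy as np

# Illustrative symmetry index for a weight matrix W (not the paper's definition):
# compare the symmetric and antisymmetric parts of the off-diagonal entries.
# Returns 1.0 for a perfectly symmetric matrix and 0.0 for a perfectly
# antisymmetric one; no thresholding or maximal-weight normalisation is needed.

def symmetry_index(W):
    W = np.asarray(W, dtype=float)
    off = ~np.eye(W.shape[0], dtype=bool)   # ignore self-connections
    S = 0.5 * (W + W.T)[off]                # symmetric part
    A = 0.5 * (W - W.T)[off]                # antisymmetric part
    return float(np.sum(S**2) / (np.sum(S**2) + np.sum(A**2)))

rng = np.random.default_rng(0)
W = rng.uniform(size=(100, 100))
print(symmetry_index(W))                # well above 0.5: positive weights share a common mean
print(symmetry_index(0.5 * (W + W.T)))  # exactly 1.0 for a symmetrised matrix
```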

    Short-term and spike-timing-dependent plasticity facilitate the formation of modular neural networks

    The brain has the phenomenal ability to reorganise itself by forming new connections among neurons and by pruning others. This so-called neural or brain plasticity facilitates the modification of brain structure and function over different time scales. Plasticity might occur due to external stimuli received from the environment, during recovery from brain injury, or due to modifications within the body and brain itself. In this paper, we study the combined effect of short-term plasticity (STP) and spike-timing-dependent plasticity (STDP) on the synaptic strength of excitatory coupled Hodgkin-Huxley neurons and show that plasticity can facilitate the formation of modular neural networks with complex topologies that resemble those of networks with preferential attachment properties. In particular, we use an STDP rule that alters the synaptic coupling intensity based on the time intervals between spikes of postsynaptic and presynaptic neurons. Previous work has shown that STDP may induce the emergence of directed connections from high- to low-frequency spiking neurons. STP, on the other hand, is attributed to the dynamics of neurotransmitter release in the synaptic cleft, which alters synaptic efficacy. Our results suggest that the combined effect of STP and STDP with long recovery times facilitates the formation of connections only among neurons with similar spike frequencies, a kind of preferential attachment. We then pursue this further and show that, when starting from all-to-all neural configurations, modular neural networks can emerge as a direct result of the combined effect of STP and STDP, depending on the STP recovery time and the distribution of neural frequencies.
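
    The STDP rule mentioned above adjusts a coupling according to the interval between postsynaptic and presynaptic spikes. A generic pair-based exponential window of the kind commonly used for this purpose is sketched below; amplitudes and time constants are illustrative and not taken from the paper.

```python
import numpy as np

# Generic pair-based exponential STDP window (illustrative parameters).
# dt = t_post - t_pre: positive intervals potentiate, negative intervals depress.

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Weight change for a single pre/post spike pair separated by dt seconds."""
    if dt >= 0:
        return A_plus * np.exp(-dt / tau_plus)    # pre before post: potentiation
    return -A_minus * np.exp(dt / tau_minus)      # post before pre: depression

def update_coupling(w, dt, w_max=1.0):
    """Apply the pair-based update and keep the coupling within [0, w_max]."""
    return float(np.clip(w + stdp_dw(dt), 0.0, w_max))

print(update_coupling(0.5, +0.005))   # causal pairing strengthens the coupling
print(update_coupling(0.5, -0.005))   # anti-causal pairing weakens it
```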

    Embodying a Computational Model of Hippocampal Replay for Robotic Reinforcement Learning

    Hippocampal reverse replay has been speculated to play an important role in biological reinforcement learning since its discovery over a decade ago. Whilst a number of computational models have recently emerged in an attempt to understand the dynamics of hippocampal replay, there has been little progress in testing and implementing these models in real-world robotics settings. Presented first in this body of work is a bio-inspired hippocampal CA3 network model. It runs in real time to produce reverse replays of recent spatio-temporal sequences, represented as place cell activities, in a robotic spatial navigation task. The model is based on two very recent computational models of hippocampal reverse replay. An analysis of these models shows that, in their original forms, they are each insufficient for effective performance when applied to a robot. As such, choosing particular elements from each allows for a computational model that is sufficient for application in a robotic task. Having a model of reverse replay applied successfully in a robot provides the groundwork necessary for testing the ways in which reverse replay contributes to reinforcement learning. The second portion of the work presented here builds on a previous reinforcement learning neural network model of a basic hippocampal-striatal circuit using a three-factor learning rule. By integrating reverse replays into this reinforcement learning model, results show that reverse replay, with its ability to replay the recent trajectory in both the hippocampal circuit and the striatal circuit, can speed up the learning process. In addition, for situations where the original reinforcement learning model performs poorly, such as when its time dynamics do not store enough of the robot's behavioural history for effective learning, the reverse replay model can compensate by replaying the recent history. These results are in line with experimental findings showing that disruption of awake hippocampal replay events severely diminishes, but does not entirely eliminate, reinforcement learning. This work provides possible insights into the important role that reverse replays could play in mnemonic function, and in reinforcement learning in particular; insights that could benefit the robotics, AI, and neuroscience communities. However, there is still much to be done. How reverse replays are initiated is still an ongoing research problem, for instance. Furthermore, the model presented here generates place cells heuristically, but there are computational models tackling the problem of how hippocampal cells such as place cells, but also grid cells and head direction cells, emerge. This leads to the pertinent question of how these models, which make assumptions about their network architectures and dynamics, could integrate with computational models of hippocampal replay, which make their own assumptions about network architectures and dynamics.
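
    A "three-factor learning rule" of the kind mentioned above typically combines pre- and postsynaptic activity into an eligibility trace that is converted into a weight change only by a third, reward-like signal. The following is a deliberately minimal sketch of that idea; the function, parameters and toy activity pattern are assumptions for illustration, not the thesis' actual hippocampal-striatal model.

```python
# Minimal three-factor (reward-modulated) plasticity sketch: pre/post coincidence
# charges a decaying eligibility trace, and a global reward signal gates whether
# that trace is converted into an actual weight change. Illustrative only.

def three_factor_step(w, e, pre, post, reward, lr=0.01, tau_e=1.0, dt=0.01):
    e += dt * (-e / tau_e) + pre * post   # eligibility: decaying pre*post coincidence
    w += lr * reward * e * dt             # third factor (reward) gates plasticity
    return w, e

w, e = 0.5, 0.0
for step in range(200):
    pre = post = 1.0 if step % 10 == 0 else 0.0   # toy coincident pre/post activity
    reward = 1.0 if step >= 150 else 0.0          # delayed global reward signal
    w, e = three_factor_step(w, e, pre, post, reward)
print(w)   # the weight only changes once the reward signal arrives
```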

    Emulating short-term synaptic dynamics with memristive devices

    Neuromorphic architectures offer great promise for achieving computation capacities beyond those of conventional Von Neumann machines. The essential elements for achieving this vision are highly scalable synaptic mimics that do not undermine biological fidelity. Here we demonstrate that single solid-state TiO2 memristors can exhibit non-associative plasticity phenomena observed in biological synapses, supported by their metastable memory state transition properties. We show that, contrary to conventional uses of solid-state memory, the existence of rate-limiting volatility is a key feature for capturing short-term synaptic dynamics. We also show how the temporal dynamics of our prototypes can be exploited to implement spatio-temporal computation, demonstrating the memristors' full potential for building biophysically realistic neural processing systems.
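
    The "rate-limiting volatility" described above amounts to an internal device state that relaxes back towards rest between stimulation pulses, so that closely spaced pulses accumulate while sparse pulses do not. The toy model below illustrates that qualitative behaviour; it is not a calibrated TiO2 device model, and all parameters are placeholders.

```python
import numpy as np

# Toy volatile memristive state: each pulse increments an internal state
# (a conductance proxy), which relaxes towards rest between pulses.
# High-rate pulse trains accumulate state (short-term facilitation-like),
# low-rate trains do not. Illustrative only, not a calibrated device model.

def volatile_state(pulse_times, dg=0.1, tau=0.5, g_rest=0.0):
    g, last_t = g_rest, None
    trace = []
    for t in pulse_times:
        if last_t is not None:
            g = g_rest + (g - g_rest) * np.exp(-(t - last_t) / tau)  # volatile decay
        g += dg                                                      # pulse-driven increment
        trace.append(round(g, 3))
        last_t = t
    return trace

print(volatile_state([0.00, 0.05, 0.10, 0.15]))   # closely spaced pulses: state builds up
print(volatile_state([0.0, 2.0, 4.0, 6.0]))       # widely spaced pulses: state resets each time
```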

    An investigation into the neural substrates of virtue to determine the key place of virtues in human moral development

    Virtues, as described by Aristotle and Aquinas, are understood as dispositions of character to behave in habitual, specific, positive ways; virtue is a critical requirement for human flourishing. From the perspective of Aristotelian-Thomistic anthropology, which offers an integrated vision of the material and the rational in the human person, I seek to identify the neural bases for the development and exercise of moral virtue. First I review current neuroscientific knowledge of the capacity of the brain to structure itself according to experience, to facilitate behaviours, to regulate emotional responses and to support goal election. Then, having identified characteristics of moral virtue in the light of the distinctions between the cardinal virtues, I propose neural substrates by mapping neuroscientific knowledge to these characteristics. I then investigate the relationship between virtue, including its neurobiological features, and human flourishing. This process allows a contemporary, evidence-based corroboration of a model of moral development based on growth in virtue as understood by Aristotle and Aquinas, and a demonstration of a biological aptitude and predisposition for the development of virtue. Conclusions are drawn with respect to science, ethics, and parenting.