    Emulating long-term synaptic dynamics with memristive devices

    The potential of memristive devices is often seen in implementing neuromorphic architectures for achieving brain-like computation. However, unlike CMOS technology, the design procedures do not allow for extensive manipulation of the material; instead, the properties of the memristive material should be harnessed in the context of such computation, under the view that biological synapses are memristors. Here we demonstrate that single solid-state TiO2 memristors can exhibit associative plasticity phenomena observed in biological cortical synapses, and that these are captured by a phenomenological plasticity model called the triplet rule. This rule comprises a spike-timing-dependent plasticity regime and a classical Hebbian associative regime, and is compatible with a large body of electrophysiology data. Via a set of experiments with our artificial memristive synapses, we show that, contrary to conventional uses of solid-state memory, the co-existence of field- and thermally-driven switching mechanisms that can render bipolar and/or unipolar programming modes is a salient feature for capturing long-term potentiation and depression synaptic dynamics. We further demonstrate that the non-linear accumulating nature of memristors promotes long-term potentiating or depressing memory transitions.
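
    The triplet rule referenced here is of the kind introduced by Pfister and Gerstner: each synapse tracks fast and slow traces on both the pre- and post-synaptic sides, so weight changes depend on spike pairs and triplets. A minimal discrete-time sketch in Python follows; the time constants and amplitudes are illustrative placeholders, not values fitted in the paper.

        # Minimal sketch of a triplet STDP rule of the Pfister-Gerstner form.
        # All parameter values are illustrative placeholders.
        def triplet_stdp(pre_steps, post_steps, n_steps, dt=1e-3,
                         tau_r1=17e-3, tau_r2=100e-3,   # pre-synaptic trace decay
                         tau_o1=34e-3, tau_o2=125e-3,   # post-synaptic trace decay
                         A2p=5e-3, A3p=6e-3, A2m=7e-3, A3m=2e-4):
            """Net weight change for spikes at the given time steps."""
            r1 = r2 = o1 = o2 = 0.0
            dw = 0.0
            for t in range(n_steps):
                # exponential decay of all traces
                r1 -= dt * r1 / tau_r1; r2 -= dt * r2 / tau_r2
                o1 -= dt * o1 / tau_o1; o2 -= dt * o2 / tau_o2
                if t in pre_steps:     # pre spike: pair + triplet depression
                    dw -= o1 * (A2m + A3m * r2)
                    r1 += 1.0; r2 += 1.0
                if t in post_steps:    # post spike: pair + triplet potentiation
                    dw += r1 * (A2p + A3p * o2)
                    o1 += 1.0; o2 += 1.0
            return dw

        # post spikes arriving 10 ms after pre spikes yield net potentiation (dw > 0)
        print(triplet_stdp(pre_steps={100, 400}, post_steps={110, 410}, n_steps=600))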

    Gradient estimation in dendritic reinforcement learning

    We study synaptic plasticity in a complex neuronal cell model in which NMDA-spikes can arise in certain dendritic zones. In the context of reinforcement learning, two kinds of plasticity rules are derived, zone reinforcement (ZR) and cell reinforcement (CR), both of which optimize the expected reward by stochastic gradient ascent. For ZR, the synaptic plasticity response to the external reward signal is modulated exclusively by quantities which are local to the NMDA-spike initiation zone in which the synapse is situated. CR, in addition, uses nonlocal feedback from the soma of the cell, provided by mechanisms such as the backpropagating action potential. Simulation results show that, compared to ZR, the use of nonlocal feedback in CR can drastically enhance learning performance. We suggest that the availability of nonlocal feedback for learning is a key advantage of complex neurons over networks of simple point neurons, which have previously been found to be largely equivalent with regard to computational capability.
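
    The distinction between the two rules can be sketched in a toy model: both use a REINFORCE-style eligibility trace that is local to each zone, and CR additionally gates the update with a nonlocal somatic signal. Everything below is our simplified construction; only the locality of the learning signal mirrors the paper's ZR/CR distinction.

        import numpy as np

        # Toy reward-modulated gradient ascent with stochastic NMDA-spike
        # zones. The eligibility term is the REINFORCE gradient of each
        # zone's Bernoulli spike probability; the somatic gating factor is
        # a crude stand-in for backpropagating-action-potential feedback.
        rng = np.random.default_rng(0)
        n_zones, n_syn = 3, 8
        w = rng.normal(0.0, 0.1, (n_zones, n_syn))   # synapses per zone

        def update(w, lr=0.1, cell_reinforcement=True):
            x = rng.random((n_zones, n_syn))                 # synaptic input
            p = 1.0 / (1.0 + np.exp(-(w * x).sum(axis=1)))   # spike prob. per zone
            z = (rng.random(n_zones) < p).astype(float)      # stochastic zone spikes
            reward = float(z.sum() >= 2)                     # toy task: >= 2 zones fire
            elig = (z - p)[:, None] * x      # eligibility, local to each zone (ZR)
            if cell_reinforcement:           # CR: modulate by a nonlocal somatic signal
                elig *= 0.5 + 0.5 * z.sum() / n_zones
            return w + lr * reward * elig, reward

        for _ in range(500):                 # stochastic gradient ascent on reward
            w, r = update(w)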

    Towards Optimally Efficient Search with Deep Learning for Large-Scale MIMO Systems

    This paper investigates the optimal signal detection problem, with a particular interest in large-scale multiple-input multiple-output (MIMO) systems. The problem is NP-hard and can be solved optimally by searching for the shortest path on the decision tree. Unfortunately, the existing optimal search algorithms often involve prohibitively high complexity, which makes them infeasible for large-scale MIMO systems. To address this issue, we propose a general heuristic search algorithm, namely, the hyper-accelerated tree search (HATS) algorithm. The proposed algorithm employs a deep neural network (DNN) to estimate the optimal heuristic, and then uses the estimated heuristic to speed up the underlying memory-bounded search algorithm. This idea is inspired by the fact that the underlying heuristic search algorithm reaches optimal efficiency when given the optimal heuristic function. Simulation results show that the proposed algorithm attains almost optimal bit error rate (BER) performance in large-scale systems while its memory size remains bounded, and it visits nearly the fewest tree nodes. This indicates that the proposed algorithm reaches almost optimal efficiency in practical scenarios and is thereby applicable to large-scale systems. The code for this paper is available at https://github.com/skypitcher/hats.
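
    The principle behind HATS is that a best-first search supplied with the exact cost-to-go as its heuristic expands only the nodes on the shortest path, so a learned estimate of that heuristic approaches this efficiency. Below is a generic sketch of best-first tree search with a pluggable heuristic; the DNN estimator and the memory bound of HATS are omitted, and all names are our placeholders (see the linked repository for the actual implementation).

        import heapq

        def best_first_search(root, expand, is_leaf, heuristic):
            """expand(node) -> iterable of (child, step_cost) pairs."""
            counter = 0                               # tie-breaker for equal keys
            frontier = [(heuristic(root), 0.0, counter, root)]
            while frontier:
                f, g, _, node = heapq.heappop(frontier)
                if is_leaf(node):
                    return node, g    # optimal if the heuristic never overestimates
                for child, cost in expand(node):
                    counter += 1
                    heapq.heappush(frontier, (g + cost + heuristic(child),
                                              g + cost, counter, child))
            return None, float("inf")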

    Maximization of Learning Speed in the Motor Cortex Due to Neuronal Redundancy

    Many redundancies play functional roles in motor control and motor learning. For example, kinematic and muscle redundancies contribute to stabilizing posture and impedance control, respectively. Another redundancy lies in the number of neurons themselves; there are overwhelmingly more neurons than muscles, and many combinations of neural activation can generate identical muscle activity. The functional roles of this neuronal redundancy remain unknown. Analysis of a redundant neural network model makes it possible to investigate these functional roles while varying the number of model neurons and holding the number of output units constant. Our analysis reveals that learning speed reaches its maximum value if and only if the model includes sufficient neuronal redundancy. This analytical result does not depend on whether the distribution of preferred directions is uniform or skewed bimodal, both of which have been reported in neurophysiological studies. Neuronal redundancy maximizes learning speed even if the neural network model includes recurrent connections, a nonlinear activation function, or nonlinear muscle units. Furthermore, our results do not rely on the shape of the generalization function. The results of this study suggest that one of the functional roles of neuronal redundancy is to maximize learning speed.
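
    The saturation effect can be illustrated with a toy numerical experiment (our construction, not the paper's model): a delta rule learns a fixed two-muscle mapping through N randomly tuned neurons, and the residual error after a fixed training budget shrinks as N grows before plateauing once the redundancy is sufficient.

        import numpy as np

        rng = np.random.default_rng(1)

        def residual_error(n_neurons, n_steps=300, lr=0.05):
            # random neuron tuning, scaled so the effective step size is
            # comparable across network sizes
            A = rng.normal(size=(n_neurons, 2)) / np.sqrt(n_neurons)
            w = np.zeros((2, n_neurons))        # neuron-to-muscle weights
            target = np.eye(2)                  # desired muscle mapping
            for _ in range(n_steps):
                x = rng.normal(size=2)          # desired movement direction
                err = target @ x - w @ (A @ x)  # muscle-space error
                w += lr * np.outer(err, A @ x)  # delta-rule update
            return np.linalg.norm(target - w @ A)

        for n in (2, 4, 8, 32, 128):
            print(n, residual_error(n))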