
    Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, which makes attractor dynamics suitable for implementing robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements retrieval of the corresponding memorized prototypical pattern. In previous work we demonstrated that a neuromorphic recurrent network of spiking neurons with suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases. Comment: submitted to Scientific Reports
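
    The attractor-retrieval principle described above can be illustrated, outside the chip, with a minimal Hebbian toy: prototype patterns are imprinted with an outer-product rule and a corrupted stimulus relaxes to the nearest stored pattern. The Python sketch below is an assumption-laden illustration (network size, pattern count, and update schedule are invented for the example) and is not the spiking VLSI implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Store three random prototype patterns with a Hebbian outer-product rule,
        # a simplified stand-in for the connectivity shaped by on-chip plasticity.
        N = 100
        prototypes = rng.choice([-1, 1], size=(3, N))
        W = (prototypes.T @ prototypes) / N
        np.fill_diagonal(W, 0.0)

        # The stimulus sets a noisy initial state; relaxation retrieves the prototype.
        state = prototypes[0].copy()
        flipped = rng.choice(N, size=20, replace=False)
        state[flipped] *= -1

        for _ in range(10):                          # asynchronous relaxation sweeps
            for i in rng.permutation(N):
                state[i] = 1 if W[i] @ state >= 0 else -1

        print("overlap with stored prototype:", state @ prototypes[0] / N)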

    Short-term plasticity as cause-effect hypothesis testing in distal reward learning

    Asynchrony, overlaps and delays in sensory-motor signals introduce ambiguity as to which stimuli, actions, and rewards are causally related. Only the repetition of reward episodes helps distinguish true cause-effect relationships from coincidental occurrences. In the model proposed here, a novel plasticity rule employs short- and long-term changes to evaluate hypotheses about cause-effect relationships. Transient weights represent hypotheses that are consolidated in long-term memory only when they consistently predict or cause future rewards. The main objective of the model is to preserve existing network topologies when learning with ambiguous information flows. Learning is also improved by biasing the exploration of the stimulus-response space towards actions that in the past occurred before rewards. The model indicates under which conditions beliefs can be consolidated in long-term memory, suggests a solution to the plasticity-stability dilemma, and proposes an interpretation of the role of short-term plasticity. Comment: Biological Cybernetics, September 201
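
    As a rough illustration of the idea of transient "hypothesis" weights that are consolidated only when they repeatedly precede reward, the sketch below keeps a short-term and a long-term weight per synapse. The decay rate, consolidation threshold, and reward statistics are invented for the example and are not the paper's actual plasticity rule.

        import numpy as np

        rng = np.random.default_rng(1)

        n_syn = 5
        w_long = np.zeros(n_syn)        # consolidated long-term weights
        w_short = np.zeros(n_syn)       # transient weights holding cause-effect hypotheses
        decay = 0.9                     # per-step decay of transient weights (assumed)
        threshold = 0.5                 # consolidation threshold at reward time (assumed)

        for step in range(500):
            # Synapses active shortly before a potential reward are candidate causes;
            # synapse 0 truly precedes reward, the others are coincidental.
            active = rng.random(n_syn) < 0.2
            active[0] = rng.random() < 0.6
            w_short = decay * w_short + 0.2 * active

            reward = rng.random() < 0.1 and active[0]     # distal, stochastic reward
            if reward:
                confirmed = w_short > threshold            # repeatedly confirmed hypotheses
                w_long[confirmed] += 0.05 * w_short[confirmed]
                w_short[~confirmed] *= 0.5                 # weaken unconfirmed hypotheses

        print("long-term weights:", np.round(w_long, 3))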

    A Supervised STDP-based Training Algorithm for Living Neural Networks

    Neural networks have shown great potential in many applications such as speech recognition, drug discovery, image classification, and object detection. Neural network models are inspired by biological neural networks, but they are optimized to perform machine learning tasks on digital computers. The proposed work explores the possibility of using living neural networks in vitro as basic computational elements for machine learning applications. A new supervised STDP-based learning algorithm is proposed in this work, which takes neuron engineering constraints into account. A 74.7% accuracy is achieved on the MNIST benchmark for handwritten digit recognition. Comment: 5 pages, 3 figures, Accepted by ICASSP 201
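
    One plausible reading of a supervised STDP-style update is to gate the usual pair-based STDP change with a teacher signal indicating whether the neuron should fire for the current input. The sketch below assumes that gating scheme and generic kernel parameters; it may differ from the algorithm actually proposed in the paper.

        import numpy as np

        def stdp_delta(dt_ms, a_plus=0.05, a_minus=0.055, tau_ms=20.0):
            """Pair-based STDP kernel; dt_ms = t_post - t_pre."""
            if dt_ms >= 0:
                return a_plus * np.exp(-dt_ms / tau_ms)    # pre before post: potentiation
            return -a_minus * np.exp(dt_ms / tau_ms)       # post before pre: depression

        def supervised_update(w, t_pre, t_post, should_fire, lr=1.0):
            """Gate the unsupervised STDP change with a teacher signal: keep the
            change when the neuron is supposed to fire, invert it otherwise."""
            dw = stdp_delta(t_post - t_pre)
            return float(np.clip(w + lr * (dw if should_fire else -dw), 0.0, 1.0))

        w = 0.5
        w = supervised_update(w, t_pre=10.0, t_post=15.0, should_fire=True)
        print("updated weight:", round(w, 4))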

    A neuro-inspired system for online learning and recognition of parallel spike trains, based on spike latency and heterosynaptic STDP

    Humans perform remarkably well in many cognitive tasks, including pattern recognition. However, the neuronal mechanisms underlying this process are not well understood. Nevertheless, artificial neural networks inspired by brain circuits have been designed and used to tackle spatio-temporal pattern recognition tasks. In this paper we present a multineuronal spike pattern detection structure able to autonomously implement online learning and recognition of parallel spike sequences (i.e., sequences of pulses belonging to different neurons/neural ensembles). The operating principle of this structure is based on two spiking/synaptic neurocomputational characteristics: spike latency, which enables neurons to fire spikes with a certain delay, and heterosynaptic plasticity, which allows the synaptic weights to regulate themselves. From the perspective of information representation, the structure maps a spatio-temporal stimulus into a multidimensional, temporal feature space in which each coordinate is the time at which a particular neuron fires and represents one specific feature; in this sense, each feature can be considered to span a single temporal axis. We applied the proposed scheme to experimental data obtained from a motor inhibitory cognitive task. The test exhibits good classification performance, indicating the adequacy of our approach. In addition to its effectiveness, its simplicity and low computational cost make large-scale implementation feasible for real-time recognition applications in several areas, such as brain-computer interfaces, personal biometric authentication, or early detection of diseases. Comment: Submitted to Frontiers in Neuroscience
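
    The two ingredients named above, latency coding and heterosynaptic weight regulation, can be caricatured in a few lines: a toy latency function maps stronger input drive to earlier spikes, and a heterosynaptic rule potentiates the synapses that contributed to a postsynaptic spike while depressing the silent ones. All functions and constants below are illustrative assumptions, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(2)

        def firing_latency(drive, threshold=1.0, t_max=50.0):
            """Toy latency code: stronger synaptic drive -> earlier spike (in ms);
            no spike if the drive stays below threshold."""
            return None if drive <= threshold else t_max / drive

        def heterosynaptic_update(w, contributed, lr=0.05):
            """On a postsynaptic spike, potentiate the synapses that carried input
            and depress the silent ones, keeping the weights bounded."""
            return np.clip(w + lr * np.where(contributed, 1.0, -0.5), 0.0, 1.0)

        w = rng.uniform(0.2, 0.8, size=8)                   # weights of one detector neuron
        volley = rng.integers(0, 2, size=8).astype(float)   # one parallel spike volley
        latency = firing_latency(drive=4.0 * float(w @ volley))
        if latency is not None:
            w = heterosynaptic_update(w, contributed=volley > 0)
        print("latency (ms):", latency, "weights:", np.round(w, 2))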

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this effort by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial, clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems. Comment: Submitted to Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems

    STDP-driven networks and the \emph{C. elegans} neuronal network

    We study the dynamics of the structure of a formal neural network wherein the strengths of the synapses are governed by spike-timing-dependent plasticity (STDP). For properly chosen input signals, there exists a steady state with a residual network. We compare the motif profile of such a network with that of the real neural network of \emph{C. elegans} and identify robust qualitative similarities. In particular, our extensive numerical simulations show that the resulting STDP-driven network is robust under variations of the model parameters. Comment: 16 pages, 14 figures
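
    A minimal sketch of how STDP can carve a residual network out of an all-to-all one is given below: pair-based potentiation and depression are applied while externally imposed spike trains drive the neurons, and weak synapses are then pruned. The parameters, the nearest-spike approximation, and the use of external rather than recurrently generated activity are all simplifying assumptions made for the example, not the paper's simulation protocol.

        import numpy as np

        rng = np.random.default_rng(3)

        N, T = 20, 2000
        spikes = rng.random((T, N)) < 0.05          # Poisson-like input spike trains
        W = rng.uniform(0.3, 0.7, size=(N, N))      # W[i, j]: synapse from neuron i to neuron j
        np.fill_diagonal(W, 0.0)
        a_plus, a_minus, tau = 0.01, 0.012, 20.0    # assumed STDP parameters

        last_spike = np.full(N, -np.inf)
        for t in range(T):
            for j in np.where(spikes[t])[0]:
                dt = t - last_spike                      # time since every neuron's last spike
                W[:, j] += a_plus * np.exp(-dt / tau)    # incoming synapses: pre before post
                W[j, :] -= a_minus * np.exp(-dt / tau)   # outgoing synapses: post before pre
            last_spike[spikes[t]] = t
        np.fill_diagonal(W, 0.0)
        W = np.clip(W, 0.0, 1.0)

        # Prune weak synapses; the surviving "residual network" is what would be
        # compared, motif by motif, against the C. elegans wiring diagram.
        residual = W > 0.5
        print("surviving synapses:", int(residual.sum()), "of", N * (N - 1))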