
    Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines, a class of neural network models that use synaptic stochasticity as a means for Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where the stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling and of a regularizer during learning, akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an online fashion. Synaptic sampling machines perform equally well using discrete-time artificial units (as in Hopfield networks) or continuous-time leaky integrate & fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and to synapse pruning: removing more than 75% of the weakest connections followed by cursory re-learning causes negligible performance loss on benchmark classification tasks. The spiking neuron-based synaptic sampling machines outperform existing spike-based unsupervised learners while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for online learning in brain-inspired hardware.
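    A minimal sketch of the core idea, under stated assumptions: the function name `ssm_step`, the parameter `p_transmit`, and the deterministic threshold units are hypothetical stand-ins for the paper's neuron models. It illustrates how a Bernoulli mask over connections, rather than noise in the units, supplies the stochasticity.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssm_step(v, W, b, p_transmit=0.5):
    """One stochastic update of a layer driven by unreliable synapses.

    Each connection transmits independently with probability `p_transmit`
    (a Bernoulli blank-out mask, as in DropConnect), so the same weights
    yield a different effective network on every step; the units
    themselves respond deterministically to their masked input.
    """
    mask = rng.random(W.shape) < p_transmit   # random synaptic transmission mask
    u = (mask * W) @ v + b                    # masked synaptic input
    return (u > 0).astype(float)              # deterministic threshold units

# Toy usage: 784 binary visible units driving 100 hidden units.
W = rng.normal(scale=0.1, size=(100, 784))
b = np.zeros(100)
v = (rng.random(784) < 0.2).astype(float)
h = ssm_step(v, W, b)
```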

    Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems

    Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation, and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike-Timing-Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation, and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
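    For reference, a sketch of the discrete CD-1 update that the event-driven variant approximates; the name `cd1_update` and the Bernoulli-unit parameterization are assumptions. In the spiking version, the two Gibbs half-steps below are replaced by the network's recurrent activity, and the correlation statistics are accumulated online by STDP whose sign switches between the data and reconstruction phases.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, lr=0.01):
    """One CD-1 weight update on a Bernoulli RBM (in-place).

    v0: binary visible data vector; W: hidden-by-visible weights;
    a, b: visible and hidden biases.
    """
    h0 = (sigmoid(W @ v0 + b) > rng.random(len(b))).astype(float)    # data phase
    v1 = (sigmoid(W.T @ h0 + a) > rng.random(len(a))).astype(float)  # reconstruction
    h1 = sigmoid(W @ v1 + b)                                         # model phase (mean-field)
    W += lr * (np.outer(h0, v0) - np.outer(h1, v1))
    a += lr * (v0 - v1)
    b += lr * (h0 - h1)

# Toy usage on one binary input vector.
W = rng.normal(scale=0.01, size=(100, 784))
a, b = np.zeros(784), np.zeros(100)
v0 = (rng.random(784) < 0.2).astype(float)
cd1_update(v0, W, a, b)
```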

    A neuromorphic systems approach to in-memory computing with non-ideal memristive devices: From mitigation to exploitation

    Memristive devices represent a promising technology for building neuromorphic electronic systems. In addition to their compactness and non-volatility, they are characterized by computationally relevant physical properties, such as state dependence, non-linear conductance changes, and intrinsic variability in both their switching threshold and conductance values, which make them ideal devices for emulating the biophysics of real synapses. In this paper we present a spiking neural network architecture that supports the use of memristive devices as synaptic elements, and propose mixed-signal analog-digital interfacing circuits that mitigate the effect of variability in their conductance values and exploit the variability in their switching threshold to implement stochastic learning. The effect of device variability is mitigated by using pairs of memristive devices configured in a complementary push-pull mechanism and interfaced to a current-mode normalizer circuit. The stochastic learning mechanism is obtained by mapping the desired change in synaptic weight onto a corresponding switching probability derived from the intrinsically stochastic behavior of memristive devices. We demonstrate the features of the CMOS circuits and apply the proposed architecture to a standard hand-written digit classification benchmark based on the MNIST dataset. We evaluate the performance of the proposed approach on this benchmark using behavioral-level spiking neural network simulation, showing both the reduction in conductance variability produced by the current-mode normalizer circuit and the increase in performance as a function of the number of memristive devices used in each synapse.
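    A behavioral-level sketch of the stochastic update on one complementary device pair; the linear probability mapping and the names (`stochastic_memristor_update`, `p_max`, `dG`) are illustrative assumptions, not the paper's circuit-level mapping.

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_memristor_update(G_pos, G_neg, dw_desired,
                                p_max=0.1, dG=1e-6, G_max=1e-4):
    """Probabilistic update of a complementary (push-pull) memristor pair.

    The synaptic weight is read out as G_pos - G_neg, so common-mode
    conductance variability cancels. The desired weight change is not
    applied deterministically: its magnitude is mapped to a switching
    probability, standing in for the device's intrinsically stochastic
    switching threshold.
    """
    p_switch = min(p_max, abs(dw_desired))    # illustrative linear mapping
    if rng.random() < p_switch:
        if dw_desired > 0:
            G_pos = min(G_pos + dG, G_max)    # potentiate: step the positive device
        else:
            G_neg = min(G_neg + dG, G_max)    # depress: step the negative device
    return G_pos, G_neg
```

    Applying the same mapping independently to several such pairs per synapse would average out individual switching events, which is consistent with the reported performance increase as more devices are used in each synapse.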

    Contrastive Hebbian Learning with Random Feedback Weights

    Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases: the forward (or free) phase, where the data are fed to the network, and the backward (or clamped) phase, where the target signals are clamped to the output layer and the feedback signals are transformed through the transposed synaptic weight matrices. This implies symmetry at the synaptic level, for which there is no evidence in the brain. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, which does not rely on any synaptic weight symmetry. Instead, it uses random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first-order non-linear differential equations. The algorithm is experimentally verified on a Boolean logic task, on classification tasks (handwritten digits and letters), and on an autoencoding task. We also show how the parameters, especially the random matrices, affect learning, and use pseudospectra analysis to investigate further how the random matrices impact the learning process. Finally, we discuss the biological plausibility of the proposed algorithm and how it can give rise to better computational models of learning.
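    A simplified discrete-time sketch under stated assumptions: the paper describes continuous-time dynamics via first-order differential equations, whereas here each phase is a single forward pass, and the names (`rchl_update`, `gamma`) and the tanh single-hidden-layer network are hypothetical. It shows the key substitution: a fixed random matrix B carries the clamped-phase feedback in place of the transposed weights.

```python
import numpy as np

rng = np.random.default_rng(3)

def rchl_update(x, target, W1, W2, B, lr=0.05, gamma=0.5):
    """One random contrastive Hebbian learning step (in-place).

    Free phase: the input is propagated forward with no target.
    Clamped phase: the output is nudged toward the target, and the
    feedback reaches the hidden layer through the FIXED random matrix B
    rather than W2.T, removing the weight-symmetry assumption. Updates
    are local contrasts of activity between the two phases.
    """
    # Free phase.
    h_free = np.tanh(W1 @ x)
    y_free = np.tanh(W2 @ h_free)
    # Clamped phase: random feedback B carries the output mismatch.
    h_clamp = np.tanh(W1 @ x + gamma * B @ (target - y_free))
    y_clamp = np.tanh(W2 @ h_clamp)
    # Contrastive Hebbian updates: clamped minus free correlations.
    W2 += lr * (np.outer(y_clamp, h_clamp) - np.outer(y_free, h_free))
    W1 += lr * (np.outer(h_clamp, x) - np.outer(h_free, x))

# Toy usage: 8 inputs, 16 hidden units, 4 outputs.
W1 = rng.normal(scale=0.1, size=(16, 8))
W2 = rng.normal(scale=0.1, size=(4, 16))
B = rng.normal(scale=0.1, size=(16, 4))   # fixed random feedback matrix
x, target = rng.random(8), np.array([1.0, 0.0, 0.0, 0.0])
rchl_update(x, target, W1, W2, B)
```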