
    Deep Learning with the Random Neural Network and its Applications

    The random neural network (RNN) is a mathematical model of an "integrate and fire" spiking network that closely resembles the stochastic behaviour of neurons in mammalian brains. Since its proposal in 1989, there have been numerous investigations into the RNN's applications and learning algorithms. Deep learning (DL) has achieved great success in machine learning. Recently, the properties of the RNN for DL have been investigated with the aim of combining their strengths. Recent results demonstrate that the gap between the RNN and DL can be bridged, and that DL tools based on the RNN are faster and can potentially operate with less energy expenditure than existing methods. Comment: 23 pages, 19 figures
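    For context, Gelenbe's RNN admits a closed-form steady state; a brief statement of it (notation assumed here for illustration, not taken from the abstract): neuron i fires at rate r_i, receives external excitatory and inhibitory spikes at rates \Lambda_i and \lambda_i, and its probability q_i of being excited satisfies

        q_i = \frac{\lambda_i^{+}}{r_i + \lambda_i^{-}}, \qquad
        \lambda_i^{+} = \Lambda_i + \sum_j q_j w_{ji}^{+}, \qquad
        \lambda_i^{-} = \lambda_i + \sum_j q_j w_{ji}^{-},

    where w_{ji}^{+} and w_{ji}^{-} are the excitatory and inhibitory weights from neuron j to neuron i. Gradient-based RNN learning algorithms differentiate through this fixed point.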

    Encoding Neural and Synaptic Functionalities in Electron Spin: A Pathway to Efficient Neuromorphic Computing

    Present-day computers expend orders of magnitude more computational resources than the human brain to perform the various cognitive and perception-related tasks that humans routinely carry out every day. This has recently resulted in a seismic shift in the field of computation, where research efforts are being directed toward developing a neurocomputer that attempts to mimic the human brain with nanoelectronic components and thereby harness its efficiency in recognition problems. Bridging the gap between neuroscience and nanoelectronics, this paper provides a review of recent developments in the field of spintronic-device-based neuromorphic computing. It describes various spin-transfer torque mechanisms that can potentially be utilized to realize device structures mimicking neural and synaptic functionalities. A cross-layer perspective extending from the device to the circuit and system levels is presented to envision the design of an All-Spin neuromorphic processor enabled with on-chip learning functionalities. A device-circuit-algorithm co-simulation framework calibrated to experimental results suggests that such All-Spin neuromorphic systems can potentially achieve almost two orders of magnitude energy improvement over state-of-the-art CMOS implementations. Comment: The paper will appear in a future issue of Applied Physics Reviews.

    Efficient single input-output layer spiking neural classifier with time-varying weight model

    This paper presents a supervised learning algorithm, the Synaptic Efficacy Function with Meta-neuron based learning algorithm (SEF-M), for a spiking neural network with a time-varying weight model. For a given pattern, SEF-M uses a learning rule derived from the meta-neuron based learning algorithm to determine the change in weight associated with each presynaptic spike time. These weight changes modulate the amplitudes of Gaussian functions centred at the corresponding presynaptic spike times, and the sum of the amplitude-modulated Gaussians constitutes the synaptic efficacy function (i.e., the time-varying weight model). The performance of SEF-M is evaluated against state-of-the-art spiking neural network learning algorithms on 10 benchmark datasets from the UCI machine learning repository. Performance studies show the superior generalization ability of SEF-M. An ablation study on the time-varying weight model is conducted using the JAFFE dataset; its results indicate that using a time-varying weight model instead of a single-weight model improves classification accuracy by 14%. Thus, it can be inferred that a single input-output layer spiking neural network with a time-varying weight model is computationally more efficient than a multi-layer spiking neural network with a long-term or short-term weight model. Comment: 8 pages, 2 figures
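    A minimal sketch of the time-varying weight idea described above (the function name and the width parameter sigma are illustrative assumptions, not taken from the paper):

        import numpy as np

        def synaptic_efficacy(t, spike_times, amplitudes, sigma=1.0):
            """Time-varying weight: a sum of Gaussians centred at the
            presynaptic spike times, each scaled by a learned amplitude."""
            spike_times = np.asarray(spike_times, dtype=float)
            amplitudes = np.asarray(amplitudes, dtype=float)
            return float(np.sum(
                amplitudes * np.exp(-((t - spike_times) ** 2) / (2.0 * sigma ** 2))
            ))

        # Example: a synapse that received presynaptic spikes at 2 ms and 5 ms.
        w_at_4ms = synaptic_efficacy(t=4.0, spike_times=[2.0, 5.0],
                                     amplitudes=[0.8, -0.3])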

    A Survey of Neuromorphic Computing and Neural Networks in Hardware

    Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast with the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as to solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant: they start with an accurate neuroscience model of how the brain works and extend to finding the materials and engineering breakthroughs needed to build devices that support these models, creating a programming framework so the systems can learn, and creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research on, and motivations for, neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion of the major research topics that need to be addressed in the coming years for the promise of neuromorphic computing to be fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.

    Whetstone: A Method for Training Deep Artificial Neural Networks for Binary Communication

    This paper presents a new technique for training networks for low-precision communication. Targeting minimal communication between nodes not only enables the use of emerging spiking neuromorphic platforms, but may additionally streamline processing on conventional hardware. Low-power and embedded neuromorphic processors potentially offer dramatic performance-per-Watt improvements over traditional von Neumann processors; however, programming these brain-inspired platforms generally requires platform-specific expertise, which limits their applicability. To date, the majority of artificial neural networks have not operated using discrete spike-like communication. We present a method for training deep spiking neural networks using an iterative modification of the backpropagation optimization algorithm. This method, which we call Whetstone, effectively and reliably configures a network for a spiking hardware target with little, if any, loss in performance. Whetstone networks use single-time-step binary communication and do not require a rate code or other spike-based coding scheme, thus producing networks comparable in timing and size to conventional ANNs, albeit with binarized communication. We demonstrate Whetstone on a number of image classification networks, describing how the sharpening process interacts with different training optimizers and changes the distribution of activity within the network. We further note that Whetstone is compatible with several non-classification neural network applications, such as autoencoders and semantic segmentation. Whetstone is widely extendable and is currently implemented using custom activation functions within the Keras wrapper to the popular TensorFlow machine learning framework.
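    A minimal sketch of the sharpening idea: during training, each layer's bounded activation is gradually steepened until it becomes a hard 0/1 step, at which point every unit communicates with a single-time-step binary spike. The function name and annealing details below are illustrative assumptions, not the paper's implementation:

        import numpy as np

        def sharpened_activation(x, sharpness):
            """Bounded ramp that interpolates between a soft ramp
            (sharpness = 0) and a hard 0/1 step (sharpness = 1),
            centred at x = 0.5."""
            width = 1.0 - sharpness          # active region shrinks as we sharpen
            if width <= 0.0:
                return (x >= 0.5).astype(float)   # fully binary communication
            lo = 0.5 - width / 2.0
            return np.clip((x - lo) / width, 0.0, 1.0)

        # Annealing sharpness from 0 toward 1 (typically layer by layer)
        # leaves a network whose activations are all exactly 0 or 1.
        x = np.linspace(-1.0, 2.0, 7)
        soft, hard = sharpened_activation(x, 0.2), sharpened_activation(x, 1.0)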

    BP-STDP: Approximating Backpropagation using Spike Timing Dependent Plasticity

    Training spiking neural networks (SNNs) is a necessary precondition to understanding computation in the brain, and the field is still in its infancy. Previous work has shown that supervised learning in multi-layer SNNs enables bio-inspired networks to recognize patterns of stimuli through hierarchical feature acquisition. Although gradient descent has shown impressive performance in multi-layer (and deep) SNNs, it is generally not considered biologically plausible and is also computationally expensive. This paper proposes a novel supervised learning approach based on an event-based spike-timing-dependent plasticity (STDP) rule embedded in a network of integrate-and-fire (IF) neurons. The proposed temporally local learning rule follows the backpropagation weight-change updates applied at each time step. This approach enjoys the benefits of both accurate gradient descent and temporally local, efficient STDP, and is thus able to address some open questions regarding the accurate and efficient computations that occur in the brain. Experimental results on the XOR problem, the Iris data, and the MNIST dataset demonstrate that the proposed SNN performs as successfully as traditional NNs. Our approach also compares favorably with state-of-the-art multi-layer SNNs.
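    A minimal sketch of an output-layer update in this spirit, assuming binary spike trains and a short eligibility window (the array shapes, window length, and teaching-signal encoding are illustrative assumptions, not the paper's exact rule):

        import numpy as np

        def stdp_like_output_update(w, pre_spikes, post_spikes, target_spikes,
                                    lr=0.01, window=5):
            """w: (n_pre,) float weights onto one output IF neuron.
            pre_spikes: (T, n_pre) binary presynaptic spike trains.
            post_spikes, target_spikes: (T,) actual and desired output spikes.

            At each step the teaching signal is +1 (should have spiked but
            did not), -1 (spiked but should not have), or 0. Each synapse is
            nudged in proportion to whether its presynaptic neuron fired
            within the recent window (an STDP-like eligibility trace)."""
            T = post_spikes.shape[0]
            for t in range(T):
                error = int(target_spikes[t]) - int(post_spikes[t])
                if error != 0:
                    recent = pre_spikes[max(0, t - window + 1):t + 1]
                    eligibility = recent.any(axis=0).astype(float)
                    w += lr * error * eligibility
            return w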

    Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible to Various Temporal Codes

    Conventional modeling approaches have run into limitations when matching the increasingly detailed neural network structures and dynamics recorded in experiments to the brain's diverse functionalities. In a complementary line of work, studies have demonstrated that spiking neural networks can be trained to perform simple functions using supervised learning. Here, we introduce a modified SpikeProp learning algorithm, which achieves better learning stability across different activity states. In addition, we show that biologically realistic features such as lateral connections and sparse activity can be included in the network. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks: MNIST digit recognition, spatial coordinate transformation, and motor sequence generation. Moreover, we find that several characteristic features evolve alongside task training, such as selective activity, excitatory-inhibitory balance, and weak pair-wise correlation. The coincidence between these self-evolved and experimentally observed features indicates their importance for brain functionality. Our results suggest a unified setting in which diverse cognitive computations and mechanisms can be studied. Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version will be superseded.
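    For background, the output-layer update of the original SpikeProp rule that this framework modifies can be written as follows (Bohte et al.'s formulation, stated from memory; the symbols are not taken from the abstract). With t_j^a the actual and t_j^d the desired first-spike time of output neuron j, and y_i^k(t) the unweighted postsynaptic potential arriving via delayed terminal k from presynaptic neuron i:

        \Delta w_{ij}^{k} = -\eta \, y_i^{k}(t_j^{a}) \, \delta_j,
        \qquad
        \delta_j = \frac{t_j^{d} - t_j^{a}}
                        {\sum_{i,k} w_{ij}^{k} \,
                         \partial y_i^{k}(t)/\partial t \,\big|_{t = t_j^{a}}}

    The modification described in the paper targets the stability of this update across different network activity states.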

    A Cognitive Architecture Based on a Learning Classifier System with Spiking Classifiers

    Learning Classifier Systems (LCS) are population-based reinforcement learners that were originally designed to model various cognitive phenomena. This paper presents an explicitly cognitive LCS that uses spiking neural networks as classifiers, providing each classifier with a measure of temporal dynamism. We employ a constructivist model of growth for both neurons and synaptic connections, which permits a Genetic Algorithm (GA) to automatically evolve sufficiently complex neural structures. The spiking classifiers are coupled with a temporally sensitive reinforcement learning algorithm, which allows the system to perform temporal state decomposition by appropriately rewarding "macro-actions," created by chaining together multiple atomic actions. The combination of temporal reinforcement learning and neural information processing is shown to outperform benchmark neural classifier systems and to successfully solve a robotic navigation task.

    The Future of Neural Networks

    The paper describes some recent developments in neural networks and discusses the applicability of neural networks to the development of a machine that mimics the human brain. It covers a new architecture, the pulsed neural network, which is being considered as the next generation of neural networks, and explores the use of memristors in the development of a brain-like computer called MoNETA. It also notes a recent development in advanced neural networks: multi/infinite-dimensional neural networks. The paper concludes that neural networks are essential, and perhaps indispensable, to the development of human-like technology. Comment: 6 pages, 2 figures

    Flexible statistical inference for mechanistic models of neural dynamics

    Mechanistic models of single-neuron dynamics have been extensively studied in computational neuroscience. However, identifying which models can quantitatively reproduce empirically measured data has been challenging. We propose to overcome this limitation by using likelihood-free inference approaches (also known as Approximate Bayesian Computation, ABC) to perform full Bayesian inference on single-neuron models. Our approach builds on recent advances in ABC by learning a neural network which maps features of the observed data to the posterior distribution over parameters. We learn a Bayesian mixture-density network approximating the posterior over multiple rounds of adaptively chosen simulations. Furthermore, we propose an efficient approach for handling missing features and parameter settings for which the simulator fails, as well as a strategy for automatically learning relevant features using recurrent neural networks. On synthetic data, our approach efficiently estimates posterior distributions and recovers ground-truth parameters. On in-vitro recordings of membrane voltages, we recover multivariate posteriors over biophysical parameters, which yield model-predicted voltage traces that accurately match empirical data. Our approach will enable neuroscientists to perform Bayesian inference on complex neuron models without having to design model-specific algorithms, closing the gap between mechanistic and statistical approaches to single-neuron modelling. Comment: NIPS 2017. The first two authors contributed equally.
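    A toy sketch of the round-based idea: draw parameters from a proposal, simulate, weight each simulation by how close its summary features land to the observed data, and refocus the proposal on the high-weight region. Here a simple Gaussian kernel and a single-Gaussian proposal stand in for the learned mixture-density network; the simulator, feature, and all constants are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(0)

        def simulator(theta):
            """Toy stand-in for a mechanistic neuron model: maps a
            parameter to a noisy summary feature of the simulated trace."""
            return theta + 0.5 * rng.standard_normal(theta.shape)

        x_obs = np.array([1.0])          # observed summary feature

        mu, sigma = 0.0, 3.0             # proposal starts broad, like a prior
        for _ in range(3):               # rounds of adaptively chosen simulations
            thetas = rng.normal(mu, sigma, size=(1000, 1))
            xs = simulator(thetas)
            w = np.exp(-0.5 * ((xs - x_obs) / 0.2) ** 2).ravel()
            w /= w.sum()
            mu = float(w @ thetas.ravel())                          # posterior mean
            sigma = float(np.sqrt(w @ (thetas.ravel() - mu) ** 2))  # posterior std
        print(f"approximate posterior: N({mu:.2f}, {sigma:.2f}^2)")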