4 research outputs found

    A Reconfigurable Architecture for Implementing Locally Connected Neural Arrays

    Moore’s law is rapidly approaching a long-predicted decline, and with it the performance gains of conventional processors are becoming ever more marginal. Cognitive computing systems based on neural networks have the potential to provide a solution to the decline of Moore’s law. Identifying common traits in neural systems can lead to the design of more efficient, robust and adaptable processors. Despite this potential, large-scale neural systems remain difficult to implement due to constraints on scalability. Here we introduce a new hardware architecture for implementing locally connected neural networks that can model biological systems with a high level of scalability. We validate our architecture using a full model of the locomotion system of Caenorhabditis elegans. Further, we show that our proposed architecture achieves a nine-fold increase in clock speed over existing hardware models. Importantly, the clock speed for our architecture is found to be independent of system size, providing an unparalleled level of scalability. Our approach can be applied to the modelling of large neural networks, with greater performance, easier configuration and a high level of scalability.
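
    The abstract does not describe the hardware in detail, but the key property it claims (per-neuron work that depends only on a fixed local neighbourhood, not on total system size) can be illustrated with a minimal software sketch. The 1-D array, nearest-neighbour connectivity and leaky-integrator dynamics below are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def step_locally_connected(v, w_local, drive, decay=0.9):
    """One update of a 1-D locally connected neural array.

    Each neuron reads only its immediate neighbours, so the work per
    neuron is constant regardless of array size -- the property that
    allows the update rate to stay independent of system size.
    """
    left = np.roll(v, 1)    # activity of the left neighbour
    right = np.roll(v, -1)  # activity of the right neighbour
    return decay * v + w_local * (left + right) + drive

# Example: a 1000-neuron array driven at its midpoint.
v = np.zeros(1000)
drive = np.zeros(1000)
drive[500] = 1.0
for _ in range(10):
    v = step_locally_connected(v, w_local=0.05, drive=drive)
```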

    FlexLearn: Fast and Highly Efficient Brain Simulations Using Flexible On-Chip Learning

    To understand how the human brain works, neuroscientists rely heavily on brain simulations which incorporate the concept of time into their operating model. In the simulations, neurons transmit their signals through synapses whose weights change over time and with the activity of the associated neurons. Such changes in synaptic weights, known as learning, are thought to contribute to memory, and various learning rules exist to model different behaviors of the human brain. Because of the diverse neurons and learning rules, neuroscientists perform the simulations using highly programmable general-purpose processors. Unfortunately, these processors suffer greatly from the high computational overheads of the learning rules. As an alternative, brain simulation accelerators achieve orders of magnitude higher performance; however, they have limited flexibility and cannot support the diverse neurons and learning rules. In this paper, we present FlexLearn, a flexible on-chip learning engine that enables fast and highly efficient brain simulations. FlexLearn achieves high flexibility by supporting diverse biologically plausible sub-rules which can be combined to simulate various target learning rules. To design FlexLearn, we first identify 17 representative sub-rules which adjust the synaptic weights in different manners. Then, we design and compact the specialized datapaths for the sub-rules and identify dependencies between them to maximize parallelism. After that, we present an example flexible brain simulation processor by integrating the datapaths with a state-of-the-art flexible digital neuron and an existing accelerator to support end-to-end simulations. Our evaluation using a 45-nm cell library shows that the 128-core brain simulation processor prototype with FlexLearn greatly improves the harmonic mean per-area performance and the energy efficiency by 30.07x and 126.87x, respectively, over a server-class CPU. The prototype also achieves a harmonic mean per-area speedup of 1.41x over the current state-of-the-art 128-core accelerator which supports programmable learning rules.
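
    The abstract does not enumerate the 17 sub-rules, but the general idea of composing a learning rule from simpler weight-update sub-rules can be sketched in software. The sub-rules below (a Hebbian term, weight decay, and hard bounds) are generic textbook examples chosen for illustration, not FlexLearn's actual datapaths:

```python
import numpy as np

def hebbian(pre, post, w, lr=0.01):
    return lr * pre * post          # potentiate when pre and post are co-active

def decay(pre, post, w, rate=0.001):
    return -rate * w                # slow forgetting toward zero

def compose(sub_rules):
    """Combine sub-rule contributions into a single weight-update rule."""
    def rule(pre, post, w):
        dw = sum(r(pre, post, w) for r in sub_rules)
        return np.clip(w + dw, 0.0, 1.0)  # keep the weight in a bounded range
    return rule

learning_rule = compose([hebbian, decay])
w = 0.5
w = learning_rule(pre=1.0, post=1.0, w=w)   # correlated activity -> w increases
```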

    Spiking Neural Computing in Memristive Neuromorphic Platforms

    Neuromorphic computation using Spiking Neural Networks (SNN) is proposed as an alternative solution for the future of computation, aiming to overcome the memory bottleneck issue in recent computer architectures. Different spike codings have been discussed to improve data transfer and data processing in neuro-inspired computation paradigms. Choosing the appropriate neural network topology can result in better performance of computation, recognition and classification. The neuron model is another important factor in designing and implementing SNN systems. The speed of simulation and implementation, ease of integration with the other elements of the network, and suitability for scalable networks are the factors used to select a neuron model. Learning algorithms are a significant consideration for training the neural network through weight modification. Improving learning in neuromorphic architectures is feasible by improving the quality of the artificial synapse as well as the learning algorithm, such as STDP. In this chapter we propose a new synapse box that can remember and forget. Furthermore, as the most frequently used unsupervised method for network training in SNN is STDP, we analyze and review the various methods of STDP. The sequential order of pre- or postsynaptic spikes occurring across a synapse in an interval of time leads to the definition of different STDP methods. Based on the importance of stability, as well as Hebbian or anti-Hebbian competition, a particular method is chosen for weight modification. We survey the most significant projects that have produced neuromorphic platforms. The advantages and disadvantages of each neuromorphic platform are introduced in this chapter.
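
    The chapter reviews several STDP variants; the classic pair-based form, in which the sign of the weight change depends on whether the presynaptic spike precedes or follows the postsynaptic spike, is widely published and can serve as a minimal sketch. The amplitudes and the 20 ms time constant below are typical textbook values, not figures from this chapter:

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a single pre/post spike pair.

    If the presynaptic spike precedes the postsynaptic spike
    (dt = t_post - t_pre > 0) the synapse is potentiated; if it follows,
    the synapse is depressed. Times are in milliseconds.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # pre before post: potentiation
    else:
        return -a_minus * np.exp(dt / tau)   # post before pre: depression

print(stdp_dw(t_pre=10.0, t_post=15.0))  # small positive weight change
print(stdp_dw(t_pre=15.0, t_post=10.0))  # small negative weight change
```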