
    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by raising the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
    Comment: Submitted to Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems.
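
    As an illustration of the kind of neuron model such platforms implement, here is a minimal discrete-time simulation of a leaky integrate-and-fire (LIF) neuron; the parameters are generic textbook values, not those of any specific platform in the survey:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau_m=20e-3,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron:
    tau_m * dV/dt = -(V - v_rest) + I, spiking when V crosses v_thresh."""
    v = v_rest
    spikes = np.zeros(len(input_current), dtype=bool)
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau_m   # leaky integration
        if v >= v_thresh:                          # threshold crossing
            spikes[t] = True
            v = v_reset                            # reset after the spike
    return spikes

# A constant suprathreshold current yields regular spiking.
print("spike count:", simulate_lif(np.full(1000, 1.5)).sum())
```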

    AxP: A HW-SW co-design pipeline for energy-efficient approximated ConvNets via associative matching

    Reducing energy consumption is key for deep neural networks (DNNs) to ensure usability and reliability, whether they are deployed on low-power end-nodes with limited resources or on high-performance platforms that serve large pools of users. Leveraging the over-parametrization exhibited by many DNN models, convolutional neural networks (ConvNets) in particular, energy efficiency can be improved substantially while preserving model accuracy. The solution proposed in this work exploits the intrinsic redundancy of ConvNets to maximize the reuse of partial arithmetic results during the inference stage. Specifically, the weight-set of a given ConvNet is discretized through a clustering procedure such that the largest possible number of inner multiplications fall into predefined bins; this allows an off-line computation of the most frequent results, which in turn can be stored locally and retrieved when needed during the forward pass. Such a reuse mechanism leads to remarkable energy savings with the aid of a custom processing element (PE) that integrates an associative memory with a standard floating-point unit (FPU). Moreover, the adoption of an approximate associative rule based on a partial bit-match increases the hit rate over the pre-computed results, maximizing the energy reduction even further. Results collected on a set of ConvNets trained for computer vision and speech processing tasks reveal that the proposed associative-based HW-SW co-design achieves up to 77% energy savings with less than 1% accuracy loss.
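
    The reuse mechanism can be sketched in a few lines of Python. The class and function names, the 16-level quantile discretization, and the 18-bit key truncation below are illustrative assumptions, not the paper's actual clustering pipeline or PE microarchitecture:

```python
import numpy as np

def truncate_bits(x, drop_bits=18):
    """Build an approximate lookup key by clearing the low-order mantissa
    bits of a float32 input -- a software stand-in for the paper's
    partial bit-match rule in the associative memory."""
    bits = np.array(x, dtype=np.float32).view(np.uint32).item()
    return bits & ~((1 << drop_bits) - 1)

class ApproxMatchPE:
    """Toy processing element: an associative table of precomputed
    products in front of an exact multiplier (the FPU fallback)."""

    def __init__(self, centroids, drop_bits=18):
        self.centroids = centroids   # clustered weight values
        self.table = {}              # (weight_id, input_key) -> product
        self.drop_bits = drop_bits
        self.hits = self.misses = 0

    def multiply(self, weight_id, x):
        key = (weight_id, truncate_bits(x, self.drop_bits))
        if key in self.table:
            self.hits += 1
            return self.table[key]   # reuse a previously computed product
        self.misses += 1
        result = float(self.centroids[weight_id]) * x  # exact FPU multiply
        self.table[key] = result     # store for later associative hits
        return result

# Discretize a toy weight set to 16 levels; the paper uses a clustering
# procedure, and quantiles are just a simple stand-in here.
rng = np.random.default_rng(0)
weights = rng.standard_normal(256).astype(np.float32)
centroids = np.quantile(weights, np.linspace(0, 1, 16)).astype(np.float32)

pe = ApproxMatchPE(centroids)
for w_id, x in zip(rng.integers(0, 16, 10_000), rng.standard_normal(10_000)):
    pe.multiply(int(w_id), float(x))
print(f"associative hit rate: {pe.hits / (pe.hits + pe.misses):.1%}")
```

    Widening the truncation (more dropped bits) raises the hit rate and the energy saved at the cost of accuracy, which mirrors the accuracy/energy trade-off the abstract reports.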

    Learning and Leveraging Neural Memories

    Learning in the Neural Engineering Framework (NEF) and the Semantic Pointer Architecture (SPA) has recently been extended beyond the supervised Prescribed Error Sensitivity (PES) rule to include the unsupervised Vector Oja (Voja) rule. This thesis demonstrates how the combination of these learning rules can be used to learn associative memories. Moreover, these techniques are used to provide explanations of two behavioral cognitive phenomena, both modeled with spiking neurons. First, the standard progression of cognitive addition strategies from counting to memorization, as occurs in children, is modeled as a transfer of skills: addition by counting is initially performed in the slow basal-ganglia-based system before being overtaken by a rapid cortical associative memory, a form of pre-frontal cortical consolidation. Second, a word-pair recognition task, in which two distinct types of word-pairs are memorized, is modeled. The Voja learning rule is modified to match the temporal lobe magnetoencephalography (MEG) data generated by each word-pair type observed during the task. This empirically grounds the associative memory model, which has not been possible using other cognitive modeling paradigms. The distinct implementation of Voja for each area, pre-frontal and temporal, demonstrates the different roles that these areas perform during learning.
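
    The two rules combine naturally into an associative memory: Voja pulls the encoders of active neurons toward the current key so those neurons become selective for it, while PES adjusts decoders to reduce the output error. Below is a rate-based sketch; the rectified-linear activity stand-in, learning rates, and dimensions are assumptions, and Nengo's spiking NEF implementations differ in detail:

```python
import numpy as np

def voja_step(encoders, activities, x, kappa=1e-2):
    """Vector Oja (Voja): pull each active neuron's encoder toward the
    current input x, making the neuron selective for familiar inputs."""
    return encoders + kappa * activities[:, None] * (x[None, :] - encoders)

def pes_step(decoders, activities, error, kappa=5e-3):
    """Prescribed Error Sensitivity (PES): move decoders against the error
    signal, weighted by each neuron's activity (error = output - target)."""
    return decoders - kappa * np.outer(activities, error)

rng = np.random.default_rng(0)
n, d = 200, 4                                  # neurons, dimensions
encoders = rng.standard_normal((n, d))
encoders /= np.linalg.norm(encoders, axis=1, keepdims=True)
decoders = np.zeros((n, d))

# Ten random key -> value pairs to memorize.
keys = rng.standard_normal((10, d))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)
values = rng.standard_normal((10, d))

for _ in range(5000):
    i = rng.integers(10)
    x, target = keys[i], values[i]
    a = np.maximum(0.0, encoders @ x)          # rectified rates stand in for spikes
    encoders = voja_step(encoders, a, x)
    output = decoders.T @ a
    decoders = pes_step(decoders, a, output - target)

recall = np.array([decoders.T @ np.maximum(0.0, encoders @ k) for k in keys])
print("mean recall error:", np.linalg.norm(recall - values, axis=1).mean())
```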