
    Learning-based Approaches for Controlling Neural Spiking

    We consider the problem of controlling populations of interconnected neurons using extrinsic stimulation. Such a problem, which is relevant to applications in both basic neuroscience and brain medicine, is challenging due to the nonlinearity of neuronal dynamics and the highly unpredictable structure of the underlying neuronal networks. Compounding this difficulty is the fact that most neurostimulation technologies offer only a single degree of freedom to actuate tens to hundreds of interconnected neurons. To meet these challenges, we consider an adaptive, learning-based approach to controlling neural spike trains. Rather than explicitly modeling neural dynamics and designing optimal controls, we synthesize a so-called control network (CONET) that interacts with the spiking network by maximizing the Shannon mutual information between itself and the realized spiking outputs. The CONET thus learns a representation of the spiking network that subsequently allows it to learn suitable control signals through a reinforcement-type mechanism. We demonstrate the feasibility of the approach by controlling networks of stochastic spiking neurons, in which desired patterns are induced at neuron-to-actuator ratios in excess of 10 to 1.
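    The abstract does not spell out the CONET update equations, but the "reinforcement-type mechanism" it mentions can be illustrated with a minimal sketch: a single scalar stimulation channel drives a small population of Bernoulli spiking neurons, and a linear policy is updated with a REINFORCE-style rule to evoke a desired pattern. The network model, reward, and sizes below are assumptions for illustration, and the mutual-information objective from the paper is omitted.

```python
# Toy sketch (not the paper's CONET): one stimulation channel drives a
# population of stochastic (Bernoulli) spiking neurons; a linear policy is
# updated with a REINFORCE-style rule to evoke a desired spike pattern.
# Network model, reward, and all sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 20                                          # >10-to-1 neuron-to-actuator ratio
W = 0.3 * rng.standard_normal((n_neurons, n_neurons))   # unknown recurrent coupling
b = rng.uniform(0.5, 1.5, n_neurons)                    # per-neuron stimulation gain
target = (rng.random(n_neurons) < 0.4).astype(float)    # desired spike pattern

def step(prev_spikes, u):
    """One time step of the stochastic spiking network under scalar drive u."""
    p = 1.0 / (1.0 + np.exp(-(W @ prev_spikes + b * u - 1.0)))
    return (rng.random(n_neurons) < p).astype(float)

theta = np.zeros(n_neurons + 1)      # linear policy: target pattern -> mean drive
sigma, lr, baseline = 0.3, 0.05, 0.0
for episode in range(2000):
    x = np.append(target, 1.0)                          # policy input (target + bias)
    mu = theta @ x
    u = mu + sigma * rng.standard_normal()              # exploratory stimulation
    spikes = step(np.zeros(n_neurons), u)
    reward = -np.abs(spikes - target).mean()            # pattern-matching reward
    baseline = 0.99 * baseline + 0.01 * reward
    theta += lr * (reward - baseline) * (u - mu) / sigma**2 * x   # REINFORCE update
print("final mismatch:", -reward)
```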

    Neuro-inspired unsupervised learning and pruning with subquantum CBRAM arrays

    Resistive RAM crossbar arrays offer an attractive solution for minimizing off-chip data transfer and parallelizing on-chip computations for neural networks. Here, we report a hardware/software co-design approach based on low-energy subquantum conductive bridging RAM (CBRAM®) devices and a network pruning technique to reduce network-level energy consumption. First, we demonstrate low-energy subquantum CBRAM devices exhibiting the gradual switching characteristics important for implementing weight updates in hardware during unsupervised learning. We then develop a network pruning algorithm that can be employed during training, unlike previous network pruning approaches, which were applied only at inference time. Using a 512 kbit subquantum CBRAM array, we experimentally demonstrate high recognition accuracy on the MNIST dataset for a digital implementation of unsupervised learning. Our hardware/software co-design approach can pave the way toward resistive-memory-based neuro-inspired systems that can autonomously learn and process information in power-limited settings.
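    The paper's pruning algorithm is not reproduced in the abstract; as a generic illustration of pruning applied during training (rather than only at inference), the sketch below periodically masks the smallest-magnitude weights of a toy regression model while SGD continues. The task, schedule, and target sparsity are assumptions, and nothing here models CBRAM devices.

```python
# Illustrative sketch only: magnitude-based pruning applied periodically
# *during* training of a tiny linear model on synthetic data. This is not the
# paper's algorithm; sizes, schedule, and 50% target sparsity are assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 64))
true_w = rng.standard_normal(64) * (rng.random(64) < 0.2)   # sparse ground truth
y = X @ true_w + 0.1 * rng.standard_normal(500)

w = np.zeros(64)
mask = np.ones(64, bool)                       # pruned weights stay clamped to zero
lr, prune_every, target_sparsity = 0.01, 100, 0.5
for step in range(1000):
    i = rng.integers(0, 500)
    grad = (X[i] @ w - y[i]) * X[i]            # SGD on squared error
    w -= lr * grad
    w[~mask] = 0.0
    if (step + 1) % prune_every == 0:          # prune the smallest surviving weights
        k = int(target_sparsity * w.size)
        thresh = np.sort(np.abs(w))[k]
        mask &= np.abs(w) >= thresh
        w[~mask] = 0.0
print("sparsity:", 1 - mask.mean(), "MSE:", np.mean((X @ w - y) ** 2))
```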

    Adaptive motor control and learning in a spiking neural network realised on a mixed-signal neuromorphic processor

    Neuromorphic computing is a new paradigm for designing both computing hardware and algorithms inspired by biological neural networks. The event-based nature and inherent parallelism make neuromorphic computing a promising paradigm for building efficient neural-network-based architectures for the control of fast and agile robots. In this paper, we present a spiking neural network architecture that uses sensory feedback to control the rotational velocity of a robotic vehicle. When the velocity reaches the target value, the mapping from the target velocity of the vehicle to the correct motor command, both represented in the spiking neural network on the neuromorphic device, is autonomously stored on the device using on-chip plastic synaptic weights. We validate the controller using the wheel motor of a miniature mobile vehicle, with an inertial measurement unit providing sensory feedback, and demonstrate online learning of a simple 'inverse model' in a two-layer spiking neural network on the neuromorphic chip. The prototype neuromorphic device, which features 256 spiking neurons, allows us to realise a simple proof-of-concept architecture for purely neuromorphic motor control and learning. The architecture can be easily scaled up if a larger neuromorphic device is available.
    Comment: 6+1 pages, 4 figures; to appear in a robotics conference.
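    A software analogue of the 'inverse model' idea can be sketched as follows: a population of tuning-curve neurons encodes the sensed velocity (first layer), and a delta rule trains plastic weights that map that code to the motor command that produced it (second layer); the stored mapping is then queried with a target velocity. The plant, tuning curves, and learning rate below are illustrative assumptions, not the mixed-signal neuromorphic implementation.

```python
# Minimal sketch (software analogue, not the neuromorphic chip): learn an
# 'inverse model' from velocity to motor command online with a delta rule on a
# population-coded layer, then query it with a target velocity.
import numpy as np

rng = np.random.default_rng(2)
centers = np.linspace(-1.0, 1.0, 32)           # velocity tuning-curve centers

def encode(v, width=0.15):
    """Population code: Gaussian tuning curves over velocity (first layer)."""
    return np.exp(-((v - centers) ** 2) / (2 * width ** 2))

def plant(command):
    """Unknown motor plant: command -> rotational velocity (with noise)."""
    return np.tanh(1.7 * command) + 0.02 * rng.standard_normal()

w = np.zeros(32)                               # plastic weights to the motor layer
lr = 0.05
for trial in range(3000):
    command = rng.uniform(-1.0, 1.0)           # motor babbling
    v = plant(command)                         # sensory feedback (e.g. from an IMU)
    a = encode(v)
    w += lr * (command - w @ a) * a            # delta rule: store velocity -> command

target_v = 0.5                                 # query the learned inverse model
u = w @ encode(target_v)
print("command for target:", u, "achieved velocity:", plant(u))
```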

    Comparing Offline Decoding Performance in Physiologically Defined Neuronal Classes

    Objective: Several studies have recently documented the presence of a bimodal distribution of spike waveform widths in primary motor cortex. Although narrow- and wide-spiking neurons, corresponding to the two modes of the distribution, exhibit different response properties, it remains unknown whether these differences give rise to differential decoding performance between the two classes of cells. Approach: We used a Gaussian mixture model to classify neurons into narrow and wide physiological classes. Using similar-size, random samples of neurons from these two physiological classes, we trained offline decoding models to predict a variety of movement features. We compared offline decoding performance between these two physiologically defined populations of cells. Main results: We found that narrow-spiking neural ensembles decode motor parameters, including kinematics, kinetics, and muscle activity, better than wide-spiking neural ensembles. Significance: These findings suggest that the utility of neural ensembles in brain-machine interfaces may be predicted from their spike waveform widths.
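    The classification step described in the Approach can be sketched with a two-component Gaussian mixture fit to spike waveform widths, followed by drawing similar-size random samples from each class. The widths below are synthetic, bimodal stand-ins (hypothetical trough-to-peak widths), and the decoding comparison itself is omitted.

```python
# Sketch of the classification step: fit a two-component Gaussian mixture to
# spike waveform widths and label each unit as narrow or wide. Widths here are
# synthetic stand-ins; the offline decoding comparison is not shown.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical trough-to-peak widths (microseconds) for two putative classes
widths = np.concatenate([rng.normal(200, 30, 120),      # narrow-spiking units
                         rng.normal(450, 60, 180)])     # wide-spiking units
X = widths.reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
narrow_label = int(np.argmin(gmm.means_.ravel()))       # component with smaller mean
narrow_units = np.flatnonzero(labels == narrow_label)
wide_units = np.flatnonzero(labels != narrow_label)

n = min(narrow_units.size, wide_units.size)             # similar-size random samples
sample_narrow = rng.choice(narrow_units, size=n, replace=False)
sample_wide = rng.choice(wide_units, size=n, replace=False)
print(f"{narrow_units.size} narrow vs {wide_units.size} wide units; "
      f"class means (us): {np.sort(gmm.means_.ravel()).round(1)}")
```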

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this effort by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
    Comment: Submitted to Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems.

    Principles of Neuromorphic Photonics

    In an age overrun with information, the ability to process reams of data has become crucial. The demand for data will continue to grow as smart gadgets multiply and become increasingly integrated into our daily lives. Next-generation industries in artificial intelligence services and high-performance computing are so far supported by microelectronic platforms. These data-intensive enterprises rely on continual improvements in hardware. Their prospects are running up against a stark reality: conventional one-size-fits-all solutions offered by digital electronics can no longer satisfy this need, as Moore's law (exponential hardware scaling), interconnection density, and the von Neumann architecture reach their limits. With its superior speed and reconfigurability, analog photonics can provide some relief to these problems; however, complex applications of analog photonics have remained largely unexplored due to the absence of a robust photonic integration industry. Recently, the landscape for commercially manufacturable photonic chips has been changing rapidly and now promises to achieve economies of scale previously enjoyed solely by microelectronics. The scientific community has set out to build bridges between the domains of photonic device physics and neural networks, giving rise to the field of neuromorphic photonics. This article reviews recent progress in integrated neuromorphic photonics. We provide an overview of neuromorphic computing, discuss the associated technology (microelectronic and photonic) platforms, and compare their performance metrics. We discuss photonic neural network approaches and challenges for integrated neuromorphic photonic processors, while providing an in-depth description of photonic neurons and a candidate interconnection architecture. We conclude with a future outlook on neuro-inspired photonic processing.
    Comment: 28 pages, 19 figures.