
    Scalable event-driven modelling architectures for neuromimetic hardware

    Neural networks present a fundamentally different model of computation from the conventional sequential digital model, so dedicated hardware may be more suitable for executing them. Given that there is no clear consensus on the model of computation in the brain, model flexibility is at least as important a characteristic of neural hardware as performance acceleration. The SpiNNaker chip is an example of the emerging 'neuromimetic' architecture: a universal platform that specialises the hardware for neural networks but allows flexibility in model choice. It integrates four key attributes: native parallelism, event-driven processing, incoherent memory and incremental reconfiguration, in a system combining an array of general-purpose processors with a configurable asynchronous interconnect. Making such a device usable in practice requires an environment for instantiating neural models on the chip that allows the user to focus on model characteristics rather than on hardware details. The central part of this system is a library of predesigned, 'drop-in' event-driven neural components that specify their implementation on SpiNNaker. Three exemplar models (two spiking networks and a multilayer perceptron network) illustrate techniques that provide a basis for the library and demonstrate a reference methodology that can be extended to support third-party library components, not only on SpiNNaker but on any configurable neuromimetic platform. Experiments demonstrate the capability of the library model to implement efficient on-chip neural networks, but also reveal important hardware limitations, particularly with respect to communications, that require careful design. The ultimate goal is the creation of a library-based development system that allows neural modellers to work in the high-level environment of their choice, using an automated tool chain to create the appropriate SpiNNaker instantiation. Such a system would enable the use of the hardware to explore abstractions of biological neurodynamics that underpin a functional model of neural computation.
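    The library components described above are event-driven: neuron state is updated only when a spike event arrives, not on every clock tick. The sketch below is a minimal illustration of that style of processing, assuming a simple leaky integrate-and-fire model; the class, parameters and event queue are hypothetical stand-ins and do not represent the SpiNNaker library interface or its hardware-routed spike packets.

```python
# Illustrative sketch of event-driven processing (not the actual SpiNNaker
# library API): neuron state is touched only when a spike event arrives,
# so work scales with spike traffic rather than with network size.
import heapq
import math
from collections import defaultdict

class LIFNeuron:
    """Leaky integrate-and-fire neuron updated lazily on incoming spikes."""
    def __init__(self, tau=20.0, v_thresh=1.0):
        self.tau, self.v_thresh = tau, v_thresh
        self.v, self.last_update = 0.0, 0.0

    def receive(self, t, weight):
        # Decay the membrane potential over the elapsed interval, add the
        # synaptic input, and report whether the neuron crossed threshold.
        self.v *= math.exp(-(t - self.last_update) / self.tau)
        self.last_update = t
        self.v += weight
        if self.v >= self.v_thresh:
            self.v = 0.0                      # reset after firing
            return True
        return False

def run(initial_spikes, synapses, neurons, delay=1.0, weight=0.6):
    """initial_spikes: (time, neuron_id) pairs; synapses: id -> target ids."""
    queue = list(initial_spikes)
    heapq.heapify(queue)
    emitted = []
    while queue:
        t, nid = heapq.heappop(queue)         # next spike event in time order
        emitted.append((t, nid))
        for target in synapses[nid]:          # deliver only where connected
            if neurons[target].receive(t + delay, weight):
                heapq.heappush(queue, (t + delay, target))
    return emitted

neurons = {i: LIFNeuron() for i in range(3)}
synapses = defaultdict(list, {0: [1], 1: [2]})
print(run([(0.0, 0), (0.5, 0)], synapses, neurons))
```

    The point of the event-driven style is that computation and traffic grow with spike activity rather than with network size, which is one reason the abstract singles out communications as the hardware limitation requiring the most careful design.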

    SpiNNaker: Fault tolerance in a power- and area- constrained large-scale neuromimetic architecture

    SpiNNaker is a biologically inspired, massively parallel computer designed to model up to a billion spiking neurons in real time. A full-scale SpiNNaker system will comprise more than 10^5 integrated circuits (half of which are SDRAMs and half multi-core systems-on-chip). Given this scale, it is unavoidable that some components will fail, and fault tolerance is therefore a foundation of the system design. Although the target application can tolerate a certain, low level of failure, considerable effort has been devoted to incorporating a range of fault-tolerance techniques. This paper discusses how hardware and software mechanisms collaborate to keep SpiNNaker operating properly even in the very likely event of component failures, and how the system can tolerate levels of degradation well above those expected.
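    As a concrete illustration of the kind of software-level recovery the abstract alludes to, the sketch below remaps the neurons of a failed core onto spare cores. It is a simplified, hypothetical example (the function, data layout and core names are invented for illustration), not SpiNNaker's actual fault-handling code.

```python
# Hedged illustration of one software-level recovery idea: when a processing
# core fails, redistribute its neurons over spare cores so that simulation
# can continue with graceful degradation. Not SpiNNaker's real mechanism.

def remap_failed_core(placement, failed_core, spare_cores):
    """placement: dict neuron_id -> core_id. Returns an updated placement
    with the failed core's neurons redistributed over the spare cores."""
    if not spare_cores:
        raise RuntimeError("no spare capacity to absorb the failed core")
    orphans = [n for n, core in placement.items() if core == failed_core]
    new_placement = dict(placement)
    for i, neuron in enumerate(orphans):
        # Round-robin over the spares keeps the extra load roughly balanced.
        new_placement[neuron] = spare_cores[i % len(spare_cores)]
    return new_placement

placement = {0: "core_A", 1: "core_A", 2: "core_B", 3: "core_B"}
print(remap_failed_core(placement, "core_B", ["core_C", "core_D"]))
# -> neurons 2 and 3 now live on core_C and core_D respectively
```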

    GeNN: a code generation framework for accelerated brain simulations

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses about brain function and testing their consistency and plausibility. An ongoing challenge for simulating realistic models, however, is computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open-source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface that does not require in-depth technical knowledge from the user. We present performance benchmarks showing that a 200-fold speedup over a single CPU core can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, although the speedup differs for other models. GeNN is available for Linux, Mac OS X and Windows. The source code, user manual, tutorials, wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/
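    To make the code-generation idea concrete, the toy sketch below emits a CUDA-style update kernel from a neuron-model description. It only illustrates the principle of specialising device code per model; the template, function names and parameters are invented here and are not GeNN's actual interface.

```python
# Toy sketch of the code-generation idea behind GeNN (not GeNN's real API):
# the neuron model is described as data, and device source code specialised
# for that model is emitted as text, ready to be compiled for the GPU.
UPDATE_TEMPLATE = """
__global__ void update_{name}(float *V, float *spikes, int n) {{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {{
        {update_code}
        spikes[i] = (V[i] > {threshold}f) ? 1.0f : 0.0f;
        if (spikes[i] > 0.0f) V[i] = {reset}f;
    }}
}}
"""

def generate_kernel(name, update_code, threshold, reset):
    """Return CUDA-like source text specialised for one neuron model."""
    return UPDATE_TEMPLATE.format(name=name, update_code=update_code,
                                  threshold=threshold, reset=reset)

print(generate_kernel("lif",
                      "V[i] += 0.1f * (-V[i] + 2.0f);  // Euler step of dV/dt",
                      threshold=1.0, reset=0.0))
```

    Generating the kernel per model, rather than interpreting a generic model description at run time, is what lets a framework of this kind keep the inner simulation loop free of branching and indirection on the GPU.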

    Behavioral Learning in a Cognitive Neuromorphic Robot: An Integrative Approach

    We present a learning system that uses the iCub humanoid robot and the SpiNNaker neuromorphic chip to solve the real-world task of object-specific attention. Integrating spiking neural networks with robots introduces considerable complexity for questionable benefit if the objective is simply task performance. In a cognitive robotics context, however, where the goal is understanding how to compute, such an approach may yield useful insights into neural architecture as well as learned behavior, especially when dedicated neural hardware is available. Recent advances in cognitive robotics and neuromorphic processing now make such systems possible. Using a scalable, structured, modular approach, we build a spiking neural network in which the effects and impact of learning can be predicted and tested, and the network can be scaled or extended to new tasks automatically. We introduce several enhancements to a basic network and show how they can be used to direct performance toward behaviorally relevant goals. Results show that, using a simple classical spike-timing-dependent plasticity (STDP) rule on selected connections, we can get the robot (and the network) to progress from poor task-specific performance to good performance. Behaviorally relevant STDP appears to contribute strongly to positive learning (“do this”) but less to negative learning (“don't do that”). In addition, we observe that the effect of structural enhancements tends to be cumulative. The overall system suggests that it is by exploiting combinations of effects, rather than any one effect or property in isolation, that spiking networks can achieve compelling, task-relevant behavior.
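    For reference, a generic pair-based STDP rule of the kind mentioned above can be sketched as follows; the parameter values and helper functions are textbook placeholders, not the ones used in the robot experiments.

```python
# Minimal pair-based STDP: a pre-before-post spike pair potentiates the
# synapse, post-before-pre depresses it. Generic textbook form, assumed
# parameters; not the exact rule or constants from the paper.
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.
    dt = t_post - t_pre in ms; positive dt means the pre spike came first."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)      # potentiation
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)    # depression
    return 0.0

def apply_stdp(weight, pre_times, post_times, w_min=0.0, w_max=1.0):
    """Accumulate all pairwise contributions and clip to the allowed range."""
    dw = sum(stdp_dw(tp - tq) for tq in pre_times for tp in post_times)
    return max(w_min, min(w_max, weight + dw))

print(apply_stdp(0.5, pre_times=[10.0, 30.0], post_times=[12.0, 28.0]))
```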