6 research outputs found

    On Application Design for Manycore Processing Systems in the Domain of Neuroscience

    This doctoral work examines the potential impact of manycore processors, and the evolution of their architecture, on complex, high-precision mathematical modelling of human neuron networks. As its end product, the dissertation offers a simulator that combines a variety of high-performance-computing approaches into a solution for studying demanding neuronal models that is efficient in terms of both performance and energy.

    BrainFrame: A node-level heterogeneous accelerator platform for neuron simulations

    Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit a single, homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies: an Intel Xeon-Phi CPU

    GPU Implementation of Neural-Network Simulations Based on Adaptive-Exponential Models

    Detailed brain modeling presents significant challenges to high-performance computing (HPC), posing computational problems that can benefit from modern hardware-acceleration technologies. We explore the capacity of GPUs to simulate large-scale neuronal networks based on the Adaptive Exponential (AdEx) neuron model, which is widely used in the neuroscience community. Our GPU-powered simulator acts as a benchmark for evaluating the strengths and limitations of modern GPUs and for exploring their scaling properties when simulating large neural networks. This work presents an optimized GPU implementation that outperforms a reference multicore implementation by 50x, while a dual-GPU configuration delivers a speedup of 90x for networks of 20,000 fully interconnected AdEx neurons.
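
    The AdEx model named above couples the membrane potential with a single adaptation variable and adds a discontinuous spike-and-reset rule, which maps naturally onto one GPU thread per neuron. Below is a minimal, illustrative CUDA sketch of one explicit-Euler AdEx state update; the kernel name adex_step, the parameter values, and the integration scheme are assumptions for illustration, not the implementation described in the abstract.

    // Minimal sketch of a per-neuron AdEx state update (explicit Euler).
    // Parameter values follow the standard Brette & Gerstner (2005)
    // formulation and are illustrative assumptions, not taken from the paper.
    #include <cuda_runtime.h>

    __global__ void adex_step(float *V, float *w, const float *I_syn, int n, float dt)
    {
        const float C      = 281.0f;   // membrane capacitance (pF)
        const float g_L    = 30.0f;    // leak conductance (nS)
        const float E_L    = -70.6f;   // leak reversal potential (mV)
        const float V_T    = -50.4f;   // spike threshold (mV)
        const float D_T    = 2.0f;     // slope factor (mV)
        const float tau_w  = 144.0f;   // adaptation time constant (ms)
        const float a      = 4.0f;     // subthreshold adaptation (nS)
        const float b      = 80.5f;    // spike-triggered adaptation (pA)
        const float V_r    = -70.6f;   // reset potential (mV)
        const float V_peak = 20.0f;    // numerical spike cutoff (mV)

        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float v = V[i], u = w[i];

        // dV/dt = (-g_L (V - E_L) + g_L D_T exp((V - V_T)/D_T) - w + I) / C
        float dV = (-g_L * (v - E_L) + g_L * D_T * expf((v - V_T) / D_T) - u + I_syn[i]) / C;
        // tau_w dw/dt = a (V - E_L) - w
        float dw = (a * (v - E_L) - u) / tau_w;

        v += dt * dV;
        u += dt * dw;

        // Spike-and-reset: once V escapes past the numerical cutoff, emit a
        // spike, reset the membrane potential, and increment the adaptation
        // current by b.
        if (v >= V_peak) {
            v = V_r;
            u += b;
        }

        V[i] = v;
        w[i] = u;
    }

    Launching one thread per neuron (e.g. adex_step<<<(n + 255) / 256, 256>>>(V, w, I_syn, n, 0.1f)) illustrates why such networks suit GPUs: the per-neuron state update is embarrassingly parallel, while accumulating the synaptic input I_syn typically dominates the cost for fully interconnected networks.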