    Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    The increasing complexity of neuronal network models has escalated efforts to make the NEURON simulation environment more efficient. Computational neuroscientists divide network models into subnets distributed across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of overall simulation time. NEURON uses the Message Passing Interface (MPI) for communication between processors, exercising the MPI_Allgather collective to exchange spikes after each interval across distributed memory systems. Increasing the number of processors yields more concurrency and better performance, but it also inflates the communication time of MPI_Allgather between processors. This necessitates an improved communication methodology that decreases spike-exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA), moving from two-sided to one-sided communication, and employs a recursive doubling mechanism so that the processors complete the exchange in a precise number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for simulating large neuronal network models.
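    To make the mechanism concrete, the sketch below combines one-sided RMA with recursive doubling to reproduce an allgather-style spike exchange in log2(P) rounds. It is a minimal illustration in Python with mpi4py, not the paper's NEURON/C implementation; it assumes a power-of-two process count, and the buffer sizes and contents are placeholders.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
assert size & (size - 1) == 0, "sketch assumes a power-of-two process count"

BLOCK = 4                                  # doubles contributed per rank (placeholder)
buf = np.zeros(size * BLOCK, dtype='d')
buf[rank*BLOCK:(rank+1)*BLOCK] = rank      # stand-in for this rank's spike data

# Expose the whole gather buffer for one-sided (RMA) access.
win = MPI.Win.Create(buf, disp_unit=buf.itemsize, comm=comm)

win.Fence()                                # open the first exposure epoch
for k in range(size.bit_length() - 1):     # log2(size) rounds
    partner = rank ^ (1 << k)              # exchange partner this round
    base = rank & ~((1 << k) - 1)          # first block this rank already holds
    lo, n = base * BLOCK, (1 << k) * BLOCK
    # One-sided write of our accumulated blocks into the partner's buffer at
    # the same offsets, so both ranks end the round holding a doubled region.
    win.Put(buf[lo:lo+n], partner, target=(lo, n, MPI.DOUBLE))
    win.Fence()                            # complete the round before doubling again
win.Free()
# buf now matches what MPI_Allgather would have produced on every rank.
```

    Run under mpiexec, every rank ends up holding every other rank's block, which is the collective result whose cost the paper sets out to reduce.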

    Parallel computing for brain simulation

    Background: The human brain is the most complex system in the known universe and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet how and why most of these abilities are produced is still not understood. Aims: For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have enabled the first simulation with a number of neurons similar to that of a human brain. Conclusion: This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital, analog, and hybrid models. The review covers the current applications of these works as well as future trends. It focuses both on works that pursue advanced progress in Neuroscience and on others that seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, the review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.

    Comparing Neuromorphic Solutions in Action : Implementing a Bio-Inspired Solution to a Benchmark Classification Task on Three Parallel-Computing Platforms

    Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available, and "neuromorphic algorithms" are being developed. As they mature toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performance remained comparable. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in various combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation-efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication or in non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for scalability, maximum throughput, and minimum latency. Moreover, our results indicate that special attention should be paid to minimizing host-device communication when designing and implementing networks for efficient neuromorphic computing.

    Biologically inspired distributed machine cognition: a new formal approach to hyperparallel computation

    The irresistible march toward multiple-core chip technology presents currently intractable programming challenges. High-level mental processes in many animals, and their analogs for social structures, appear similarly massively parallel, and recent mathematical models addressing them may be adaptable to the multi-core programming problem.

    Simulation of networks of spiking neurons: A review of tools and strategies

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We overview the different simulators and simulation environments presently available (restricted to those that are freely available, open source, and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley-type and integrate-and-fire models, interacting through current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the code is made available. The ultimate goal of this review is to provide a resource that facilitates identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks. (Review article, 49 pages, 24 figures, 1 table; Journal of Computational Neuroscience, in press, 2007.)
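    For a feel of what the simplest of these benchmark classes involves, here is a minimal clock-driven simulation of an integrate-and-fire network with current-based (delta) synapses. Every size and constant is an illustrative placeholder, not a parameter from the review's benchmark suite.

```python
import numpy as np

# Minimal clock-driven integrate-and-fire network, current-based synapses.
rng = np.random.default_rng(0)
N, T, dt = 100, 0.5, 1e-4                  # neurons, duration (s), time step (s)
tau_m, v_th, v_reset = 20e-3, 1.0, 0.0     # membrane time constant, threshold, reset
w = rng.normal(0.0, 0.05, (N, N))          # random synaptic weight matrix
v = rng.uniform(0.0, 1.0, N)               # random initial membrane potentials
i_ext = 1.2                                # constant suprathreshold drive

spike_count = 0
for _ in range(int(T / dt)):
    spiked = v >= v_th                     # clock-driven threshold check
    spike_count += spiked.sum()
    v[spiked] = v_reset                    # reset neurons that fired
    # Delta synapses: each presynaptic spike kicks the membrane instantly;
    # the rest is a forward-Euler step of the leaky membrane equation.
    v += w @ spiked + dt / tau_m * (i_ext - v)

print(f"mean firing rate: {spike_count / (N * T):.1f} Hz")
```

    An event-driven strategy would instead advance each neuron only at spike times, which is exactly the precision/performance trade-off the review examines.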

    A distributed framework for semi-automatically developing architectures of brain and mind

    Developing comprehensive theories of low-level neuronal brain processes and high-level cognitive behaviours, as well as integrating them, is an ambitious challenge that requires new conceptual, computational, and empirical tools. Given the complexities of these theories, they will almost certainly be expressed as computational systems. Here, we propose to use recent developments in grid technology to develop a system of evolutionary scientific discovery, which will (a) enable empirical researchers to make their data widely available for use in developing and testing theories, and (b) enable theorists to semi-automatically develop computational theories. We illustrate these ideas with a case study taken from the domain of categorisation.

    Usage and Scaling of an Open-Source Spiking Multi-Area Model of Monkey Cortex

    We are entering an age of "big" computational neuroscience, in which neural network models are increasing in size and in the number of underlying data sets. Consolidating the zoo of models into large-scale models simultaneously consistent with a wide range of data is only possible through the effort of large teams, which can be spread across multiple research institutions. To ensure that computational neuroscientists can build on each other's work, it is important to make models publicly available as well-documented code. This chapter describes such an open-source model, which relates the connectivity structure of all vision-related cortical areas of the macaque monkey to their resting-state dynamics. We give a brief overview of how to use the executable model specification, which employs NEST as its simulation engine, and show its runtime scaling. The solutions found serve as an example for organizing the workflow of future models from the raw experimental data to the visualization of the results, expose the challenges, and give guidance for the construction of ICT infrastructure for neuroscience.
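    Since the executable specification drives NEST through its Python interface, a generic PyNEST session gives a rough sense of the workflow. The sketch below is not the multi-area model's actual interface: the neuron model, population size, rates, weights, and delays are all placeholders, and it assumes a recent NEST 3 release.

```python
import nest

nest.ResetKernel()
nest.SetKernelStatus({"resolution": 0.1})  # simulation step in ms

# Placeholder population; the real model instantiates one per area and layer.
pop = nest.Create("iaf_psc_exp", 1000)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})
rec = nest.Create("spike_recorder")

nest.Connect(noise, pop, syn_spec={"weight": 10.0})
nest.Connect(pop, pop,
             conn_spec={"rule": "fixed_indegree", "indegree": 100},
             syn_spec={"weight": 0.5, "delay": 1.5})
nest.Connect(pop, rec)

nest.Simulate(1000.0)                      # biological time in ms
print(nest.GetStatus(rec, "n_events"))     # total spikes recorded
```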

    Synaptic Plasticity in Memristive Artificial Synapses and Their Robustness Against Noisy Inputs

    Emerging brain-inspired neuromorphic computing paradigms require devices that can emulate the complete functionality of biological synapses under different neuronal activities, in order to process big data flows in an efficient and cognitive manner while being robust against noisy input. The memristive device has been proposed as a promising candidate for emulating artificial synapses due to its complex multilevel and dynamical plastic behaviors. In this work, we exploit ultrastable analog BiFeO3 (BFO)-based memristive devices to experimentally demonstrate that BFO artificial synapses support various long-term plastic functions, i.e., spike timing-dependent plasticity (STDP), cycle number-dependent plasticity (CNDP), and spiking rate-dependent plasticity (SRDP). The study of the impact of electrical stimuli, in terms of pulse width and amplitude, on STDP behaviors shows that the learning windows possess a wide range of timescale configurability as a function of the applied waveform. Moreover, beyond SRDP, a systematic and comparative study of generalized frequency-dependent plasticity (FDP) is carried out, which reveals for the first time that modulating the ratio between pulse width and pulse interval time within one spike cycle can produce both synaptic potentiation and depression at the same firing frequency. The impact of intrinsic neuronal noise on the STDP function of a single BFO artificial synapse can be neglected, because thermal noise is two orders of magnitude smaller than the writing voltage and because the cycle-to-cycle variation of the current–voltage characteristics of a single BFO artificial synapse is small. However, extrinsic voltage fluctuations, e.g., in neural networks, cause a noisy input into the artificial synapses of the neural network. Here, the impact of extrinsic neuronal noise on the STDP function of a single BFO artificial synapse is analyzed in order to understand the robustness of plastic behavior in memristive artificial synapses against extrinsic noisy input.
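    For readers unfamiliar with the STDP learning window being configured here, a common phenomenological form maps the pre/post spike-time difference to a weight change through two exponentials. The sketch below uses that canonical form; the amplitudes and time constants are placeholders, not values fitted to the BFO devices.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20e-3, tau_minus=20e-3):
    """Weight change for a spike-time difference dt = t_post - t_pre (seconds).
    Amplitudes and time constants are illustrative placeholders."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),    # pre before post: potentiation
                    -a_minus * np.exp(dt / tau_minus))  # post before pre: depression

# Sweep the window: depression for negative dt, potentiation for positive dt.
print(stdp_dw([-40e-3, -5e-3, 5e-3, 40e-3]))
```

    In this picture, the pulse width and amplitude studied in the paper effectively reshape a_plus, a_minus, and the two time constants of the window.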

    Synaptic motor adaptation: A three-factor learning rule for adaptive robotic control in spiking neural networks

    Legged robots operating in real-world environments must be able to rapidly adapt to unexpected conditions, such as changing terrains and varying payloads. This paper introduces the Synaptic Motor Adaptation (SMA) algorithm, a novel approach to real-time online adaptation in quadruped robots based on neuroscience-derived rules of synaptic plasticity with three-factor learning. To facilitate rapid adaptation, we meta-optimize a three-factor learning rule via gradient descent so that it adapts to uncertainty by approximating an embedding produced from privileged information using only locally accessible onboard sensing data. Our algorithm performs comparably to state-of-the-art motor adaptation algorithms and presents a clear path toward adaptive robotics with neuromorphic hardware.
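    The general shape of a three-factor rule is a Hebbian eligibility trace (factors 1 and 2: pre- and postsynaptic activity) gated by a modulatory third factor. The sketch below shows that generic structure only; the function name, constants, and random modulatory signal are illustrative assumptions, and the paper's meta-optimized SMA rule is more involved.

```python
import numpy as np

def three_factor_step(w, pre, post, elig, m, lr=1e-3, tau_e=0.2, dt=1e-3):
    hebb = np.outer(post, pre)            # factors 1 and 2: pre/post coactivity
    elig += dt / tau_e * (hebb - elig)    # low-pass filter into an eligibility trace
    w += lr * m * elig                    # factor 3: modulatory signal gates the update
    return w, elig

# Toy usage: 8 presynaptic and 4 postsynaptic units with random activity.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, (4, 8))
elig = np.zeros_like(w)
for _ in range(100):
    pre, post = rng.random(8), rng.random(4)
    m = rng.normal()                      # stand-in for an error/neuromodulator signal
    w, elig = three_factor_step(w, pre, post, elig, m)
```

    The eligibility trace is what lets a delayed third factor credit synapses for coactivity that happened earlier, which is the property these adaptive-control schemes exploit.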