
    Python Scripting in the Nengo Simulator

    Nengo (http://nengo.ca) is an open-source neural simulator that has been greatly enhanced by the recent addition of a Python script interface. Nengo provides a wide range of features that are useful for physiological simulations, including unique features that facilitate development of population-coding models using the neural engineering framework (NEF). This framework uses information theory, signal processing, and control theory to formalize the development of large-scale neural circuit models. Notably, it can also be used to determine the synaptic weights that underlie observed network dynamics and transformations of represented variables. Nengo provides rich NEF support, and includes customizable models of spike generation, muscle dynamics, synaptic plasticity, and synaptic integration, as well as an intuitive graphical user interface. All aspects of Nengo models are accessible via the Python interface, allowing for programmatic creation of models, inspection and modification of neural parameters, and automation of model evaluation. Since Nengo combines Python and Java, it can also be integrated with any existing Java or 100% Python code libraries. Current work includes connecting neural models in Nengo with existing symbolic cognitive models, creating hybrid systems that combine detailed neural models of specific brain regions with higher-level models of remaining brain areas. Such hybrid models can provide (1) more realistic boundary conditions for the neural components, and (2) more realistic sub-components for the larger cognitive models.
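    The NEF step the abstract mentions, deriving connection weights from a desired transformation of a represented variable, can be illustrated with a minimal NumPy sketch. This is not Nengo's API; the rectified-linear tuning curves, the gain/bias ranges, and the regularization constant are all illustrative assumptions, and a real NEF model would use richer neuron models such as LIF.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_eval = 50, 200

# Hypothetical rectified-linear tuning curves with random gains, biases,
# and preferred directions (encoders) -- illustrative parameters only.
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)

x = np.linspace(-1.0, 1.0, n_eval)  # evaluation points over the represented range
# Activity matrix A: one row per evaluation point, one column per neuron
A = np.maximum(0.0, gains * (x[:, None] * encoders) + biases)

# Solve for linear decoders of f(x) = x**2 by regularized least squares --
# the core NEF computation from which full synaptic weights are derived
target = x ** 2
sigma = 0.1 * A.max()  # regularization scaled to peak activity (assumption)
G = A.T @ A + n_eval * sigma**2 * np.eye(n_neurons)
decoders = np.linalg.solve(G, A.T @ target)

x_hat = A @ decoders  # decoded estimate of x**2 from neural activities
rmse = np.sqrt(np.mean((x_hat - target) ** 2))
```

    The resulting `decoders`, combined with the downstream population's encoders, yield the connection weight matrix between two populations computing `f(x)`.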

    Trends in Programming Languages for Neuroscience Simulations

    Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power, and ease of use of the simulator interface are critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing.

    Runtime Construction of Large-Scale Spiking Neuronal Network Models on GPU Devices

    Simulation speed matters for neuroscientific research: this includes not only how quickly the simulated model time of a large-scale spiking neuronal network progresses but also how long it takes to instantiate the network model in computer memory. On the hardware side, acceleration via highly parallel GPUs is being increasingly utilized. On the software side, code generation approaches ensure highly optimized code at the expense of repeated code regeneration and recompilation after modifications to the network model. Aiming for greater flexibility with respect to iterative model changes, here we propose a new method for creating network connections interactively, dynamically, and directly in GPU memory through a set of commonly used high-level connection rules. We validate the simulation performance with both consumer and data center GPUs on two neuroscientifically relevant models: a cortical microcircuit of about 77,000 leaky integrate-and-fire neuron models and 300 million static synapses, and a two-population network recurrently connected using a variety of connection rules. With our proposed ad hoc network instantiation, both network construction and simulation times are comparable to or shorter than those obtained with other state-of-the-art simulation technologies, while still meeting the flexibility demands of explorative network modeling.
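    The "high-level connection rules" mentioned above can be sketched on the host in NumPy. The two rules below (pairwise Bernoulli and fixed in-degree) are common in spiking-network simulators; the function names and signatures here are illustrative, not the paper's API, and the paper's contribution is generating such connection index lists directly in GPU memory rather than on the host as done here.

```python
import numpy as np

def fixed_probability(n_pre, n_post, p, rng):
    """Pairwise-Bernoulli rule: each (pre, post) pair connects with probability p.

    Dense host-side sketch; a GPU implementation would draw these
    connections directly in device memory.
    """
    mask = rng.random((n_pre, n_post)) < p
    pre, post = np.nonzero(mask)
    return pre, post

def fixed_indegree(n_pre, n_post, k, rng):
    """Fixed in-degree rule: each target neuron draws k random sources
    (with replacement in this simplified sketch)."""
    post = np.repeat(np.arange(n_post), k)
    pre = rng.integers(0, n_pre, size=n_post * k)
    return pre, post

# Example: connect two small populations with 10% pairwise probability
rng = np.random.default_rng(1)
pre, post = fixed_probability(100, 100, 0.1, rng)
```

    Both rules return flat `(pre, post)` index arrays, the coordinate-list form in which static synapses are typically stored and iterated during simulation.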

    The art of scaling up: a computational account on action selection in basal ganglia

    What makes a model 'large scale'? Is it the number of neurons modeled? Or the number of structures modeled in a network? Most higher cognitive processes span coordinated activity across different brain areas, yet at the same time the basic information transfer takes place at the level of single neurons acting together with many others. We explore modeling a neural system involving some areas of cortex, the basal ganglia (BG), and thalamus for the process of decision making, using a large-scale neural engineering framework, Nengo. Early results tend to replicate the known neural activity patterns found in the previous action selection model [2], while operating with larger neuronal populations. The power of converting algorithms to efficiently weighted neural networks in Nengo [10, 1] is exploited in this work. Crucial aspects of a computational model, such as parameter tuning and detailed neural implementation, are studied while moving from a simplistic to a large-scale model.

    Information Representation on a Universal Neural Chip


    Parallel computing for brain simulation

    Background: The human brain is the most complex system in the known universe and therefore one of its greatest mysteries. It provides human beings with extraordinary abilities, yet it is still not understood how and why most of these abilities are produced. Aims: For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data more efficiently than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have enabled the creation of the first simulation with a number of neurons similar to that of a human brain. Conclusion: This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital models, analog models, and hybrid models. This review includes the current applications of these works as well as future trends. It focuses on works that seek advanced progress in Neuroscience and on others that pursue new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized, and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.

    Design Methods for Large Scale Photonic Spiking Neural Networks

    Silicon photonics is a promising technology for developing neuromorphic hardware accelerators. Most optical neural networks rely on wavelength division multiplexing (WDM), which calls for power-hungry calibration to compensate for the non-uniform fabrication process and thermal variations of microring resonators (MRRs). This imposes practical limitations on neuromorphic photonic hardware, since only a small number of synaptic connections per neuron can be implemented. As a result, mapping neural networks (NNs) onto a hardware platform requires pruning synaptic connections, which drastically affects accuracy. In this work, we address these limitations from two directions. First, we propose a method to map pre-trained NNs onto an all-optical spiking neural network (SNN). The technique relies on weight partitioning and unrolling to reduce synaptic connections, aiming to improve hardware utilization while minimizing accuracy loss. The resulting neural networks are mapped onto an architecture we propose, allowing us to estimate accuracy and energy consumption. Results show the capability of weight partitioning to implement a realistic NN while attaining a 58% reduction in energy consumption compared with unrolling. Second, a synaptic weighting architecture is proposed that implements weighting while halving the number of required MRRs, thus simplifying the calibration requirements. The architecture was simulated to demonstrate its capability of performing synaptic weighting. Together, these methods introduce design directions that can work around the constraints of photonic spiking neural network architectures and help move toward realizing large-scale photonic spiking neural networks.
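    The weight-partitioning idea, splitting each neuron's weighted sum into chunks small enough for the limited per-neuron fan-in of the hardware, can be sketched numerically. This is a minimal host-side illustration under stated assumptions; the function name, the `max_fanin` parameter, and the sequential accumulation are all hypothetical stand-ins for what the photonic architecture does with MRR weight banks in parallel.

```python
import numpy as np

def partitioned_weighting(W, x, max_fanin):
    """Compute y = W @ x with each output accumulated from chunks of at
    most max_fanin inputs, mimicking neurons whose number of synaptic
    connections (MRR weight-bank size) is hardware-limited."""
    n_out, n_in = W.shape
    y = np.zeros(n_out)
    for start in range(0, n_in, max_fanin):
        stop = start + max_fanin
        y += W[:, start:stop] @ x[start:stop]  # partial weighted sum per chunk
    return y
```

    Because the partial sums add up exactly to the full matrix-vector product, partitioning by itself loses no accuracy; the hardware cost lies in routing and accumulating the partial results, which is where the paper's energy comparison against unrolling applies.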