
    PCSIM: A Parallel Simulation Environment for Neural Circuits Fully Integrated with Python

    The Parallel Circuit SIMulator (PCSIM) is a software package for simulation of neural circuits. It is primarily designed for distributed simulation of large-scale networks of spiking point neurons. Although its computational core is written in C++, PCSIM's primary interface is implemented in the Python programming language, a powerful programming environment that lets the user easily integrate the neural circuit simulator with data analysis and visualization tools and thus manage the full neural modeling life cycle. The main focus of this paper is to describe PCSIM's full integration into Python and the benefits thereof. In particular, we investigate how the automatically generated bidirectional interface and PCSIM's object-oriented modular framework enable the user to adopt a hybrid modeling approach: using and extending PCSIM's functionality in either pure Python or C++, thus combining the advantages of both worlds. Furthermore, we describe several supplementary PCSIM packages written in pure Python and tailored towards setting up and analyzing neural simulations.
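
    To make the hybrid approach concrete, here is a minimal, purely illustrative Python sketch: a custom neuron model defined in pure Python plugs into an interface that a C++-backed core might expose. The class and method names (PointNeuron, AdaptiveNeuron, advance) are hypothetical placeholders, not PCSIM's actual API.

        # Purely illustrative sketch of the hybrid approach; class and method
        # names are hypothetical placeholders, not PCSIM's real API.

        class PointNeuron:
            """Interface a C++-backed simulator core might expose to Python."""
            def advance(self, t, dt):
                raise NotImplementedError

        class AdaptiveNeuron(PointNeuron):
            """A custom model added in pure Python, without touching the C++ core."""
            def __init__(self, threshold=1.0, adaptation=0.1):
                self.v, self.threshold, self.adaptation = 0.0, threshold, adaptation

            def advance(self, t, dt):
                self.v += dt * (1.0 - self.v)          # toy membrane dynamics
                if self.v >= self.threshold:
                    self.v = 0.0
                    self.threshold += self.adaptation  # spike-triggered adaptation
                    return True                        # report a spike to the core
                return False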

    A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback

    Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They can also learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment, monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit-assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence it also provides a possible functional explanation for trial-to-trial variability, which is characteristic of cortical networks of neurons but has no analogue in currently existing artificial computing systems. In addition, our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics.
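
    As a concrete illustration of the rule being analyzed, the following toy Python simulation combines a pair-based STDP trace with an eligibility trace that is gated by a global reward signal. All parameter values, the placeholder postsynaptic dynamics, and the reward criterion are assumptions for illustration, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        T = 1000                          # simulation steps (1 ms each)
        tau_e = 500.0                     # eligibility trace time constant (ms)
        tau_stdp = 20.0                   # STDP window time constant (ms)
        a_plus, a_minus = 0.01, 0.0105    # STDP amplitudes (assumed values)

        w = 0.5                           # synaptic weight
        e = 0.0                           # eligibility trace
        x_pre = x_post = 0.0              # exponentially filtered spike traces

        for t in range(T):
            pre = rng.random() < 0.02     # presynaptic Poisson spike, ~20 Hz
            post = rng.random() < 0.02    # postsynaptic spike (placeholder dynamics)

            # decay the pairing traces
            x_pre *= np.exp(-1.0 / tau_stdp)
            x_post *= np.exp(-1.0 / tau_stdp)

            # raw STDP: potentiate on post-after-pre, depress on pre-after-post
            stdp = 0.0
            if pre:
                x_pre += 1.0
                stdp -= a_minus * x_post
            if post:
                x_post += 1.0
                stdp += a_plus * x_pre

            # STDP feeds an eligibility trace instead of changing w directly
            e += -e / tau_e + stdp

            # a global reward signal gates the actual weight update
            reward = 1.0 if (pre and post) else 0.0   # toy reward criterion
            w = min(max(w + 0.1 * reward * e, 0.0), 1.0)

        print(f"final weight: {w:.3f}")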

    PyNN: A Common Interface for Neuronal Network Simulators

    Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN
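
    A minimal script in the style of the PyNN API (0.8-era signatures; details vary between releases) shows the key point: switching simulators only means changing the import line, e.g. pyNN.nest to pyNN.neuron.

        # Minimal PyNN sketch (assumes PyNN >= 0.8 with the NEST backend installed).
        import pyNN.nest as sim

        sim.setup(timestep=0.1)  # ms

        # 100 integrate-and-fire neurons driven one-to-one by Poisson sources
        source = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
        cells = sim.Population(100, sim.IF_cond_exp())

        sim.Projection(source, cells, sim.OneToOneConnector(),
                       synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0))

        cells.record('spikes')
        sim.run(1000.0)  # ms

        data = cells.get_data()  # Neo Block holding the recorded spike trains
        sim.end()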

    Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, which occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
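
    Stripped of its neural implementation, the core mechanism is Gibbs sampling: each binary variable is resampled from its conditional distribution given the others, which for pairwise models is a sigmoid of the local log-odds. A self-contained Python sketch with assumed toy parameters:

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy distribution p(z) proportional to exp(b·z + z^T W z / 2), z binary
        W = np.array([[ 0.0, 1.5, -1.0],
                      [ 1.5, 0.0,  0.5],
                      [-1.0, 0.5,  0.0]])
        b = np.array([-0.5, 0.0, 0.5])

        def sigmoid(u):
            return 1.0 / (1.0 + np.exp(-u))

        z = rng.integers(0, 2, size=3).astype(float)
        samples = []
        for step in range(20000):
            k = step % 3                   # update one unit per step
            u = b[k] + W[k] @ z            # log-odds of z_k = 1 given the rest
            z[k] = float(rng.random() < sigmoid(u))
            if step > 2000:                # discard burn-in
                samples.append(z.copy())

        print("marginals:", np.mean(samples, axis=0))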

    Theoretical analysis of learning with reward-modulated Spike-Timing-Dependent Plasticity

    Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how local learning rules at single synapses support behaviorally relevant adaptive changes in complex networks of spiking neurons. However, the potential and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allow us to predict under which conditions reward-modulated STDP will be able to achieve a desired learning effect. In particular, we can produce in this way a theoretical explanation and a computer model for a fundamental experimental finding on biofeedback in monkeys (reported in [1]).
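
    The rule under analysis is commonly formalized as a reward-gated eligibility trace; a hedged sketch of that standard form (the article's exact notation may differ):

        \frac{d}{dt}\, w_{ji}(t) \;=\; d(t)\, e_{ji}(t),
        \qquad
        \frac{d}{dt}\, e_{ji}(t) \;=\; -\frac{e_{ji}(t)}{\tau_e} \;+\; \mathrm{STDP}_{ji}(t)

    Here w_{ji} is the weight of the synapse from neuron i to neuron j, STDP_{ji}(t) is the raw pair-based STDP update, e_{ji} is an eligibility trace with time constant \tau_e, and d(t) is a global, typically mean-corrected, reward signal. An analysis of this form predicts the expected weight drift from the correlation between d(t) and e_{ji}(t).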

    NEVESIM: event-driven neural simulation framework with a Python interface

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++ and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We also give an overview of the Python user interface, its basic commands and constructs, and discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies.
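
    The decoupling described above can be pictured as a priority queue of spike events that the network layer drains while each neuron object handles its own dynamics. The sketch below is a generic illustration of this event-driven design, not NEVESIM's actual classes or API:

        # Generic event-driven simulation core: the network layer only moves
        # spike events through a priority queue; neuron internals stay private.
        import heapq
        import math

        class LifNeuron:
            """Toy leaky integrate-and-fire neuron updated only at event times."""
            def __init__(self, threshold=1.0, tau=20.0):
                self.threshold, self.tau = threshold, tau
                self.v, self.last_t = 0.0, 0.0

            def receive(self, t, weight):
                self.v *= math.exp(-(t - self.last_t) / self.tau)  # decay since last event
                self.last_t = t
                self.v += weight
                if self.v >= self.threshold:
                    self.v = 0.0          # reset after the spike
                    return True
                return False

        def simulate(neurons, synapses, initial_events, t_end):
            """synapses maps source id -> list of (target id, weight, delay)."""
            queue = list(initial_events)  # entries are (time, target id, weight)
            heapq.heapify(queue)
            spikes = []
            while queue:
                t, target, w = heapq.heappop(queue)
                if t > t_end:
                    break
                if neurons[target].receive(t, w):
                    spikes.append((t, target))
                    for post, weight, delay in synapses.get(target, []):
                        heapq.heappush(queue, (t + delay, post, weight))
            return spikes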

    The visual perception experiment of [21] that demonstrates “explaining away” and its corresponding Bayesian network model.

    A) Two visual stimuli, each exhibiting the same luminance profile in the horizontal direction, differ only with regard to their contours, which suggest different 3D shapes (flat versus cylindrical). This in turn influences our perception of the reflectance of the two halves of each stimulus (a step in the reflectance at the middle line, versus uniform reflectance): the cylindrical 3D shape “explains away” the reflectance step. B) The Bayesian network that models this effect represents the joint probability distribution over four binary RVs: relative reflectance, 3D shape, shading, and contour. The relative reflectance of the two halves is either different (value 1) or the same (value 0). The perceived 3D shape can be cylindrical (value 1) or flat (value 0). The relative reflectance and the 3D shape are direct causes of the shading (luminance change) of the surfaces, which can have the profile shown in panel A (value 1) or a different one (value 0). The 3D shape of the surfaces causes different perceived contours, flat (value 0) or cylindrical (value 1). The observed variables (evidence) are the contour and the shading. Subjects infer the marginal posterior probability distributions of the relative reflectance and the 3D shape based on the evidence. C) The RVs are represented in our neural implementations by principal neurons; each spike of a principal neuron sets its RV to 1 for a time period of fixed length. D) The structure of a network of spiking neurons that performs probabilistic inference for the Bayesian network of panel B through sampling from conditionals of the underlying distribution. Each principal neuron employs preprocessing to satisfy the NCC, either by dendritic processing or by a preprocessing circuit.
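
    The explaining-away effect in panel B can be reproduced numerically by brute-force enumeration over the four binary RVs (here called r for the reflectance step, s for the cylindrical shape, i for the shading, c for the contour). The CPT values below are assumptions chosen only to make the effect visible; the paper's values differ:

        from itertools import product

        # Assumed CPT values for illustration only.
        p_r1, p_s1 = 0.5, 0.5                   # priors: reflectance step, cylindrical shape
        p_i1 = {(0, 0): 0.05, (1, 0): 0.90,     # P(i=1 | r, s): shading profile
                (0, 1): 0.90, (1, 1): 0.95}
        p_c1 = {0: 0.05, 1: 0.95}               # P(c=1 | s): cylindrical contour

        def joint(r, s, i, c):
            pr = p_r1 if r else 1 - p_r1
            ps = p_s1 if s else 1 - p_s1
            pi = p_i1[r, s] if i else 1 - p_i1[r, s]
            pc = p_c1[s] if c else 1 - p_c1[s]
            return pr * ps * pi * pc

        def posterior_r1(**evidence):
            """P(r=1 | evidence) by enumerating all assignments."""
            num = den = 0.0
            for r, s, i, c in product((0, 1), repeat=4):
                assign = dict(r=r, s=s, i=i, c=c)
                if any(assign[k] != v for k, v in evidence.items()):
                    continue
                p = joint(r, s, i, c)
                den += p
                num += p * r
            return num / den

        print(posterior_r1(i=1))        # ~0.66: shading suggests a reflectance step
        print(posterior_r1(i=1, c=1))   # ~0.52: the cylindrical contour explains it away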

    Implementation 2 for the explaining away motif of the Bayesian network from Fig. 1B.

    Implementation 2 is the neural implementation with auxiliary neurons, which uses the Markov blanket expansion of the log-odds ratio. There are 4 auxiliary neurons, one for each possible value assignment to the two RVs in the Markov blanket of the sampled RV. The principal neuron of each Markov-blanket RV connects to an auxiliary neuron directly if that RV has value 1 in the corresponding assignment, or via an inhibitory interneuron if it has value 0. The auxiliary neurons connect with strong excitatory connections to the principal neuron of the sampled RV and drive it to fire whenever any one of them fires. The larger gray circle represents the lateral inhibition between the auxiliary neurons.
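
    The construction rests on the neural computability condition (NCC): the membrane potential u_k of the principal neuron for RV z_k should track the log-odds of z_k given the other variables, which depends only on the Markov blanket B_k. A hedged sketch of the expansion the auxiliary neurons implement (notation assumed, not copied from the paper):

        u_k(t) \;=\; \log \frac{p(z_k = 1 \mid z_{\setminus k})}{p(z_k = 0 \mid z_{\setminus k})}
              \;=\; \sum_{b} \mathbb{1}\!\left[ z_{\mathcal{B}_k} = b \right]\,
                    \log \frac{p(z_k = 1 \mid z_{\mathcal{B}_k} = b)}{p(z_k = 0 \mid z_{\mathcal{B}_k} = b)}

    Each auxiliary neuron represents one assignment b; exactly one indicator term is active at a time, which is what the lateral inhibition between the auxiliary neurons enforces.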

    Values for the conditional probabilities in the ASIA Bayesian network used in Computer Simulation II.


    The randomly generated Bayesian network used in Computer Simulation III.

    It contains 20 nodes. Each node has up to 8 parents. We consider the generic but more difficult instance for probabilistic inference where evidence is entered for nodes in the lower part of the directed graph. The conditional probability tables were also randomly generated for all RVs.
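
    A network of this kind is straightforward to generate programmatically. The sketch below (an assumed construction, not the paper's actual generator) draws parents only from earlier nodes so the graph stays acyclic, and fills each conditional probability table with uniform random values:

        import numpy as np

        rng = np.random.default_rng(42)
        n_nodes, max_parents = 20, 8

        # Draw parents only from earlier nodes so the directed graph is acyclic.
        parents = {}
        for k in range(n_nodes):
            n_par = int(rng.integers(0, min(k, max_parents) + 1))
            parents[k] = sorted(rng.choice(k, size=n_par, replace=False)) if n_par else []

        # One random CPT row P(X_k = 1 | parent assignment) per assignment.
        cpts = {k: rng.random(2 ** len(parents[k])) for k in range(n_nodes)}

        def ancestral_sample():
            """Draw one joint sample in topological order."""
            z = np.zeros(n_nodes, dtype=int)
            for k in range(n_nodes):
                idx = sum(int(z[p]) << j for j, p in enumerate(parents[k]))
                z[k] = int(rng.random() < cpts[k][idx])
            return z

        print(ancestral_sample())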