
    Stochastic Resonance in Neuron Models: Endogenous Stimulation Revisited

    The paradigm of stochastic resonance (SR)---the idea that signal detection and transmission may benefit from noise---has met with great interest in both physics and the neurosciences. We investigate here the consequences of reducing the dynamics of a periodically driven neuron to a renewal process (stimulation with reset or endogenous stimulation). This greatly simplifies the mathematical analysis, but we show that stochastic resonance as reported earlier occurs in this model only as a consequence of the reduced dynamics. Comment: Some typos fixed, esp. Eq. 15; results and conclusions are not affected.
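
    Purely as an illustration of the "stimulation with reset" picture (not the model analyzed in the paper; all parameter names and values are assumptions), the Python sketch below simulates a leaky integrate-and-fire neuron with sub-threshold periodic drive and white noise, resetting both the membrane potential and the stimulus phase at every spike so that the interspike intervals form a renewal process.

    import numpy as np

    def lif_renewal(noise_sigma, t_max=2000.0, dt=0.01, tau_m=10.0,
                    v_th=1.0, v_reset=0.0, drive_amp=0.6,
                    drive_period=20.0, seed=0):
        """Leaky integrate-and-fire neuron with sub-threshold periodic drive
        and white noise (Euler-Maruyama).  Both the membrane potential and
        the phase of the drive are reset at every spike ('stimulation with
        reset'), so successive interspike intervals are independent and the
        spike train is a renewal process.  Returns the interspike intervals."""
        rng = np.random.default_rng(seed)
        omega = 2.0 * np.pi / drive_period
        v, t_since_spike = v_reset, 0.0
        intervals = []
        for _ in range(int(t_max / dt)):
            drive = drive_amp * np.sin(omega * t_since_spike)
            v += (-v + drive) / tau_m * dt + noise_sigma * np.sqrt(dt) * rng.standard_normal()
            t_since_spike += dt
            if v >= v_th:                      # spike: reset potential and stimulus phase
                intervals.append(t_since_spike)
                v, t_since_spike = v_reset, 0.0
        return np.array(intervals)

    # Scan the noise level: with a sub-threshold drive, an intermediate noise
    # amplitude yields the strongest clustering of intervals near the drive
    # period, the stochastic-resonance-like behaviour discussed above.
    for sigma in (0.1, 0.3, 0.6, 1.2):
        isi = lif_renewal(sigma)
        if isi.size:
            locked = np.mean(np.abs(isi - 20.0) < 2.0)
            print(f"sigma={sigma:4.1f}  n={isi.size:5d}  fraction near drive period={locked:.2f}")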

    Constructing Neuronal Network Models in Massively Parallel Environments

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.
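
    As a toy sketch of the loop-order point above (the distribution scheme, function names, and parameters are assumptions, not the simulator's actual code), the two routines below build the same local connectivity; the second bakes the locality test into the iteration itself, so the construction loop only steps through neurons owned by the local virtual process.

    import numpy as np

    def build_naive(n_neurons, n_vp, my_vp, indegree=100, seed=42):
        """Naive construction loop: iterate over ALL targets and test
        locality inside the loop body, so most iterations do no useful work."""
        rng = np.random.default_rng(seed + my_vp)
        connections = {}
        for tgt in range(n_neurons):
            if tgt % n_vp != my_vp:            # locality test evaluated for every neuron
                continue
            connections[tgt] = rng.integers(0, n_neurons, size=indegree)
        return connections

    def build_local_only(n_neurons, n_vp, my_vp, indegree=100, seed=42):
        """Locality-aware loop order: step only through the targets owned by
        this virtual process (round-robin ownership baked into the range)."""
        rng = np.random.default_rng(seed + my_vp)
        connections = {}
        for tgt in range(my_vp, n_neurons, n_vp):
            connections[tgt] = rng.integers(0, n_neurons, size=indegree)
        return connections

    # Both routines produce identical local connectivity; only the loop order
    # differs, which is where the construction-time difference comes from.
    a = build_naive(50_000, n_vp=8, my_vp=3)
    b = build_local_only(50_000, n_vp=8, my_vp=3)
    print(len(a) == len(b) and all(np.array_equal(a[k], b[k]) for k in a))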

    Noise in Integrate-and-Fire Neurons: From Stochastic Input to Escape Rates

    We analyze the effect of noise in integrate-and-fire neurons driven by time-dependent input, and compare the diffusion approximation for the membrane potential to escape noise. It is shown that for time-dependent sub-threshold input, diffusive noise can be replaced by escape noise with a hazard function that has a Gaussian dependence upon the distance between the (noise-free) membrane voltage and threshold. The approximation is improved if we add to the hazard function a probability current proportional to the derivative of the voltage. Stochastic resonance in response to periodic input occurs in both noise models and exhibits similar characteristics.
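
    To make the escape-noise idea concrete, here is a hedged Python sketch of one plausible hazard function of the kind described: a Gaussian dependence on the distance between the noise-free membrane potential and the threshold, plus a term proportional to the rectified voltage derivative standing in for the probability current. The constants and the exact functional form are illustrative assumptions, not taken from the paper.

    import numpy as np

    def gaussian_hazard(u, du_dt, theta=1.0, sigma=0.2, c1=1.0, c2=1.0):
        """Escape rate (hazard) as a function of the noise-free membrane
        potential u and its time derivative: a Gaussian term in the distance
        to threshold theta, plus a rectified-derivative term playing the role
        of a probability current.  c1, c2, and sigma are illustrative constants."""
        gauss = np.exp(-(theta - u) ** 2 / (2.0 * sigma ** 2))
        return c1 * gauss + c2 * np.maximum(du_dt, 0.0) * gauss

    def spike_probability(u_trace, dt, **kwargs):
        """Probability of at least one spike in the window covered by the
        noise-free voltage trace u(t): P = 1 - exp(-integral of the hazard)."""
        du_dt = np.gradient(u_trace, dt)
        rho = gaussian_hazard(u_trace, du_dt, **kwargs)
        return 1.0 - np.exp(-np.sum(rho) * dt)

    # Example: a sub-threshold sinusoidal voltage trajectory (periodic input).
    dt = 0.1
    t = np.arange(0.0, 50.0, dt)
    u = 0.7 + 0.2 * np.sin(2.0 * np.pi * t / 20.0)   # stays below theta = 1.0
    print(f"spike probability in this window: {spike_probability(u, dt):.3f}")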