631 research outputs found
Deep Neural Networks - A Brief History
Introduction to deep neural networks and their history. Comment: 14 pages, 14 figures
Enhanced entrainability of genetic oscillators by period mismatch
Biological oscillators coordinate individual cellular components so that they
function coherently and collectively. They are typically composed of multiple
feedback loops, and period mismatch is unavoidable in biological
implementations. We investigated the advantageous effect of this period
mismatch in terms of a synchronization response to external stimuli.
Specifically, we considered two fundamental models of genetic circuits: smooth-
and relaxation oscillators. Using phase reduction and Floquet multipliers, we
numerically analyzed their entrainability under different coupling strengths
and period ratios. We found that a period mismatch induces better entrainment
in both types of oscillator; the enhancement occurs in the vicinity of the
bifurcation on their limit cycles. In the smooth oscillator, the optimal period
ratio for the enhancement coincides with the experimentally observed ratio,
which suggests biological exploitation of the period mismatch. Although the
origin of multiple feedback loops is often explained as a passive mechanism to
ensure robustness against perturbation, we study the active benefits of the
period mismatch, which include increasing the efficiency of the genetic
oscillators. Our findings offer a qualitatively different perspective on both
the inherent advantages of multiple loops and their essentiality. Comment: 28 pages, 13 figures
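The entrainment analysis in this abstract can be illustrated with a minimal phase-reduced sketch. After phase reduction with a sinusoidal phase sensitivity, a periodically forced oscillator obeys an Adler-type equation that locks 1:1 exactly when the detuning is below the coupling strength. The equation, parameters, and the `entrains` helper below are illustrative assumptions, not the paper's genetic-circuit models:

```python
import math

def entrains(omega, Omega, eps, T=200.0, dt=1e-3):
    """1:1 entrainment check for a phase-reduced oscillator.

    dtheta/dt = omega + eps*sin(Omega*t - theta) is an Adler-type
    equation; it phase-locks exactly when |Omega - omega| <= eps.
    (Toy stand-in for a phase-reduced genetic oscillator.)
    """
    theta, t = 0.0, 0.0
    n = int(T / dt)
    tail = []  # phase difference over the last 1000 steps
    for i in range(n):
        theta += dt * (omega + eps * math.sin(Omega * t - theta))
        t += dt
        if i >= n - 1000:
            tail.append(Omega * t - theta)
    # locked <=> the phase difference has stopped drifting
    return abs(tail[-1] - tail[0]) < 1e-2

print(entrains(1.00, 1.05, 0.1))  # detuning 0.05 <= eps -> True
print(entrains(1.00, 1.50, 0.1))  # detuning 0.50 >  eps -> False
```

Sweeping `Omega` and `eps` with this kind of check traces out the Arnold tongue whose width is one measure of entrainability.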
Desynchronizing effect of high-frequency stimulation in a generic cortical network model
Transcranial Electrical Stimulation (TCES) and Deep Brain Stimulation (DBS)
are two different applications of electrical current to the brain used in
different areas of medicine. Both have a similar frequency dependence of their
efficiency, with the most pronounced effects around 100 Hz. We apply
superthreshold electrical stimulation, specifically depolarizing DC current,
interrupted at different frequencies, to a simple model of a population of
cortical neurons which uses phenomenological descriptions of neurons by
Izhikevich and synaptic connections on a similar level of sophistication. With
this model, we are able to reproduce the optimal desynchronization around
100 Hz, as well as to predict the full frequency dependence of the efficiency of
desynchronization, and thereby to give a possible explanation for the action
mechanism of TCES. Comment: 9 pages, figs included. Accepted for publication in Cognitive
Neurodynamics
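The stimulation protocol the abstract describes (depolarizing DC interrupted at a given frequency) can be sketched for a single phenomenological Izhikevich neuron. The 50% duty cycle, current amplitude, and spike-count readout below are illustrative assumptions, not the paper's exact protocol or network model:

```python
def izhikevich_spikes(i_dc, interrupt_hz, t_max=1000.0, dt=0.5):
    """Spike count of one regular-spiking Izhikevich neuron (a=0.02,
    b=0.2, c=-65, d=8) driven by a depolarizing DC current that is
    switched off for half of each cycle at `interrupt_hz`.
    interrupt_hz=0 means uninterrupted DC."""
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u, spikes, t = c, b * c, 0, 0.0
    period = 1000.0 / interrupt_hz if interrupt_hz > 0 else None  # ms
    while t < t_max:
        on = period is None or (t % period) < 0.5 * period
        i_ext = i_dc if on else 0.0
        # forward-Euler step of the Izhikevich (2003) equations
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: reset membrane and recovery
            v, u = c, u + d
            spikes += 1
        t += dt
    return spikes

continuous = izhikevich_spikes(10.0, 0)      # uninterrupted DC
chopped = izhikevich_spikes(10.0, 100.0)     # interrupted at 100 Hz
```

Since the interrupted drive delivers at most the charge of the continuous one, the chopped condition fires no more than the continuous condition in this single-neuron toy; the paper's desynchronization effect is a population-level phenomenon on top of this.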
GeNN: a code generation framework for accelerated brain simulations
Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that a 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance-based Hodgkin-Huxley neurons, although the speedup differs for other models.
GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials,
Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/
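For a rough sense of the model class behind the benchmark above, a single conductance-based Hodgkin-Huxley neuron can be stepped with forward Euler as follows. This is a plain-Python illustration with standard squid-axon parameters, not GeNN's actual generated code or API; GeNN's point is that such per-neuron updates parallelize well across GPU threads:

```python
import math

def hh_spike_count(i_ext, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the classic Hodgkin-Huxley neuron
    (units: uA/cm^2, mV, ms); counts upward crossings of 0 mV."""
    c_m, g_na, e_na = 1.0, 120.0, 50.0
    g_k, e_k, g_l, e_l = 36.0, -77.0, 0.3, -54.387

    def rates(v):
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        return a_m, b_m, a_h, b_h, a_n, b_n

    v = -65.0
    a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
    # start gating variables at their steady-state values
    m, h, n = a_m / (a_m + b_m), a_h / (a_h + b_h), a_n / (a_n + b_n)
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        i_ion = (g_na * m**3 * h * (v - e_na)
                 + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_ext - i_ion) / c_m
        a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        if v > 0.0 and not above:
            spikes += 1
        above = v > 0.0
    return spikes
```

A suprathreshold current (e.g. 10 uA/cm^2) yields repetitive firing; simulating a million such neurons is exactly the workload where code generation for GPUs pays off.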
Reinforcement learning in populations of spiking neurons
Population coding is widely regarded as a key mechanism for achieving reliable behavioral responses in the face of neuronal variability. But in standard reinforcement learning, a flip side becomes apparent: learning slows down with increasing population size, since the global reinforcement becomes less and less related to the performance of any single neuron. We show that, in contrast, learning speeds up with increasing population size if feedback about the population response modulates synaptic plasticity in addition to global reinforcement. The two feedback signals (reinforcement and population-response signal) can be encoded by ambient neurotransmitter concentrations which vary slowly, yielding a fully online plasticity rule where the learning of a stimulus is interleaved with the processing of the subsequent one. The assumption of a single additional feedback mechanism therefore reconciles biological plausibility with efficient learning.
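The global-reinforcement-only setting this abstract starts from can be sketched with a toy population of stochastic binary neurons rewarded when the majority fires. Everything below (the task, the REINFORCE-style rule with a slow running baseline, all parameters) is an illustrative assumption; the paper's contribution, a second population-response feedback factor in the plasticity rule, is deliberately not reproduced here:

```python
import numpy as np

def train_population(n=10, trials=2000, eta=0.3, seed=0):
    """Toy population reinforcement learning: n stochastic binary
    neurons, reward r = 1 when the majority fires.  Plasticity uses
    only a global scalar reinforcement with a slow running baseline
    (the setting the abstract says scales poorly with n)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n)      # per-neuron synaptic drive
    base = 0.0           # slowly varying estimate of the mean reward

    def success_rate(k=500):  # evaluate with plasticity switched off
        p = 1.0 / (1.0 + np.exp(-w))
        s = rng.random((k, n)) < p
        return float(np.mean(s.mean(axis=1) > 0.5))

    before = success_rate()
    for _ in range(trials):
        p = 1.0 / (1.0 + np.exp(-w))          # firing probabilities
        s = (rng.random(n) < p).astype(float)  # sampled spikes
        r = 1.0 if s.mean() > 0.5 else 0.0     # global scalar reward
        w += eta * (r - base) * (s - p)        # REINFORCE-style update
        base += 0.05 * (r - base)              # slow baseline, cf. a
                                               # slowly varying ambient signal
    return before, success_rate()

before, after = train_population()
```

Because the reward correlates only weakly with any single neuron's spike, the useful part of this update shrinks as n grows; the paper's added population-response signal is what restores per-neuron credit assignment.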