
    Dopamine gates action potential backpropagation in midbrain dopaminergic neurons

    Dopamine is released from both axonal and somatodendritic sites of midbrain dopaminergic neurons in an action potential-dependent manner. In contrast to the majority of central neurons, the axon of dopaminergic neurons typically originates from a dendritic site, suggesting a specialized computational function. Here, we examine the initiation and spread of action potentials in dopaminergic neurons of the substantia nigra pars reticulata and reveal that the displacement of the axon to a dendritic site allows highly compartmentalized electrical signaling. In response to a train of synaptic input, action potentials initiated at axon-bearing dendritic sites formed a variable trigger for invasion of the soma and contralateral dendritic tree, with action potentials often confined to the axon-bearing dendrite. The application of dopamine increased this form of electrical compartmentalization, an effect mediated by a tonic membrane potential hyperpolarization leading to an increased availability of a class of voltage-dependent potassium channels. These data suggest that the release of dopamine from axonal and somatodendritic sites is dissociable, and that dopamine levels within the midbrain are dynamically controlled by the somatodendritic spread of action potentials.
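
    As a rough, self-contained illustration of the mechanism this abstract invokes (not the paper's model), the sketch below uses an assumed Boltzmann steady-state inactivation curve to show how a tonic hyperpolarization increases the fraction of available voltage-dependent potassium channels. The function name k_channel_availability, the parameters v_half and k, and the two example membrane potentials are all illustrative assumptions.

        # Minimal sketch, not the paper's model: a hyperpolarized membrane potential
        # leaves a larger fraction of an inactivating K+ channel population available,
        # the effect the abstract links to increased electrical compartmentalization.
        # The Boltzmann parameters and example voltages below are assumptions.
        import numpy as np

        def k_channel_availability(v_mV, v_half=-70.0, k=7.0):
            """Assumed steady-state fraction of non-inactivated K+ channels."""
            return 1.0 / (1.0 + np.exp((v_mV - v_half) / k))

        resting = -55.0        # hypothetical membrane potential without dopamine (mV)
        with_dopamine = -63.0  # hypothetical tonic hyperpolarization with dopamine (mV)

        for label, v in [("control", resting), ("dopamine", with_dopamine)]:
            print(f"{label:>9}: Vm = {v:5.1f} mV, K+ availability = {k_channel_availability(v):.2f}")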

    Training Multi-layer Spiking Neural Networks using NormAD based Spatio-Temporal Error Backpropagation

    Spiking neural networks (SNNs) have garnered a great amount of interest for supervised and unsupervised learning applications. This paper deals with the problem of training multi-layer feedforward SNNs. The non-linear integrate-and-fire dynamics employed by spiking neurons make it difficult to train SNNs to generate desired spike trains in response to a given input. To tackle this, the problem of training a multi-layer SNN is first formulated as an optimization problem whose objective function is based on the deviation in membrane potential rather than on spike arrival instants. Then, an optimization method named Normalized Approximate Descent (NormAD), hand-crafted for such non-convex optimization problems, is employed to derive the iterative synaptic weight update rule. Next, it is reformulated to efficiently train multi-layer SNNs and is shown to effectively perform spatio-temporal error backpropagation. The learning rule is validated by training 2-layer SNNs to solve a spike-based formulation of the XOR problem as well as training 3-layer SNNs on generic spike-based training problems. Thus, the new algorithm is a key step towards building deep spiking neural networks capable of efficient event-triggered learning. Comment: 19 pages, 10 figures.
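
    The sketch below is a minimal single-neuron example written in the spirit of a NormAD-style update, not the paper's multi-layer spatio-temporal backpropagation: a spike-train error is projected onto the normalized, membrane-filtered inputs to update the weights. The leaky integrate-and-fire model, all constants, the random input, the desired spike train, and the helper lif_run are illustrative assumptions.

        # Hedged sketch of a NormAD-style weight update for one leaky integrate-and-fire
        # neuron; the paper's actual multi-layer rule is more involved. All constants,
        # the input pattern, and the desired spike train below are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        dt, T = 1e-4, 0.1                        # time step and trial duration (s)
        steps = int(T / dt)
        n_in = 50                                # number of input synapses
        tau_m, v_thresh = 10e-3, 1.0             # membrane time constant (s), firing threshold

        x = (rng.random((steps, n_in)) < 0.02).astype(float)   # random input spike raster
        d = np.zeros(steps); d[[200, 600, 900]] = 1.0          # desired output spike train
        w = rng.normal(0.0, 0.1, n_in)                         # synaptic weights

        def lif_run(w):
            """Simulate the neuron; return output spikes and membrane-filtered inputs."""
            v, trace = 0.0, np.zeros(n_in)
            out, filt = np.zeros(steps), np.zeros((steps, n_in))
            for t in range(steps):
                trace += dt / tau_m * (-trace) + x[t]   # low-pass filter each input
                filt[t] = trace
                v += dt / tau_m * (-v) + w @ x[t]       # leaky integration of weighted input
                if v >= v_thresh:
                    out[t], v = 1.0, 0.0                # spike and reset
            return out, filt

        lr = 0.05
        for epoch in range(20):
            out, filt = lif_run(w)
            err = d - out                                # desired minus observed spikes
            norms = np.linalg.norm(filt, axis=1) + 1e-12
            w += lr * (err[:, None] * filt / norms[:, None]).sum(axis=0)

        print("remaining spike-count mismatch:", abs(d.sum() - lif_run(w)[0].sum()))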

    Supervised Learning in Multilayer Spiking Neural Networks

    The current article introduces a supervised learning algorithm for multilayer spiking neural networks. The algorithm presented here overcomes some limitations of existing learning algorithms, as it can be applied to neurons firing multiple spikes and, in principle, to any linearisable neuron model. The algorithm is applied successfully to various benchmarks, such as the XOR problem and the Iris data set, as well as to complex classification problems. The simulations also show the flexibility of this supervised learning algorithm, which permits different encodings of the spike timing patterns, including precise spike train encoding. Comment: 38 pages, 4 figures.
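
    As a small, hedged illustration of the spike-timing encodings this abstract refers to, the sketch below shows one possible latency (time-to-first-spike) encoding of the XOR problem; the specific latencies, target output times, and the helper encode_xor_case are assumptions, not the paper's encoding.

        # Illustrative latency encoding of XOR: logical 1 maps to an early input spike,
        # logical 0 to a late one, and XOR-true cases should fire the output earlier.
        # All spike times below are assumed values, not taken from the paper.
        EARLY_MS, LATE_MS, BIAS_MS = 0.0, 6.0, 0.0           # assumed input latencies

        def encode_xor_case(a, b):
            """Map boolean inputs to input spike times (ms)."""
            return {"in_a": EARLY_MS if a else LATE_MS,
                    "in_b": EARLY_MS if b else LATE_MS,
                    "bias": BIAS_MS}

        TARGET_TRUE_MS, TARGET_FALSE_MS = 10.0, 16.0          # assumed desired output times

        for a in (0, 1):
            for b in (0, 1):
                target = TARGET_TRUE_MS if a != b else TARGET_FALSE_MS
                print(a, b, encode_xor_case(a, b), "-> desired output spike at", target, "ms")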

    Modeling Financial Time Series with Artificial Neural Networks

    Financial time series convey the decisions and actions of a population of human actors over time. Econometric and regression models have been developed over the past decades for analyzing these time series. More recently, biologically inspired artificial neural network models have been shown to overcome some of the main challenges of traditional techniques by better exploiting the non-linear, non-stationary, and oscillatory nature of noisy, chaotic human interactions. This review paper explores the options, benefits, and weaknesses of the various forms of artificial neural networks as compared with regression techniques in the field of financial time series analysis. Funding: CELEST, a National Science Foundation Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001).
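
    As a minimal, hedged example of the ANN-versus-regression comparison this review surveys (not its methodology), the sketch below fits a linear autoregression and a small feedforward network to lagged values of a synthetic return series and reports out-of-sample error; the synthetic data, the lag order, and the scikit-learn models are assumptions.

        # Illustrative comparison only: linear autoregression vs. a small neural network
        # on lagged values of a synthetic, mildly non-linear "return" series.
        # Data, lag order, and model choices are assumptions for demonstration.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(42)
        n, lags = 1200, 5
        r = np.zeros(n)
        for t in range(1, n):                    # synthetic series with a tanh non-linearity
            r[t] = 0.3 * np.tanh(2.0 * r[t - 1]) + 0.05 * rng.standard_normal()

        X = np.column_stack([r[lags - k - 1:n - k - 1] for k in range(lags)])  # lagged features
        y = r[lags:]
        split = int(0.8 * len(y))
        X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

        linear = LinearRegression().fit(X_tr, y_tr)
        mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

        for name, model in [("linear AR", linear), ("small MLP", mlp)]:
            mse = np.mean((model.predict(X_te) - y_te) ** 2)
            print(f"{name}: out-of-sample MSE = {mse:.5f}")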