    Slowness: An Objective for Spike-Timing-Dependent Plasticity?

    Slow Feature Analysis (SFA) is an efficient algorithm for learning input-output functions that extract the most slowly varying features from a quickly varying signal. It has been successfully applied to the unsupervised learning of translation-, rotation-, and other invariances in a model of the visual system, to the learning of complex cell receptive fields, and, combined with a sparseness objective, to the self-organized formation of place cells in a model of the hippocampus. In order to arrive at a biologically more plausible implementation of this learning rule, we consider analytically how SFA could be realized in simple linear continuous and spiking model neurons. It turns out that for the continuous model neuron SFA can be implemented by means of a modified version of standard Hebbian learning. In this framework we provide a connection to the trace learning rule for invariance learning. We then show that for Poisson neurons spike-timing-dependent plasticity (STDP) with a specific learning window can learn the same weight distribution as SFA. Surprisingly, we find that the appropriate learning rule reproduces the typical STDP learning window; both its shape and its timescale are in good agreement with what has been measured experimentally. This offers a completely novel interpretation of the functional role of spike-timing-dependent plasticity in physiological neurons.
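
    The optimization behind SFA can be stated compactly: after whitening the input, the slowest feature is the direction along which the temporal derivative has the smallest variance. The sketch below is a minimal linear-SFA illustration of that idea in numpy; the toy signal, parameter choices, and variable names are illustrative assumptions, not the authors' implementation.

```python
# Minimal linear SFA sketch: whiten, then find the direction whose
# temporal derivative has minimal variance. Toy data is assumed.
import numpy as np

# Toy input: a slow sine hidden inside a quickly varying mixture.
t = np.linspace(0, 2 * np.pi, 5000)
slow = np.sin(t)                        # slowly varying latent feature
fast = np.sin(37 * t)                   # quickly varying distractor
x = np.column_stack([slow + 0.5 * fast,
                     0.5 * slow - fast])

# 1) Center and whiten the input (unit covariance).
x -= x.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
W_white = evecs / np.sqrt(evals)        # whitening matrix (columns)
z = x @ W_white

# 2) Minimize temporal variation <z_dot^2>: the slowest direction is
#    the eigenvector of the derivative covariance with the smallest
#    eigenvalue (eigh returns eigenvalues in ascending order).
z_dot = np.diff(z, axis=0)
d_evals, d_evecs = np.linalg.eigh(np.cov(z_dot, rowvar=False))
w_slow = d_evecs[:, 0]                  # smallest-eigenvalue direction

y = z @ w_slow                          # extracted slow feature
print("correlation with hidden slow signal:",
      round(abs(float(np.corrcoef(y, slow)[0, 1])), 3))
```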

    Interplay between subthreshold oscillations and depressing synapses in single neurons

    Latorre R, Torres JJ, Varona P (2016) Interplay between Subthreshold Oscillations and Depressing Synapses in Single Neurons. PLoS ONE 11(1): e0145830. doi:10.1371/journal.pone.0145830
    In this paper we analyze the interplay between the subthreshold oscillations of a single-neuron conductance-based model and the short-term plasticity of a dynamic synapse with a depressing mechanism. In previous research, the computational properties of subthreshold oscillations and dynamic synapses have been studied separately. Our results show that dynamic synapses can influence different aspects of the dynamics of neuronal subthreshold oscillations. Factors such as the maximum hyperpolarization level, the oscillation amplitude and frequency, and the resulting firing threshold are modulated by synaptic depression, which can even make subthreshold oscillations disappear. This influence reshapes the postsynaptic neuron's resonant properties arising from subthreshold oscillations and leads to specific input/output relations. We also study the neuron's response to another simultaneous input in the context of this modulation, and show distinct contextual processing as a function of the depression, in particular for the detection of signals arriving through weak synapses. Intrinsic oscillation dynamics can be combined with the characteristic timescale of the modulatory input received by a dynamic synapse to build cost-effective cell- and channel-specific information discrimination mechanisms, beyond simple resonances. In this regard, we discuss the functional implications of synaptic depression modulation on intrinsic subthreshold dynamics.
    This work was supported by MINECO TIN2012-30883 (RL and PV) and FIS2013-43201-P (JJT). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
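
    A common way to model such a depressing dynamic synapse is the resource-depletion picture of the Tsodyks-Markram type: each presynaptic spike consumes a fraction of a finite synaptic resource that recovers between spikes, so the effective conductance decays under sustained drive. The sketch below illustrates that mechanism; whether it matches the paper's specific synapse model is an assumption, and all parameter values and the regular 40 Hz presynaptic train are illustrative.

```python
# Minimal sketch of a depressing synapse (Tsodyks-Markram-style
# resource model); parameters and spike train are assumptions.
import numpy as np

dt = 0.1          # ms, integration step
T = 500.0         # ms, simulated time
tau_rec = 100.0   # ms, recovery time constant of synaptic resources
U = 0.4           # fraction of available resources used per spike
g_max = 1.0       # maximal synaptic conductance (arbitrary units)

steps = int(T / dt)
spike_times = np.arange(20.0, T, 25.0)        # 40 Hz presynaptic train
spikes = np.zeros(steps, dtype=bool)
spikes[(spike_times / dt).astype(int)] = True

R = 1.0                                       # available resources
g = np.zeros(steps)                           # effective conductance
for i in range(steps):
    R += dt * (1.0 - R) / tau_rec             # resources recover
    if spikes[i]:
        g[i] = g_max * U * R                  # release uses resources
        R -= U * R                            # ...and depletes them

# Successive spike amplitudes decay toward a depressed steady state.
print("conductance at successive spikes:", g[spikes].round(3))
```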

    StdpC: a modern dynamic clamp

    With the advancement of computer technology, many novel uses of dynamic clamp have become possible. We have added new features to our dynamic clamp software StdpC (“Spike timing-dependent plasticity Clamp”) allowing such new applications while conserving the ease of use and installation of the popular earlier Dynclamp 2/4 package. Here, we introduce the new features of a waveform generator, freely programmable Hodgkin–Huxley conductances, learning synapses, graphic data displays, and a powerful scripting mechanism, and discuss examples of experiments using these features. In the first example we built and ‘voltage clamped’ a conductance-based model cell from a passive resistor–capacitor (RC) circuit, using the dynamic clamp software to generate the voltage-dependent currents. In the second example we coupled our new spike generator, through a burst detection/burst generation mechanism, in a phase-dependent way to a neuron in a central pattern generator and dissected the subtle interaction between neurons, which seems to implement information transfer through intraburst spike patterns. In the third example, making use of the new plasticity mechanism for simulated synapses, we analyzed the effect of spike timing-dependent plasticity (STDP) on synchronization, revealing considerable enhancement of the entrainment of a postsynaptic neuron by a periodic spike train. These examples illustrate that with modern dynamic clamp software like StdpC, the dynamic clamp has developed beyond the mere introduction of artificial synapses or ionic conductances into neurons into a universal research tool, which may well become a standard instrument of modern electrophysiology.
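
    The essence of any dynamic clamp, including the first example above, is a fast feedback loop: sample the membrane potential, evaluate the simulated conductance's current at that voltage, and inject it back before the next sample. The sketch below shows that loop acting on a simulated passive RC cell standing in for the real electrode/amplifier chain; it is not StdpC code or its API, and all names and parameter values are illustrative assumptions.

```python
# Minimal dynamic-clamp loop sketch: the "cell" is a simulated passive
# RC circuit; in a real rig the read/write steps would talk to a DAQ.
dt = 0.05        # ms, clamp update interval
C_m = 1.0        # nF, capacitance of the passive RC "cell"
g_leak = 0.05    # uS, leak conductance
E_leak = -70.0   # mV, leak reversal potential
g_syn = 0.1      # uS, simulated synaptic conductance (fixed here)
E_syn = 0.0      # mV, excitatory reversal potential

V = -70.0        # mV, membrane potential
for step in range(int(200.0 / dt)):
    # "Read" the membrane potential (here: from the simulated cell),
    # then compute the conductance-based command current.
    I_inject = g_syn * (E_syn - V)
    # "Write" the current and advance the cell by one Euler step.
    dV = (g_leak * (E_leak - V) + I_inject) / C_m
    V += dt * dV

# Vm settles between E_leak and E_syn, weighted by the conductances.
print(f"steady-state Vm: {V:.1f} mV")
```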

    Spatiotemporal adaptation through corticothalamic loops: A hypothesis

    The thalamus is the major gate to the cortex, and its control over cortical responses is well established. Cortical feedback to the thalamus is, in turn, the anatomically dominant input to relay cells, yet its influence on thalamic processing has been difficult to interpret. For an understanding of complex sensory processing, detailed concepts of the corticothalamic interplay have yet to be established. Drawing on various physiological and anatomical data, we elaborate the novel hypothesis that the visual cortex controls the spatiotemporal structure of cortical receptive fields via feedback to the lateral geniculate nucleus. Furthermore, we present and analyze a model of corticogeniculate loops that implements this control, and demonstrate its ability to perform object segmentation through statistical motion analysis of the visual field.

    Network Plasticity as Bayesian Inference

    General results from statistical learning theory suggest understanding not only brain computations but also brain plasticity as probabilistic inference. A model for this has, however, been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum-likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains, from a functional perspective, a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared quite puzzling.
    Comment: 33 pages, 5 figures; the supplement is available on the author's web page: http://www.igi.tugraz.at/kappe
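
    Sampling from a posterior over network configurations, rather than converging to a point estimate, can be written as Langevin dynamics on the parameters: a drift up the gradient of the log-posterior plus diffusion noise. The sketch below applies that idea to a single scalar weight in an assumed toy Gaussian model; the model, prior, step size, and variable names are illustrative assumptions, not the paper's network equations.

```python
# Minimal sketch of posterior sampling via Langevin dynamics on one
# parameter w; the Gaussian toy model is an assumption.
import numpy as np

rng = np.random.default_rng(1)

# Toy generative model: y ~ N(w, sigma^2), prior w ~ N(0, 1).
sigma = 1.0
y = rng.normal(2.0, sigma, size=20)            # "learned experience"

def grad_log_posterior(w):
    grad_prior = -w                            # from the N(0, 1) prior
    grad_lik = np.sum(y - w) / sigma**2        # from the likelihood
    return grad_prior + grad_lik

# Langevin update: drift up the log-posterior plus diffusion noise,
# whose stationary distribution approximates p(w | y).
eta = 1e-3                                     # integration step
w, samples = 0.0, []
for step in range(50_000):
    w += eta * grad_log_posterior(w) + np.sqrt(2 * eta) * rng.normal()
    samples.append(w)

post_mean = y.sum() / (len(y) + 1.0)           # analytic mean, sigma = 1
print(f"sampled mean {np.mean(samples[5000:]):.3f} "
      f"vs analytic {post_mean:.3f}")
```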

    Stochasticity from function -- why the Bayesian brain may need no noise

    An increasing body of evidence suggests that the trial-to-trial variability of spiking activity in the brain is not mere noise, but rather the reflection of a sampling-based encoding scheme for probabilistic computing. Since the precise statistical properties of neural activity are important in this context, many models assume an ad hoc source of well-behaved, explicit noise on either the input or the output side of single-neuron dynamics, most often an independent Poisson process in either case. However, these assumptions are somewhat problematic: neighboring neurons tend to share receptive fields, rendering both their input and their output correlated; at the same time, neurons are known to behave largely deterministically, as a function of their membrane potential and conductance. We suggest that spiking neural networks may, in fact, have no need for noise to perform sampling-based Bayesian inference. We study analytically the effect of auto- and cross-correlations in functionally Bayesian spiking networks and demonstrate how their effect translates to synaptic interaction strengths, rendering them controllable through synaptic plasticity. This allows even small ensembles of interconnected deterministic spiking networks to simultaneously and co-dependently shape their output activity through learning, enabling them to perform complex Bayesian computation without any need for noise, which we demonstrate in silico, both in classical simulation and in neuromorphic emulation. These results close a gap between abstract models and the biology of functionally Bayesian spiking networks, effectively reducing the architectural constraints imposed on physical neural substrates required to perform probabilistic computing, be they biological or artificial.
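
    The sampling-based encoding scheme referred to here treats the joint activity of the network as samples from a target distribution, with each unit's instantaneous activation probability set by a local "membrane potential" that sums biased synaptic input. The sketch below shows the abstract version of that computation for two coupled binary units sampled from a Boltzmann distribution; the Gibbs-sampling stand-in for spiking dynamics, and all weights and biases, are illustrative assumptions, not the paper's deterministic networks.

```python
# Minimal sketch of sampling-based encoding: two binary units sampled
# so the network visits joint states with Boltzmann probabilities.
import numpy as np

rng = np.random.default_rng(2)

W = np.array([[0.0, 1.5],                     # symmetric coupling
              [1.5, 0.0]])
b = np.array([-0.5, -0.5])                    # unit biases

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

z = np.zeros(2)                               # binary network state
counts = {}
n_steps = 20_000
for step in range(n_steps):
    k = step % 2                              # update units in turn
    u = b[k] + W[k] @ z                       # local "membrane potential"
    z[k] = float(rng.random() < sigmoid(u))   # stochastic activation
    key = (int(z[0]), int(z[1]))
    counts[key] = counts.get(key, 0) + 1

# The positive coupling correlates the units: (0,0) and (1,1) dominate.
for state, n in sorted(counts.items()):
    print(state, n / n_steps)
```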