
    Learning intrinsic excitability in medium spiny neurons

    We present an unsupervised, local activation-dependent learning rule for intrinsic plasticity (IP) which affects the composition of ion channel conductances for single neurons in a use-dependent way. We use a single-compartment conductance-based model for medium spiny striatal neurons in order to show the effects of parametrization of individual ion channels on the neuronal activation function. We show that parameter changes within the physiological ranges are sufficient to create an ensemble of neurons with significantly different activation functions. We emphasize that the effects of intrinsic neuronal variability on spiking behavior require a distributed mode of synaptic input and can be eliminated by strongly correlated input. We show how variability and adaptivity in ion channel conductances can be utilized to store patterns without an additional contribution by synaptic plasticity (SP). The adaptation of the spike response may result in either "positive" or "negative" pattern learning. However, read-out of stored information depends on a distributed pattern of synaptic activity to let intrinsic variability determine the spike response. We briefly discuss the implications of this conditional memory on learning and addiction. Comment: 20 pages, 8 figures
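    The paper's learning rule is not specified in the abstract; as a rough illustration of the general idea (use-dependent adjustment of ion-channel conductances), the following is a minimal homeostatic sketch in which a neuron shifts its Na+ and K+ conductances toward a target firing rate. The update form, parameter names, and all constants are assumptions, not the paper's rule.

```python
# Illustrative sketch only: a homeostatic, activity-dependent update of
# ion-channel conductances toward a target firing rate. Not the paper's rule.
def update_conductances(g_na, g_k, rate, target_rate=5.0, eta=0.01):
    """Shift Na+/K+ maximal conductances so firing moves toward target_rate (Hz).

    All parameter names and constants here are illustrative assumptions.
    """
    error = rate - target_rate
    g_na = max(0.0, g_na - eta * error)  # too active -> less inward current
    g_k = max(0.0, g_k + eta * error)    # too active -> more outward current
    return g_na, g_k

# A neuron firing above target (8 Hz vs. 5 Hz) reduces g_na and raises g_k.
g_na, g_k = update_conductances(120.0, 36.0, rate=8.0)
```

Because the update acts on intrinsic conductances rather than synaptic weights, repeated application moves each neuron's activation function individually, which is the kind of use-dependent heterogeneity the abstract describes.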

    Disinhibition Mediates a Form of Hippocampal Long-Term Potentiation in Area CA1

    The hippocampus plays a central role in memory formation in the mammalian brain. Its ability to encode information is thought to depend on the plasticity of synaptic connections between neurons. In the pyramidal neurons constituting the primary hippocampal output to the cortex, located in area CA1, firing of presynaptic CA3 pyramidal neurons produces monosynaptic excitatory postsynaptic potentials (EPSPs) followed rapidly by feedforward (disynaptic) inhibitory postsynaptic potentials (IPSPs). Long-term potentiation (LTP) of the monosynaptic glutamatergic inputs has become the leading model of synaptic plasticity, in part due to its dependence on NMDA receptors (NMDARs), required for spatial and temporal learning in intact animals. Using whole-cell recording in hippocampal slices from adult rats, we find that the efficacy of synaptic transmission from CA3 to CA1 can be enhanced without the induction of classic LTP at the glutamatergic inputs. Taking care not to directly stimulate inhibitory fibers, we show that the induction of GABAergic plasticity at feedforward inhibitory inputs results in the reduced shunting of excitatory currents, producing a long-term increase in the amplitude of Schaffer collateral-mediated postsynaptic potentials. Like classic LTP, disinhibition-mediated LTP requires NMDAR activation, suggesting a role in types of learning and memory attributed primarily to the former and raising the possibility of a previously unrecognized target for therapeutic intervention in disorders linked to memory deficits, as well as a potentially overlooked site of LTP expression in other areas of the brain.

    Long-Term Activity-Dependent Plasticity of Action Potential Propagation Delay and Amplitude in Cortical Networks

    Background: The precise temporal control of neuronal action potentials is essential for regulating many brain functions. From the viewpoint of a neuron, the specific timings of afferent input from the action potentials of its synaptic partners determine whether or not and when that neuron will fire its own action potential. Tuning such input would provide a powerful mechanism to adjust neuron function and in turn, that of the brain. However, axonal plasticity of action potential timing is counter to conventional notions of stable propagation and to the dominant theories of activity-dependent plasticity focusing on synaptic efficacies. Methodology/Principal Findings: Here we show the occurrence of activity-dependent plasticity of action potential propagation delay and amplitude in cortical networks.

    Local Field Potential Modeling Predicts Dense Activation in Cerebellar Granule Cells Clusters under LTP and LTD Control

    Local field potentials (LFPs) are generated by neuronal ensembles and contain information about the activity of single neurons. Here, the LFPs of the cerebellar granular layer and their changes during long-term synaptic plasticity (LTP and LTD) were recorded in response to punctate facial stimulation in the rat in vivo. The LFP comprised a trigeminal (T) and a cortical (C) wave. T and C, which derived from independent granule cell clusters, co-varied during LTP and LTD. To extract information about the underlying cellular activities, the LFP was reconstructed using a repetitive convolution (ReConv) of the extracellular potential generated by a detailed multicompartmental model of the granule cell. The mossy fiber input patterns were determined using a Blind Source Separation (BSS) algorithm. The major component of the LFP was generated by the granule cell spike Na+ current, which caused a powerful sink in the axon initial segment with the source located in the soma and dendrites. Reproducing the LFP changes observed during LTP and LTD required modifications in both release probability and intrinsic excitability at the mossy fiber-granule cell relay. Synaptic plasticity and Golgi cell feed-forward inhibition proved critical for controlling the percentage of active granule cells, which was 11% in standard conditions but ranged from 3% during LTD to 21% during LTP and rose above 50% when inhibition was reduced. The emerging picture is that of independent (but neighboring) trigeminal and cortical channels, in which synaptic plasticity and feed-forward inhibition effectively regulate the number of discharging granule cells and emitted spikes, generating “dense” activity clusters in the cerebellar granular layer.
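    The reconstruction-by-convolution idea can be sketched in a few lines: convolve a single-cell extracellular waveform with the population spike train to obtain an LFP-like trace. The toy biphasic kernel and spike times below are illustrative assumptions; the paper uses a waveform from a detailed multicompartmental granule cell model, not this analytic shape.

```python
import numpy as np

# Hedged sketch: build an LFP-like signal as the convolution of a single-cell
# extracellular kernel with a population spike train (all numbers illustrative).
dt = 0.1                                   # ms per sample
t = np.arange(0, 5, dt)                    # kernel support, 5 ms
kernel = -np.exp(-t / 0.5) + 0.5 * np.exp(-t / 2.0)  # toy sink/source shape

spikes = np.zeros(1000)                    # 100 ms of population activity
spikes[[100, 300, 310, 700]] = 1.0         # assumed spike times (illustrative)

# Full convolution, truncated to the recording length.
lfp = np.convolve(spikes, kernel)[:len(spikes)]
```

Scaling the spike train (more active cells during LTP, fewer during LTD) scales the reconstructed waves, which is the sense in which the LFP carries information about the fraction of discharging granule cells.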

    A Threshold Equation for Action Potential Initiation

    In central neurons, the threshold for spike initiation can depend on the stimulus and varies between cells and between recording sites in a given cell, but it is unclear what mechanisms underlie this variability. Properties of ionic channels are likely to play a role in threshold modulation. We examined in models the influence of Na channel activation, inactivation, slow voltage-gated channels and synaptic conductances on spike threshold. We propose a threshold equation which quantifies the contribution of all these mechanisms. It provides an instantaneous time-varying value of the threshold, which applies to neurons with fluctuating inputs. We deduce a differential equation for the threshold, similar to the equations of gating variables in the Hodgkin-Huxley formalism, which describes how the spike threshold varies with the membrane potential, depending on channel properties. We find that spike threshold depends logarithmically on Na channel density, and that Na channel inactivation and K channels can dynamically modulate it in an adaptive way: the threshold increases with membrane potential and after every action potential. Our equation was validated with simulations of a previously published multicompartmental model of spike initiation. Finally, we observed that threshold variability in models depends crucially on the shape of the Na activation function near spike initiation (about −55 mV), while its parameters are adjusted near half-activation voltage (about −30 mV), which might explain why many models exhibit little threshold variability, contrary to experimental observations. We conclude that ionic channels can account for large variations in spike threshold.
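    The abstract's exact equation is not given here, but the qualitative behavior it describes (a gating-variable-like ODE in which threshold relaxes toward a voltage-dependent steady state and rises with depolarization) can be sketched as follows. The steady-state form, rectification point, and all constants are assumptions chosen only to reproduce that behavior.

```python
# Illustrative only: a first-order dynamic spike threshold of the kind
# described, relaxing toward a voltage-dependent steady state.
def simulate_threshold(v_trace, theta0=-55.0, a=0.4, v_i=-65.0, tau=5.0, dt=0.1):
    """Euler-integrate d(theta)/dt = (theta_inf(V) - theta) / tau, with
    theta_inf(V) = theta0 + a * max(V - v_i, 0): threshold rises with
    depolarization, mimicking Na-channel inactivation (parameters assumed)."""
    theta = theta0
    out = []
    for v in v_trace:
        theta_inf = theta0 + a * max(v - v_i, 0.0)
        theta += dt * (theta_inf - theta) / tau
        out.append(theta)
    return out

# Hold the cell at -70 mV for 10 ms, then depolarize to -55 mV for 10 ms:
# the threshold stays at rest, then climbs toward a higher steady state.
trace = simulate_threshold([-70.0] * 100 + [-55.0] * 100)
```

The same structure explains the adaptive effect the abstract reports: sustained depolarization (and the afterdepolarization following each spike) drives theta_inf up, so the neuron transiently needs a stronger input to fire again.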

    Spike-Based Bayesian-Hebbian Learning of Temporal Sequences

    Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.
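    The Bayesian-Hebbian character of BCPNN-style learning can be illustrated from its batch form: a weight is the log ratio between the joint firing probability of two units and the product of their marginal rates, so it is positive for units that fire together more than chance and near zero for independent units. The regularization constant below is an assumption; the spike-based rule in the paper additionally estimates these probabilities online with exponential traces.

```python
import math

# Sketch of a Bayesian-Hebbian (BCPNN-style) weight from co-activation
# statistics; the eps regularizer is an assumed constant.
def bcpnn_weight(p_i, p_j, p_ij, eps=1e-4):
    """Weight = log of the ratio between joint and independent firing
    probabilities; positive when units co-fire more than chance predicts."""
    return math.log((p_ij + eps**2) / ((p_i + eps) * (p_j + eps)))

w_assoc = bcpnn_weight(0.1, 0.1, 0.05)   # correlated pair -> strongly positive
w_indep = bcpnn_weight(0.1, 0.1, 0.01)   # independence -> weight near zero
```

Making the presynaptic trace slower than the postsynaptic one skews p_ij toward forward-in-time pairings, which is one way such a rule can store the asymmetric associations needed for sequence replay.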

    PHS1 regulates meiotic recombination and homologous chromosome pairing by controlling the transport of RAD50 to the nucleus

    Recombination and pairing of homologous chromosomes are critical for bivalent formation in meiotic prophase. In many organisms, including yeast, mammals, and plants, pairing and recombination are intimately interconnected. The POOR HOMOLOGOUS SYNAPSIS1 (PHS1) gene acts in coordination of chromosome pairing and early recombination steps in plants, ensuring pairing fidelity and proper repair of meiotic DNA double-strand-breaks. In phs1 mutants, chromosomes exhibit early recombination defects and frequently associate with non-homologous partners, instead of pairing with their proper homologs. Here, we show that the product of the PHS1 gene is a cytoplasmic protein that functions by controlling transport of RAD50 from cytoplasm to the nucleus. RAD50 is a component of the MRN protein complex that processes meiotic double-strand-breaks to produce single-stranded DNA ends, which act in the homology search and recombination. We demonstrate that PHS1 plays the same role in homologous pairing in both Arabidopsis and maize, whose genomes differ dramatically in size and repetitive element content. This suggests that PHS1 affects pairing of the gene-rich fraction of the genome rather than preventing pairing between repetitive DNA elements. We propose that PHS1 is part of a system that regulates the progression of meiotic prophase by controlling entry of meiotic proteins into the nucleus. We also document that in phs1 mutants in Arabidopsis, centromeres interact before pairing commences along chromosome arms. Centromere coupling was previously observed in yeast and polyploid wheat, while our data suggest that it may be a more common feature of meiosis.

    Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity

    The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, it suggests networks of Bayesian computation modules as a new model for distributed information processing in the cortex.
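    The EM-like flavor of WTA-plus-STDP can be conveyed with a deliberately reduced sketch: competition selects a winner (the E-step picks the most likely hidden cause), and a local Hebbian update moves only the winner's weights toward the current input (the M-step refines that cause's model). This hard-argmax, rate-based toy is an assumption for illustration; the paper's analysis concerns soft WTA with stochastic spiking neurons and a specific STDP curve.

```python
# Hedged sketch of WTA competition plus a local Hebbian update performing
# an EM-like step (hard argmax and learning rate are assumptions).
def wta_step(weights, x, eta=0.1):
    """Pick the unit with the largest input drive and nudge only that
    unit's weights toward the input pattern x; returns the winner index."""
    scores = [sum(w_k * x_k for w_k, x_k in zip(w, x)) for w in weights]
    winner = max(range(len(weights)), key=lambda i: scores[i])
    weights[winner] = [w_k + eta * (x_k - w_k)
                       for w_k, x_k in zip(weights[winner], x)]
    return winner

# Two units whose weights already match two input patterns: each pattern
# recruits and further sharpens its own unit.
weights = [[1.0, 0.0], [0.0, 1.0]]
first = wta_step(weights, [1.0, 0.0])
second = wta_step(weights, [0.0, 1.0])
```

Because each unit's weights converge toward the inputs it wins, the ensemble implicitly fits a mixture-like generative model of the input distribution, which is the Expectation-Maximization connection the abstract highlights.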