39 research outputs found

    Topological Effects of Synaptic Time Dependent Plasticity

    We show that the local Spike Timing-Dependent Plasticity (STDP) rule has the effect of regulating the trans-synaptic weights of loops of any length within a simulated network of neurons. We show that depending on STDP's polarity, functional loops are formed or eliminated in networks driven to normal spiking conditions by random, partially correlated inputs, where functional loops comprise weights that exceed a non-zero threshold. We further prove that STDP is a form of loop-regulating plasticity for the case of a linear network comprising random weights drawn from certain distributions. Thus a notable local synaptic learning rule makes a specific prediction about synapses in the brain in which standard STDP is present: that under normal spiking conditions, they should participate in predominantly feed-forward connections at all scales. Our model implies that any deviations from this prediction would require a substantial modification to the hypothesized role for standard STDP. Given its widespread occurrence in the brain, we predict that STDP could also regulate long range synaptic loops among individual neurons across all brain scales, up to, and including, the scale of global brain network topology.
    Comment: 26 pages, 5 figures
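    The loop-elimination effect described above can be illustrated with the standard pairwise STDP rule (exponential potentiation/depression windows). The amplitudes, time constant, and spike timings below are illustrative assumptions, not the paper's fitted values:

```python
import math

# Illustrative pairwise STDP parameters (assumptions, not the paper's values)
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # STDP time constant (ms)

def stdp_dw(dt):
    """Weight change for one spike pair, dt = t_post - t_pre in ms."""
    if dt > 0:                  # pre fires before post: potentiate
        return A_PLUS * math.exp(-dt / TAU)
    if dt < 0:                  # post fires before pre: depress
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

# Two reciprocally connected neurons in which A reliably fires 5 ms before B:
# the forward synapse (dt = +5 ms) saturates while the reverse synapse of the
# two-neuron loop (dt = -5 ms) decays to zero, leaving a feed-forward link.
w_fwd = w_rev = 0.5
for _ in range(100):
    w_fwd = min(1.0, w_fwd + stdp_dw(+5.0))
    w_rev = max(0.0, w_rev + stdp_dw(-5.0))
```

    With correlated, consistently ordered spiking, the loop is pruned to a feed-forward connection, matching the paper's prediction for depressive polarity on reverse-timed pairs.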

    Self-referential forces are sufficient to explain different dendritic morphologies

    © 2013 Memelli, Torben-Nielsen and Kozloski. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics, etc.
    Dendritic morphology constrains brain activity, as it determines first which neuronal circuits are possible and second which dendritic computations can be performed over a neuron's inputs. It is known that a range of chemical cues can influence the final shape of dendrites during development. Here, we investigate the extent to which self-referential influences, cues generated by the neuron itself, might influence morphology. To this end, we developed a phenomenological model and algorithm to generate virtual morphologies, which are then compared to experimentally reconstructed morphologies. In the model, branching probability follows a Galton-Watson process, while the geometry is determined by "homotypic forces" exerting influence on the direction of random growth in a constrained space. We model three such homotypic forces, namely an inertial force based on membrane stiffness, a soma-oriented tropism, and a force of self-avoidance, as directional biases in the growth algorithm. With computer simulations we explored how each bias shapes neuronal morphologies. We show that based on these principles, we can generate realistic morphologies of several distinct neuronal types. We discuss the extent to which homotypic forces might influence real dendritic morphologies, and speculate about the influence of other environmental cues on neuronal shape and circuitry.
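    The growth algorithm described above can be sketched as a Galton-Watson branching process with directional biases on each segment. The bias weights, segment counts, and 2D setting are assumptions for illustration (the paper works in 3D and also includes a self-avoidance force, omitted here):

```python
import math
import random

random.seed(0)

# Hypothetical bias weights; the paper fits its parameters per neuronal type.
INERTIA  = 0.7    # membrane-stiffness bias: keep the previous direction
TROPISM  = 0.2    # soma-oriented tropism (here biased away from the soma)
NOISE    = 0.3    # random exploration
BRANCH_P = 0.1    # per-segment branching probability (Galton-Watson process)

def unit(v):
    n = math.hypot(v[0], v[1]) or 1.0
    return (v[0] / n, v[1] / n)

def grow(pos, direction, depth=0, max_depth=5, segments=None):
    """Grow one 2D branch; recursively spawn daughter branches on splits."""
    if segments is None:
        segments = []
    if depth > max_depth:
        return segments
    for _ in range(10):                       # segments per branch
        rnd = unit((random.uniform(-1, 1), random.uniform(-1, 1)))
        away = unit(pos)                      # the soma sits at the origin
        direction = unit((INERTIA * direction[0] + TROPISM * away[0] + NOISE * rnd[0],
                          INERTIA * direction[1] + TROPISM * away[1] + NOISE * rnd[1]))
        pos = (pos[0] + direction[0], pos[1] + direction[1])
        segments.append(pos)
        if random.random() < BRANCH_P:        # Galton-Watson split
            grow(pos, direction, depth + 1, max_depth, segments)
    return segments

tree = grow((0.0, 0.0), (1.0, 0.0))
```

    Varying the three weights trades off straight, soma-fleeing branches against meandering ones, which is the mechanism by which the model spans distinct morphological classes.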

    Integration of AI and mechanistic modeling in generative adversarial networks for stochastic inverse problems

    The problem of finding distributions of input parameters for deterministic mechanistic models to match distributions of model outputs to stochastic observations, i.e., the "Stochastic Inverse Problem" (SIP), encompasses a range of common tasks across a variety of scientific disciplines. Here, we demonstrate that SIP can be reformulated as a constrained optimization problem and adapted for applications in intervention studies to simultaneously infer model input parameters for two sets of observations, under control conditions and under an intervention. In the constrained optimization problem, the solution of SIP is enforced to accommodate the prior knowledge on the model input parameters and to produce outputs consistent with given observations by minimizing the divergence between the inferred distribution of input parameters and the prior. Unlike in standard SIP, the prior incorporates not only knowledge about model input parameters for objects in each set, but also information on the joint distribution or the deterministic map between the model input parameters in two sets of observations. To solve standard and intervention SIP, we employed conditional generative adversarial networks (GANs) and designed novel GANs that incorporate multiple generators and discriminators and have structures that reflect the underlying constrained optimization problems. This reformulation allows us to build computationally scalable solutions to tackle complex model input parameter inference scenarios, which appear routinely in physics, biophysics, economics and other areas, and which currently cannot be handled with existing methods.
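    A minimal toy version of the SIP setup can make the problem concrete: a deterministic model y = f(x), a distribution of observed outputs, and the task of finding input values whose pushforward matches those outputs. The paper's conditional GANs additionally match output densities while minimizing divergence from the prior; the simple rejection sampler below only recovers the feasible input region, as a sketch of the problem itself. The model f and all constants are hypothetical:

```python
import random

random.seed(1)

# Toy stochastic inverse problem: recover a distribution over inputs x
# given stochastic observations of outputs y = f(x).
def f(x):
    return x * x                 # stand-in deterministic mechanistic model

observations = [f(random.gauss(2.0, 0.2)) for _ in range(500)]  # stochastic data
prior = [random.uniform(0.0, 4.0) for _ in range(5000)]         # prior on x

TOL = 0.05                       # output-matching tolerance
posterior = [x for x in prior
             if any(abs(f(x) - y) < TOL for y in observations)]

mean_x = sum(posterior) / len(posterior)   # sits near the true input mean of 2
```

    The GAN formulation scales this idea to high-dimensional models where rejection sampling is hopeless, and picks, among all feasible input distributions, the one closest to the prior.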

    Constructing a representation of periodicity for temporal pattern recognition in a vertebrate auditory system

    The inner ear of the sound-producing fish Pollimyrus (family Mormyridae) consists of two gas-filled bubbles attached to the sacculi otolithic endorgans and shows little specialization for peripheral frequency analysis as is found in the mammalian cochlea. Therefore, questions arise regarding how sounds are represented and analyzed in the central nervous system, and specifically, how temporal pattern recognition arises at midbrain levels, where it has been observed previously. Neuronal pathway tracing reveals that the brainstem auditory pathway is organized around successive levels of processing, which include the auditory nerve, first and second order medullary nuclei, and the auditory midbrain. Physiological responses showed transformations at each successive station. Spike timing at lower levels temporally codes intervals and periods in stimuli over a remarkable range, from one second repetition rates, to 10–80 millisecond inter-click intervals, to the shortest sinusoidal periods that this animal can discriminate (i.e., <1 ms). Temporal coding is extreme and appears limited only by the shortest intervals possible between action potentials. Convergence of independently generated spike trains is likely required for high frequency temporal coding. Evidence for convergence derives from intracellular recordings and from a predictive model, which motivates and is supported by new analyses of spike time data. Finally, a class of second order chopper responses is described. These generate spike trains with predictable temporal structures that are modulated across trials, but which are not correlated to the structure of the stimulus waveform. We conclude that temporal pattern recognition, in the form of narrow band spike rate tuning to stimulus periods and intervals, is an emergent property of midbrain neurons. This conclusion is based on results reported here that broadband temporal coding predominates in both primary afferents and first order medullary neurons. Second order medullary chopper responses radically depart from temporal coding. Instead, patterned spiking in these neurons codes stimulus repetition number over long durations (>20 sec). All results are discussed within the neuroethological context of the communication behaviors that this anatomy and physiology must serve.
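    Phase locking of the kind reported above, spike timing that codes stimulus periods down to <1 ms, is conventionally quantified with vector strength; using that measure here, with a 10 ms inter-click interval and 0.5 ms jitter, is an illustrative assumption rather than the dissertation's own analysis:

```python
import math
import random

random.seed(2)

def vector_strength(spike_times, period):
    """1.0 = spikes perfectly phase-locked to `period`; ~0.0 = unrelated."""
    phases = [2.0 * math.pi * (t % period) / period for t in spike_times]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

# Spikes locked to a 10 ms inter-click interval with 0.5 ms timing jitter
# lock strongly at the true period and weakly at an unrelated 7 ms period.
spikes = [10.0 * k + random.gauss(0.0, 0.5) for k in range(200)]
vs_true = vector_strength(spikes, 10.0)
vs_off  = vector_strength(spikes, 7.0)
```

    Jitter on the order of the refractory period is what ultimately bounds such timing codes, consistent with the conclusion that temporal coding is limited only by the shortest achievable inter-spike intervals.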
