    Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?

    Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions; the functional implications of their interaction, however, remain unclear. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling, which may have important functional implications for general cortical processing.
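    The learning scheme described in this abstract can be illustrated with a minimal sketch: a two-unit circuit trained on Poisson-noisy input patterns, with a softmax over log-likelihoods standing in for lateral inhibition and a constant-weight-sum constraint standing in for synaptic scaling. All names and parameter values below are illustrative assumptions, not taken from the paper.

```python
import math
import random

random.seed(0)

# Two toy input "patterns" over 4 afferents (mean firing rates, both sum to 11).
patterns = [[5.0, 5.0, 0.5, 0.5], [0.5, 0.5, 5.0, 5.0]]

def poisson(lam):
    # Knuth's method for drawing a Poisson-distributed count.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Two competing units with near-uniform, slightly jittered weights.
W = [[1.0 + 0.1 * random.random() for _ in range(4)] for _ in range(2)]
eta = 0.05

for _ in range(2000):
    x = [poisson(rate) for rate in random.choice(patterns)]
    # Softmax over Poisson log-likelihoods: a stand-in for lateral inhibition.
    scores = [sum(xi * math.log(w) for xi, w in zip(x, row)) for row in W]
    m = max(scores)
    post = [math.exp(s - m) for s in scores]
    Z = sum(post)
    post = [p / Z for p in post]
    for c in range(2):
        # Hebbian update (presynaptic activity times postsynaptic response) ...
        row = [w + eta * post[c] * xi for w, xi in zip(W[c], x)]
        # ... followed by synaptic scaling: the total synaptic weight is held fixed.
        total = sum(row)
        W[c] = [11.0 * w / total for w in row]
```

    After training, the combination of Hebbian growth and scaling leaves each unit with a weight vector proportional to one pattern's rates, mirroring the maximum likelihood solution of the normalized mixture model.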

    Truncated Variational EM for Semi-Supervised Neural Simpletrons

    Inference and learning for probabilistic generative networks are often very challenging and typically prevent scaling to networks as large as those used for deep discriminative approaches. To obtain efficiently trainable, large-scale and well-performing generative networks for semi-supervised learning, we here combine two recent developments: a neural network reformulation of hierarchical Poisson mixtures (Neural Simpletrons), and a novel truncated variational EM approach (TV-EM). TV-EM provides theoretical guarantees for learning in generative networks, and its application to Neural Simpletrons results in particularly compact, yet approximately optimal, modifications of the learning equations. Applied to standard benchmarks, we empirically find that learning converges in fewer EM iterations, that the complexity per EM iteration is reduced, and that final likelihood values are higher on average. For classification on data sets with few labels, these learning improvements result in consistently lower error rates compared to applications without truncation. Experiments on the MNIST data set allow for comparison to standard and state-of-the-art models in the semi-supervised setting. Further experiments on the NIST SD19 data set show the scalability of the approach when a wealth of additional unlabeled data is available.
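    The core of a truncated variational E-step can be sketched in a few lines: the posterior is restricted to the K hidden states with the highest joint probability and renormalized over that subset. Function and variable names below are illustrative, not the paper's.

```python
import math

def truncated_posterior(log_joints, K):
    """Truncated variational E-step (sketch): keep the K states with the
    highest log joint probability, zero out the rest, and renormalize."""
    top = sorted(range(len(log_joints)), key=lambda i: log_joints[i], reverse=True)[:K]
    m = max(log_joints[i] for i in top)        # log-sum-exp stabilization
    unnorm = {i: math.exp(log_joints[i] - m) for i in top}
    Z = sum(unnorm.values())
    q = [0.0] * len(log_joints)
    for i, v in unnorm.items():
        q[i] = v / Z
    return q

# Only the two most probable of four states retain posterior mass.
q = truncated_posterior([-1.0, -2.0, -10.0, -0.5], K=2)
```

    Because the relevant sums over hidden states then run over K terms instead of all of them, the cost per EM iteration drops, which is one source of the efficiency gains reported in the abstract.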

    Feed-forward Inhibitory Circuits in Hippocampus and Their Computational Role in Fragile X Syndrome

    Feed-forward inhibitory (FFI) circuits are canonical neural microcircuits. They are unique in that they are composed of excitation rapidly followed by time-locked inhibition. This sequence provides a powerful computational tool but also poses a challenge for the analysis and study of these circuits. In this work, the mechanisms and computations of two hippocampal FFI circuits are examined. Specifically, the modulation of the synaptic strength of the excitation and the inhibition is studied during constant-frequency and naturalistic stimulus patterns to reveal how FFI circuit properties and operations are dynamically modulated during ongoing activity. In the first part, FFI circuit dysfunction is explored in the mouse model of Fragile X syndrome, the leading genetic cause of autism. The balance between excitation and inhibition is found to be markedly abnormal in the Fmr1 KO mouse, causing the FFI circuit to fail at spike modulation tasks. The mechanisms underlying this dysfunction are explored, and a critical role of presynaptic GABAB receptors is revealed: excessive presynaptic GABAB receptor signaling is found to suppress GABA release in a subset of hippocampal interneurons, leading to excitation/inhibition imbalance. In the second part, the dynamic changes during input bursts are explored both experimentally and in a simulated circuit. Because of the short-term synaptic plasticity of individual circuit components, the burst is found to play an important role in modulating the precision of output cell spiking. The role of the dynamic balance of excitation and inhibition during bursts in output spiking precision is further explored. Overall, the balance of excitation and inhibition is found to be critical for FFI circuit performance.
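    The characteristic excitation-then-inhibition sequence can be sketched with a simple rate-based toy model: an EPSP followed by a delayed, time-locked IPSP narrows the window during which the cell can spike, and weakening the inhibition (as in the E/I imbalance described above) widens it again. Time constants and amplitudes below are arbitrary illustrative choices, not values from this work.

```python
import math

def alpha(t, tau):
    # Alpha-function synaptic waveform; zero before onset, peak value 1 at t = tau.
    return (t / tau) * math.exp(1.0 - t / tau) if t > 0 else 0.0

def membrane(t, g_inh, delay=2.0, tau_e=3.0, tau_i=5.0):
    """Net depolarization: an EPSP followed after `delay` ms by a
    time-locked IPSP of relative strength g_inh."""
    return alpha(t, tau_e) - g_inh * alpha(t - delay, tau_i)

def spike_window(g_inh, threshold=0.5, dt=0.05, T=30.0):
    """Total time (ms) the membrane spends above the spike threshold."""
    return sum(dt for k in range(int(T / dt)) if membrane(k * dt, g_inh) > threshold)
```

    With intact inhibition (g_inh = 1.0) the suprathreshold window lasts only a few milliseconds; removing inhibition (g_inh = 0.0) substantially lengthens it, the qualitative signature of the spike-timing imbalance discussed above.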

    Learning of chunking sequences in cognition and behavior

    We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks, but the dynamical principles of how this is achieved remain unknown. Here, we study the temporal dynamics of chunking for learning cognitive sequences in a chunking representation using a dynamical model of competing modes arranged to evoke hierarchical Winnerless Competition (WLC) dynamics. Sequential memory is represented as trajectories along a chain of metastable fixed points at each level of the hierarchy, and bistable Hebbian dynamics enables the learning of such trajectories in an unsupervised fashion. Using computer simulations, we demonstrate the learning of a chunking representation of sequences and their robust recall. During learning, the dynamics associates a set of modes to each information-carrying item in the sequence and encodes their relative order. During recall, hierarchical WLC guarantees the robustness of the sequence order when the sequence is not too long. The resulting patterns of activities share several features observed in behavioral experiments, such as the pauses between boundaries of chunks, their size and their duration. Failures in learning chunking sequences provide new insights into the dynamical causes of neurological disorders such as Parkinson's disease and schizophrenia.
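    Winnerless competition of the kind used in this model can be reproduced with a small generalized Lotka-Volterra network with asymmetric lateral inhibition, which yields sequential switching of winners along a heteroclinic chain of metastable states. The connectivity values below are illustrative choices satisfying the standard stability conditions for such cycles, not parameters from the paper.

```python
# Three competing modes with asymmetric inhibition (May-Leonard-type network):
# rho[i][j] is the strength with which mode j inhibits mode i.
rho = [[1.0, 1.8, 0.5],
       [0.5, 1.0, 1.8],
       [1.8, 0.5, 1.0]]
N = len(rho)

def step(a, dt=0.01, floor=1e-9):
    # Euler step of da_i/dt = a_i * (1 - sum_j rho[i][j] * a_j);
    # the floor plays the role of a small noise level keeping the cycle alive.
    return [max(floor, ai + dt * ai * (1.0 - sum(rho[i][j] * a[j] for j in range(N))))
            for i, ai in enumerate(a)]

a = [0.6, 0.3, 0.1]
winners = []                      # sequence of currently dominant modes
for _ in range(150000):
    a = step(a)
    w = max(range(N), key=lambda i: a[i])
    if not winners or winners[-1] != w:
        winners.append(w)
```

    The dominant mode switches in a fixed cyclic order (0 → 1 → 2 → 0 → ...); each metastable episode plays the role of one information-carrying item in the sequence.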

    Randomly weighted receptor inputs can explain the large diversity of colour-coding neurons in the bee visual system

    True colour vision requires comparing the responses of different spectral classes of photoreceptors. In insects, there is a wealth of data available on the physiology of photoreceptors and on colour-dependent behaviour, but less is known about the neural mechanisms that link the two. The available information in bees indicates a diversity of colour opponent neurons in the visual optic ganglia that significantly exceeds that known in humans and other primates. Here, we present a simple mathematical model for colour processing in the optic lobes of bees to explore how this diversity might arise. We found that the model can reproduce the physiological spectral tuning curves of the 22 neurons that have been described so far. Moreover, the distribution of the presynaptic weights in the model suggests that colour-coding neurons are likely to be wired up to the receptor inputs randomly. The perceptual distances in our random synaptic weight model are in agreement with behavioural observations. Our results support the idea that the insect nervous system might adopt partially random wiring of neurons for colour processing.
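    The random-wiring idea lends itself to a compact sketch: model the three bee photoreceptor classes with Gaussian spectral sensitivities and build colour-coding units as random signed combinations of them. Peak wavelengths are approximate, and all weights, bandwidths and names below are illustrative assumptions, not the paper's model.

```python
import math
import random

random.seed(1)

# Gaussian stand-ins for the bee's UV, blue and green receptor sensitivities
# (approximate peak wavelengths in nm; the bandwidth is an arbitrary choice).
peaks, width = [344.0, 436.0, 544.0], 40.0
wavelengths = [float(nm) for nm in range(300, 651, 10)]

def receptor_responses(nm):
    return [math.exp(-((nm - p) / width) ** 2) for p in peaks]

def random_opponent_unit():
    """A hypothetical colour-coding neuron: a random signed weighting
    of the three receptor classes."""
    w = [random.gauss(0.0, 1.0) for _ in peaks]
    return [sum(wi * r for wi, r in zip(w, receptor_responses(nm)))
            for nm in wavelengths]

# 22 random units, matching the number of physiologically described neurons.
tunings = [random_opponent_unit() for _ in range(22)]
```

    Even this crude model produces a wide variety of tuning-curve shapes, including spectrally opponent ones whose responses change sign across wavelength, which is the qualitative point of the random-wiring hypothesis.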