
    Systems level circuit model of C. elegans undulatory locomotion: mathematical modeling and molecular genetics

    To establish the relationship between locomotory behavior and the dynamics of neural circuits in the nematode C. elegans, we combined molecular and theoretical approaches. In particular, we quantitatively analyzed the motion of C. elegans with defective synaptic GABA and acetylcholine transmission, defective muscle calcium signaling, and defective muscle and cuticle structures, and compared the data with our systems-level circuit model. The major experimental findings are: (i) anterior-to-posterior gradients of body bending flex for almost all strains, both for forward and backward motion, and, for neuronal mutants, analogous weak gradients of undulatory frequency; (ii) existence of some form of neuromuscular (stretch receptor) feedback; (iii) invariance of the neuromuscular wavelength; (iv) biphasic dependence of frequency on synaptic signaling; and (v) decrease of frequency with increase of the muscle time constant. Based on (i), we hypothesize that the Central Pattern Generator (CPG) is located in the head for both forward and backward motion. Points (i) and (ii) are the starting assumptions for our theoretical model, whose dynamical patterns are qualitatively insensitive to the details of the CPG design provided that stretch receptor feedback is sufficiently strong and slow. The model reveals that stretch receptor coupling in the body wall is critical for generation of the neuromuscular wave. Our model agrees with our behavioral data (iii), (iv), and (v), and with other pertinent published data, e.g., that frequency is an increasing function of muscle gap-junction coupling. Comment: Neural control of C. elegans motion with genetic perturbation

    A statistical method for revealing form-function relations in biological networks

    Over the past decade, a number of researchers in systems biology have sought to relate the function of biological systems to their network-level descriptions -- lists of the most important players and the pairwise interactions between them. Both for large networks (in which statistical analysis is often framed in terms of the abundance of repeated small subgraphs) and for small networks which can be analyzed in greater detail (or even synthesized in vivo and subjected to experiment), revealing the relationship between the topology of small subgraphs and their biological function has been a central goal. We here seek to pose this revelation as a statistical task, illustrated using a particular setup which has been constructed experimentally and for which parameterized models of transcriptional regulation have been studied extensively. The question "how does function follow form" is here mathematized by identifying which topological attributes correlate with the diverse possible information-processing tasks which a transcriptional regulatory network can realize. The resulting method reveals one form-function relationship which had earlier been predicted based on analytic results, and reveals a second for which we can provide an analytic interpretation. Resulting source code is distributed via http://formfunction.sourceforge.net. Comment: To appear in Proc. Natl. Acad. Sci. USA. 17 pages, 9 figures, 2 tables

    Intrinsically-generated fluctuating activity in excitatory-inhibitory networks

    Recurrent networks of non-linear units display a variety of dynamical regimes depending on the structure of their synaptic connectivity. A particularly remarkable phenomenon is the appearance of strongly fluctuating, chaotic activity in networks of deterministic, but randomly connected, rate units. How this type of intrinsically generated fluctuation appears in more realistic networks of spiking neurons has been a long-standing question. To ease the comparison between rate and spiking networks, recent works investigated the dynamical regimes of randomly-connected rate networks with segregated excitatory and inhibitory populations, and firing rates constrained to be positive. These works derived general dynamical mean-field (DMF) equations describing the fluctuating dynamics, but solved these equations only in the case of purely inhibitory networks. Using a simplified excitatory-inhibitory architecture in which the DMF equations are more easily tractable, here we show that the presence of excitation qualitatively modifies the fluctuating activity compared to purely inhibitory networks. In the presence of excitation, intrinsically generated fluctuations induce a strong increase in mean firing rates, a phenomenon that is much weaker in purely inhibitory networks. Excitation moreover induces two different fluctuating regimes: for moderate overall coupling, recurrent inhibition is sufficient to stabilize fluctuations; for strong coupling, firing rates are stabilized solely by the upper bound imposed on activity, even if inhibition is stronger than excitation. These results extend to more general network architectures, and to rate networks receiving noisy inputs mimicking spiking activity. Finally, we show that signatures of the second dynamical regime appear in networks of integrate-and-fire neurons.
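    The bounded-rate setup described in this abstract can be sketched in a few lines: a toy randomly connected network with segregated excitatory and inhibitory populations and non-negative, saturating rates. All sizes and coupling values below are illustrative assumptions, not the parameters analyzed in the paper; with this particular coupling the net drive happens to be excitation-dominated, so rates end up held down largely by the upper bound, loosely mirroring the strong-coupling regime.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rate network with segregated excitatory/inhibitory columns (Dale's law).
# All sizes and coupling strengths are illustrative assumptions.
N_E, N_I = 80, 20
N = N_E + N_I
g = 5.0                                    # overall coupling strength (assumed)
J = np.abs(rng.normal(0.0, g / np.sqrt(N), (N, N)))
J[:, N_E:] *= -1.0                         # inhibitory columns carry negative weights

def phi(x):
    """Firing-rate transfer function: positive and bounded above."""
    return np.clip(x, 0.0, 1.0)

dt, T = 0.1, 2000
x = rng.normal(0.0, 1.0, N)                # synaptic input variables
rates = np.empty((T, N))
for t in range(T):
    x += dt * (-x + J @ phi(x))            # standard rate dynamics
    rates[t] = phi(x)
```

Because the transfer function is bounded, the rates stay in [0, 1] no matter how strong the coupling, which is the ingredient the abstract's strong-coupling regime relies on.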

    A three-threshold learning rule approaches the maximal capacity of recurrent neural networks

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model has a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. Comment: 24 pages, 10 figures, to be published in PLOS Computational Biology
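    The three-threshold rule itself is simple enough to state in code. The sketch below is a minimal reading of the rule as described in the abstract; the threshold values and learning-rate step are invented placeholders, not the parameters used in the paper.

```python
# Minimal sketch of the three-threshold plasticity rule.
# Threshold values and learning-rate step are illustrative assumptions.
THETA_LOW, THETA_MID, THETA_HIGH = -1.0, 0.0, 1.0
ETA = 0.1  # plasticity step size (assumed)

def delta_w(local_field, input_active):
    """Weight change for one synapse whose presynaptic input is active.

    Above the highest threshold and below the lowest threshold no
    plasticity occurs; in between, the synapse is potentiated if the
    local field exceeds the intermediate threshold and depressed otherwise.
    """
    if not input_active:
        return 0.0
    if local_field >= THETA_HIGH or local_field <= THETA_LOW:
        return 0.0
    return ETA if local_field > THETA_MID else -ETA
```

Note that the no-plasticity zones at the extremes are what make the rule stop modifying synapses once a pattern is stored robustly, which is how it avoids needing an explicit error signal.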

    Subtractive, divisive and non-monotonic gain control in feedforward nets linearized by noise and delays

    The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry, also known as "open-loop feedback", which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise was very low in the network. It was also possible to change from divisive to non-monotonic gain control simply by modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
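    The three regimes can be illustrated with toy f-I curves: subtractive inhibition shifts the curve rightward without changing its slope, divisive inhibition compresses the slope, and a non-monotonic curve rises and then falls with input. These closed forms are illustrative stand-ins with assumed parameters, not the curves derived from the spiking model in the paper.

```python
import numpy as np

I = np.linspace(0.0, 10.0, 101)                   # input drive (arbitrary units)

# Toy closed-form f-I curves for the three regimes (parameters assumed).
subtractive = np.maximum(0.0, I - 3.0)            # same slope, shifted threshold
divisive = np.maximum(0.0, I) / (1.0 + 0.5 * I)   # slope compressed by inhibition
non_monotonic = I * np.exp(-0.4 * I)              # output rises, then falls
```

The qualitative signatures are easy to read off: the subtractive curve is zero up to its threshold and then rises with unit slope, the divisive curve increases monotonically but saturates, and the non-monotonic curve has an interior peak.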

    Fundamental activity constraints lead to specific interpretations of the connectome

    The continuous integration of experimental data into coherent models of the brain is an increasing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. We here employ a mean-field reduction of the dynamics, which allows us to include activity constraints into the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function. Comment: J. Schuecker and M. Schmidt contributed equally to this work
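    The core of such a mean-field reduction, for any neuron model with a known gain function, is a fixed-point equation for the rates plus a linear stability check. The toy two-population sketch below (weights, external drive, and the tanh gain are all assumed for illustration, not taken from the macaque model) exhibits the two fundamental constraints named in the abstract: the fixed point is non-quiescent and linearly stable.

```python
import numpy as np

# Toy two-population mean-field model; weights, drive and gain are assumptions.
W = np.array([[0.4, -0.8],
              [0.6, -0.5]])           # excitatory / inhibitory coupling
I_ext = np.array([1.0, 0.8])          # external drive
phi = np.tanh                         # gain function (any known gain works)

r = np.zeros(2)
for _ in range(500):                  # fixed-point iteration r = phi(W r + I)
    r = phi(W @ r + I_ext)

# Linear stability of the discrete-time map: eigenvalues of phi'(u) * W
# must lie inside the unit circle.
u = W @ r + I_ext
slope = 1.0 - np.tanh(u) ** 2         # derivative of tanh at the fixed point
jacobian = slope[:, None] * W
stable = np.max(np.abs(np.linalg.eigvals(jacobian))) < 1.0
```

In the paper's procedure the analogue of this check is run while the connectivity is refined; here the point is only that both constraints reduce to cheap computations once a gain function is known.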

    Dopamine-modulated dynamic cell assemblies generated by the GABAergic striatal microcircuit

    The striatum, the principal input structure of the basal ganglia, is crucial to both motor control and learning. It receives convergent input from all over the neocortex, hippocampal formation, amygdala, and thalamus, and is the primary recipient of dopamine in the brain. Within the striatum is a GABAergic microcircuit that acts upon these inputs, formed by the dominant medium-spiny projection neurons (MSNs) and fast-spiking interneurons (FSIs). There has been little progress in understanding the computations it performs, hampered by the non-laminar structure that prevents identification of a repeating canonical microcircuit. We here begin the identification of potential dynamically-defined computational elements within the striatum. We construct a new three-dimensional model of the striatal microcircuit's connectivity, and instantiate this with our dopamine-modulated neuron models of the MSNs and FSIs. A new model of gap junctions between the FSIs is introduced and tuned to experimental data. We introduce a novel multiple spike-train analysis method, and apply this to the outputs of the model to find groups of synchronised neurons at multiple time-scales. We find that, with realistic in vivo background input, small assemblies of synchronised MSNs spontaneously appear, consistent with experimental observations, and that the number of assemblies and the time-scale of synchronisation are strongly dependent on the simulated concentration of dopamine. We also show that feed-forward inhibition from the FSIs counter-intuitively increases the firing rate of the MSNs. Such small cell assemblies forming spontaneously only in the absence of dopamine may contribute to the motor control problems seen in humans and animals following a loss of dopamine cells.

    Significance of Input Correlations in Striatal Function

    The striatum is the main input station of the basal ganglia and is strongly associated with motor and cognitive functions. Anatomical evidence suggests that individual striatal neurons are unlikely to share their inputs from the cortex. Using a biologically realistic large-scale network model of the striatum and cortico-striatal projections, we provide a functional interpretation of the special anatomical structure of these projections. Specifically, we show that weak pairwise correlation within the pool of inputs to individual striatal neurons enhances the saliency of signal representation in the striatum. By contrast, correlations among the input pools of different striatal neurons render the signal representation less distinct from background activity. We suggest that, for the network architecture of the striatum, there is a preferred cortico-striatal input configuration for optimal signal representation. This representation is further enhanced by the low-rate asynchronous background activity in the striatum, supported by the balance between feedforward and feedback inhibition in the striatal network. Thus, an appropriate combination of rates and correlations in the striatal input sets the stage for the action selection presumably implemented in the basal ganglia.
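    One standard way to generate an input pool with a prescribed weak pairwise correlation is to thin a common "mother" Poisson process. The sketch below uses that generic construction; the rate, correlation value, and bin size are illustrative assumptions, not the parameters of the striatal model.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_poisson(n_trains, rate_hz, c, duration_s, dt=1e-3):
    """Binary spike matrix whose trains have pairwise count correlation ~ c.

    Each train copies spikes from a common mother process (rate r/c)
    independently with probability c, so every train fires at rate r and
    any pair shares a fraction c of its spikes on average.
    """
    n_bins = int(duration_s / dt)
    p = rate_hz * dt                        # spike probability per bin
    mother = rng.random(n_bins) < p / c     # common source process
    spikes = np.zeros((n_trains, n_bins), dtype=bool)
    for i in range(n_trains):
        spikes[i] = mother & (rng.random(n_bins) < c)
    return spikes
```

For example, `correlated_poisson(50, 10.0, 0.1, 10.0)` yields fifty 10 Hz trains whose average pairwise count correlation is close to 0.1, i.e., a weakly correlated input pool of the kind contrasted with across-pool correlations in the abstract.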

    Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface

    Experiments show that spike-triggered stimulation performed with Bidirectional Brain-Computer Interfaces (BBCI) can artificially strengthen connections between separate neural sites in motor cortex (MC). What are the neuronal mechanisms responsible for these changes, and how does targeted stimulation by a BBCI shape population-level synaptic connectivity? The present work describes a recurrent neural network model with probabilistic spiking mechanisms and plastic synapses capable of capturing both neural and synaptic activity statistics relevant to BBCI conditioning protocols. When spikes from a neuron recorded at one MC site trigger stimuli at a second target site after a fixed delay, the connections between sites are strengthened for spike-stimulus delays consistent with experimentally derived spike-timing-dependent plasticity (STDP) rules. However, the relationship between STDP mechanisms at the network level and their modification by neural implants remains poorly understood. Using our model, we successfully reproduce key experimental results and use analytical derivations, along with novel experimental data, to derive optimal operational regimes for BBCIs and to formulate predictions concerning the efficacy of spike-triggered stimulation in different regimes of cortical activity. Comment: 35 pages, 9 figures
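    The STDP window underlying such delay-dependent strengthening is commonly written as a pair of exponentials. The sketch below uses that textbook form with assumed amplitudes and time constants, not the experimentally derived parameters referred to in the abstract.

```python
import numpy as np

# Textbook exponential STDP window; amplitudes and time constants assumed.
A_PLUS, A_MINUS = 1.0, 0.5
TAU_PLUS, TAU_MINUS = 20.0, 20.0       # ms

def stdp(dt_ms):
    """Weight change for a post-minus-pre spike lag of dt_ms milliseconds."""
    if dt_ms > 0:                      # pre before post: potentiation
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    if dt_ms < 0:                      # post before pre: depression
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)
    return 0.0
```

In the BBCI setting the spike-stimulus delay loosely plays the role of dt_ms: a recorded spike that triggers a stimulus at the target site after a short positive lag lands on the potentiation side of the window, which is one way to read the delay dependence reported in the experiments.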

    A low-dimensional model of binocular rivalry using winnerless competition

    Copyright © 2010 Elsevier. NOTICE: this is the author’s version of a work that was accepted for publication in Physica D: Nonlinear Phenomena. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Physica D: Nonlinear Phenomena Vol. 239 (2010), DOI: 10.1016/j.physd.2009.06.018

    Notes: The article presents a novel biologically-inspired mathematical model of perceptual instability in binocular rivalry. I took part in the development of the model in relation to extant models of binocular rivalry, and wrote the introduction and the discussion sections of the paper. Peter Ashwin ran the simulations and wrote the sections of the paper that present the model in mathematical formalism, the results from simulations, and the related mathematical proofs.

    We discuss a novel minimal model for binocular rivalry (and, more generally, perceptual dominance) effects. The model has only three state variables, but nonetheless exhibits a wide range of input- and noise-dependent switching. The model has two reciprocally inhibiting input variables that represent perceptual processes active during the recognition of one of the two possible states, and a third variable that represents the perceived output. Sensory inputs only affect the input variables. We observe, for rivalry-inducing inputs, the appearance of winnerless competition in the perceptual system. This gives rise to behaviour that conforms to well-known principles describing binocular rivalry (the Levelt propositions, in particular proposition IV: monotonic response of residence time as a function of image contrast) down to very low levels of stimulus intensity.