
    How Spinal Neural Networks Reduce Discrepancies between Motor Intention and Motor Realization

    This paper attempts a rational, step-by-step reconstruction of many aspects of the mammalian neural circuitry known to be involved in the spinal cord's regulation of opposing muscles acting on skeletal segments. Mathematical analyses and local circuit simulations based on neural membrane equations are used to clarify the behavioral function of five fundamental cell types, their complex connectivities, and their physiological actions. These cell types are: α-MNs, γ-MNs, IaINs, IbINs, and Renshaw cells. It is shown that many of the complexities of spinal circuitry are necessary to ensure near-invariant realization of motor intentions when descending signals of two basic types independently vary over large ranges of magnitude and rate of change. Because these two types of signal afford independent control, or Factorization, of muscle LEngth and muscle TEnsion, our construction was named the FLETE model (Bullock and Grossberg, 1988b, 1989). The present paper significantly extends the range of experimental data encompassed by this evolving model. Funding: National Science Foundation (IRI-87-16960, IRI-90-24877); Instituto Tecnológico y de Estudios Superiores de Monterrey.
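
    The core property the abstract emphasizes, independent ("factorized") control of limb position and of co-contraction, can be illustrated with a toy opponent-muscle model. The sketch below is not the FLETE circuit; it only demonstrates the input/output property that circuit is built to preserve, and every equation and parameter value in it is invented for illustration.

```python
# Toy illustration of "factorized" control of position (length) and
# co-contraction (tension) with an opponent muscle pair.  This is NOT the
# FLETE circuit; it only shows the property the model is built to preserve:
# the difference channel sets the joint angle, the sum channel sets total
# tension, and the two can be varied independently.
import numpy as np

def settle(position_cmd, cocontraction_cmd, dt=0.01, steps=2000):
    """Relax a first-order joint model to equilibrium; return (angle, tension)."""
    a_flexor   = max(cocontraction_cmd + position_cmd, 0.0)  # opponent
    a_extensor = max(cocontraction_cmd - position_cmd, 0.0)  # activations
    x = 0.0
    for _ in range(steps):
        # joint angle driven by the difference of the opposing muscle pulls
        x += dt * (-x + (a_flexor - a_extensor))
    tension = a_flexor + a_extensor          # co-contraction level
    return x, tension

for c in [0.5, 1.0, 2.0, 4.0]:               # widely varying tension command
    angle, tension = settle(position_cmd=0.3, cocontraction_cmd=c)
    print(f"co-contraction {c:.1f}: angle {angle:.3f}, tension {tension:.2f}")
# The angle stays ~0.6 while the tension scales with the co-contraction command.
```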

    Deterministic networks for probabilistic computing

    Neural-network models of high-level brain functions such as memory recall and reasoning often rely on the presence of stochasticity. Most of these models assume that each neuron in the functional network is equipped with its own private source of randomness, often in the form of uncorrelated external noise. However, both in vivo and in silico, the number of noise sources is limited due to space and bandwidth constraints. Hence, neurons in large networks usually need to share noise sources. Here, we show that the resulting shared-noise correlations can significantly impair the performance of stochastic network models. We demonstrate that this problem can be overcome by using deterministic recurrent neural networks as sources of uncorrelated noise, exploiting the decorrelating effect of inhibitory feedback. Consequently, even a single recurrent network of a few hundred neurons can serve as a natural noise source for large ensembles of functional networks, each comprising thousands of units. We successfully apply the proposed framework to a diverse set of binary-unit networks with different dimensionalities and entropies, as well as to a network reproducing handwritten digits with distinct predefined frequencies. Finally, we show that the same design transfers to functional networks of spiking neurons. Comment: 22 pages, 11 figures.
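
    A rough sketch of the central idea, under simplifying assumptions: instead of the paper's spiking networks with inhibitory feedback, a deterministic chaotic rate network is simulated, its unit activities are thresholded into binary signals, and the pairwise correlations of those signals are measured to see whether they could stand in for private noise sources.

```python
# A minimal sketch (not the paper's spiking implementation) of using a
# deterministic recurrent network as a shared "noise" source: simulate a
# chaotic random rate network, threshold its units into binary signals, and
# check how correlated those signals are.
import numpy as np

rng = np.random.default_rng(0)
N, g, dt, T = 300, 1.6, 0.05, 5000           # network size, gain, step, steps
J = rng.normal(0, g / np.sqrt(N), (N, N))    # random recurrent coupling
x = rng.normal(0, 1, N)                      # deterministic initial state

states = np.empty((T, N), dtype=np.int8)
for t in range(T):
    x += dt * (-x + J @ np.tanh(x))          # standard chaotic rate dynamics
    states[t] = (x > 0).astype(np.int8)      # binary "noise" channels

# drop a transient, then measure pairwise correlations of the channels
z = states[1000:].astype(float)
z -= z.mean(axis=0)
c = (z.T @ z) / len(z)
sd = np.sqrt(np.diag(c))
active = sd > 0                              # ignore units that never switch
corr = c[np.ix_(active, active)] / np.outer(sd[active], sd[active])
off_diag = corr[~np.eye(active.sum(), dtype=bool)]
print(f"mean |pairwise correlation| of the noise channels: "
      f"{np.abs(off_diag).mean():.3f}")
# Each column of `states` could then be handed to one unit of a functional
# network in place of a private stochastic input.
```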

    Acetylcholine neuromodulation in normal and abnormal learning and memory: vigilance control in waking, sleep, autism, amnesia, and Alzheimer's disease

    This article provides a unified mechanistic neural explanation of how learning, recognition, and cognition break down during Alzheimer's disease, medial temporal amnesia, and autism. It also clarifies why there are often sleep disturbances during these disorders. A key mechanism is how acetylcholine modulates vigilance control in cortical layer

    Neural Dynamics of Learning and Performance of Fixed Sequences: Latency Pattern Reorganizations and the N-STREAMS Model

    Fixed sequences performed from memory play a key role in human cultural behavior, especially in music and in rapid communication through speaking, handwriting, and typing. Upon first performance, fixed sequences are often produced slowly, but extensive practice leads to performance that is both fluid and as rapid as allowed by constraints inherent in the task or the performer. The experimental study of fixed sequence learning and production has generated a large database with some challenging findings, including practice-related reorganizations of temporal properties of performance. In this paper, we analyze this literature and identify a coherent set of robust experimental effects. Among these are the sequence length effect on latency (a dependence of reaction time on sequence length) and the practice-dependent loss of this length effect. We then introduce a neural network architecture capable of explaining these effects. Called the N-STREAMS model, this multi-module architecture embodies the hypothesis that the brain uses several substrates for serial order representation and learning. The theory describes three such substrates and how learning autonomously modifies their interaction over the course of practice. A key feature of the architecture is the cooperation of a 'competitive queuing' performance mechanism with both fundamentally parallel ('priority-tagged') and fundamentally sequential ('chain-like') representations of serial order. A neurobiological interpretation of the architecture suggests how different parts of the brain divide the labor for serial learning and performance. Rhodes (1999) presents a complete mathematical model as an implementation of the architecture and reports successful simulations of the major experimental effects. That work also highlights how the network mechanisms incorporated in the architecture compare and contrast with earlier substrates proposed for competitive queuing, priority tagging, and response chaining. Funding: Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-92-J-1309, N00014-93-1-1364, N00014-95-1-0409); National Institutes of Health (R01 DC02852).
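
    One ingredient of the architecture, the 'competitive queuing' performance mechanism, is easy to sketch in isolation: serial order is held in parallel as a gradient of activation strengths, and output is produced by repeatedly selecting the most active item and then suppressing it. The items and priority values below are made up for illustration, and the sketch omits the learning and chaining substrates entirely.

```python
# A toy "competitive queuing" readout: a parallel primacy gradient of
# activations encodes serial order; performance repeatedly picks the most
# active item (winner-take-all) and then self-inhibits it.
import numpy as np

items = ["D", "A", "C", "B"]
plan = np.array([0.4, 1.0, 0.6, 0.8])   # priority tags: higher = produced earlier

def competitive_queuing(plan, items):
    plan = plan.copy()
    produced = []
    while np.any(plan > 0):
        winner = int(np.argmax(plan))    # choice field selects the strongest item
        produced.append(items[winner])
        plan[winner] = 0.0               # suppress the produced item
    return produced

print(competitive_queuing(plan, items))  # -> ['A', 'B', 'C', 'D']
```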

    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contribute in a fundamental manner to brain function. Progress has been slow in understanding and exploiting the computational power of recurrent dynamics for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work robustly in recurrent networks. Here we address both of these problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable neural patterns of activity that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
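
    A minimal sketch of the training regime the abstract describes, under simplifying assumptions: a random rate network in the chaotic regime first generates an "innate" target trajectory, and a recursive-least-squares rule then adjusts the recurrent weights so that the same trajectory is reproduced despite noise and perturbed initial conditions. The shared inverse-correlation matrix and full connectivity are simplifications relative to the published implementation.

```python
# Simplified "taming chaos" sketch: record an innate trajectory from a chaotic
# rate network, then train the recurrent weights with recursive least squares
# (RLS) so that the trajectory is reproduced even under noise.
import numpy as np

rng = np.random.default_rng(1)
N, g, dt, T = 200, 1.5, 0.1, 300            # units, gain, time step, trajectory length
W = rng.normal(0, g / np.sqrt(N), (N, N))   # chaotic regime for g > 1

def run(W, x0, noise=0.0):
    """Simulate the rate network from state x0 and return the firing rates."""
    x = x0.copy()
    rates = np.empty((T, N))
    for t in range(T):
        rates[t] = np.tanh(x)
        x += dt * (-x + W @ rates[t]) + noise * rng.normal(0, 1, N)
    return rates

x0 = rng.normal(0, 0.5, N)
innate = run(W, x0)                         # record the "innate" target trajectory

P = np.eye(N)                               # shared inverse-correlation estimate
for trial in range(20):                     # noisy training trials
    x = x0.copy()
    for t in range(T):
        r = np.tanh(x)
        err = r - innate[t]                 # deviation from the innate rates
        if t % 2 == 0:                      # RLS update every other step
            Pr = P @ r
            k = Pr / (1.0 + r @ Pr)         # RLS gain
            P -= np.outer(k, Pr)
            W -= np.outer(err, k)           # nudge each unit's incoming weights
        x += dt * (-x + W @ r) + 0.05 * rng.normal(0, 1, N)

test = run(W, x0 + 0.01 * rng.normal(0, 1, N))   # perturbed initial condition
print("mean deviation from the innate trajectory after training:",
      float(np.abs(test - innate).mean()))
```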

    Training deep neural density estimators to identify mechanistic models of neural dynamics

    Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool which uses deep neural density estimators, trained using model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features, and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
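
    The workflow can be caricatured in a few lines: draw parameters from a prior, run the simulator, train a network to map simulated data to a density over parameters, then query the trained network with observed data. The sketch below uses a toy simulator and a single-Gaussian posterior head rather than the deep flow-based estimators of the paper; every name and parameter value in it is illustrative.

```python
# Toy simulation-based inference with a (very small) neural density estimator:
# a network is trained on (theta, x) pairs to output a Gaussian approximation
# of p(theta | x), then evaluated at an "observed" data point.
import torch
from torch import nn

torch.manual_seed(0)

def simulator(theta):
    """Toy mechanistic model: data is a squashed version of theta plus noise."""
    return torch.tanh(theta) + 0.1 * torch.randn_like(theta)

theta = torch.empty(20000, 1).uniform_(-2.0, 2.0)   # parameters from the prior
x = simulator(theta)                                # matching simulated data

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2))  # -> (mean, log_std)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(200):
    out = net(x)
    mean, log_std = out[:, :1], out[:, 1:]
    # negative log-likelihood of theta under the predicted Gaussian posterior
    nll = (0.5 * ((theta - mean) / log_std.exp()) ** 2 + log_std).mean()
    opt.zero_grad()
    nll.backward()
    opt.step()

x_obs = torch.tensor([[0.5]])                # "observed" data point
with torch.no_grad():
    mean, log_std = net(x_obs)[0]
print(f"approximate posterior at x_obs=0.5: mean {mean.item():.3f}, "
      f"std {log_std.exp().item():.3f}")
```

    Because the estimator is trained once on simulations, evaluating a posterior for new observations afterwards is a single forward pass, which is what makes this style of inference fast to apply to new data after the initial training.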

    Formal Modeling of Connectionism using Concurrency Theory, an Approach Based on Automata and Model Checking

    This paper illustrates a framework for applying formal methods techniques, which are symbolic in nature, to specifying and verifying neural networks, which are sub-symbolic in nature. The paper describes a communicating automata [Bowman & Gomez, 2006] model of neural networks. We also implement the model using timed automata [Alur & Dill, 1994] and then undertake a verification of these models using the model checker Uppaal [Pettersson, 2000] in order to evaluate the performance of learning algorithms. The paper also discusses a number of broad issues concerning cognitive neuroscience and the debate as to whether symbolic processing or connectionism is a suitable representation of cognitive systems. Additionally, the issue of integrating symbolic techniques, such as formal methods, with complex neural networks is discussed. We then argue that symbolic verification may give theoretically well-founded ways to evaluate and justify neural learning systems in both theoretical research and real-world applications.
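
    For a flavor of what such a verification involves (though far simpler than Uppaal's timed-automata machinery), the sketch below models a quantized learning rule as a finite transition system and checks a safety property by exhaustively exploring the reachable state space. The update rule, bounds, and property are invented for the example.

```python
# Toy explicit-state model checking: a learning rule modeled as a finite
# transition system, with a safety property verified by exhaustive
# breadth-first exploration of every reachable state.
from collections import deque

W_MIN, W_MAX = 0, 10
INPUTS = (-1, 0, 1)                       # possible training inputs at each step

def step(weight, x):
    """One quantized Hebbian-style update (saturating at the bounds)."""
    return max(W_MIN, min(W_MAX, weight + x))

def check_safety(initial=5):
    """Explore every reachable weight value and check it stays in bounds."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        w = frontier.popleft()
        if not (W_MIN <= w <= W_MAX):     # the safety property
            return False, w
        for x in INPUTS:
            nxt = step(w, x)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, len(seen)

ok, info = check_safety()
print("safety property holds:", ok, "| reachable states explored:", info)
```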

    Learning of Temporal Motor Patterns: An Analysis of Continuous Versus Reset Timing

    Our ability to generate well-timed sequences of movements is critical to an array of behaviors, including the ability to play a musical instrument or a video game. Here we address two questions relating to timing with the goal of better understanding the neural mechanisms underlying temporal processing. First, how do accuracy and variance change over the course of learning of complex spatiotemporal patterns? Second, is the timing of sequential responses most consistent with starting and stopping an internal timer at each interval, or with continuous timing? To address these questions we used a psychophysical task in which subjects learned to reproduce a sequence of finger taps in the correct order and at the correct times, much like playing a melody on the piano. This task allowed us to calculate the variance of the responses at different time points using data from the same trials. Our results show that while “standard” Weber’s law is clearly violated, variance does increase as a function of time squared, as expected according to the generalized form of Weber’s law, which separates the source of variance into time-dependent and time-independent components. Over the course of learning, both the time-independent variance and the coefficient of the time-dependent term decrease. Our analyses also suggest that timing of sequential events does not rely on the resetting of an internal timer at each event. We describe and interpret our results in the context of computer simulations that capture some of our psychophysical findings. Specifically, we show that continuous timing, as opposed to “reset” timing, is consistent with “population clock” models in which timing emerges from the internal dynamics of recurrent neural networks.
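
    The contrast between the two timing schemes can be made concrete with a short simulation, assuming the generalized Weber's law used in the paper (interval variance equals a time-independent term plus a term proportional to the squared interval). Under "reset" timing each inter-tap interval is timed independently and the errors add up; under "continuous" timing each tap is timed from the single sequence start. The target times and noise parameters below are made up for illustration.

```python
# Compare "reset" and "continuous" timing of a tap sequence under the
# generalized Weber's law: sd of a timed interval t is sqrt(sigma0^2 + (k*t)^2).
import numpy as np

rng = np.random.default_rng(0)
targets = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # target tap times (s)
sigma0, k, trials = 0.01, 0.05, 20000           # time-independent and Weber noise

def noisy_interval(duration, size):
    sd = np.sqrt(sigma0**2 + (k * duration)**2)
    return duration + rng.normal(0.0, sd, size)

# reset timing: each inter-tap interval is timed independently, errors accumulate
intervals = np.diff(np.concatenate(([0.0], targets)))
reset_times = np.cumsum(
    np.stack([noisy_interval(d, trials) for d in intervals]), axis=0)

# continuous timing: every tap is timed from the single sequence start
continuous_times = np.stack([noisy_interval(t, trials) for t in targets])

print("target  var(reset)   var(continuous)")
for t, r, c in zip(targets, reset_times.var(axis=1), continuous_times.var(axis=1)):
    print(f"{t:5.1f}   {r:.5f}      {c:.5f}")
# Under continuous timing the variance of each tap follows sigma0^2 + (k*t)^2
# directly; under reset timing it is the sum of per-interval variances, which
# grows more slowly for the later taps.
```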