51,686 research outputs found

    Temporal Dynamics of Decision-Making during Motion Perception in the Visual Cortex

    How does the brain make decisions? The speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between the Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by the basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas, which estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not specify the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction-time properties, during both correct and error trials at different levels of input ambiguity in both fixed-duration and reaction-time tasks. Model MT/MST interactions compute the global direction of random-dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.
    National Science Foundation (SBE-0354378, IIS-02-05271); Office of Naval Research (N00014-01-1-0624); National Institutes of Health (R01-DC-02852)
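    The abstract's core computational idea — a recurrent competitive network with self-normalizing (shunting) dynamics turning noisy motion evidence into a categorical choice — can be sketched as follows. This is a minimal two-population toy, not the paper's fitted model; every parameter value and the threshold are illustrative assumptions.

```python
import numpy as np

def simulate_choice(coherence=0.3, n_steps=3000, dt=1e-3, seed=0):
    """Euler simulation of a two-unit recurrent competitive (shunting)
    network accumulating noisy motion evidence toward a choice.
    All parameter values here are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    A, B = 1.0, 1.0                        # passive decay rate and activity ceiling
    theta = 0.3                            # illustrative decision threshold
    f = lambda a: np.maximum(a, 0.0) ** 2  # faster-than-linear recurrent signal
    x = np.zeros(2)                        # activities of the two choice populations
    for step in range(n_steps):
        # noisy evidence; positive coherence favors population 0
        I = np.maximum(0.5 + np.array([coherence, -coherence]) / 2
                       + 0.5 * rng.standard_normal(2), 0.0)
        exc = f(x) + I                     # self-excitation plus own evidence
        inh = f(x)[::-1] + I[::-1]         # inhibition from the rival channel
        # shunting terms keep activity bounded in [0, B] (self-normalization)
        x += dt * (-A * x + (B - x) * exc - x * inh)
        if x.max() > theta:
            return int(np.argmax(x)), (step + 1) * dt
    return None, n_steps * dt

choice, rt = simulate_choice()
```

    Lowering `coherence` toward zero makes the evidence more ambiguous, which in this kind of model lengthens reaction times and raises error rates, qualitatively as in the random-dot experiments.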

    Large Margin Neural Language Model

    We propose a large margin criterion for training neural language models. Conventionally, neural language models are trained by minimizing perplexity (PPL) on grammatical sentences. However, we demonstrate that PPL may not be the best metric to optimize in some tasks, and we propose a large margin formulation instead. The proposed method aims to enlarge the margin between "good" and "bad" sentences in a task-specific sense. It is trained end-to-end and can be widely applied to tasks that involve re-scoring of generated text. Compared with minimum-PPL training, our method achieves up to a 1.1 WER reduction for speech recognition and a 1.0 BLEU increase for machine translation.
    Comment: 9 pages. Accepted as a long paper in EMNLP201
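    A margin criterion of the kind described can be sketched as a hinge loss that pushes the model's score for a "good" (reference) sentence above each "bad" (competing) hypothesis by at least a fixed margin. This is a generic sketch under the assumption that sentence scores are scalars such as summed token log-probabilities; the function name and margin value are not from the paper.

```python
def margin_loss(score_good, scores_bad, margin=1.0):
    """Hinge-style large-margin loss: the loss is zero only when the
    reference sentence outscores every competing hypothesis by `margin`."""
    return sum(max(0.0, margin - (score_good - s)) for s in scores_bad)
```

    In an end-to-end setup the scores would be differentiable outputs of the language model, so this loss can be minimized by gradient descent just like perplexity.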

    Neurogenesis Drives Stimulus Decorrelation in a Model of the Olfactory Bulb

    The reshaping and decorrelation of similar activity patterns by neuronal networks can enhance their discriminability, storage, and retrieval. How can such networks learn to decorrelate new complex patterns as they arise in the olfactory system? Using a computational network model for the dominant neural populations of the olfactory bulb, we show that fundamental aspects of the adult neurogenesis observed in the olfactory bulb -- the persistent addition of new inhibitory granule cells to the network, their activity-dependent survival, and the reciprocal character of their synapses with the principal mitral cells -- are sufficient to restructure the network and to adaptively alter its encoding of odor stimuli so as to reduce the correlations between the bulbar representations of similar stimuli. The decorrelation is quite robust to various perturbations of the reciprocity. The model parsimoniously captures the experimentally observed role of neurogenesis in perceptual learning and the enhanced response of young granule cells to novel stimuli. Moreover, it makes specific predictions for the type of odor enrichment that should be effective in enhancing the ability of animals to discriminate similar odor mixtures.
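    The decorrelation principle can be illustrated with a deliberately tiny sketch: an inhibitory "granule-like" unit tuned to the component shared by two overlapping activity patterns subtracts that component from each, and rectification keeps activities non-negative. This is a hypothetical one-step caricature, not the paper's network model or learning rule.

```python
import numpy as np

def granule_decorrelate(x1, x2):
    """Toy sketch: one inhibitory unit whose weights point along the shared
    direction of two mitral-cell activity patterns removes that shared
    component from each pattern; rectification keeps rates non-negative."""
    w = (x1 + x2) / np.linalg.norm(x1 + x2)   # granule unit's weights: shared direction
    inhibit = lambda x: np.maximum(x - (x @ w) * w, 0.0)  # subtract, then rectify
    return inhibit(x1), inhibit(x2)

def cosine(a, b):
    """Pattern similarity as cosine overlap."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x1 = np.array([1.0, 1.0, 0.2])  # two overlapping "odor" patterns
x2 = np.array([0.2, 1.0, 1.0])
y1, y2 = granule_decorrelate(x1, x2)
```

    Here the two input patterns overlap strongly, while the inhibited outputs retain only the distinctive parts of each pattern, so their overlap collapses — the qualitative effect the model attributes to activity-dependent granule-cell addition.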

    Temporal Dynamics of Binocular Disparity Processing with Corticogeniculate Interactions

    A neural model is developed to probe how corticogeniculate feedback may contribute to the dynamics of binocular vision. Feedforward and feedback interactions among retinal, lateral geniculate, and cortical simple and complex cells are used to simulate psychophysical and neurobiological data concerning the dynamics of binocular disparity processing, including correct registration of disparity in response to dynamically changing stimuli, binocular summation of weak stimuli, and fusion of anticorrelated stimuli when they are delayed, but not when they are simultaneous. The model exploits dynamic rebounds between opponent ON and OFF cells that are due to imbalances in habituative transmitter gates. It shows how corticogeniculate feedback can carry out a top-down matching process that inhibits incorrect disparity responses and reduces the persistence of previously correct responses to dynamically changing displays.
    Air Force Office of Scientific Research (F49620-92-J-0499, F49620-92-J-0334, F49620-92-J-0225); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-92-J-4015); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-0657)
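    The rebound mechanism the abstract invokes — opponent ON/OFF channels whose habituative transmitter gates deplete at different rates, producing a transient OFF response when a sustained input is removed — can be sketched as a minimal gated-dipole simulation. All parameter values are illustrative assumptions, not the paper's.

```python
def gated_dipole(T_on=200, T_off=200, dt=0.05, I=0.5, J=1.0,
                 eps=0.05, k=2.0):
    """Toy gated dipole: ON and OFF channels share a tonic input I; the ON
    channel also receives a phasic input J for the first T_on steps. Each
    channel's signal is gated by a habituating transmitter z, and the OFF
    output is the rectified difference of the gated signals."""
    z_on = z_off = 1.0                        # transmitter gates start full
    off_out = []
    for t in range(T_on + T_off):
        s_on = I + (J if t < T_on else 0.0)   # phasic input only while "on"
        s_off = I
        # transmitter habituation: accumulation toward 1, signal-driven depletion
        z_on += dt * eps * ((1 - z_on) - k * s_on * z_on)
        z_off += dt * eps * ((1 - z_off) - k * s_off * z_off)
        off_out.append(max(s_off * z_off - s_on * z_on, 0.0))
    return off_out, T_on

out, T_on = gated_dipole()
```

    While the input is present the OFF channel is silenced; at input offset the depleted ON-channel gate leaves the less-depleted OFF channel momentarily stronger, yielding the transient "antagonistic rebound" that then decays as both gates re-equilibrate.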

    Complexity without chaos: Plasticity within random recurrent networks generates robust timing and motor control

    It is widely accepted that the complex dynamics characteristic of recurrent neural circuits contribute in a fundamental manner to brain function. Progress has been slow in understanding and exploiting the computational power of recurrent dynamics for two main reasons: nonlinear recurrent networks often exhibit chaotic behavior, and most known learning rules do not work in a robust fashion in recurrent networks. Here we address both of these problems by demonstrating how random recurrent networks (RRNs) that initially exhibit chaotic dynamics can be tuned through a supervised learning rule to generate locally stable neural patterns of activity that are both complex and robust to noise. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
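    The setting can be sketched with a much-simplified stand-in for the paper's supervised recurrent learning rule: a random rate network in the high-gain regime (g > 1) produces rich, irregular activity, and a linear readout fit by ridge regression can nonetheless reproduce a smooth motor-like target from that activity. This reservoir-style sketch only illustrates the raw material the learning rule exploits; it does not modify the recurrent weights or demonstrate the stability result.

```python
import numpy as np

def innate_readout_demo(N=200, g=1.5, T=500, dt=0.1, seed=1):
    """Random recurrent rate network (gain g > 1 gives rich dynamics),
    Euler-integrated; a linear readout is then fit by ridge regression
    to a smooth target. Illustrative parameters throughout."""
    rng = np.random.default_rng(seed)
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrent weights
    x = 0.5 * rng.standard_normal(N)                  # random initial state
    states = np.empty((T, N))
    for t in range(T):
        x += dt * (-x + J @ np.tanh(x))               # rate dynamics
        states[t] = np.tanh(x)                        # firing rates
    target = np.sin(2 * np.pi * np.arange(T) * dt / 5.0)  # motor-like target
    lam = 1e-3                                        # ridge regularizer
    # readout weights: w = (S^T S + lam I)^{-1} S^T y
    w = np.linalg.solve(states.T @ states + lam * np.eye(N), states.T @ target)
    mse = float(np.mean((states @ w - target) ** 2))
    return mse, float(np.var(target))

mse, var = innate_readout_demo()
```

    The fit error is small relative to the target's variance, showing that the irregular recurrent activity is expressive enough to carry complex time courses; the paper's contribution is a rule that additionally makes such trajectories locally stable against noise.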