
    Neural Mechanisms of Working Memory Cortical Networks

    This dissertation aims to understand the cortical networks that maintain working memory information. By leveraging patterns of information degradation in spatial working memory encoding, we reveal new neural mechanisms that support working memory function and challenge existing models of working memory circuits.

    First, we examine how interference from previous memoranda influences memory of a currently remembered location. We find that memory for a currently remembered location is biased toward the previously memorized location. This interference is graded, not all-or-none: it is strongest when the previous and current targets are close and activate overlapping populations of neurons. Contrary to the attractive behavioral bias, the neural representation of the currently remembered location in the frontal eye fields appears to be biased away from the previous target location, not toward it. We reconcile this discrepancy by proposing a model in which receptive fields of memory cells converge toward memorized locations. This reallocation of neural resources to task-relevant parts of space reduces overall error in the memory network but introduces systematic behavioral biases toward prior memoranda.

    We also find that the attractive behavioral bias increases asymptotically as a function of memory period length. Critically, the increase in bias depends only on the current trial's memory period: the effect of the previous target progressively increases during the current trial, after that target's memory has become irrelevant. We modeled this finding using a two-store model with a transient but unbiased visual sensory store and a sustained store with a constant bias. Initially, behavior is driven by the veridical visual sensory store and is therefore unbiased. As the visual sensory store decays during the current trial, behavioral responses are increasingly driven by the sustained but biased store, leading to an asymptotic increase of behavioral bias with memory period length.

    Finally, we examine how memory activity is encoded over long (15-second) memory periods. Memory cells tend to turn on early in the memory period and stay active for a fixed amount of time; most shut off before the end of the memory period. Within each cell, offset times are repeatable from one trial to the next, while across cells they are broadly distributed throughout the entire memory period. Once a cell shuts off, it remains off for the rest of the memory period. On the one hand, these findings challenge the leading model of working memory, the attractor network framework, which predicts a single homogeneous time course for all cells. On the other hand, they also show that the patterns of activity seen in memory circuits are much more structured than the heterogeneous patterns suggested by the leading competitors to attractor models. Our findings are not predicted by current models of working memory circuits and indicate that new network models need to be developed.
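    The two-store account can be sketched numerically. The snippet below is an illustrative toy, not the dissertation's fitted model: the decay constant `tau`, the bias magnitude `b`, and the function name `reported_bias` are all assumptions chosen for demonstration.

```python
import numpy as np

def reported_bias(t, tau=1.0, b=2.0):
    """Bias (deg) of the reported location after a memory period of t seconds.

    Hypothetical two-store mixture: responses are a weighted blend of a
    veridical sensory store whose weight decays as exp(-t/tau) and a
    sustained store carrying a constant attractive bias b toward the
    previous target. All parameter values are illustrative.
    """
    w_sensory = np.exp(-t / tau)   # transient, unbiased store
    w_memory = 1.0 - w_sensory     # sustained, biased store
    return w_memory * b            # bias rises asymptotically toward b

delays = np.array([0.5, 1.5, 3.0, 6.0, 15.0])  # memory period lengths (s)
biases = reported_bias(delays)                 # monotonically increasing
```

    Under these assumptions the bias grows monotonically with delay and saturates at `b`, reproducing the asymptotic increase described above.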

    Hierarchically Clustered Adaptive Quantization CMAC and Its Learning Convergence

    No abstract available

    Rhythmogenesis and Bifurcation Analysis of 3-Node Neural Network Kernels

    Central pattern generators (CPGs) are small neural circuits of coupled cells that stably produce a range of multiphasic, coordinated rhythmic activities such as locomotion, heartbeat, and respiration. Rhythm generation resulting from the synergistic interaction of CPG circuitry and intrinsic cellular properties remains insufficiently understood and characterized. Pairing experimental and computational studies has proven key to unlocking practical insights into the operational and dynamical principles of CPGs, underlining the growing consensus that the same fundamental circuitry may be shared by invertebrates and vertebrates. We explore the robustness of synchronized oscillatory patterns in small local networks, revealing universal principles of rhythmogenesis and multi-functionality in systems capable of stable rhythm formation. Understanding the principles underlying functional neural network behavior benefits future study of neurological diseases that result from perturbations of the mechanisms governing normal rhythmic states.

    Qualitative and quantitative stability analysis of a family of reciprocally coupled neural circuits, built from generalized FitzHugh–Nagumo neurons, explores symmetric and asymmetric connectivity within three-cell motifs, which often form constituent kernels within larger networks. Intrinsic mechanisms of synaptic release, escape, and post-inhibitory rebound lead to differing polyrhythmicity, where a single parameter change or perturbation may trigger rhythm switching in otherwise robust networks. Bifurcation analysis and phase-reduction methods elucidate qualitative changes in rhythm stability, permitting rapid identification and exploration of the pivotal parameters describing biologically plausible network connectivity. Additional rhythm outcomes are characterized, including phase-varying lags and broader cyclical behaviors, helping to assess the system's capability and robustness in reproducing experimentally observed outcomes.

    This work further develops a suite of visualization approaches and computational tools that describe the robustness of network rhythmogenesis and disclose principles of neuroscience applicable to systems beyond motor control. A framework for modular organization is introduced, using inhibitory and electrical synapses to couple the well-characterized 3-node motifs described in this research as building blocks within larger networks, in order to describe the underlying cooperative mechanisms.
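    A minimal numerical sketch of such a three-cell motif can be built from FitzHugh–Nagumo units coupled by mutual inhibition. This is a hypothetical illustration only: the sigmoidal synapse, the ring coupling, and every parameter value (`a`, `b`, `eps`, `I`, `g_inh`) are generic textbook choices, not the specific model analyzed in this work.

```python
import numpy as np

def fhn_motif_step(v, w, dt=0.01, a=0.7, b=0.8, eps=0.08,
                   I=0.5, g_inh=0.3, v_th=0.0, k=10.0):
    """One Euler step of three FitzHugh-Nagumo cells coupled in a ring
    by fast inhibitory synapses (all parameters are illustrative)."""
    # sigmoidal synaptic activation of each presynaptic cell
    s = 1.0 / (1.0 + np.exp(-k * (v - v_th)))
    # each cell is inhibited by its two ring neighbours;
    # (v + 1.5) plays the role of a driving force toward an
    # inhibitory reversal potential at v = -1.5
    inhibition = g_inh * (np.roll(s, 1) + np.roll(s, -1)) * (v + 1.5)
    dv = v - v**3 / 3.0 - w + I - inhibition
    dw = eps * (v + a - b * w)
    return v + dt * dv, w + dt * dw

# asymmetric initial state so the network can break symmetry
v = np.array([-1.2, -1.0, -0.8])
w = np.zeros(3)
for _ in range(50000):        # integrate for 500 time units
    v, w = fhn_motif_step(v, w)
```

    Sweeping `g_inh` or the initial phases in a sketch like this is the kind of experiment that phase-lag and bifurcation analyses systematize.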

    How to Control Hydrodynamic Force on Fluidic Pinball via Deep Reinforcement Learning

    Deep reinforcement learning (DRL) applied to the fluidic pinball (three individually rotating cylinders in a uniform flow, arranged in an equilateral triangular configuration) can learn efficient flow control strategies thanks to its capacity for self-learning and data-driven state estimation in complex fluid dynamics problems. In this work, we present a DRL-based real-time feedback strategy that controls the hydrodynamic force on the fluidic pinball, i.e., force extremum and force tracking, through the cylinders' rotation. By carefully designing reward functions, encoding historical observations, and training over thousands of iterations, the DRL-based control was shown to make reasonable and valid decisions in a nonparametric control parameter space, comparable to and even better than the optimal policy found through lengthy brute-force searching. One of these results was then analyzed by a machine learning model, shedding light on the basis of the decision-making and on the physical mechanisms of the force tracking process. The findings of this work enable control of the hydrodynamic force in the operation of the fluidic pinball system and potentially pave the way for efficient active flow control strategies in other complex fluid dynamics problems.
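    The two ingredients highlighted above, a shaped reward and an observation built from historical sensor readings, might be sketched as follows. This is a hedged illustration: the function names, the lift/drag inputs, and the weights `w_track` and `w_drag` are assumptions for demonstration, not the paper's actual reward or state encoding.

```python
import numpy as np

def tracking_reward(lift, lift_target, drag, w_track=1.0, w_drag=0.1):
    """Hypothetical shaped reward for force tracking: penalize deviation
    of the lift coefficient from its target, plus a small drag penalty.
    Weights are illustrative, not taken from the paper."""
    return -w_track * abs(lift - lift_target) - w_drag * abs(drag)

def encode_observation(history):
    """Encode historical observations by stacking the last few probe
    snapshots into a single flat DRL state vector."""
    return np.concatenate(history)

# two consecutive (dummy) 8-probe snapshots -> 16-dim state
obs = encode_observation([np.zeros(8), np.ones(8)])
r_on_target = tracking_reward(lift=1.0, lift_target=1.0, drag=0.2)
r_off_target = tracking_reward(lift=1.6, lift_target=1.0, drag=0.2)
```

    Staying on the lift target yields a strictly higher reward than drifting from it, which is the gradient signal a DRL agent would exploit during training.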