23 research outputs found
full-FORCE: A Target-Based Method for Training Recurrent Networks
Trained recurrent networks are powerful tools for modeling dynamic neural
computations. We present a target-based method for modifying the full
connectivity matrix of a recurrent network to train it to perform tasks
involving temporally complex input/output transformations. The method
introduces a second network during training to provide suitable "target"
dynamics useful for performing the task. Because it exploits the full recurrent
connectivity, the method produces networks that perform tasks with fewer
neurons and greater noise robustness than traditional least-squares (FORCE)
approaches. In addition, we show how introducing additional input signals into
the target-generating network, which act as task hints, greatly extends the
range of tasks that can be learned and provides control over the complexity and
nature of the dynamics of the trained, task-performing network.
Comment: 20 pages, 8 figures
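The target-based idea described above can be illustrated with a minimal NumPy sketch. Note the assumptions: the paper trains with online recursive least squares (RLS), which is replaced here by a one-shot batch ridge regression; the network size, the sine-wave target, and all constants are illustrative, not taken from the paper. A teacher network driven by the desired output generates "target" recurrent currents, and the full connectivity matrix `J` of the task network is then fit by least squares to reproduce them.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 100, 500, 0.1          # illustrative sizes, not from the paper
t = np.arange(T) * dt
f_out = np.sin(0.5 * t)           # hypothetical target output signal

# Teacher ("target-generating") network: its recurrence is random, and it is
# additionally driven by the target output itself.
J_D = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # random recurrent weights
u = rng.normal(0.0, 1.0, N)                      # input weights for f_out

R = np.zeros((T, N))        # firing rates of the teacher over time
targets = np.zeros((T, N))  # target recurrent currents for the task network
x = np.zeros(N)
for i in range(T):
    r = np.tanh(x)
    R[i] = r
    # The task network's J @ r should reproduce the teacher's total drive:
    targets[i] = J_D @ r + u * f_out[i]
    x = x + dt * (-x + targets[i])

# Fit the FULL connectivity matrix J by ridge regression (batch stand-in for
# the online RLS used in the paper): minimize ||R @ J.T - targets||^2.
lam = 1e-3
J = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ targets).T

# Linear readout of the output, also fit by least squares on the teacher rates.
w = np.linalg.lstsq(R, f_out, rcond=None)[0]
```

After training, the task network would run autonomously as `x += dt * (-x + J @ tanh(x))` with output `w @ tanh(x)`; because every entry of `J` is modified, the fit uses the full recurrent connectivity rather than a low-rank feedback correction as in classic FORCE.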
Remembrance of things practiced with fast and slow learning in cortical and subcortical pathways
The learning of motor skills unfolds over multiple timescales, with rapid initial gains in performance followed by a longer period in which the behavior becomes more refined, habitual, and automatized. While recent lesion and inactivation experiments have provided hints about how various brain areas might contribute to such learning, their precise roles and the neural mechanisms underlying them are not well understood. In this work, we propose neural- and circuit-level mechanisms by which motor cortex, thalamus, and striatum support motor learning. In this model, the combination of fast cortical learning and slow subcortical learning gives rise to a covert learning process through which control of behavior is gradually transferred from cortical to subcortical circuits, while protecting learned behaviors that are practiced repeatedly against overwriting by future learning. Together, these results point to a new computational role for thalamus in motor learning and, more broadly, provide a framework for understanding the neural basis of habit formation and the automatization of behavior through practice.
Comment: 13 pages
Neuromatch Academy: Teaching Computational Neuroscience with global accessibility
Neuromatch Academy designed and ran a fully online 3-week Computational
Neuroscience summer school for 1757 students with 191 teaching assistants
working in virtual inverted (or flipped) classrooms and on small group
projects. Fourteen languages, active community management, and low cost allowed
for an unprecedented level of inclusivity and universal accessibility.
Comment: 10 pages, 3 figures. Equal contribution by the executive committee
members of Neuromatch Academy: Tara van Viegen, Athena Akrami, Kate Bonnen,
Eric DeWitt, Alexandre Hyafil, Helena Ledmyr, Grace W. Lindsay, Patrick
Mineault, John D. Murray, Xaq Pitkow, Aina Puce, Madineh Sedigh-Sarvestani,
Carsen Stringer, and equal contribution by the board of directors of
Neuromatch Academy: Gunnar Blohm, Konrad Kording, Paul Schrater, Brad Wyble,
Sean Escola, Megan A. K. Peters
Neuromatch Academy: a 3-week, online summer school in computational neuroscience
Neuromatch Academy (https://academy.neuromatch.io; van Viegen et al., 2021) was designed as an online summer school covering the basics of computational neuroscience in three weeks. The materials cover dominant and emerging computational neuroscience tools, how they complement one another, and specifically how they can help us better understand how the brain functions. An original component of the materials is its focus on modeling choices: how do we choose the right approach, how do we build models, and how can we evaluate models to determine whether they provide real (meaningful) insight? This meta-modeling component of the instructional materials asks what questions can be answered by different techniques, and how to apply them meaningfully to gain insight into brain function.