
    From Parallel Sequence Representations to Calligraphic Control: A Conspiracy of Neural Circuits

    Calligraphic writing presents a rich set of challenges to the human movement control system. These challenges include: initial learning, and recall from memory, of prescribed stroke sequences; critical timing of stroke onsets and durations; fine control of grip and contact forces; and letter-form invariance under voluntary size scaling, which entails fine control of stroke direction and amplitude during recruitment and derecruitment of musculoskeletal degrees of freedom. Experimental and computational studies in behavioral neuroscience have made rapid progress toward explaining the learning, planning and control exercised in tasks that share features with calligraphic writing and drawing. This article summarizes computational neuroscience models and related neurobiological data that reveal critical operations spanning from parallel sequence representations to fine force control. Part one addresses stroke sequencing. It treats competitive queuing (CQ) models of sequence representation, performance, learning, and recall. Part two addresses letter size scaling and motor equivalence. It treats cursive handwriting models together with models in which sensory-motor transformations are performed by circuits that learn inverse differential kinematic mappings. Part three addresses fine-grained control of timing and transient forces, by treating circuit models that learn to solve inverse dynamics problems. National Institutes of Health (R01 DC02852)
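    As a concrete illustration of the competitive-queuing idea summarized above (all planned strokes are activated in parallel, the most active plan is executed and then suppressed), here is a minimal Python sketch. The gradient values, noise term, and suppression rule are illustrative assumptions, not the specific circuit models reviewed in the article.

```python
import numpy as np

def cq_recall(gradient, noise_sd=0.0, rng=None):
    """Recall a sequence from a parallel activation gradient (competitive queuing):
    the most active plan wins, is executed, and is then suppressed."""
    rng = rng or np.random.default_rng(0)
    act = np.array(gradient, dtype=float)
    order = []
    for _ in range(len(act)):
        noisy = act + rng.normal(0.0, noise_sd, size=act.shape)
        winner = int(np.argmax(noisy))
        order.append(winner)
        act[winner] = -np.inf        # suppress the executed item
    return order

# Four planned strokes stored as a primacy gradient (earlier = more active).
print(cq_recall([1.0, 0.8, 0.6, 0.4]))   # -> [0, 1, 2, 3]
```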

    Investigation of sequence processing: A cognitive and computational neuroscience perspective

    Serial order processing, or sequence processing, underlies many human activities such as speech, language, skill learning, planning, and problem-solving. Investigating the neural bases of sequence processing enables us to understand serial order in cognition and also helps in building intelligent devices. In this article, we review various cognitive issues related to sequence processing with examples. Experimental results that give evidence for the involvement of various brain areas will be described. Finally, a theoretical approach based on statistical models and the reinforcement learning paradigm is presented. These theoretical ideas are useful for studying sequence learning in a principled way. This article also suggests a two-way process diagram integrating experimentation (cognitive neuroscience) and theory/computational modelling (computational neuroscience). This integrated framework is useful not only in the present study of serial order, but also for understanding many cognitive processes.
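    A minimal sketch of the statistical-model side of the approach mentioned above, assuming a simple first-order (bigram) transition model over symbol sequences; the example sequences are hypothetical and the article's own formulation may differ.

```python
from collections import defaultdict

def fit_bigram(sequences):
    """Estimate first-order transition probabilities from example sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

# Hypothetical symbol sequences; real applications would use behavioural data.
model = fit_bigram(["abcabc", "abcab"])
print(model["a"])   # {'b': 1.0}: 'a' is always followed by 'b'
print(model["c"])   # {'a': 1.0}
```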

    Introduction. Modelling natural action selection

    Action selection is the task of resolving conflicts between competing behavioural alternatives. This theme issue is dedicated to advancing our understanding of the behavioural patterns and neural substrates supporting action selection in animals, including humans. The scope of problems investigated includes: (i) whether biological action selection is optimal (and, if so, what is optimized), (ii) the neural substrates for action selection in the vertebrate brain, (iii) the role of perceptual selection in decision-making, and (iv) the interaction of group and individual action selection. A second aim of this issue is to advance methodological practice with respect to modelling natural action selection. A wide variety of computational modelling techniques are therefore employed ranging from formal mathematical approaches through to computational neuroscience, connectionism and agent-based modelling. The research described has broad implications for both natural and artificial sciences. One example, highlighted here, is its application to medical science where models of the neural substrates for action selection are contributing to the understanding of brain disorders such as Parkinson's disease, schizophrenia and attention deficit/hyperactivity disorder.
Action selection is the task of resolving conflicts between competing behavioural alternatives, or, more simply put, of deciding ‘what to do next’. As a general problem facing all autonomous beings—animals and artificial agents—it has exercised the minds of scientists from many disciplines: those concerned with understanding the biological bases of behaviour (ethology, neurobiology and psychology) and those concerned with building artefacts, real or simulated, that behave appropriately in complex worlds (artificial intelligence, artificial life and robotics). Work in these different domains has established a wide variety of methodologies that address the same underlying problems from different perspectives. One approach to characterizing this multiplicity of methods is to distinguish between the analytical and the synthetic branches of the behavioural and brain sciences (Braitenberg 1986). From the perspective of analytical science, an important goal is to describe transitions in behaviour; these can occur at many different temporal scales, and can be considered as instances of ‘behavioural switching’ or, more anthropomorphically, as ‘choice points’. Analytical approaches also seek to identify the biological substrates that give rise to such transitions, for instance, by probing in the nervous system to find critical components—candidate action-selection mechanisms—on which effective and appropriate switching may depend. Beyond such descriptions, of course, a central goal of behavioural science is to explain why any observed transition (or sequence of transitions) occurs in a given context, perhaps referencing such explanation to normative concepts such as ‘utility’ or ‘fitness’. These explanations may also make use of mechanistic accounts that explain how underlying neural control systems operate to generate observed behavioural outcomes. It is at the confluence of these mechanistic and normative approaches that the synthetic approach in science is coming to have an increasing influence.
The experimentalist seeks the help of the mathematician or engineer and asks ‘what would it take to build a system that acts in this way?’ Modelling—the synthesis of artificial systems that mimic natural ones—has always played an important role in biology; however, the last few decades have seen a dramatic expansion in the range of modelling methodologies that have been employed. Formal, mathematical models with provable properties continue to be of great importance (e.g. Bogacz et al. 2007; Houston et al. 2007). Now, added to these, there is a burgeoning interest in larger-scale simulations that allow the investigation of systems for which formal mathematical solutions are, as a result of their complexity, either intractable or simply unknown. However, synthetic models, once built, may often be elucidated by analytical techniques; thus synthetic and analytical approaches are best pursued jointly. Analysis of a formally intractable simulation often consists of observing the system's behaviour, then measuring and describing it using many of the same tools as traditional experimental science (Bryson et al. 2007). Such an analysis can serve to uncover heuristics for the interpretation of empirical data as well as to generate novel hypotheses to be tested experimentally. The questions to be addressed in considering models of action selection include: is the model sufficiently constrained by biological data that its functioning can capture interesting properties of the natural system of interest? Do manipulations of the model, intended to mirror scientific procedures or observed natural processes, result in similar outcomes to those seen in real life? Does the model make predictions? Is the model more complex than it needs to be in order to describe a phenomenon, or is it too simple to engage with empirical data? A potential pitfall of more detailed computational models is that they may trade comprehensibility for the sophistication with which they match biological detail. The scientist is then left with two systems, one natural and the other synthesized, neither of which is well understood. Hence, the best models strike a good trade-off between accurately mimicking key properties of a target biological system and remaining understandable enough that new insights into the natural world are generated. In this theme issue, we present a selection of some of the most promising contemporary approaches to modelling action selection in natural systems. The range of methodologies is broad—from formal mathematical models, through to models of artificial animals, here called agents, embedded in simulated worlds (often containing other agents). We also consider mechanistic accounts of the neural processes underlying action selection through a variety of computational neuroscience and connectionist approaches. In this article, we summarize the main substantive areas of this theme issue and the contributions of each article and then return briefly to a discussion of the modelling techniques.

    Learning and Production of Movement Sequences: Behavioral, Neurophysiological, and Modeling Perspectives

    A growing wave of behavioral studies, using a wide variety of paradigms that were introduced or greatly refined in recent years, has generated a new wealth of parametric observations about serial order behavior. What was a mere trickle of neurophysiological studies has grown to a more steady stream of probes of neural sites and mechanisms underlying sequential behavior. Moreover, simulation models of serial behavior generation have begun to open a channel to link cellular dynamics with cognitive and behavioral dynamics. Here we summarize the major results from prominent sequence learning and performance tasks, namely immediate serial recall, typing, 2XN, discrete sequence production, and serial reaction time. These populate a continuum from higher to lower degrees of internal control of sequential organization. The main movement classes covered are speech and keypressing, both involving small amplitude movements that are very amenable to parametric study. A brief synopsis of classes of serial order models, vis-à-vis the detailing of major effects found in the behavioral data, leads to a focus on competitive queuing (CQ) models. Recently, the many behavioral predictive successes of CQ models have been joined by successful prediction of distinctively patterned electrophysiological recordings in prefrontal cortex, wherein the parallel activation dynamics of multiple neural ensembles strikingly match the parallel dynamics predicted by CQ theory. An extended CQ simulation model, the N-STREAMS neural network model, is then examined to highlight issues in ongoing attempts to accommodate a broader range of behavioral and neurophysiological data within a CQ-consistent theory. Important contemporary issues, such as the nature of working memory representations for sequential behavior and the development and role of chunks in hierarchical control, are prominent throughout. Defense Advanced Research Projects Agency/Office of Naval Research (N00014-95-1-0409); National Institute of Mental Health (R01 DC02852)
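    The parallel activation dynamics central to CQ theory can be illustrated with a small simulation: items compete from a noisy primacy gradient, and occasional order errors between neighbouring items emerge naturally. This is a generic CQ sketch with assumed gradient and noise values, not the N-STREAMS model itself.

```python
import numpy as np

def noisy_cq_trial(gradient, noise_sd, rng):
    """One recall trial: choose-and-suppress from a noisy activation gradient."""
    act = np.array(gradient, dtype=float)
    order = []
    for _ in range(len(act)):
        winner = int(np.argmax(act + rng.normal(0.0, noise_sd, act.shape)))
        order.append(winner)
        act[winner] = -np.inf
    return order

rng = np.random.default_rng(1)
gradient = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5]
errors = sum(noisy_cq_trial(gradient, 0.08, rng) != list(range(6))
             for _ in range(1000))
print(f"{errors / 10:.1f}% of trials contain at least one order error")
```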

    A biologically motivated synthesis of accumulator and reinforcement-learning models for describing adaptive decision-making

    Cognitive process models, such as reinforcement learning (RL) and accumulator models of decision-making, have proven to be highly insightful tools for studying adaptive behaviors as well as their underlying neural substrates. Currently, however, two major barriers exist preventing these models from being applied in more complex settings: 1) the assumptions of most accumulator models break down for decisions involving more than two alternatives; 2) RL and accumulator models currently exist as separate frameworks, with no clear mapping between trial-to-trial learning and the dynamics of the decision process. Recently I showed how a modified accumulator model, premised on the architecture of cortico-basal ganglia pathways, predicts both human decisions in uncertain situations and evoked activity in cortical and subcortical control circuits. Here I present a synthesis of RL and accumulator models that is motivated by recent evidence that the basal ganglia act as a site for integrating trial-wise feedback from midbrain dopaminergic neurons with accumulating evidence from sensory and associative cortices. I show how this hybrid model can explain both adaptive go/no-go decisions and multi-alternative decisions in a computationally efficient manner. More importantly, by parameterizing the model to conform to various underlying assumptions about the architecture and physiology of basal ganglia pathways, model predictions can be rigorously tested against observed patterns in behavior as well as neural recordings. The result is a biologically constrained and behaviorally tractable description of trial-to-trial learning effects on decision-making among multiple alternatives.
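    The general shape of such a synthesis can be sketched as follows: a delta-rule (RL) update adjusts action values from trial-wise feedback, and those values set the drift rates of racing accumulators that decide among multiple alternatives. The reward probabilities, noise level, and threshold below are illustrative assumptions, not the parameterization of the cortico-basal ganglia model described above.

```python
import numpy as np

rng = np.random.default_rng(2)
n_actions, alpha, threshold, dt = 3, 0.1, 1.0, 0.01
values = np.zeros(n_actions)            # action values learned trial-to-trial
true_p = np.array([0.8, 0.5, 0.2])      # hypothetical reward probabilities

def race(drifts):
    """Independent accumulators race to a common decision threshold."""
    x = np.zeros_like(drifts)
    while x.max() < threshold:
        x += drifts * dt + rng.normal(0.0, 0.1, x.shape) * np.sqrt(dt)
        x = np.maximum(x, 0.0)          # accumulated evidence stays non-negative
    return int(np.argmax(x))

for trial in range(500):
    choice = race(0.5 + values)         # learned values bias the drift rates
    reward = float(rng.random() < true_p[choice])
    values[choice] += alpha * (reward - values[choice])   # delta-rule update

print(values.round(2))   # drifts toward the underlying reward probabilities
```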

    Hierarchical Reinforcement Learning in Behavior and the Brain

    Dissertation presented to obtain the Ph.D. degree in Biology, Neuroscience. Reinforcement learning (RL) has provided key insights to the neurobiology of learning and decision making. The pivotal finding is that the phasic activity of dopaminergic cells in the ventral tegmental area during learning conforms to a reward prediction error (RPE), as specified in the temporal-difference learning algorithm (TD). This has provided insights to conditioning, the distinction between habitual and goal-directed behavior, working memory, cognitive control and error monitoring. It has also advanced the understanding of cognitive deficits in Parkinson's disease, depression, ADHD and of personality traits such as impulsivity. (...)
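    The reward prediction error at the heart of the TD account is δ = r + γV(s') - V(s); a minimal sketch on an assumed three-state chain task shows the value estimates gradually absorbing the reward signal, so the error migrates to earlier predictors.

```python
import numpy as np

# TD(0) on an assumed three-state chain: s0 -> s1 -> s2 -> reward.
n_states, alpha, gamma = 3, 0.2, 1.0
V = np.zeros(n_states + 1)              # state values; terminal state stays 0

for episode in range(200):
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0          # reward on the last step
        delta = r + gamma * V[s + 1] - V[s]            # reward prediction error
        V[s] += alpha * delta

print(V[:n_states].round(2))   # -> [1. 1. 1.]: the reward is fully predicted
```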

    A model of reversal learning and working memory in medicated and unmedicated patients with Parkinson's disease

    We present a neural network model of cognition in medicated and unmedicated patients with Parkinson’s disease (PD) in various learning and memory tasks. The model extends our prior models of the basal ganglia and PD with further modeling of the role of prefrontal cortex (PFC) dopamine in stimulus–response learning, reversal, and working memory. In our model, PD is associated with decreased dopamine levels in the basal ganglia and PFC, whereas dopamine medications increase dopamine levels in both brain structures. Simulation results suggest that dopamine medications impair stimulus–response learning in agreement with experimental data (Breitenstein et al., 2006; Gotham, Brown, & Marsden, 1988). We show how decreased dopamine levels in the PFC in unmedicated PD patients are associated with impaired working memory performance, as seen experimentally (Costa et al., 2003; Lange et al., 1992; Moustafa, Sherman, & Frank, 2008; Owen, Sahakian, Hodges, Summers, & Polkey, 1995). Further, our model simulations illustrate how increases in tonic dopamine levels in the PFC due to dopamine medications will enhance working memory, in accord with previous modeling and experimental results (Cohen, Braver, & Brown, 2002; Durstewitz, Seamans, & Sejnowski, 2000; Wang, Vijayraghavan, & Goldman-Rakic, 2004). The model is also consistent with data reported in Cools, Barker, Sahakian, and Robbins (2001), who showed that dopamine medications impair reversal learning. In addition, our model shows that extended training of the reversal phase leads to enhanced reversal performance in medicated PD patients, which is a new, and as yet untested, prediction of the model. Overall, our model provides a unified account for performance in various behavioral tasks using common computational principles. Research reported in this publication was supported by National Institutes of Health Award 1 P50 NS 071675-02 from the National Institute of Neurological Disorders and Stroke and by a 2013 internal UWS Research Grant Scheme award P00021210 to A.A.M.
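    One common way to capture medication effects of this kind in a simple model is to let a "dopamine" parameter scale learning from positive versus negative prediction errors. The sketch below is such a generic illustration, with hypothetical gains, reward probabilities, and a mid-session reversal; it is not the authors' basal ganglia/PFC network. With these assumed settings, blunted learning from negative prediction errors tends to slow adaptation after the reversal.

```python
import numpy as np

def simulate(gain_pos, gain_neg, trials=400, seed=0):
    """Two-choice probabilistic learning with a reversal halfway through."""
    rng = np.random.default_rng(seed)
    p_reward = (0.8, 0.2)
    q = np.zeros(2)
    correct = 0
    for t in range(trials):
        p = p_reward if t < trials // 2 else p_reward[::-1]    # contingency reversal
        choice = int(np.argmax(q + rng.normal(0.0, 0.05, 2)))  # noisy greedy choice
        delta = float(rng.random() < p[choice]) - q[choice]    # prediction error
        q[choice] += (gain_pos if delta > 0 else gain_neg) * delta
        if t >= trials // 2 and p[choice] == max(p):
            correct += 1
    return correct / (trials // 2)

# Hypothetical mapping: higher "dopamine" boosts learning from positive prediction
# errors and blunts learning from negative ones.
print("low DA :", simulate(gain_pos=0.05, gain_neg=0.20))
print("high DA:", simulate(gain_pos=0.20, gain_neg=0.05))
```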

    Neurocomputational Methods for Autonomous Cognitive Control

    Artificial Intelligence can be divided between symbolic and sub-symbolic methods, with neural networks making up a majority of the latter. Symbolic systems have the advantage when capabilities such as deduction and planning are required, while sub-symbolic ones are preferable for tasks requiring skills such as perception and generalization. One of the domains in which neural approaches tend to fare poorly is cognitive control: maintaining short-term memory, inhibiting distractions, and shifting attention. Our own biological neural networks are more than capable of these sorts of executive functions, but artificial neural networks struggle with them. This work explores the gap between the cognitive control that is possible with both symbolic AI systems and biological neural networks, but not with artificial neural networks. To do so, I identify a set of general-purpose, regional-level functions and interactions that are useful for cognitive control in large-scale neural architectures. My approach has three main pillars: a region-and-pathway architecture inspired by the human cerebral cortex and biologically-plausible Hebbian learning, neural regions that each serve as an attractor network able to learn sequences, and neural regions that not only learn to exchange information but also to modulate the functions of other regions. The resultant networks have behaviors based on their own memory contents rather than exclusively on their structure. Because they learn not just memories of the environment but also procedures for tasks, it is possible to "program" these neural networks with the desired behaviors. This research makes four primary contributions. First, the extension of Hopfield-like attractor networks from processing only fixed-point attractors to processing sequential ones. This is accomplished via the introduction of temporally asymmetric weights to Hopfield-like networks, a novel technique that I developed. Second, the combination of several such networks to create models capable of autonomously directing their own performance of cognitive control tasks. By learning procedural memories for a task they can perform in ways that match those of human subjects in key respects. Third, the extension of this approach to spatial domains, binding together visuospatial data to perform a complex memory task at the same level observed in humans and a comparable symbolic model. Finally, these new memories and learning procedures are integrated so that models can respond to feedback from the environment. This enables them to improve as they gain experience by refining their own internal representations of their instructions. These results establish that the use of regional networks, sequential attractor dynamics, and gated connections provide an effective way to accomplish the difficult task of neurally-based cognitive control
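    The step from fixed-point to sequential attractors can be illustrated generically with temporally asymmetric Hebbian weights that wire each stored pattern to its successor. The sketch below uses random patterns and synchronous sign updates as simplifying assumptions; it illustrates the general technique rather than the specific mechanism developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_units, seq_len = 200, 4
patterns = rng.choice([-1, 1], size=(seq_len, n_units))    # stored pattern sequence

# Temporally asymmetric Hebbian weights: each pattern is wired to its successor.
W = sum(np.outer(patterns[k + 1], patterns[k]) for k in range(seq_len - 1)) / n_units

state = patterns[0] * np.where(rng.random(n_units) < 0.1, -1, 1)   # noisy cue of pattern 0
for step in range(1, seq_len):
    state = np.where(W @ state >= 0, 1, -1)                # synchronous update
    overlaps = patterns @ state / n_units
    print(f"step {step}: best-matching stored pattern = {int(np.argmax(overlaps))}")
# With these assumptions the state should step through patterns 1, 2, 3.
```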