2,670 research outputs found

    Inside the brain of an elite athlete: The neural processes that support high achievement in sports

    Events like the World Championships in athletics and the Olympic Games raise the public profile of competitive sports. They may also leave us wondering what sets the competitors in these events apart from those of us who simply watch. Here we attempt to link neural and cognitive processes that have been found to be important for elite performance with computational and physiological theories inspired by much simpler laboratory tasks. In this way we hope to inspire neuroscientists to consider how their basic research might help to explain sporting skill at the highest levels of performance.

    Neuronal Activity in the Human Subthalamic Nucleus Encodes Decision Conflict during Action Selection

    The subthalamic nucleus (STN), which receives excitatory inputs from the cortex and has direct connections with the inhibitory pathways of the basal ganglia, is well positioned to efficiently mediate action selection. Here, we use microelectrode recordings captured during deep brain stimulation surgery as participants engage in a decision task to examine the role of the human STN in action selection. We demonstrate that spiking activity in the STN increases when participants engage in a decision and that the level of spiking activity increases with the degree of decision conflict. These data implicate the STN as an important mediator of action selection during decision processes.
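
    The abstract reports that STN spiking scales with the "degree of decision conflict" but does not say how conflict is quantified. A common proxy in this literature, used here purely as an illustrative assumption rather than the study's actual measure, is the Shannon entropy of the softmax choice probabilities, which peaks when the options are equally attractive:

```python
import math

def decision_conflict(action_values, temperature=1.0):
    """Entropy-based conflict index over softmax choice probabilities.

    Conflict is maximal (log2 of the number of options) when all
    options are equally valued and near zero when one dominates.
    Illustrative proxy only; not the measure used in the paper.
    """
    exps = [math.exp(v / temperature) for v in action_values]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(p * math.log2(p) for p in probs if p > 0)

# High conflict: near-equal options. Low conflict: one clear winner.
print(decision_conflict([1.0, 1.0]))  # 1.0 bit
print(decision_conflict([3.0, 0.0]))  # ~0.28 bits
```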

    A biologically inspired meta-control navigation system for the Psikharpax rat robot

    A biologically inspired navigation system for the mobile rat-like robot named Psikharpax is presented, allowing for self-localization and autonomous navigation in an initially unknown environment. The ability of parts of the model (e.g., the strategy selection mechanism) to reproduce rat behavioral data in various maze tasks has been validated before in simulations, but the capacity of the model to work on a real robot platform had not been tested. This paper presents our work on implementing, on the Psikharpax robot, two independent navigation strategies (a place-based planning strategy and a cue-guided taxon strategy) and a strategy selection meta-controller. We show how our robot can memorize which strategy was optimal in each situation by means of a reinforcement learning algorithm. Moreover, a context detector enables the controller to adapt quickly to changes in the environment (recognized as new contexts) and to restore previously acquired strategy preferences when a previously experienced context is recognized. This produces adaptivity closer to rat behavioral performance and constitutes a computational proposal for the role of the rat prefrontal cortex in strategy shifting. Such a brain-inspired meta-controller may also represent an advance for learning architectures in robotics.
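
    The meta-controller described above, with per-context strategy values learned by reinforcement, can be sketched compactly. The snippet below is a minimal illustration under assumed details (tabular values, epsilon-greedy selection, a delta-rule update); the class and parameter names are hypothetical, not taken from the Psikharpax implementation:

```python
import random
from collections import defaultdict

class StrategyMetaController:
    """Sketch of a context-aware strategy-selection meta-controller,
    loosely following the description above. Names and parameters
    are illustrative assumptions, not the paper's implementation."""

    def __init__(self, strategies=("planning", "taxon"),
                 alpha=0.1, epsilon=0.1):
        self.strategies = strategies
        self.alpha = alpha          # learning rate
        self.epsilon = epsilon      # exploration rate
        # One value table per detected context, so that preferences
        # acquired in an old context can be restored when it recurs.
        self.q = defaultdict(lambda: {s: 0.0 for s in strategies})

    def select(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.strategies)
        table = self.q[context]
        return max(table, key=table.get)

    def update(self, context, strategy, reward):
        # Delta rule: nudge the value of the strategy that was just
        # used toward the reward it produced.
        q = self.q[context][strategy]
        self.q[context][strategy] = q + self.alpha * (reward - q)

# Usage: run a trial with the chosen strategy, observe reward, update.
# A separate context detector is assumed to supply `context`.
mc = StrategyMetaController()
s = mc.select(context="maze_A")
mc.update("maze_A", s, reward=1.0)
```

    Keeping one value table per detected context is what lets old preferences be restored the moment a familiar context is recognized, rather than relearned from scratch.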

    Dopamine and Effort-Based Decision Making

    Motivational theories of choice focus on the influence of goal values and strength of reinforcement to explain behavior. By contrast, relatively little is known about how the cost of an action, such as the effort expended, contributes to a decision to act. Effort-based decision making addresses how we make an action choice based on an integration of action and goal values. Here we review behavioral and neurobiological data regarding the representation of effort as action cost, and how this impacts decision making. Although organisms expend effort to obtain a desired reward, there is a striking sensitivity to the amount of effort required, such that the net preference for an action decreases as its effort cost increases. We discuss the contribution of the neurotransmitter dopamine (DA) toward overcoming response costs and enhancing an animal's motivation toward effortful actions. We also consider the contribution of brain structures, including the basal ganglia and anterior cingulate cortex, to the internal generation of action, involving a translation of reward expectation into effortful action.
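
    The trade-off described above, where net preference falls as effort cost rises, is often formalized with an effort-discounted value function. The sketch below assumes a linear discount and softmax choice, one common form in this literature rather than a specific model from the review:

```python
import math

def net_value(reward, effort, k=0.5):
    """Linear effort discounting: subjective value falls with the
    effort required. k (an assumed parameter) scales effort
    sensitivity; dopamine manipulations are often modeled as
    changing this sensitivity."""
    return reward - k * effort

def choice_prob(value_a, value_b, beta=2.0):
    """Softmax probability of choosing option A over option B."""
    return 1.0 / (1.0 + math.exp(-beta * (value_a - value_b)))

# High-reward/high-effort vs. low-reward/low-effort option:
hr = net_value(reward=4.0, effort=6.0)   # 1.0
lr = net_value(reward=2.0, effort=1.0)   # 1.5
print(choice_prob(hr, lr))  # < 0.5: effort cost flips the preference
```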

    Hierarchical control over effortful behavior by rodent medial frontal cortex: a computational model

    The anterior cingulate cortex (ACC) has been the focus of intense research interest in recent years. Although separate theories relate ACC function variously to conflict monitoring, reward processing, action selection, decision making, and more, damage to the ACC mostly spares performance on tasks that exercise these functions, indicating that they are not in fact unique to the ACC. Further, most theories do not address the most salient consequence of ACC damage: impoverished action generation in the presence of normal motor ability. In this study we develop a computational model of the rodent medial prefrontal cortex that accounts for the behavioral sequelae of ACC damage, unifies many of the cognitive functions attributed to it, and provides a solution to an outstanding question in cognitive control research: how the control system determines and motivates which tasks to perform. The theory derives from recent developments in the formal study of hierarchical control and learning that highlight the computational efficiencies afforded when collections of actions are represented based on their conjoint goals. According to this position, the ACC utilizes reward information to select tasks that are then accomplished through top-down control over action selection by the striatum. Computational simulations capture animal lesion data that implicate the medial prefrontal cortex in regulating physical and cognitive effort. Overall, this theory provides a unifying framework for understanding the ACC in terms of the pivotal role it plays in the hierarchical organization of effortful behavior.
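
    To make the proposed division of labor concrete, the sketch below implements a toy two-level hierarchy: a task level that weighs expected reward against effort cost, and an action level whose values are conditioned on the selected task. All names, parameters, and update rules are illustrative assumptions, not the paper's model:

```python
import random

class HierarchicalController:
    """Two-level sketch of the hierarchy described above: an ACC-like
    task level uses reward information to decide which task is worth
    engaging, and a striatum-like action level selects actions under
    top-down control from the chosen task."""

    def __init__(self, tasks, actions, alpha=0.1, epsilon=0.1):
        self.alpha, self.epsilon = alpha, epsilon
        self.actions = actions
        self.task_value = {t: 0.0 for t in tasks}
        # Action values are conditioned on the currently active task,
        # i.e. actions are grouped by the goal they jointly serve.
        self.action_value = {(t, a): 0.0 for t in tasks for a in actions}

    def select_task(self, effort):
        """Pick the task whose expected reward best justifies its cost
        (`effort` maps task -> effort cost). Damage at this level would
        spare motor ability but remove the drive to engage any task."""
        scored = {t: v - effort[t] for t, v in self.task_value.items()}
        return max(scored, key=scored.get)

    def select_action(self, task):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.action_value[(task, a)])

    def learn(self, task, action, reward):
        # Delta-rule updates at both levels of the hierarchy.
        self.task_value[task] += self.alpha * (reward - self.task_value[task])
        q = self.action_value[(task, action)]
        self.action_value[(task, action)] = q + self.alpha * (reward - q)

# Usage sketch: choose a task given effort costs, act, then learn.
hc = HierarchicalController(tasks=["forage", "rest"], actions=["press", "wait"])
task = hc.select_task(effort={"forage": 0.3, "rest": 0.0})
action = hc.select_action(task)
hc.learn(task, action, reward=1.0)
```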

    Reinforcement Learning Embedded in Brains and Robots

    In many ways and on various tasks, computers are able to outperform humans: they can store and retrieve much larger amounts of data, and they can even beat humans at chess. When it comes to robots, however, they are still far behind even a small child in terms of their performance capabilities. Even a sophisticated robot, such as ASIMO, is limited to mostly…

    Decision Making Under Uncertainty: A Neural Model Based on Partially Observable Markov Decision Processes

    A fundamental problem faced by animals is learning to select actions based on noisy sensory information and incomplete knowledge of the world. It has been suggested that the brain engages in Bayesian inference during perception, but how such probabilistic representations are used to select actions has remained unclear. Here we propose a neural model of action selection and decision making based on the theory of partially observable Markov decision processes (POMDPs). Actions are selected based not on a single “optimal” estimate of state but on the posterior distribution over states (the “belief” state). We show how such a model provides a unified framework for explaining experimental results in decision making that involve both information gathering and overt actions. The model utilizes temporal difference (TD) learning for maximizing expected reward. The resulting neural architecture posits an active role for the neocortex in belief computation while ascribing a role to the basal ganglia in belief representation, value computation, and action selection. When applied to the random-dots motion discrimination task, model neurons representing belief exhibit responses similar to those of neurons in the lateral intraparietal (LIP) area of primate neocortex. The appropriate threshold for switching from information gathering to overt actions emerges naturally during reward maximization. Additionally, the time course of reward prediction error in the model shares similarities with dopaminergic responses in the basal ganglia during the random-dots task. For tasks with a deadline, the model learns a decision-making strategy that changes with elapsed time, predicting a collapsing decision threshold consistent with some experimental studies. The model provides a new framework for understanding neural decision making and suggests an important role for interactions between the neocortex and the basal ganglia in learning the mapping between probabilistic sensory representations and actions that maximize rewards.
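
    The core mechanism, acting on a belief state rather than a point estimate, can be illustrated with a two-alternative version of the random-dots task. In the sketch below the belief is updated by Bayes' rule from noisy binary observations, and sampling stops at a fixed threshold; the fixed threshold is an assumption made for brevity, whereas in the model it emerges from TD-based reward maximization:

```python
import random

def run_trial(coherence=0.1, threshold=0.95, max_steps=1000):
    """Belief-state sketch for a two-alternative random-dots task.

    Hidden state: motion is 'left' or 'right'. Each time step yields
    a noisy binary observation; the belief (posterior over the two
    states) is updated by Bayes' rule. The agent keeps gathering
    information until the belief crosses `threshold`, then commits.
    """
    true_state = random.choice(("left", "right"))
    p_obs_match = 0.5 + coherence      # P(observation agrees with state)
    belief_right = 0.5                 # prior P(state = right)

    for t in range(1, max_steps + 1):
        saw_right = random.random() < (p_obs_match if true_state == "right"
                                       else 1 - p_obs_match)
        # Bayes update: likelihood of this observation under each state.
        like_r = p_obs_match if saw_right else 1 - p_obs_match
        like_l = 1 - like_r
        belief_right = (like_r * belief_right) / (
            like_r * belief_right + like_l * (1 - belief_right))
        if belief_right > threshold:
            return "right", t, true_state
        if belief_right < 1 - threshold:
            return "left", t, true_state
    return "undecided", max_steps, true_state

choice, rt, truth = run_trial()
print(choice, rt, truth)  # decision, time to commit, ground truth
```

    Lower coherence makes observations less informative, so the belief takes longer to cross the threshold, reproducing the basic speed-accuracy pattern of the task.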

    The unfolding action model of initiation times, movement times, and movement paths

    Converging evidence has led to a consensus in favor of computational models of behavior that implement continuous information flow and parallel processing between cognitive processing stages. Yet such models still typically implement a discrete step between the last cognitive stage and motor implementation, in the form of a fixed decision bound that activation in the last cognitive stage must cross before action can be initiated. Such an implementation is questionable because it cannot account for two important features of behavior. First, it does not allow an action to be selected and then withheld until the moment is appropriate for executing it. Second, it cannot account for recent evidence that cognition is not confined to the period before movement initiation but consistently leaks into movement. To address these two features, we propose a novel neurocomputational model of cognition-action interactions, namely the unfolding action model (UAM). Crucially, the model implements adaptive information flow between the last cognitive processing stage and motor implementation. We show that the UAM addresses the two above-mentioned features. Empirically, the UAM accounts for traditional response time data, including positively skewed initiation time distributions, functionally fixed decision bounds, and speed-accuracy trade-offs in button-press experimental designs. Moreover, it accounts for movement times, movement paths, and how they are influenced by cognitive-experimental manipulations. This move should close the current gap between abstract decision-making models and behavior observed in natural habitats.
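
    The continuous leakage of cognition into movement can be illustrated with a toy continuous-flow simulation in which running decision evidence pulls on the movement path at every time step, so there is no discrete hand-off at a bound. Everything below (names, dynamics, parameters) is an illustrative assumption, not the UAM's actual equations:

```python
import random

def simulate_reach(drift=0.05, noise=0.1, steps=200, leak_gain=0.5):
    """Toy continuous-flow reach toward a left or right target.

    Decision evidence leaks into the movement path at every step
    instead of first crossing a discrete bound. Returns the path as
    (x, y) points: y advances steadily toward the targets while x is
    pulled by the running evidence, so early uncertainty shows up as
    curvature in the trajectory.
    """
    evidence = 0.0
    path = []
    for step in range(steps):
        evidence += drift + random.gauss(0.0, noise)
        x = leak_gain * max(-1.0, min(1.0, evidence))  # squashed evidence
        y = (step + 1) / steps                         # steady forward motion
        path.append((x, y))
    return path

path = simulate_reach()
print(path[0], path[-1])  # starts near the midline, ends near a target
```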