6 research outputs found

    Simulating future value in intertemporal choice

    The laboratory study of how humans and other animals trade off value and time has a long and storied history, and is the subject of a vast literature. However, despite this long history of study, there is no agreed-upon mechanistic explanation of how intertemporal choice preferences arise. Several theorists have recently proposed model-based reinforcement learning as a candidate framework. This framework describes a suite of algorithms by which a model of the environment, in the form of a state transition function and a reward function, can be converted on-line into a decision. The state transition function allows the model-based system to make decisions based on projected future states, while the reward function assigns value to each state; together these capture the components necessary for successful intertemporal choice. Empirical work has also pointed to a possible relationship between increased prospection and reduced discounting. In the current paper, we look for direct evidence of a relationship between temporal discounting and model-based control in a large new data set (n = 168). However, testing the relationship under several different modeling formulations revealed no indication that the two quantities are related.
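    To make the framework concrete, the sketch below shows how a model-based system equipped with a state transition function and a reward function could evaluate a smaller-sooner versus larger-later option by simulating forward in time and discounting hyperbolically. It is a minimal illustration, not the paper's code: the toy environment, delays, magnitudes, and discount rate k are all assumed values.

```python
# Minimal sketch (illustrative assumptions, not the paper's model or parameters)
# of model-based evaluation of an intertemporal choice: roll the world model
# forward and sum hyperbolically discounted rewards for each option.

def hyperbolic_discount(delay, k=0.05):
    """Hyperbolic discount factor: value falls off as 1 / (1 + k * delay)."""
    return 1.0 / (1.0 + k * delay)

def simulate_option(transition, reward, start_state, horizon, k=0.05):
    """Roll the state-transition model forward and sum discounted rewards."""
    state, total = start_state, 0.0
    for t in range(horizon):
        total += hyperbolic_discount(t, k) * reward(state)
        state = transition(state)
    return total

def make_option(delay, magnitude):
    """Toy world model: waiting states pay nothing until the reward state arrives."""
    transition = lambda s: s + 1                        # one time step per tick
    reward = lambda s: magnitude if s == delay else 0.0
    return transition, reward

smaller_sooner = simulate_option(*make_option(delay=2, magnitude=5.0), start_state=0, horizon=50)
larger_later = simulate_option(*make_option(delay=20, magnitude=10.0), start_state=0, horizon=50)
print("choose larger-later" if larger_later > smaller_sooner else "choose smaller-sooner")
```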

    A Reinforcement Learning Model of Precommitment in Decision Making

    Addiction and many other disorders are linked to impulsivity, where a suboptimal choice is preferred when it is immediately available. One solution to impulsivity is precommitment: constraining one's future to avoid being offered a suboptimal choice. A form of impulsivity can be measured experimentally by offering a choice between a smaller reward delivered sooner and a larger reward delivered later. Impulsive subjects are more likely to select the smaller-sooner choice; however, when offered an option to precommit, even impulsive subjects can precommit to the larger-later choice. To precommit or not is a decision between two conditions: (A) the original choice (smaller-sooner vs. larger-later), and (B) a new condition with only larger-later available. It has been observed that precommitment appears as a consequence of the preference reversal inherent in non-exponential delay-discounting. Here we show that most models of hyperbolic discounting cannot precommit, but a distributed model of hyperbolic discounting does precommit. Using this model, we find (1) faster discounters may be more or less likely than slow discounters to precommit, depending on the precommitment delay, (2) for a constant smaller-sooner vs. larger-later preference, a higher ratio of larger reward to smaller reward increases the probability of precommitment, and (3) precommitment is highly sensitive to the shape of the discount curve. These predictions imply that manipulations that alter the discount curve, such as diet or context, may qualitatively affect precommitment.
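    The preference reversal that motivates precommitment can be seen in a small hedged example below: a single-parameter hyperbolic discounter (simpler than the distributed model studied in the paper, with assumed magnitudes, delays, and k) prefers the larger-later option while both rewards are distant, but switches to the smaller-sooner one as it becomes imminent.

```python
# Illustrative sketch of the preference reversal behind precommitment, using a
# single-parameter hyperbolic discounter (the paper's distributed model differs);
# the rewards, delays, and k below are assumed values for the example.

def value(amount, delay, k=0.5):
    """Hyperbolically discounted value of a reward `delay` time units away."""
    return amount / (1.0 + k * max(delay, 0.0))

smaller, larger = 5.0, 10.0      # reward magnitudes
t_ss, t_ll = 10.0, 20.0          # delivery times of smaller-sooner / larger-later

def prefers_larger_later(now):
    """Preference evaluated from the point of view of time `now`."""
    return value(larger, t_ll - now) > value(smaller, t_ss - now)

# Early on the larger-later option dominates, so an agent offered precommitment
# at t = 0 would lock it in; close to the smaller-sooner delivery the preference
# reverses, which is exactly the temptation precommitment is meant to block.
print("prefers larger-later at t = 0:", prefers_larger_later(0.0))   # True
print("prefers larger-later at t = 9:", prefers_larger_later(9.0))   # False
```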

    Towards Continual Reinforcement Learning: A Review and Perspectives

    In this article, we aim to provide a literature review of different formulations and approaches to continual reinforcement learning (RL), also known as lifelong or non-stationary RL. We begin by discussing our perspective on why RL is a natural fit for studying continual learning. We then provide a taxonomy of different continual RL formulations and mathematically characterize the non-stationary dynamics of each setting. We go on to discuss evaluation of continual RL agents, providing an overview of benchmarks used in the literature and important metrics for understanding agent performance. Finally, we highlight open problems and challenges in bridging the gap between the current state of continual RL and findings in neuroscience. While still in its early days, the study of continual RL has the promise to develop better incremental reinforcement learners that can function in increasingly realistic applications where non-stationarity plays a vital role. These include applications such as those in the fields of healthcare, education, logistics, and robotics. (Preprint, 52 pages, 8 figures.)
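    The non-stationarity at the heart of continual RL can be illustrated with a toy example below (an assumed setup, not one of the review's benchmarks): a two-armed bandit whose reward probabilities switch partway through learning, tracked by a constant-step-size value estimate that never stops adapting.

```python
# Toy illustration (assumed setup, not from the review) of non-stationarity in
# continual RL: a two-armed bandit whose reward probabilities switch abruptly,
# tracked by constant-step-size action-value estimates.
import random

probs = [0.8, 0.2]                       # arm reward probabilities (will switch)
q = [0.0, 0.0]                           # running action-value estimates
alpha, epsilon = 0.1, 0.1                # constant step size keeps learning "continual"

for step in range(2000):
    if step == 1000:                     # abrupt change point: the world shifts
        probs = [0.2, 0.8]
    arm = random.randrange(2) if random.random() < epsilon else max(range(2), key=q.__getitem__)
    reward = 1.0 if random.random() < probs[arm] else 0.0
    q[arm] += alpha * (reward - q[arm])  # exponential recency weighting tracks the drift

print("estimates after the shift:", [round(v, 2) for v in q])  # follows the new best arm
```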

    Hierarchical models of goal-directed and automatic actions

    Decision-making processes behind instrumental actions can be divided into two categories: goal-directed actions and automatic actions. The structure of automatic actions, their interaction with goal-directed actions, and their behavioral and computational properties are the topics of the current thesis. We conceptualize the structure of automatic actions as sequences of actions that form a single response unit and are integrated within goal-directed processes in a hierarchical manner. We represent this hypothesis using the computational framework of reinforcement learning and develop a new normative computational model for the acquisition of action sequences and their hierarchical interaction with goal-directed processes. We develop a neurally plausible hypothesis for the role of the neuromodulator dopamine as a teaching signal for the acquisition of action sequences. We further explore the predictions of the proposed model in a two-stage decision-making task in humans, and we show that the proposed model has higher explanatory power than its alternatives. Finally, we translate the two-stage decision-making task to an experimental protocol in rats and show that, similar to humans, rats also use action sequences and engage in hierarchical decision-making. The results provide a new theoretical and experimental paradigm for conceptualizing and measuring the operation and interaction of goal-directed and automatic actions.
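    A minimal sketch of the hierarchical idea is given below (an assumed structure for illustration, not the thesis's model): the goal-directed controller chooses among primitive actions and learned action sequences, and a selected sequence then executes as a single response unit without further deliberation.

```python
# Minimal sketch (assumed structure, not the thesis's model) of action sequences
# as single response units inside a goal-directed controller: the planner chooses
# between primitives and a learned "chunk" that, once selected, runs open-loop.

PRIMITIVES = ["left", "right"]
CHUNKS = {"left-left": ["left", "left"]}           # acquired action sequence

def plan(q_values):
    """Goal-directed choice over primitives and chunks, by estimated value."""
    return max(q_values, key=q_values.get)

def execute(choice, environment_step):
    """Primitives are single steps; a chunk is emitted ballistically."""
    actions = CHUNKS.get(choice, [choice])
    return [environment_step(a) for a in actions]

# Hypothetical values in which the chunked sequence has become the best option.
q = {"left": 0.4, "right": 0.3, "left-left": 0.7}
choice = plan(q)
outcomes = execute(choice, environment_step=lambda a: f"took {a}")
print(choice, outcomes)   # left-left ['took left', 'took left']
```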
