Modelling Emotion Based Reward Valuation with Computational Reinforcement Learning
We show that computational reinforcement learning can model human decision making in the Iowa Gambling Task (IGT), a card game that tests decision making under uncertainty. In our experiments, we found that modulating learning rate decay in Q-learning enables the approximation of both the behaviour of normal subjects and that of subjects emotionally impaired by ventromedial prefrontal lesions. Outcomes observed in impaired subjects are modeled by high learning rate decay, while low learning rate decay replicates healthy subjects under otherwise identical conditions. The ventromedial prefrontal cortex has been associated with emotion-based reward valuation, and the value function in reinforcement learning provides an analogous assessment mechanism. Thus reinforcement learning can provide a good model for the role of emotional reward as a modulator of the learning rate.
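The learning-rate-decay mechanism described above can be sketched in code. The following is a minimal illustration, not the authors' model: the deck payoff parameters are simplified stand-ins for the actual IGT schedule, and the decay, epsilon, and alpha values are assumptions. High decay freezes value estimates after the first few draws (the "impaired" pattern), while low decay lets estimates converge toward the decks' true expected values.

```python
import random

# Illustrative four-deck bandit loosely patterned on the IGT (not the
# original schedule): decks 0-1 are "bad" (high reward, higher expected
# loss), decks 2-3 are "good" (lower reward, positive expected value).
REWARD = [100, 100, 50, 50]
LOSS   = [250, 1250, 50, 250]
LOSS_P = [0.5, 0.1, 0.5, 0.1]

def simulate(decay, trials=200, alpha0=0.5, eps=0.1, seed=1):
    """Run Q-learning with a decaying learning rate; return the fraction
    of draws taken from the good decks (2 and 3)."""
    rng = random.Random(seed)
    q = [0.0] * 4
    good_picks = 0
    for t in range(trials):
        alpha = alpha0 * (decay ** t)   # fast decay locks in early impressions
        if rng.random() < eps:          # epsilon-greedy exploration
            a = rng.randrange(4)
        else:
            a = max(range(4), key=q.__getitem__)
        payoff = REWARD[a] - (LOSS[a] if rng.random() < LOSS_P[a] else 0)
        q[a] += alpha * (payoff - q[a])  # incremental value update
        good_picks += a >= 2
    return good_picks / trials

# Low decay (healthy-like) should keep learning and come to prefer the good
# decks more often than high decay (impaired-like), which stops updating
# after the early high-reward draws from the bad decks.
healthy = simulate(decay=0.999)
impaired = simulate(decay=0.8)
```

The exact fractions depend on the seed and the illustrative payoffs; the qualitative pattern, not the numbers, is the point.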
A reinforcement learning based decision support system in textile manufacturing process
This paper introduces a reinforcement learning based decision support system
for the textile manufacturing process. An optimization problem of color
fading ozonation is formulated as a Markov Decision Process (MDP) in
terms of the tuple {S, A, P, R}. Q-learning is used to train an agent
through interaction with this environment by accumulating the reward R.
The application results show that the proposed MDP model expresses the
optimization problem of the textile manufacturing process well, and that
reinforcement learning can therefore support decision making in this
sector, with promising prospects.
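As a hedged sketch of the {S, A, P, R} formulation, the tabular Q-learning loop below uses a tiny three-state MDP of my own construction, not the paper's color fading ozonation model; the states, transitions, rewards, and hyperparameters are all illustrative.

```python
import random

# Toy MDP in {S, A, P, R} form: S and A are index ranges, P maps (s, a)
# to a list of (next_state, probability) pairs, R maps (s, a) to a reward.
# Action 1 always pays reward 1 and advances the state; action 0 pays 0.
S, A = 3, 2
P = {(s, a): [((s + a) % S, 1.0)] for s in range(S) for a in range(A)}
R = {(s, a): float(a) for s in range(S) for a in range(A)}

def q_learning(steps=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Train a Q-table by epsilon-greedy interaction with the MDP above."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(S) for a in range(A)}
    s = 0
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(A)
        else:
            a = max(range(A), key=lambda x: q[(s, x)])
        (s2, _), = P[(s, a)]  # transitions here are deterministic
        target = R[(s, a)] + gamma * max(q[(s2, x)] for x in range(A))
        q[(s, a)] += alpha * (target - q[(s, a)])  # Q-learning update
        s = s2
    return q

q = q_learning()
# The greedy policy should prefer action 1 (the rewarded action) everywhere.
policy = {s: max(range(A), key=lambda a: q[(s, a)]) for s in range(S)}
```

A real process-optimization MDP would replace the toy S, A, P, and R with process states, parameter settings, process dynamics, and quality-based rewards.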
Transcranial Direct Current stimulation (tDCS) of the anterior prefrontal cortex (aPFC) modulates reinforcement learning and decision-making under uncertainty: A double-blind crossover study
Reinforcement learning refers to the ability to acquire
information from the outcomes of prior choices (i.e.,
positive and negative) in order to make predictions about
the effects of future decisions and to adapt behaviour based
on past experience. The anterior prefrontal cortex (aPFC) is considered
to play a key role in the representation of event value,
reinforcement learning and decision-making. However,
causal evidence of the involvement of this area in these processes
has not yet been provided. The aim of the study was to
test the role of the aPFC in feedback processing,
reinforcement learning and decision-making under uncertainty.
Eighteen healthy individuals underwent three sessions of
tDCS over the prefrontal pole (anodal, cathodal, sham) during
a probabilistic learning (PL) task. In the PL task, participants
were invited to learn the covert probabilistic stimulus-outcome
association from positive and negative feedback in
order to choose the best option. Afterwards, a probabilistic
selection (PS) task was delivered to assess decisions based
on the stimulus-reward associations acquired in the PL task.
During cathodal tDCS, accuracy in the PL task was reduced
and participants were less prone to maintain their choice after
positive feedback or to change it after a negative one (i.e., win-stay
and lose-shift behavior). In addition, anodal tDCS affected
the subsequent PS task by reducing the ability to choose the
best alternative during hard probabilistic decisions. In conclusion,
the present study suggests a causal role of the aPFC in feedback
processing, trial-by-trial behavioral adaptation and decision-making
under uncertainty.
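The win-stay and lose-shift measures mentioned above can be computed directly from choice and feedback sequences. The sketch below is illustrative, not the study's analysis code; the function name and the toy trial data are assumptions.

```python
# Win-stay: fraction of post-win trials where the same choice is repeated.
# Lose-shift: fraction of post-loss trials where the choice is switched.
def win_stay_lose_shift(choices, feedback):
    """choices: sequence of chosen options; feedback: booleans (True = win)."""
    wins = stays = losses = shifts = 0
    for t in range(1, len(choices)):
        if feedback[t - 1]:
            wins += 1
            stays += choices[t] == choices[t - 1]
        else:
            losses += 1
            shifts += choices[t] != choices[t - 1]
    win_stay = stays / wins if wins else float("nan")
    lose_shift = shifts / losses if losses else float("nan")
    return win_stay, lose_shift

# Toy sequence: both wins are followed by a repeat, the loss by a switch,
# so both measures come out to 1.0 here.
ws, ls = win_stay_lose_shift(["A", "A", "B", "B"], [True, False, True, True])
```

Reduced values on both measures would correspond to the weaker feedback-driven adaptation reported under cathodal tDCS.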
Reinforcement Learning: A Survey
This paper surveys the field of reinforcement learning from a
computer-science perspective. It is written to be accessible to researchers
familiar with machine learning. Both the historical basis of the field and a
broad selection of current work are summarized. Reinforcement learning is the
problem faced by an agent that learns behavior through trial-and-error
interactions with a dynamic environment. The work described here has a
resemblance to work in psychology, but differs considerably in the details and
in the use of the word ``reinforcement.'' The paper discusses central issues of
reinforcement learning, including trading off exploration and exploitation,
establishing the foundations of the field via Markov decision theory, learning
from delayed reinforcement, constructing empirical models to accelerate
learning, making use of generalization and hierarchy, and coping with hidden
state. It concludes with a survey of some implemented systems and an assessment
of the practical utility of current methods for reinforcement learning.
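One classic approach to the exploration-exploitation trade-off the survey names as a central issue is Boltzmann (softmax) action selection, where a temperature parameter interpolates between greedy and uniform-random choice. The sketch below is illustrative; the Q-values and temperature are assumptions, not from the survey.

```python
import math
import random

def softmax_action(q_values, tau, rng):
    """Sample an action with probability proportional to exp(Q(a) / tau)."""
    weights = [math.exp(q / tau) for q in q_values]
    r = rng.random() * sum(weights)
    for a, w in enumerate(weights):
        r -= w
        if r <= 0:
            return a
    return len(q_values) - 1  # guard against floating-point round-off

rng = random.Random(0)
q = [1.0, 2.0, 0.5]
# Low temperature -> near-greedy: action 1 (highest Q) dominates.
# High temperature would flatten the distribution toward uniform.
greedy_picks = sum(softmax_action(q, tau=0.1, rng=rng) == 1 for _ in range(1000))
```

Annealing tau from high to low over training is one common way to shift from exploration toward exploitation.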