3,670 research outputs found
Risk-Sensitive Reinforcement Learning: A Constrained Optimization Viewpoint
The classic objective in a reinforcement learning (RL) problem is to find a
policy that minimizes, in expectation, a long-run objective such as the
infinite-horizon discounted or long-run average cost. In many practical
applications, optimizing the expected value alone is not sufficient, and it may
be necessary to include a risk measure in the optimization process, either as
the objective or as a constraint. Various risk measures have been proposed in
the literature, e.g., the mean-variance tradeoff, exponential utility,
percentile performance, value at risk (VaR), conditional value at risk (CVaR),
and prospect theory together with its later enhancement, cumulative prospect
theory. In this article,
we focus on the combination of risk criteria and reinforcement learning in a
constrained optimization framework, i.e., a setting where the goal is to find
a policy that optimizes the usual objective of infinite-horizon
discounted/average cost, while ensuring that an explicit risk constraint is
satisfied. We introduce the risk-constrained RL framework, cover popular risk
measures based on variance, conditional value-at-risk and cumulative prospect
theory, and present a template for a risk-sensitive RL algorithm. We survey
some of our recent work on this topic, covering problems encompassing
discounted cost, average cost, and stochastic shortest path settings, together
with the aforementioned risk measures in a constrained framework. This
non-exhaustive survey is aimed at giving a flavor of the challenges involved in
solving a risk-sensitive RL problem, and outlining some potential future
research directions.
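The constrained template the abstract describes can be illustrated with a small sketch: minimize the expected cost subject to a CVaR constraint via a Lagrangian relaxation, taking a score-function policy-gradient step on the policy and a dual-ascent step on the multiplier. The toy two-action problem, the sigmoid policy, and all names below are illustrative assumptions, not the survey's actual algorithm.

```python
import math
import random

# Hypothetical toy problem (an assumption for illustration): minimize E[cost]
# subject to CVaR_alpha(cost) <= BUDGET, via the Lagrangian
#   L(policy, lam) = E[cost] + lam * (CVaR_alpha(cost) - BUDGET).
random.seed(0)
ALPHA, BUDGET = 0.9, 2.0

def sample_cost(action):
    # action 0: low average cost but a heavy tail;
    # action 1: slightly costlier on average but nearly risk-free
    if action == 0:
        return 10.0 if random.random() < 0.1 else 0.5
    return 1.2 + random.uniform(-0.2, 0.2)

def pi1(pref):
    # probability of picking action 1 under a sigmoid (Bernoulli) policy
    return 1.0 / (1.0 + math.exp(-pref))

def cvar(costs, alpha):
    # empirical VaR and CVaR (Rockafellar-Uryasev tail-average form)
    s = sorted(costs)
    var = s[int(alpha * len(s))]
    tail = sum(max(c - var, 0.0) for c in costs)
    return var, var + tail / (len(costs) * (1.0 - alpha))

pref, lam = 0.0, 0.0
for _ in range(200):
    # roll out a batch under the current stochastic policy
    batch = []
    for _ in range(200):
        a = 1 if random.random() < pi1(pref) else 0
        batch.append((a, sample_cost(a)))
    costs = [c for _, c in batch]
    var, cv = cvar(costs, ALPHA)
    # score-function gradient of the Lagrangian w.r.t. the policy parameter
    grad = 0.0
    for a, c in batch:
        score = a - pi1(pref)  # d log pi(a) / d pref for the sigmoid policy
        grad += (c + lam * max(c - var, 0.0) / (1.0 - ALPHA)) * score
    grad /= len(batch)
    pref -= 0.5 * grad                          # primal descent on the policy
    lam = max(0.0, lam + 0.05 * (cv - BUDGET))  # dual ascent on the multiplier
```

Run to convergence, the policy shifts toward the low-risk action once the multiplier prices the heavy tail of action 0 above the CVaR budget; the two-timescale primal-descent/dual-ascent structure is the common thread across the constrained settings the survey covers.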
Pseudorehearsal in actor-critic agents with neural network function approximation
Catastrophic forgetting has a significant negative impact on reinforcement
learning. The purpose of this study is to investigate how pseudorehearsal can
change the performance of an actor-critic agent with neural-network function
approximation. We tested the agent in a pole-balancing task and compared
different pseudorehearsal approaches. We found that pseudorehearsal can assist
learning and decrease forgetting.
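The core idea of pseudorehearsal can be sketched in a few lines: before training on new data, generate random pseudo-inputs, record the current model's outputs on them, and interleave these pseudo-items with the new data so that old behaviour is rehearsed without storing old data. The toy one-parameter linear "network" below is an illustrative assumption, not the paper's actual actor-critic setup.

```python
import random

random.seed(1)

# A toy one-weight linear "network": y = w * x.
# Pretend w was learned on an old task whose target mapping is y = 2x.
w = 2.0

def model(x, w):
    return w * x

# Pseudorehearsal: build pseudo-items by querying the CURRENT model
# on random inputs, so no old training data needs to be stored.
pseudo = [(x, model(x, w)) for x in (random.uniform(-1, 1) for _ in range(20))]

# New task data with a conflicting target mapping: y = -x.
new_data = [(x, -x) for x in (random.uniform(-1, 1) for _ in range(20))]

def sgd(w, data, lr=0.1, epochs=50):
    # plain SGD on squared error (w*x - y)^2
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2.0 * (w * x - y) * x
    return w

w_plain = sgd(w, new_data)               # new task only: old mapping is lost
w_rehearsed = sgd(w, new_data + pseudo)  # new task + pseudo-items: a compromise
```

Training on the new task alone drives the weight to the new mapping (w near -1), forgetting the old one entirely, while mixing in pseudo-items pulls the solution back toward the old mapping; this is the forgetting-reduction effect the abstract reports, in miniature.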