Risk-Sensitive Reinforcement Learning: A Constrained Optimization Viewpoint
The classic objective in a reinforcement learning (RL) problem is to find a
policy that minimizes, in expectation, a long-run objective such as the
infinite-horizon discounted or long-run average cost. In many practical
applications, optimizing the expected value alone is not sufficient, and it may
be necessary to include a risk measure in the optimization process, either as
the objective or as a constraint. Various risk measures have been proposed in
the literature, e.g., mean-variance tradeoff, exponential utility, the
percentile performance, value at risk, conditional value at risk, prospect
theory and its later enhancement, cumulative prospect theory. In this article,
we focus on the combination of risk criteria and reinforcement learning in a
constrained optimization framework, i.e., a setting where the goal is to find a
policy that optimizes the usual objective of infinite-horizon
discounted/average cost, while ensuring that an explicit risk constraint is
satisfied. We introduce the risk-constrained RL framework, cover popular risk
measures based on variance, conditional value-at-risk and cumulative prospect
theory, and present a template for a risk-sensitive RL algorithm. We survey
some of our recent work on this topic, covering problems encompassing
discounted cost, average cost, and stochastic shortest path settings, together
with the aforementioned risk measures in a constrained framework. This
non-exhaustive survey is aimed at giving a flavor of the challenges involved in
solving a risk-sensitive RL problem, and outlining some potential future
research directions.
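The constrained formulation described above (optimize expected cost subject to a risk constraint) can be made concrete with a minimal sketch. The function names below (`cvar`, `lagrangian`) and the sample-based treatment are illustrative assumptions, not the authors' implementation; the Lagrangian relaxation is one standard way to fold a risk constraint into the objective.

```python
import numpy as np

def cvar(costs, alpha=0.95):
    """Conditional value-at-risk of a sample of costs:
    the mean of the worst (1 - alpha) fraction of outcomes."""
    var = np.quantile(costs, alpha)     # value-at-risk threshold
    tail = costs[costs >= var]          # costs at or beyond the VaR level
    return tail.mean()

def lagrangian(expected_cost, risk, threshold, lam):
    """Relax the constraint risk <= threshold into an unconstrained
    objective via a Lagrange multiplier lam >= 0."""
    return expected_cost + lam * (risk - threshold)

# Hypothetical usage: sampled trajectory costs from some policy.
costs = np.arange(100.0)
objective = lagrangian(costs.mean(), cvar(costs, 0.95), threshold=90.0, lam=1.0)
```

In an actual risk-constrained RL algorithm of the kind surveyed here, the policy parameters and the multiplier `lam` would typically be updated on separate timescales, with `lam` ascending on the constraint violation.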
Deep Reinforcement Learning for Gas Trading
Deep Reinforcement Learning (Deep RL) has been explored for a number of
applications in finance and stock trading. In this paper, we present a
practical implementation of Deep RL for trading natural gas futures contracts.
The Sharpe Ratio obtained exceeds benchmarks given by trend following and mean
reversion strategies as well as results reported in literature. Moreover, we
propose a simple but effective ensemble learning scheme for trading, which
significantly improves performance through enhanced model stability and
robustness as well as lower turnover and hence lower transaction cost. We
discuss the resulting Deep RL strategy in terms of model explainability,
trading frequency and risk measures.
Psychological factors affecting equine performance
For optimal individual performance within any equestrian discipline, horses must be in peak physical condition and have the correct psychological state. This review discusses the psychological factors that affect the performance of the horse and, in turn, identifies areas within the competition horse industry where current behavioral research and established behavioral modification techniques could be applied to further enhance the performance of animals. In particular, the role of affective processes underpinning temperament, mood and emotional reaction in determining discipline-specific performance is discussed. A comparison is then made between the training and the competition environment, and the review concludes with a discussion of how behavioral modification techniques and general husbandry can be used advantageously from a performance perspective.