No-Regret Exploration in Goal-Oriented Reinforcement Learning
Many popular reinforcement learning problems (e.g., navigation in a maze,
some Atari games, mountain car) are instances of the episodic setting under its
stochastic shortest path (SSP) formulation, where an agent has to achieve a
goal state while minimizing the cumulative cost. Despite the popularity of this
setting, the exploration-exploitation dilemma has been sparsely studied in
general SSP problems, with most of the theoretical literature focusing on
different problems (i.e., finite-horizon and infinite-horizon) or making the
restrictive loop-free SSP assumption (i.e., no state can be visited twice
during an episode). In this paper, we study the general SSP problem with no
assumption on its dynamics (some policies may actually never reach the goal).
We introduce UC-SSP, the first no-regret algorithm in this setting, and prove a
regret bound scaling as O(DS√(ADK)) after K episodes for any unknown SSP with
S states, A actions, positive costs and SSP-diameter D, defined as the smallest expected hitting
time from any starting state to the goal. We achieve this result by crafting a
novel stopping rule, such that UC-SSP may interrupt the current policy if it is
taking too long to achieve the goal and switch to alternative policies that are
designed to rapidly terminate the episode.
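As a rough illustration of the stopping-rule idea (not the paper's exact algorithm), the sketch below runs the current policy until an assumed pivot horizon is exceeded, then switches to a policy aimed at terminating the episode quickly; the environment interface and the names `optimistic_policy`, `fast_goal_policy`, and `pivot_horizon` are all hypothetical.

```python
# Illustrative sketch of an episode loop with a UC-SSP-style stopping rule.
# `env` is a hypothetical SSP environment (reset/step/is_goal), and
# `optimistic_policy`, `fast_goal_policy`, and `pivot_horizon` are placeholder
# names, not the paper's exact construction: the point is only that the agent
# abandons the current policy once an episode runs too long and falls back to
# a policy chosen to reach the goal quickly.

def run_episode(env, optimistic_policy, fast_goal_policy, pivot_horizon):
    """Run one SSP episode, switching policies if the goal takes too long."""
    state = env.reset()
    total_cost, steps = 0.0, 0
    policy = optimistic_policy            # minimizes estimated cumulative cost
    while not env.is_goal(state):
        if steps == pivot_horizon:        # stopping rule: episode is too long
            policy = fast_goal_policy     # switch to fast-termination policy
        state, cost = env.step(policy(state))
        total_cost += cost
        steps += 1
    return total_cost, steps
```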
Gambler's Ruin Bandit Problem
In this paper, we propose a new multi-armed bandit problem called the
Gambler's Ruin Bandit Problem (GRBP). In the GRBP, the learner proceeds in a
sequence of rounds, where each round is a Markov Decision Process (MDP) with
two actions (arms): a continuation action that moves the learner randomly over
the state space around the current state; and a terminal action that moves the
learner directly into one of the two terminal states (goal and dead-end state).
The current round ends when a terminal state is reached, and the learner incurs
a positive reward only when the goal state is reached. The objective of the
learner is to maximize its long-term reward (expected number of times the goal
state is reached), without any prior knowledge of the state transition
probabilities. We first prove a result on the form of the optimal policy for
the GRBP. Then, we define the regret of the learner with respect to an
omnipotent oracle, which acts optimally in each round, and prove that it
increases logarithmically over rounds. We also identify a condition under which
the learner's regret is bounded. A potential application of the GRBP is optimal
medical treatment assignment, in which the continuation action corresponds to a
conservative treatment and the terminal action corresponds to a risky treatment
such as surgery.
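A toy simulation can make the round structure concrete. The sketch below assumes illustrative dynamics (a ±1 random walk on states 0..N for the continuation action and a state-dependent success probability for the terminal action); these specifics are not from the paper, which treats the transition probabilities as unknown to the learner.

```python
import random

# Toy simulation of a single GRBP round under assumed dynamics: states 0..N
# with dead-end 0 and goal N, a +/-1 random walk for the continuation action,
# and a state-dependent success probability p_win(s) for the terminal action.
# These specifics are illustrative, not taken from the paper.

def play_round(policy, n_states=10, p_up=0.5, p_win=lambda s: s / 10):
    """Play one round; return 1 if the goal state is reached, else 0."""
    s = n_states // 2                         # start in the interior
    while 0 < s < n_states:                   # 0 = dead-end, n_states = goal
        if policy(s) == "continue":           # random walk around s
            s += 1 if random.random() < p_up else -1
        else:                                 # risky terminal action
            s = n_states if random.random() < p_win(s) else 0
    return 1 if s == n_states else 0

# Example: a (hypothetical) threshold policy that takes the risky terminal
# action only from sufficiently favorable states.
reward = play_round(lambda s: "terminal" if s >= 7 else "continue")
```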
Dynamic Weights in Multi-Objective Deep Reinforcement Learning
Many real-world decision problems are characterized by multiple conflicting
objectives which must be balanced based on their relative importance. In the
dynamic weights setting, the relative importance changes over time, and
specialized algorithms that deal with such change, such as a tabular
Reinforcement Learning (RL) algorithm by Natarajan and Tadepalli (2005), are
required. However, this earlier work is not feasible for RL settings that
necessitate the use of function approximators. We generalize across weight
changes and high-dimensional inputs by proposing a multi-objective Q-network
whose outputs are conditioned on the relative importance of objectives, and we
introduce Diverse Experience Replay (DER) to counter the inherent
non-stationarity of the dynamic weights setting. We perform an extensive
experimental evaluation and compare our methods to adapted algorithms from Deep
Multi-Task/Multi-Objective Reinforcement Learning and show that our proposed
network in combination with DER dominates these adapted algorithms across
weight change scenarios and problem domains.
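To make the conditioning idea concrete, here is a minimal PyTorch sketch (not the paper's exact architecture) of a Q-network that takes the current objective weights as an extra input and predicts one Q-value per action-objective pair, which the current weight vector then scalarizes; all layer sizes are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming the core idea only: concatenate the current
# objective-weight vector to the state features so a single network can
# generalize across weight changes. Architecture details are illustrative.

class ConditionedQNetwork(nn.Module):
    def __init__(self, state_dim, n_actions, n_objectives, hidden=128):
        super().__init__()
        self.n_actions, self.n_objectives = n_actions, n_objectives
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_objectives, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions * n_objectives),
        )

    def forward(self, state, weights):
        # Predict a Q-value per (action, objective) pair, conditioned on weights.
        q = self.net(torch.cat([state, weights], dim=-1))
        q = q.view(-1, self.n_actions, self.n_objectives)
        # Scalarize with the current weights to rank actions.
        return (q * weights.unsqueeze(1)).sum(-1)   # shape: (batch, n_actions)
```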