Inverse Reinforcement Learning in Large State Spaces via Function Approximation
This paper introduces a new method for inverse reinforcement learning in
large-scale and high-dimensional state spaces. To avoid solving the
computationally expensive reinforcement learning problems that arise in reward
learning, we propose a function approximation method that ensures the Bellman
Optimality Equation always holds, and then estimate a function that maximizes
the likelihood of the observed motion. The time complexity of the proposed
method is linearly proportional to the cardinality of the action set, so it
can handle large state spaces efficiently. We test the proposed method in a
simulated environment, and show that it is more accurate than existing methods
and scales significantly better. We also show that the proposed method can
extend many existing methods to high-dimensional state spaces. We then apply
the method to evaluate the effect of rehabilitative stimulations on patients
with spinal cord injuries based on the observed patient motions.
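To make the trick concrete, here is a minimal Python sketch of the general idea, not the authors' implementation: if the reward is defined through the Bellman Optimality Equation applied to a parameterized Q-function, the equation holds by construction, and learning reduces to maximizing the likelihood of demonstrated actions under a Boltzmann policy, at cost linear in |A| per state. The features, demonstrations, and step size below are all hypothetical.

```python
import numpy as np

# Minimal sketch (not the authors' code). Parameterize Q(s, a) = phi(s, a) @ w.
# If the reward is *defined* via the Bellman Optimality Equation,
#     r(s, a) := Q(s, a) - gamma * E[max_a' Q(s', a')],
# the equation holds by construction for any w, so no inner RL solve is
# needed. Fitting w then reduces to maximizing the likelihood of demonstrated
# actions under the Boltzmann policy pi(a|s) proportional to exp(Q(s, a)).

rng = np.random.default_rng(0)
n_states, n_actions, dim = 50, 4, 8
phi = rng.normal(size=(n_states, n_actions, dim))    # state-action features
demos = [(int(rng.integers(n_states)), int(rng.integers(n_actions)))
         for _ in range(200)]                        # observed (s, a) pairs

def boltzmann(q):
    p = np.exp(q - q.max(axis=1, keepdims=True))     # numerically stable
    return p / p.sum(axis=1, keepdims=True)

def grad_nll(w):
    """Gradient of the negative log-likelihood of the demonstrations."""
    q = phi @ w                                      # (n_states, n_actions)
    pi = boltzmann(q)                                # O(|A|) per state
    g = np.zeros(dim)
    for s, a in demos:
        g -= phi[s, a] - pi[s] @ phi[s]              # feature minus expectation
    return g

w = np.zeros(dim)
for _ in range(200):                                 # plain gradient descent
    w -= 0.01 * grad_nll(w)
```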
A Theory of Regularized Markov Decision Processes
Many recent successful (deep) reinforcement learning algorithms make use of
regularization, generally based on entropy or Kullback-Leibler divergence. We
propose a general theory of regularized Markov Decision Processes that
generalizes these approaches in two directions: we consider a larger class of
regularizers, and we consider the general modified policy iteration approach,
encompassing both policy iteration and value iteration. The core building
blocks of this theory are a notion of regularized Bellman operator and the
Legendre-Fenchel transform, a classical tool of convex optimization. This
approach allows for error propagation analyses of general algorithmic schemes
of which (possibly variants of) classical algorithms such as Trust Region
Policy Optimization, Soft Q-learning, Stochastic Actor Critic or Dynamic Policy
Programming are special cases. This also draws connections to proximal convex
optimization, especially to Mirror Descent.
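As a concrete instance of this theory, the following is a minimal sketch, not the paper's code: with the negative-entropy regularizer, the Legendre-Fenchel transform is a temperature-scaled log-sum-exp, and substituting it for the hard max of the classical Bellman operator yields a regularized value iteration, run here on a random MDP. All names, sizes, and constants below are assumptions.

```python
import numpy as np

# Minimal sketch: regularized value iteration on a random MDP. With the
# negative-entropy regularizer Omega(pi) = tau * sum_a pi(a) log pi(a), the
# Legendre-Fenchel transform is Omega*(q) = tau * log sum_a exp(q(a) / tau),
# so the regularized Bellman operator is
#     (T_Omega V)(s) = Omega*( r(s, .) + gamma * P(.|s, .) @ V ).
# As tau -> 0 this recovers the classical (hard-max) value iteration.

rng = np.random.default_rng(1)
n_states, n_actions, gamma, tau = 20, 3, 0.9, 0.1
r = rng.normal(size=(n_states, n_actions))
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transitions

def regularized_bellman(V):
    q = r + gamma * P @ V                     # (n_states, n_actions)
    m = q.max(axis=1, keepdims=True)
    # Omega* for negative entropy: temperature-scaled log-sum-exp ("soft" max)
    return tau * np.log(np.exp((q - m) / tau).sum(axis=1)) + m[:, 0]

V = np.zeros(n_states)
for _ in range(500):
    V = regularized_bellman(V)                # gamma-contraction, so converges

# The maximizing argument of the transform is the softmax policy:
q = r + gamma * P @ V
pi = np.exp((q - q.max(1, keepdims=True)) / tau)
pi /= pi.sum(1, keepdims=True)
```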
Difference of Convex Functions Programming Applied to Control with Expert Data
This paper reports applications of Difference of Convex functions (DC)
programming to Learning from Demonstrations (LfD) and Reinforcement Learning
(RL) with expert data. This is made possible because the norm of the Optimal
Bellman Residual (OBR), which is at the heart of many RL and LfD algorithms, is
DC. Improvement in performance is demonstrated on two specific algorithms,
namely Reward-regularized Classification for Apprenticeship Learning (RCAL) and
Reinforcement Learning with Expert Demonstrations (RLED), through experiments
on generic Markov Decision Processes (MDPs), called Garnets.
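For intuition on the optimization tool, here is an illustrative DCA iteration on a toy DC function rather than on the OBR norm itself; the decomposition f = g - h and the closed-form subproblem solution are specific to this toy example.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's RCAL/RLED algorithms): the DC
# Algorithm (DCA) minimizes f = g - h, with g and h both convex, by repeatedly
# linearizing the concave part -h and solving the convex subproblem
#     x_{k+1} = argmin_x  g(x) - <grad h(x_k), x>.
# Toy instance: f(x) = x**4 - x**2, with g(x) = x**4 and h(x) = x**2.

def dca(x0, iters=50):
    x = x0
    for _ in range(iters):
        slope = 2.0 * x                       # grad h(x_k)
        # Convex subproblem argmin_x x**4 - slope * x has the closed form
        # 4 x**3 = slope  =>  x = (slope / 4) ** (1/3)  (sign-preserving)
        x = np.sign(slope) * (abs(slope) / 4.0) ** (1.0 / 3.0)
    return x

print(dca(x0=0.3))  # converges to a critical point of f, here ~ 1/sqrt(2)
```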
Meta Inverse Reinforcement Learning via Maximum Reward Sharing for Human Motion Analysis
This work addresses the inverse reinforcement learning (IRL) problem in which
only a small number of demonstrations are available from a demonstrator for
each high-dimensional task, too few to estimate an accurate reward function.
Observing that each demonstrator has an inherent reward for each state and the
task-specific behaviors mainly depend on a small number of key states, we
propose a meta IRL algorithm that first models the reward function for each
task as a distribution conditioned on a baseline reward function shared by all
tasks and dependent only on the demonstrator, and then finds the most likely
reward function in the distribution that explains the task-specific behaviors.
We test the method in a simulated environment on path planning tasks with
limited demonstrations, and show that the accuracy of the learned reward
function is significantly improved. We also apply the method to analyze the
motion of a patient under rehabilitation.
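The following is an assumption-laden toy sketch of the modeling idea, not the authors' algorithm: each task's state reward is a shared baseline plus a task-specific offset, with a sparsity prior on the offset encoding that behaviors differ only at a few key states. The likelihood is a deliberately crude maximum-entropy-style stand-in over visited states; all names and the data are hypothetical.

```python
import numpy as np

# Toy model: r_t = r_base + delta_t, where r_base is the demonstrator's shared
# baseline reward and delta_t is a task-specific offset expected to be nonzero
# only at a few key states. We maximize, by gradient ascent,
#     sum_t loglik(demos_t | r_base + delta_t) - lam * sum_t ||delta_t||_1,
# with the stand-in likelihood p(s) proportional to exp(r_t(s)) over states.

rng = np.random.default_rng(2)
n_states, n_tasks, lam, lr = 30, 5, 0.5, 0.05
demos = [rng.integers(n_states, size=12) for _ in range(n_tasks)]  # visits

r_base = np.zeros(n_states)
delta = np.zeros((n_tasks, n_states))

for _ in range(300):
    g_base = np.zeros(n_states)
    for t, visits in enumerate(demos):
        r_t = r_base + delta[t]
        p = np.exp(r_t - r_t.max()); p /= p.sum()       # Boltzmann over states
        counts = np.bincount(visits, minlength=n_states)
        g = counts - len(visits) * p                    # d loglik / d r_t
        g_base += g
        delta[t] += lr * (g - lam * np.sign(delta[t]))  # L1-penalized step
    r_base += lr * g_base
```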