Learning policies for Markov decision processes from data
We consider the problem of learning a policy for a Markov decision process that is consistent with data captured on the state-action pairs followed by the policy. We assume that the policy belongs to a class of parameterized policies which are defined using features associated with the state-action pairs. The features are known a priori; however, only an unknown subset of them may be relevant. The policy parameters that correspond to an observed target policy are recovered using ℓ1-regularized logistic regression that best fits the observed state-action samples. We establish bounds on the difference between the average reward of the estimated and the original policy (regret) in terms of the generalization error and the ergodic coefficient of the underlying Markov chain. To that end, we combine sample complexity theory and sensitivity analysis of the stationary distribution of Markov chains. Our analysis suggests that to achieve regret within order O(√ε), it suffices to use a training sample size on the order of Ω(log n · poly(1/ε)), where n is the number of features. We demonstrate the effectiveness of our method on a synthetic robot navigation example.
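A minimal sketch of the recovery step on synthetic data, using scikit-learn's ℓ1-penalized logistic regression. Everything below (the feature matrix, the sparsity pattern, the softmax target policy, and the simplification to per-state features with per-action weights instead of state-action features) is an illustrative assumption, not the paper's actual setup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_samples, n_features, n_actions = 2000, 50, 4

    # Hypothetical state features; only a small subset is truly relevant.
    X = rng.normal(size=(n_samples, n_features))
    theta_true = np.zeros((n_features, n_actions))
    theta_true[:5] = rng.normal(size=(5, n_actions))  # 5 relevant features

    # Sample actions from the target softmax (Boltzmann) policy.
    logits = X @ theta_true
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    actions = np.array([rng.choice(n_actions, p=p) for p in probs])

    # The l1 penalty encourages recovery of the sparse relevant subset.
    clf = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000)
    clf.fit(X, actions)
    print("nonzero coefficients:", np.count_nonzero(clf.coef_))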
Optimizing the CVaR via Sampling
Conditional Value at Risk (CVaR) is a prominent risk measure that is being
used extensively in various domains. We develop a new formula for the gradient
of the CVaR in the form of a conditional expectation. Based on this formula, we
propose a novel sampling-based estimator for the CVaR gradient, in the spirit
of the likelihood-ratio method. We analyze the bias of the estimator, and prove
the convergence of a corresponding stochastic gradient descent algorithm to a
local CVaR optimum. Our method makes it possible to consider CVaR optimization in new
domains. As an example, we consider a reinforcement learning application, and
learn a risk-sensitive controller for the game of Tetris.
Comment: To appear in AAAI 2015
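The conditional-expectation form of the gradient suggests a direct sampling estimator: draw samples, locate the empirical VaR, and average score-function terms over the tail. The sketch below is an illustration on a Gaussian family X ~ N(theta, 1), chosen because the true CVaR gradient with respect to theta is exactly 1, which makes the estimator easy to sanity-check; it is not the paper's code.

    import numpy as np

    def cvar_gradient_estimate(theta, alpha=0.95, n=200_000, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.normal(loc=theta, scale=1.0, size=n)   # X ~ p_theta
        var_alpha = np.quantile(x, alpha)              # empirical alpha-VaR
        tail = x >= var_alpha                          # the (1 - alpha) tail
        score = x - theta                              # grad_theta log p_theta(x)
        # Conditional-expectation form of the CVaR gradient:
        #   grad_theta CVaR_alpha = E[ score * (X - VaR) | X >= VaR ]
        return np.mean(score[tail] * (x[tail] - var_alpha))

    # For X ~ N(theta, 1), CVaR_alpha(X) = theta + pdf(z_alpha) / (1 - alpha),
    # so the true gradient w.r.t. theta is exactly 1.
    print(cvar_gradient_estimate(theta=0.0))           # approx 1.0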
Solving Factored MDPs with Hybrid State and Action Variables
Efficient representations and solutions for large decision problems with
continuous and discrete variables are among the most important challenges faced
by the designers of automated decision support systems. In this paper, we
describe a novel hybrid factored Markov decision process (MDP) model that
allows for a compact representation of these problems, and a new hybrid
approximate linear programming (HALP) framework that permits their efficient
solution. The central idea of HALP is to approximate the optimal value
function by a linear combination of basis functions and optimize its weights by
linear programming. We analyze both theoretical and computational aspects of
this approach, and demonstrate its scale-up potential on several hybrid
optimization problems.
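For intuition, the sketch below solves the purely discrete special case of this idea with scipy: approximate V(s) by a linear combination of basis functions and choose the weights by a linear program whose constraints enforce V >= T V at every state-action pair. The tiny random MDP, the basis, and the state-relevance weights are illustrative assumptions; HALP itself additionally handles continuous state and action variables.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    nS, nA, k, gamma = 20, 3, 5, 0.95

    P = rng.dirichlet(np.ones(nS), size=(nA, nS))   # P[a, s, s'] transitions
    R = rng.uniform(size=(nA, nS))                  # R[a, s] rewards
    Phi = rng.normal(size=(nS, k))                  # basis functions phi_k(s)
    Phi[:, 0] = 1.0                                 # include a constant basis
    c = np.full(nS, 1.0 / nS)                       # state-relevance weights

    # ALP: minimize c^T Phi w  subject to, for every (s, a),
    #   (Phi w)(s) >= R(s, a) + gamma * sum_s' P(s' | s, a) (Phi w)(s').
    A_ub = np.vstack([-(Phi - gamma * P[a] @ Phi) for a in range(nA)])
    b_ub = np.concatenate([-R[a] for a in range(nA)])
    res = linprog(c @ Phi, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * k)
    V_approx = Phi @ res.x
    print("LP success:", res.success, "V range:", V_approx.min(), V_approx.max())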
Policy Gradients for CVaR-Constrained MDPs
We study a risk-constrained version of the stochastic shortest path (SSP)
problem, where the risk measure considered is Conditional Value-at-Risk (CVaR).
We propose two algorithms that obtain a locally risk-optimal policy by
employing four tools: stochastic approximation, mini-batches, policy gradients,
and importance sampling. Both algorithms incorporate a CVaR estimation
procedure along the lines of Bardou et al. [2009], which in turn is based on
the Rockafellar-Uryasev representation of CVaR, and utilize the likelihood-ratio
principle for estimating the gradient of the sum of one cost function
(the objective of the SSP) and the gradient of the CVaR of the sum of another
cost function (the constraint of the SSP). The algorithms differ in how they
approximate the CVaR estimates and the necessary gradients: the first uses
stochastic approximation, while the second employs mini-batches in the spirit
of Monte Carlo methods. We establish asymptotic convergence of both algorithms.
Further, since estimating CVaR is related to rare-event simulation, we
incorporate an importance-sampling-based variance reduction scheme into our
proposed algorithms.
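A stripped-down illustration of the constrained setup: minimize E[C(theta)] subject to CVaR_alpha(D(theta)) <= beta via the Lagrangian E[C] + lambda * (nu + E[(D - nu)^+] / (1 - alpha) - beta), with the VaR parameter nu from the Rockafellar-Uryasev representation updated alongside theta and lambda from mini-batches. Pathwise gradients on a toy quadratic stand in for the likelihood-ratio policy gradients of the actual algorithms; all constants below are assumptions.

    import numpy as np

    alpha, beta = 0.95, 2.0
    theta, nu, lam = 0.0, 0.0, 1.0        # decision var, VaR estimate, multiplier
    lr_theta, lr_nu, lr_lam = 1e-2, 1e-2, 1e-2
    rng = np.random.default_rng(0)

    for it in range(20_000):
        eps = rng.normal(size=64)         # mini-batch of noise samples
        C = (theta - 1.0) ** 2 + eps      # objective-cost samples
        D = theta + eps                   # constraint-cost samples
        tail = D >= nu                    # samples in the CVaR tail
        # Descend in theta and nu, ascend in lam (projected to lam >= 0):
        theta -= lr_theta * (2 * (theta - 1.0) + lam * tail.mean() / (1 - alpha))
        nu -= lr_nu * lam * (1.0 - tail.mean() / (1 - alpha))
        g = nu + np.maximum(D - nu, 0.0).mean() / (1 - alpha) - beta
        lam = max(0.0, lam + lr_lam * g)

    print(f"theta={theta:.3f}  nu={nu:.3f}  lam={lam:.3f}")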
Inverse Reinforcement Learning in Large State Spaces via Function Approximation
This paper introduces a new method for inverse reinforcement learning in
large-scale and high-dimensional state spaces. To avoid solving the
computationally expensive reinforcement learning problems in reward learning,
we propose a function approximation method to ensure that the Bellman
Optimality Equation always holds, and then estimate a function to maximize the
likelihood of the observed motion. The time complexity of the proposed method
is linear in the cardinality of the action set, so it can handle large state
spaces efficiently. We test the proposed method in a simulated environment and
show that it is more accurate than existing methods and scales significantly
better. We also show that the proposed method can extend many existing methods
to high-dimensional state spaces. We then apply the method to evaluating the
effect of rehabilitative stimulations on patients with spinal cord injuries,
based on observed patient motions.
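As a loose illustration of the likelihood-maximization step, the sketch below assumes the expert follows a Boltzmann policy pi(a|s) proportional to exp(Q_w(s, a)) with Q_w linear in known features, and fits w by gradient ascent on the log-likelihood of the observed state-action pairs. This collapses the paper's Bellman-consistency construction into a one-step approximation; the feature map and synthetic demonstrations are assumptions. Note that each update costs time linear in the number of actions, echoing the complexity claim above.

    import numpy as np

    def fit_reward_weights(feats, actions, lr=0.1, iters=500):
        # feats: (N, A, d) features of each observed state paired with each
        # action; actions: (N,) index of the action the expert actually took.
        N, A, d = feats.shape
        w = np.zeros(d)
        for _ in range(iters):
            q = feats @ w                                  # (N, A) Q-values
            p = np.exp(q - q.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)              # Boltzmann policy
            # Gradient of the mean log-likelihood:
            #   mean over data of  phi(s, a_obs) - E_{a ~ pi}[phi(s, a)]
            grad = (feats[np.arange(N), actions]
                    - (p[..., None] * feats).sum(axis=1)).mean(axis=0)
            w += lr * grad                                 # ascent step
        return w

    # Usage on synthetic demonstrations drawn from a known weight vector:
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(1000, 4, 6))
    w_true = rng.normal(size=6)
    logits = feats @ w_true
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    actions = np.array([rng.choice(4, p=p) for p in probs])
    w_hat = fit_reward_weights(feats, actions)
    cos = w_true @ w_hat / (np.linalg.norm(w_true) * np.linalg.norm(w_hat))
    print("cosine similarity to true weights:", cos)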