Markov Decision Processes with Risk-Sensitive Criteria: An Overview
The paper provides an overview of the theory and applications of
risk-sensitive Markov decision processes. The term 'risk-sensitive' refers here
to the use of the Optimized Certainty Equivalent as a means to measure
expectation and risk. This comprises the well-known entropic risk measure and
Conditional Value-at-Risk. We restrict our considerations to stationary
problems with an infinite time horizon. Conditions are given under which
optimal policies exist, and solution procedures are explained. We cover both
the case where the Optimized Certainty Equivalent is applied recursively and
the case where it is applied to the cumulated reward. Discounted as well as
non-discounted models are reviewed.
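The two risk measures named above are both instances of the Optimized Certainty Equivalent, OCE_u(X) = sup_eta { eta + E[u(X - eta)] } for a concave utility u with u(0) = 0. The following is a minimal sketch of that formula on simulated rewards; the function names and the grid-based supremum are illustrative choices, not the paper's method.

```python
import numpy as np

def oce(samples, u, etas):
    """Optimized Certainty Equivalent: sup over eta of eta + E[u(X - eta)].

    `samples` are draws of the reward X, `u` is a concave utility with
    u(0) = 0, and `etas` is a grid over which the supremum is approximated.
    """
    return max(eta + np.mean(u(samples - eta)) for eta in etas)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=100_000)
etas = np.linspace(-4.0, 4.0, 801)

# u(t) = min(t, 0) / alpha recovers the Rockafellar-Uryasev formula for
# Conditional Value-at-Risk of the lower alpha-tail of the reward.
alpha = 0.1
cvar = oce(x, lambda t: np.minimum(t, 0.0) / alpha, etas)

# u(t) = (1 - exp(-gamma * t)) / gamma recovers the entropic risk
# measure -(1/gamma) * log E[exp(-gamma * X)].
gamma = 1.0
entropic = oce(x, lambda t: (1.0 - np.exp(-gamma * t)) / gamma, etas)

print(cvar, entropic)  # both lie below the plain expectation E[X]
```

For standard normal rewards the entropic value is close to its closed form -gamma/2, and the CVaR value is close to the analytic lower-tail average, which illustrates that both criteria penalize risk relative to the plain expectation.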
On overfitting and asymptotic bias in batch reinforcement learning with partial observability
This paper provides an analysis of the tradeoff between asymptotic bias
(suboptimality with unlimited data) and overfitting (additional suboptimality
due to limited data) in the context of reinforcement learning with partial
observability. Our theoretical analysis formally shows that, while
potentially increasing the asymptotic bias, a smaller state representation
decreases the risk of overfitting. This analysis relies on expressing the
quality of a state representation by bounding L1 error terms of the associated
belief states. Theoretical results are empirically illustrated when the state
representation is a truncated history of observations, both on synthetic POMDPs
and on a large-scale POMDP in the context of smartgrids, with real-world data.
Finally, similarly to known results in the fully observable setting, we also
briefly discuss and empirically illustrate how using function approximators and
adapting the discount factor may enhance the tradeoff between asymptotic bias
and overfitting in the partially observable context.
Comment: Accepted at the Journal of Artificial Intelligence Research (JAIR), 31 pages
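The truncated-history representation studied in the abstract can be sketched in a few lines. The toy observation traces below are hypothetical; the point is only that a shorter window h aliases more histories (more asymptotic bias) while a longer window produces more distinct states (more risk of overfitting from limited data).

```python
def make_history_state(history, h):
    """Map a full observation history to the truncated last-h window.

    Smaller h gives a coarser state representation: fewer distinct
    states (less overfitting), but more aliasing of the underlying
    belief states (more asymptotic bias).
    """
    return tuple(history[-h:])

# Two hypothetical traces of discrete observations from some POMDP.
trace_a = [0, 1, 1, 0, 2, 1]
trace_b = [2, 0, 0, 1, 2, 1]

# With h = 2 the traces alias to the same state; with h = 4 they are
# distinguished.
assert make_history_state(trace_a, 2) == make_history_state(trace_b, 2)
assert make_history_state(trace_a, 4) != make_history_state(trace_b, 4)
```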
Energy Efficient Execution of POMDP Policies
Recent advances in planning techniques for partially observable Markov decision processes have focused on online search techniques and offline point-based value iteration. While these techniques allow practitioners to obtain policies for fairly large problems, they assume that a non-negligible amount of computation can be done between each decision point. In contrast, the recent proliferation of mobile and embedded devices has led to a surge of applications that could benefit from state-of-the-art planning techniques if they could operate under severe constraints on computational resources. To that end, we describe two techniques to compile policies into controllers that can be executed by a mere table lookup at each decision point. The first approach compiles policies induced by a set of alpha vectors (such as those obtained by point-based techniques) into approximately equivalent controllers, while the second approach performs a simulation to compile arbitrary policies into approximately equivalent controllers. We also describe an approach to compress controllers by removing redundant and dominated nodes, often yielding smaller and yet better controllers. Further compression and higher value can sometimes be obtained by considering stochastic controllers. The compilation and compression techniques are demonstrated on benchmark problems as well as a mobile application to help persons with Alzheimer's disease find their way. The battery consumption of several POMDP policies is compared against finite-state controllers learned using methods introduced in this paper. Experiments performed on the Nexus 4 phone show that finite-state controllers are the least battery-consuming POMDP policies.
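The table-lookup execution model described above can be made concrete with a toy controller. The two-node controller, its action and observation names, and the transition table below are all hypothetical (loosely in the spirit of the classic tiger benchmark), but the execution loop is exactly the constant-time lookup the abstract describes.

```python
# A finite-state controller is executed by pure table lookup: each node
# is labelled with an action, and each (node, observation) pair points
# to a successor node. Hypothetical two-node controller:
action_of = {0: "listen", 1: "open"}
next_node = {
    (0, "left"): 0,   # keep listening after an ambiguous observation
    (0, "right"): 1,  # confident enough: move to the acting node
    (1, "left"): 0,   # after acting, return to listening
    (1, "right"): 0,
}

def run_controller(start_node, observations):
    """Execute the controller: one O(1) dict lookup per decision point."""
    node, actions = start_node, []
    for obs in observations:
        actions.append(action_of[node])
        node = next_node[(node, obs)]
    return actions

print(run_controller(0, ["left", "right", "left"]))
```

Because each step touches only two small tables, execution costs no search or belief update at decision time, which is why such controllers are attractive on battery-constrained devices.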
Stability for Receding-horizon Stochastic Model Predictive Control
A stochastic model predictive control (SMPC) approach is presented for
discrete-time linear systems with arbitrary time-invariant probabilistic
uncertainties and additive Gaussian process noise. Closed-loop stability of the
SMPC approach is established by appropriate selection of the cost function.
Polynomial chaos is used for uncertainty propagation through system dynamics.
The performance of the SMPC approach is demonstrated using the Van de Vusse
reactions.
Comment: American Control Conference (ACC) 201
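Polynomial chaos propagates a probabilistic parameter uncertainty through the dynamics far more cheaply than plain Monte Carlo. The sketch below uses the non-intrusive (collocation) flavor on a hypothetical scalar system with one Gaussian uncertain gain; the concrete dynamics, distribution, and horizon are illustrative assumptions, not the paper's Van de Vusse setup.

```python
import numpy as np

# Hypothetical scalar dynamics x_{k+1} = a * x_k with x_0 = 1 and an
# uncertain time-invariant gain a ~ N(0.9, 0.05^2).
mu, sigma, n_steps = 0.9, 0.05, 10

# Probabilists' Gauss-Hermite quadrature: weight function exp(-t^2/2),
# weights sum to sqrt(2*pi), so divide by that to form an expectation.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
a_nodes = mu + sigma * nodes

# One deterministic simulation per quadrature node (here in closed
# form: x_N = a^N for x_0 = 1), then a weighted average.
x_final = a_nodes ** n_steps
mean_pce = np.dot(weights, x_final) / np.sqrt(2 * np.pi)

# Monte Carlo reference, using many more model evaluations.
rng = np.random.default_rng(1)
mean_mc = np.mean(rng.normal(mu, sigma, 200_000) ** n_steps)
print(mean_pce, mean_mc)
```

Eight quadrature nodes already integrate the degree-10 polynomial x_N(a) exactly, so the collocation estimate matches the 200,000-sample Monte Carlo mean with a tiny fraction of the simulations.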