Parameterized MDPs and Reinforcement Learning Problems -- A Maximum Entropy Principle Based Framework
We present a framework to address a class of sequential decision-making
problems. Our framework features learning the optimal control policy with
robustness to noisy data, determining the unknown state and action parameters,
and performing sensitivity analysis with respect to problem parameters. We
consider two broad categories of sequential decision-making problems, modelled
as infinite-horizon Markov Decision Processes (MDPs) with (and without) an
absorbing state. The central idea underlying our framework is to quantify
exploration in terms of the Shannon entropy of the trajectories under the MDP
and to determine the stochastic policy that maximizes it while guaranteeing a
low expected cost along a trajectory.
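To make this concrete, one way to write the underlying optimization (the notation here is ours for illustration, not taken verbatim from the paper) is as a constrained maximum-entropy problem over stochastic policies \(\pi\):

    \[
    \max_{\pi}\; H(P_{\pi}) \;=\; -\,\mathbb{E}_{\tau \sim P_{\pi}}\big[\log P_{\pi}(\tau)\big]
    \quad \text{s.t.} \quad
    \mathbb{E}_{\tau \sim P_{\pi}}\big[C(\tau)\big] \;\le\; c_0,
    \]

where \(P_{\pi}\) is the distribution over trajectories \(\tau\) induced by the policy, \(C(\tau)\) is the cumulative cost along a trajectory, and \(c_0\) is the admissible cost level. Equivalently, a Lagrange multiplier \(\beta > 0\) gives the unconstrained free-energy form \(\min_{\pi}\, \mathbb{E}_{\tau \sim P_{\pi}}[C(\tau)] - \tfrac{1}{\beta} H(P_{\pi})\); the symbols \(C\), \(c_0\), and \(\beta\) are assumptions of this sketch.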
The resulting policy enhances the quality of exploration early in the learning
process and consequently yields faster convergence and robust solutions even in
the presence of noisy data, as demonstrated in our comparisons with popular
algorithms such as Q-learning, Double Q-learning, and entropy-regularized Soft
Q-learning.
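As a point of reference for that comparison, the Soft Q-learning baseline replaces the hard minimum in the Bellman backup with a log-sum-exp "soft" minimum, which is what induces a stochastic, exploratory policy. A minimal tabular sketch (our illustration; the array layout, inverse temperature beta, and cost-minimization convention are assumptions, not the paper's code):

    import numpy as np

    def soft_q_update(Q, s, a, cost, s_next, alpha=0.1, gamma=0.95, beta=5.0):
        """One tabular soft Q-learning backup (cost-minimization convention)."""
        # Soft minimum over next actions: -(1/beta) * log(sum_a exp(-beta * Q[s', a])).
        v_next = -np.log(np.exp(-beta * Q[s_next]).sum()) / beta
        # Temporal-difference step toward cost plus discounted soft value.
        Q[s, a] += alpha * (cost + gamma * v_next - Q[s, a])
        return Q

    def soft_policy(Q, s, beta=5.0):
        """Boltzmann (maximum-entropy) policy induced by the soft Q-values."""
        logits = -beta * Q[s]
        p = np.exp(logits - logits.max())  # subtract max for numerical stability
        return p / p.sum()

    # Example usage with a hypothetical 10-state, 4-action environment:
    # Q = np.zeros((10, 4)); Q = soft_q_update(Q, s=0, a=1, cost=1.0, s_next=3)

As beta grows, the soft minimum approaches the hard minimum and the policy becomes greedy; small beta keeps the policy close to uniform and exploratory.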
The framework extends to the class of parameterized MDP and RL problems, in
which states and actions are parameter-dependent and the objective is to
determine the optimal parameters along with the corresponding optimal policy.
Here, the associated cost function can be non-convex with multiple poor local
minima.
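Schematically, and again in our own notation rather than the paper's, the parameterized setting nests the MDP above inside an outer parameter search:

    \[
    \min_{\theta}\;\Big[\,\min_{\pi}\; \mathbb{E}_{\tau \sim P_{\pi,\theta}}\big[C_{\theta}(\tau)\big]\Big],
    \]

where \(\theta\) collects the state and action parameters (for example, the small cell locations below) and \(C_{\theta}\) is the parameter-dependent trajectory cost; it is this outer landscape that can be non-convex with multiple poor local minima.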
Simulation results applied to a 5G small cell network problem demonstrate
successful determination of the communication routes and the small cell
locations. We also obtain sensitivity measures with respect to problem
parameters and robustness to noisy environment data.

Comment: 17 pages, 7 figures