Provably Efficient Model-Free Algorithm for MDPs with Peak Constraints
In the optimization of dynamic systems, the variables are typically subject to
constraints. Such problems can be modeled as a Constrained Markov Decision
Process (CMDP). This paper considers peak constraints, where the agent chooses
a policy that maximizes the long-term average reward while also satisfying the
constraints at every time step. We propose a model-free algorithm that converts
the CMDP into an unconstrained problem and solves it with a Q-learning-based
approach. We extend the concept of probably approximately correct (PAC)
learning to define a criterion of an ε-optimal policy. The proposed algorithm
is proved to achieve an ε-optimal policy with high probability once the number
of episodes exceeds a threshold that grows polynomially in the number of states
and actions, the number of steps per episode, and the number of constraint
functions. We note that this is the first PAC-style analysis for CMDPs with
peak constraints in which the transition probabilities are not known a priori.
We demonstrate the proposed algorithm on an energy harvesting problem, where it
outperforms the state of the art and performs close to the theoretical upper
bound of the studied optimization problem.
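The abstract does not spell out how the CMDP is converted to an unconstrained problem, but a common way to realize this idea is to fold a penalty for peak-constraint violations into the reward and then run standard tabular Q-learning. The sketch below illustrates that pattern on a small randomly generated MDP; the MDP itself, the penalty weight `LAMBDA`, and all other parameters are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

# Hypothetical toy MDP (all numbers below are illustrative, not from the paper).
rng = np.random.default_rng(0)
n_states, n_actions = 3, 2

# P[s, a] is a probability distribution over next states.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))   # reward r(s, a)
C = rng.uniform(-0.5, 0.5, size=(n_states, n_actions))  # constraint c(s, a) <= 0 required at every step

LAMBDA = 10.0  # penalty weight used to convert the CMDP to an unconstrained MDP
GAMMA = 0.95   # discount factor
ALPHA = 0.1    # learning rate
EPS = 0.1      # exploration rate

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(50_000):
    # Epsilon-greedy action selection.
    a = int(rng.integers(n_actions)) if rng.random() < EPS else int(np.argmax(Q[s]))
    s_next = int(rng.choice(n_states, p=P[s, a]))
    # Peak constraint handled as an immediate penalty: an action violating
    # c(s, a) <= 0 is punished at the step it is taken, so the greedy policy
    # learned from Q tends to avoid violating actions.
    penalized_r = R[s, a] - LAMBDA * max(0.0, C[s, a])
    Q[s, a] += ALPHA * (penalized_r + GAMMA * Q[s_next].max() - Q[s, a])
    s = s_next

policy = Q.argmax(axis=1)
print("greedy policy:", policy)
print("constraint values under policy:", C[np.arange(n_states), policy])
```

Note that this penalty-based reduction is only a sketch of the general idea; the paper's contribution is the PAC-style sample-complexity guarantee for such a model-free scheme, which a fixed penalty weight alone does not provide.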