We consider synthesis of control policies that maximize the probability of
satisfying given temporal logic specifications in unknown, stochastic
environments. We model the interaction between the system and its environment
as a Markov decision process (MDP) with initially unknown transition
probabilities. The solution we develop builds on the so-called model-based
probably approximately correct Markov decision process (PAC-MDP) methodology.
The algorithm attains an ε-approximately optimal policy with
probability 1−δ using samples (i.e. observations), time and space that
grow polynomially with the size of the MDP, the size of the automaton
expressing the temporal logic specification, 1/ε,
1/δ, and a finite time horizon. In this approach, the system
maintains a model of the initially unknown MDP, and constructs a product MDP
based on its learned model and the specification automaton that expresses the
temporal logic constraints. During execution, the policy is iteratively updated
using observations of the transitions taken by the system. The iteration
terminates in finitely many steps. With high probability, the resulting policy
is such that, for any state, the difference between the probability of
satisfying the specification under this policy and under an optimal policy is within a
predefined bound.
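To illustrate the core construction described above — composing the learned MDP model with the specification automaton and maximizing the probability of satisfaction over a finite horizon — the following Python sketch shows one possible realization. It is an illustration only, not the paper's implementation: it assumes a deterministic automaton with accepting states treated as absorbing, and all names (Mdp, Dfa, product_mdp, max_sat_prob) are hypothetical.

    # Minimal sketch: product-MDP construction from a learned MDP model and a
    # deterministic specification automaton, followed by finite-horizon value
    # iteration that maximizes the probability of reaching accepting states.
    # Assumptions: deterministic automaton; accepting states are absorbing.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Mdp:
        states: list    # MDP states
        actions: list   # available actions
        P: dict         # P[s][a] maps next state -> estimated transition probability

    @dataclass
    class Dfa:
        states: list
        init: object
        accepting: set
        delta: object   # delta(q, label) -> next automaton state
        label: object   # label(s) -> labeling of MDP state s

    def product_mdp(m: Mdp, a: Dfa) -> Mdp:
        """Compose the learned MDP model with the specification automaton."""
        prod_states = [(s, q) for s in m.states for q in a.states]
        P = defaultdict(dict)
        for (s, q) in prod_states:
            for act in m.actions:
                dist = {}
                for s2, p in m.P[s][act].items():
                    q2 = a.delta(q, a.label(s2))   # automaton tracks the run's labels
                    dist[(s2, q2)] = dist.get((s2, q2), 0.0) + p
                P[(s, q)][act] = dist
        return Mdp(prod_states, m.actions, dict(P))

    def max_sat_prob(prod: Mdp, accepting: set, horizon: int):
        """Finite-horizon value iteration: V[(s, q)] approximates the maximal
        probability of reaching an accepting product state within `horizon` steps."""
        V = {x: (1.0 if x[1] in accepting else 0.0) for x in prod.states}
        policy = {}
        for _ in range(horizon):
            V_new = {}
            for x in prod.states:
                if x[1] in accepting:          # accepting states absorb probability mass
                    V_new[x] = 1.0
                    continue
                best, best_a = 0.0, None
                for act, dist in prod.P[x].items():
                    val = sum(p * V[y] for y, p in dist.items())
                    if val > best:
                        best, best_a = val, act
                V_new[x], policy[x] = best, best_a
            V = V_new
        return V, policy

In this sketch, the transition probabilities in m.P would come from the learned model, so re-running product_mdp and max_sat_prob after each batch of observed transitions mirrors the iterative policy updates described above.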