Synthesis from linear temporal logic (LTL) specifications provides assured
controllers for systems operating in stochastic and potentially adversarial
environments. Automatic synthesis tools, however, require a model of the
environment to construct controllers. In this work, we introduce a model-free
reinforcement learning (RL) approach to derive controllers from given LTL
specifications even when the environment is completely unknown. We model the
problem as a stochastic game (SG) between the controller and the adversarial
environment; we then learn optimal control strategies that maximize the
probability of satisfying the LTL specifications against the worst-case
environment behavior. We first construct a product game using the deterministic
parity automaton (DPA) translated from the given LTL specification. By deriving
distinct rewards and discount factors from the acceptance condition of the DPA,
we reduce the maximization of the worst-case probability of satisfying the LTL
specification to the maximization of a discounted reward objective in the
product game; this enables the use of model-free RL algorithms to learn an
optimal controller strategy. To deal with the common scalability problems that
arise when the number of sets defining the acceptance condition of the DPA
(usually referred to as colors) is large, we propose a lazy color generation method where
distinct rewards and discount factors are utilized only when needed, and an
approximate method where the controller eventually focuses on only one color.
In several case studies, we show that our approach is scalable to a wide range
of LTL formulas, significantly outperforming existing methods for learning
controllers from LTL specifications in SGs.
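
To make the reduction concrete, the following is a minimal, hedged sketch of how color-dependent rewards and discount factors could be combined with a tabular, model-free learner on a turn-based product game. The toy game, the color assignment, the discount values, and the reward scheme (even colors earn a small reward 1 - gamma_c with a color-specific discount) are illustrative assumptions for exposition only, not the exact construction or reward/discount derivation used in the paper.

    import random
    from collections import defaultdict

    # Toy product game: 4 states, 2 actions; even-numbered states belong to the
    # controller (maximizer), odd-numbered states to the environment (minimizer).
    N_STATES, N_ACTIONS = 4, 2
    COLORS = {0: 2, 1: 1, 2: 0, 3: 1}       # hypothetical DPA colors per state
    GAMMAS = {0: 0.999, 1: 0.99, 2: 0.99}   # hypothetical discount factor per color

    def step(state, action):
        """Random toy dynamics; a real product game would come from SG x DPA."""
        return (state + action + random.randint(0, 1)) % N_STATES

    def reward_and_discount(color):
        """One plausible scheme: even ('good') colors yield reward 1 - gamma_c
        discounted by gamma_c; odd colors yield zero reward."""
        gamma_c = GAMMAS[color]
        return (1.0 - gamma_c, gamma_c) if color % 2 == 0 else (0.0, gamma_c)

    Q = defaultdict(lambda: [0.0] * N_ACTIONS)
    alpha, eps = 0.1, 0.2

    def value(state):
        """Max over actions in controller-owned states, min in environment-owned ones."""
        return max(Q[state]) if state % 2 == 0 else min(Q[state])

    state = 0
    for _ in range(50_000):
        # epsilon-greedy action for whichever player owns the current state
        if random.random() < eps:
            action = random.randrange(N_ACTIONS)
        else:
            best = max if state % 2 == 0 else min
            action = best(range(N_ACTIONS), key=lambda a: Q[state][a])
        nxt = step(state, action)
        r, gamma = reward_and_discount(COLORS[nxt])
        # Q-learning update with a state-dependent (color-dependent) discount
        Q[state][action] += alpha * (r + gamma * value(nxt) - Q[state][action])
        state = nxt

    print({s: [round(q, 3) for q in Q[s]] for s in range(N_STATES)})

In this sketch the controller's Q-values are driven toward states whose colors carry reward, while the environment's minimizing choices push in the opposite direction; a lazy variant in the spirit of the abstract would introduce the per-color rewards and discounts only once the corresponding colors are actually encountered during learning.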