Formal Policy Synthesis for Continuous-Space Systems via Reinforcement Learning
This paper studies satisfaction of temporal properties on unknown stochastic
processes with continuous state spaces. We show how reinforcement learning
(RL) can be applied to compute finite-memory deterministic policies using
only sampled paths of the stochastic process. We address
properties expressed in linear temporal logic (LTL) and use their automaton
representation to give a path-dependent reward function maximised via the RL
algorithm. We develop the required assumptions and theories for the convergence
of the learned policy to the optimal policy in the continuous state space. To
improve the performance of the learning on the constructed sparse reward
function, we propose a sequential learning procedure based on a sequence of
labelling functions obtained from the positive normal form of the LTL
specification. We use this procedure to guide the RL algorithm towards a policy
that converges to an optimal policy under suitable assumptions on the process.
We demonstrate the approach on a 4-dimensional cart-pole system and a
6-dimensional boat-driving problem.

Comment: This is the extended version of the paper accepted at the 16th
International Conference on integrated Formal Methods (iFM).
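As an illustration only (this is not the paper's construction, which targets continuous state spaces and comes with convergence guarantees), the core idea of automaton-guided rewards can be sketched in a toy discrete setting: a hand-written DFA tracks a reach-avoid property over a hypothetical 1-D world, the RL state is the product of world position and automaton state, and a reward of 1 is issued exactly when the automaton accepts. The environment, labelling function, and all names are invented for this sketch.

```python
import random

# Toy DFA tracking "eventually a, then eventually b, while never c"
# (a hand-written stand-in for the automaton of an LTL specification).
DFA_INIT, DFA_MID, DFA_ACC, DFA_TRAP = 0, 1, 2, 3

def dfa_step(q, labels):
    """Advance the automaton on the set of atomic propositions observed."""
    if "c" in labels:
        return DFA_TRAP
    if q == DFA_INIT and "a" in labels:
        return DFA_MID
    if q == DFA_MID and "b" in labels:
        return DFA_ACC
    return q

# Hypothetical 1-D world: positions 0..6. The labelling function marks
# "a" at position 1, "b" at position 5, and the unsafe region "c" at 6.
def labelling(pos):
    return {1: {"a"}, 5: {"b"}, 6: {"c"}}.get(pos, set())

ACTIONS = (-1, +1)

def env_step(pos, action):
    return max(0, min(6, pos + action))

def qval(Q, pos, q, act):
    # Optimistic default: unvisited state-action pairs look attractive,
    # which drives systematic exploration of the sparse-reward task.
    return Q.get((pos, q, act), 1.0)

def q_learning(episodes=2000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning on the product of world state and automaton state."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        pos, q = 0, DFA_INIT
        for _ in range(30):
            if rng.random() < eps:
                act = rng.choice(ACTIONS)
            else:
                act = max(ACTIONS, key=lambda a: qval(Q, pos, q, a))
            npos = env_step(pos, act)
            nq = dfa_step(q, labelling(npos))
            # Path-dependent reward: 1 only when the automaton accepts.
            r = 1.0 if nq == DFA_ACC else 0.0
            done = nq in (DFA_ACC, DFA_TRAP)
            best_next = 0.0 if done else max(qval(Q, npos, nq, a) for a in ACTIONS)
            old = qval(Q, pos, q, act)
            Q[(pos, q, act)] = old + alpha * (r + gamma * best_next - old)
            pos, q = npos, nq
            if done:
                break
    return Q

def greedy_rollout(Q, max_steps=30):
    """Follow the learned greedy policy; report whether the DFA accepts."""
    pos, q = 0, DFA_INIT
    for _ in range(max_steps):
        act = max(ACTIONS, key=lambda a: qval(Q, pos, q, a))
        pos = env_step(pos, act)
        q = dfa_step(q, labelling(pos))
        if q in (DFA_ACC, DFA_TRAP):
            break
    return q == DFA_ACC
```

Because the reward depends on the automaton state, not just the current world state, the learned policy is effectively finite-memory: the same position can demand different actions depending on how far the specification has progressed.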