Reward engineering is an important aspect of reinforcement learning. How well
the user's intentions are captured by the reward function can significantly
affect the learning outcome. Current methods rely on
manually crafted reward functions that often require parameter tuning to obtain
the desired behavior. This operation can be expensive when exploration requires
systems to interact with the physical world. In this paper, we explore the use
of temporal logic (TL) to specify tasks in reinforcement learning. A TL formula
can be translated into a real-valued function that measures its level of
satisfaction against a trajectory. We take advantage of this function and
propose temporal logic policy search (TLPS), a model-free learning technique
that finds a policy that satisfies the TL specification. A set of simulated
experiments is conducted to evaluate the proposed approach.
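
One common way to obtain such a real-valued satisfaction measure, sketched here only for illustration under the assumption of standard signal temporal logic robustness semantics (the specific TL fragment used in the paper may differ), is the recursive definition
\begin{align*}
\rho(x, f(x_t) < c, t) &= c - f(x_t) \\
\rho(x, \lnot\varphi, t) &= -\,\rho(x, \varphi, t) \\
\rho(x, \varphi \land \psi, t) &= \min\bigl(\rho(x, \varphi, t),\ \rho(x, \psi, t)\bigr) \\
\rho(x, \Box_{[a,b]}\,\varphi, t) &= \min_{t' \in [t+a,\, t+b]} \rho(x, \varphi, t') \\
\rho(x, \Diamond_{[a,b]}\,\varphi, t) &= \max_{t' \in [t+a,\, t+b]} \rho(x, \varphi, t')
\end{align*}
where $x$ is a trajectory and $\varphi, \psi$ are formulas. A positive value of $\rho$ indicates satisfaction, and its magnitude gives the margin by which the formula is satisfied or violated, which is what makes such a function usable as a reward signal.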