Robust Satisfaction of Temporal Logic Specifications via Reinforcement Learning
We consider the problem of steering a system with unknown, stochastic dynamics to satisfy a rich, temporally layered task given as a signal temporal logic formula. We represent the system as a finite-memory Markov decision process with unknown transition probabilities, whose states are built from a partition of the state space. We present provably convergent reinforcement learning algorithms to maximize the probability of satisfying a given specification and to maximize the average expected robustness, i.e., a measure of how strongly the formula is satisfied. Robustness allows us to quantify progress towards satisfying a given specification. We demonstrate via a pair of robot navigation simulation case studies that, due to this quantification of progress towards satisfaction, reinforcement learning with robustness maximization performs better than probability maximization in terms of both probability of satisfaction and expected robustness, even with a low number of training examples.
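A minimal sketch of the quantitative (robustness) semantics the abstract optimizes, for an illustrative reach-avoid formula; the regions, radii, and trajectory below are invented for illustration, not taken from the paper:

```python
import numpy as np

# STL quantitative semantics for an illustrative reach-avoid formula:
#   phi = F[0,T] (||x - goal|| < r_g)  AND  G[0,T] (||x - obs|| > r_o)
# Positive robustness means phi holds; its magnitude is the margin.
# Regions, radii, and the trajectory are invented for illustration.

def robustness_reach_avoid(traj, goal, r_g, obs, r_o):
    d_goal = np.linalg.norm(traj - goal, axis=1)
    d_obs = np.linalg.norm(traj - obs, axis=1)
    rho_reach = np.max(r_g - d_goal)   # "eventually": max over time
    rho_avoid = np.min(d_obs - r_o)    # "always": min over time
    return min(rho_reach, rho_avoid)   # conjunction: min of subformulas

traj = np.array([[0.0, 0.0], [0.5, 0.4], [1.0, 0.9]])
print(robustness_reach_avoid(traj, goal=np.array([1.0, 1.0]), r_g=0.3,
                             obs=np.array([0.5, 1.0]), r_o=0.2))
```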
Certified Reinforcement Learning with Logic Guidance
This paper proposes the first model-free Reinforcement Learning (RL)
framework to synthesise policies for unknown, and continuous-state Markov
Decision Processes (MDPs), such that a given linear temporal property is
satisfied. We convert the given property into a Limit-Deterministic Büchi
Automaton (LDBA), namely a finite-state machine expressing the property.
Exploiting the structure of the LDBA, we shape a synchronous reward function
on-the-fly, so that an RL algorithm can synthesise a policy resulting in traces
that probabilistically satisfy the linear temporal property. This probability
(certificate) is also calculated in parallel with policy learning when the
state space of the MDP is finite: as such, the RL algorithm produces a policy
that is certified with respect to the property. Under the assumption of finite
state space, theoretical guarantees are provided on the convergence of the RL
algorithm to an optimal policy, maximising the above probability. We also show
that our method produces "best available" control policies when the logical
property cannot be satisfied. In the general case of a continuous state space,
we propose a neural network architecture for RL and we empirically show that
the algorithm finds satisfying policies, if there exist such policies. The
performance of the proposed framework is evaluated via a set of numerical
examples and benchmarks, where we observe an improvement of one order of
magnitude in the number of iterations required for the policy synthesis,
compared to existing approaches whenever available.
Comment: This article draws from arXiv:1801.08099, arXiv:1809.0782
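A minimal sketch of the reward-shaping idea, assuming a toy chain MDP and the property "eventually reach cell 3"; for brevity the automaton below is a plain reachability automaton, whereas the paper handles full LDBAs with epsilon-transitions:

```python
import random

# Toy chain MDP with noisy moves; property: "eventually reach cell 3".
# The two-state automaton below stands in for a full LDBA to keep the
# sketch short; everything here is invented for illustration.
N_CELLS, ACTIONS = 4, (-1, +1)

def env_step(s, a):                        # unknown, stochastic dynamics
    s2 = s + (a if random.random() < 0.8 else -a)
    return max(0, min(N_CELLS - 1, s2))

def aut_step(q, s):                        # q = 1 is the accepting state
    return 1 if (q == 1 or s == 3) else 0

Q = {}                                     # Q-table on the product space
def qval(s, q, a):
    return Q.get((s, q, a), 0.0)

alpha, gamma, explore = 0.1, 0.95, 0.2
for _ in range(2000):
    s, q = 0, 0
    for _ in range(30):
        if random.random() < explore:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: qval(s, q, a))
        s2 = env_step(s, a)
        q2 = aut_step(q, s2)
        r = 1.0 if q2 == 1 else 0.0        # reward from accepting states
        target = r + gamma * max(qval(s2, q2, b) for b in ACTIONS)
        Q[(s, q, a)] = (1 - alpha) * qval(s, q, a) + alpha * target
        s, q = s2, q2

# Greedy policy before the property is satisfied (automaton state 0)
print({s: max(ACTIONS, key=lambda a: qval(s, 0, a)) for s in range(N_CELLS)})
```

Because the reward depends only on the current automaton state, it can be shaped on the fly without knowing the MDP's transition probabilities, which is what makes the approach model-free.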
Probabilistic Plan Synthesis for Coupled Multi-Agent Systems
This paper presents a fully automated procedure for controller synthesis for
multi-agent systems under the presence of uncertainties. We model the motion of
each of the agents in the environment as a Markov Decision Process (MDP)
and we assign to each agent one individual high-level formula given in
Probabilistic Computation Tree Logic (PCTL). Each agent may need to
collaborate with other agents in order to achieve a task. The collaboration is
imposed by sharing actions between the agents. We aim to design local control
policies such that each agent satisfies its individual PCTL formula. The
proposed algorithm builds on clustering the agents, MDP products construction
and controller policies design. We show that our approach has better
computational complexity than the centralized case, which traditionally suffers
from very high computational demands.
Comment: IFAC WC 2017, Toulouse, France
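A minimal sketch of the core primitive behind PCTL synthesis: value iteration for the maximum probability of reaching a goal set in an MDP, with the maximizing policy read off at the fixed point. The toy MDP is invented for illustration; the paper composes such MDPs into products and clusters collaborating agents:

```python
# transitions[s][a] = list of (next_state, probability); a toy MDP.
transitions = {
    0: {"a": [(0, 0.3), (1, 0.7)], "b": [(2, 1.0)]},
    1: {"a": [(3, 0.9), (2, 0.1)]},
    2: {"a": [(2, 1.0)]},               # trap state
    3: {"a": [(3, 1.0)]},               # goal state
}
goal = {3}

# Value iteration for max reachability probability, p[s] = Pmax[F goal].
p = {s: (1.0 if s in goal else 0.0) for s in transitions}
for _ in range(100):
    p = {s: 1.0 if s in goal else
         max(sum(pr * p[s2] for s2, pr in succ)
             for succ in transitions[s].values())
         for s in transitions}

# Maximizing policy: pick the action achieving the fixed-point value.
policy = {s: max(transitions[s],
                 key=lambda a: sum(pr * p[s2] for s2, pr in transitions[s][a]))
          for s in transitions if s not in goal}
print(p, policy)
```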
From Uncertainty Data to Robust Policies for Temporal Logic Planning
We consider the problem of synthesizing robust disturbance feedback policies
for systems performing complex tasks. We formulate the tasks as linear temporal
logic specifications and encode them into an optimization framework via
mixed-integer constraints. Both the system dynamics and the specifications are
known but affected by uncertainty. The distribution of the uncertainty is
unknown; however, realizations of it can be obtained. We introduce a data-driven
approach where the constraints are fulfilled for a set of realizations and
provide probabilistic generalization guarantees as a function of the number of
considered realizations. We use separate chance constraints for the
satisfaction of the specification and operational constraints. This allows us
to quantify their violation probabilities independently. We compute disturbance
feedback policies as solutions of mixed-integer linear or quadratic
optimization problems. By using feedback we can exploit information of past
realizations and provide feasibility for a wider range of situations compared
to static input sequences. We demonstrate the proposed method on two robust
motion-planning case studies for autonomous driving.
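A minimal sketch of the kind of generalization guarantee the abstract refers to, using the classical convex scenario-approach bound of Campi and Garatti; transferring it to the paper's mixed-integer setting needs extra care, so this is shown only for intuition:

```python
from math import comb

# Scenario-approach bound (convex case): with confidence at least
# 1 - beta, the violation probability of a solution with d decision
# variables computed from N i.i.d. realizations is at most eps, where
#   sum_{i=0}^{d-1} C(N, i) eps^i (1 - eps)^(N - i) <= beta.

def violation_bound(N, d, beta, tol=1e-9):
    """Smallest eps satisfying the scenario bound, found by bisection."""
    def tail(eps):
        return sum(comb(N, i) * eps**i * (1 - eps)**(N - i)
                   for i in range(d))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:               # tail(eps) is decreasing in eps
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if tail(mid) <= beta else (mid, hi)
    return hi

# e.g. 500 disturbance realizations, 10 decision variables, 99% confidence
print(violation_bound(N=500, d=10, beta=0.01))
```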
A Learning Based Approach to Control Synthesis of Markov Decision Processes for Linear Temporal Logic Specifications
We propose to synthesize a control policy for a Markov decision process (MDP)
such that the resulting traces of the MDP satisfy a linear temporal logic (LTL)
property. We construct a product MDP that incorporates a deterministic Rabin
automaton generated from the desired LTL property. The reward function of the
product MDP is defined from the acceptance condition of the Rabin automaton.
This construction allows us to apply techniques from learning theory to the
problem of synthesis for LTL specifications even when the transition
probabilities are not known a priori. We prove that our method is guaranteed to
find a controller that satisfies the LTL property with probability one if such
a policy exists, and we suggest empirically with a case study in traffic
control that our method produces reasonable control strategies even when the
LTL property cannot be satisfied with probability one.
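A minimal sketch of the product-MDP construction the abstract describes, assuming a toy MDP, labeling, and a two-state automaton for "F goal"; a real deterministic Rabin automaton would be generated from the LTL formula by a tool such as ltl2dstar, and the paper derives the reward from the full Rabin acceptance condition:

```python
from itertools import product

# Toy MDP: transitions[s][a] = [(next_state, probability), ...]
transitions = {0: {"a": [(0, 0.5), (1, 0.5)], "b": [(2, 1.0)]},
               1: {"a": [(1, 1.0)]},
               2: {"a": [(2, 1.0)]}}
label = {0: "init", 1: "goal", 2: "trap"}   # atomic propositions

def dra_step(q, ap):                 # toy 2-state automaton for "F goal"
    return 1 if (q == 1 or ap == "goal") else 0

accept_inf = {1}                     # Inf set of a Rabin pair (Fin empty)

def product_mdp():
    """Pair each MDP state with an automaton state tracking the run."""
    prod = {}
    for s, q in product(transitions, (0, 1)):
        prod[(s, q)] = {a: [((s2, dra_step(q, label[s2])), p)
                            for s2, p in succ]
                        for a, succ in transitions[s].items()}
    return prod

# Reward read off the acceptance condition: here, visits to Inf states.
reward = {sq: (1.0 if sq[1] in accept_inf else 0.0) for sq in product_mdp()}
print(product_mdp())
print(reward)
```

Running a learning algorithm on this product with such a reward is what lets the method synthesize controllers without knowing the transition probabilities a priori.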