Quantitative Approximation of the Probability Distribution of a Markov Process by Formal Abstractions
The goal of this work is to formally abstract a Markov process evolving in
discrete time over a general state space as a finite-state Markov chain, with
the objective of precisely approximating its state probability distribution in
time, which allows for its faster, approximate computation via that of the
Markov chain. The approach is based on formal abstractions and employs an
arbitrary finite partition of the state space of the Markov process, and the
computation of average transition probabilities between partition sets. The
abstraction technique is formal, in that it comes with guarantees on the
introduced approximation that depend on the diameters of the partition sets: as
such, the error can be tuned at will. Further, in the case of Markov processes with
unbounded state spaces, a procedure for precisely truncating the state space
within a compact set is provided, together with an error bound that depends on
the asymptotic properties of the transition kernel of the original process. The
overall abstraction algorithm, which practically hinges on piecewise constant
approximations of the density functions of the Markov process, is extended to
higher-order function approximations: these can lead to improved error bounds
and associated lower computational requirements. The approach is practically
tested to compute probabilistic invariance of the Markov process under study,
and is compared to a known alternative approach from the literature.
Comment: 29 pages, Journal of Logical Methods in Computer Science
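As a rough illustration of the abstraction described above, the sketch below builds a finite-state Markov chain from a one-dimensional process by gridding a truncated interval and treating the transition density as piecewise constant over the partition sets. The names (`kernel_density`, `abstract_markov_chain`, `propagate`), the uniform grid, and the midpoint quadrature are assumptions made for the sketch, not the paper's exact construction.

```python
import numpy as np

def abstract_markov_chain(kernel_density, low, high, n_bins):
    """Abstract a 1-D Markov process as a finite-state Markov chain.

    kernel_density(x, y): transition density t(y | x) of the original process
    [low, high]:          truncated (compact) portion of the state space
    n_bins:               number of partition sets (uniform grid)
    """
    edges = np.linspace(low, high, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]

    # Piecewise-constant approximation: evaluate the density at a representative
    # point of each source set and integrate over each target set (midpoint rule).
    P = np.zeros((n_bins, n_bins))
    for i, xi in enumerate(centers):
        for j, yj in enumerate(centers):
            P[i, j] = kernel_density(xi, yj) * width
        P[i] /= P[i].sum()  # renormalize mass lost to truncation and quadrature
    return centers, P

def propagate(P, p0, steps):
    """Approximate the state distribution of the process after `steps` steps
    by pushing an initial distribution through the abstract chain."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = p @ P
    return p
```

Refining the grid (larger `n_bins`, hence smaller partition diameters) tightens the error guarantees discussed above, at the cost of a larger transition matrix.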
Symbolic Controller Synthesis for Büchi Specifications on Stochastic Systems
We consider the policy synthesis problem for continuous-state controlled
Markov processes evolving in discrete time, when the specification is given as
a Büchi condition (visit a set of states infinitely often). We decompose
computation of the maximal probability of satisfying the Büchi condition into
two steps. The first step is to compute the maximal qualitative winning set,
from where the Büchi condition can be enforced with probability one. The
second step is to find the maximal probability of reaching the already computed
qualitative winning set. In contrast with finite-state models, we show that
such a computation only gives a lower bound on the maximal probability where
the gap can be non-zero.
In this paper we focus on approximating the qualitative winning set, while
pointing out that the existing approaches for unbounded reachability
computation can solve the second step. We provide an abstraction-based
technique to approximate the qualitative winning set by simultaneously using an
over- and under-approximation of the probabilistic transition relation. Since
we are interested in qualitative properties, the abstraction is
non-probabilistic; instead, the probabilistic transitions are assumed to be
under the control of a (fair) adversary. Thus, we reduce the original policy
synthesis problem to a Büchi game under a fairness assumption and
characterize upper and lower bounds on winning sets as nested fixed point
expressions in the μ-calculus. This characterization immediately provides a
symbolic algorithm scheme. Further, a winning strategy computed on the abstract
game can be refined to a policy on the controlled Markov process.
We describe a concrete abstraction procedure and demonstrate our algorithm on
two case studies.
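For orientation, the sketch below computes the classical nested fixed point for Büchi games on a finite (abstract) game graph, νY.μX.(B ∩ Cpre(Y)) ∪ Cpre(X); the paper's characterization additionally accounts for the fairness assumption on the adversary, so this is a simplified illustration. The dictionary encoding of the game and the `cpre` operator are assumptions made for the sketch.

```python
def cpre(game, target):
    """Controllable predecessor: states where the controller has an action
    whose every adversary successor lies in `target`.
    game[s] maps each controller action to the set of possible successors."""
    return {s for s, moves in game.items()
            if any(succs <= target for succs in moves.values())}

def buchi_win(game, B):
    """Nested fixed point nu Y. mu X. (B & Cpre(Y)) | Cpre(X):
    states from which the controller can force visiting B infinitely often."""
    Y = set(game)        # greatest fixed point: start from all states
    while True:
        X = set()        # least fixed point: start from the empty set
        while True:
            new_X = (B & cpre(game, Y)) | cpre(game, X)
            if new_X == X:
                break
            X = new_X
        if X == Y:
            return Y
        Y = X
```

The same iteration scheme, run on the over- and under-approximating abstract games, yields the upper and lower bounds on the winning set mentioned above.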
Sampling-based Approximations with Quantitative Performance for the Probabilistic Reach-Avoid Problem over General Markov Processes
This article deals with stochastic processes endowed with the Markov
(memoryless) property and evolving over general (uncountable) state spaces. The
models further depend on a non-deterministic quantity in the form of a control
input, which can be selected to affect the probabilistic dynamics. We address
the computation of maximal reach-avoid specifications, together with the
synthesis of the corresponding optimal controllers. The reach-avoid
specification deals with assessing the likelihood that any finite-horizon
trajectory of the model enters a given goal set, while avoiding a given set of
undesired states. This article provides a new approximate computational
scheme for the reach-avoid specification based on the Fitted Value Iteration
algorithm, which hinges on random sample extractions, and gives a priori
computable formal probabilistic bounds on the error made by the approximation
algorithm: as such, the output of the numerical scheme is quantitatively
assessed and thus meaningful for safety-critical applications. Furthermore, we
provide tighter probabilistic error bounds that are sample-based. The overall
computational scheme is put in relationship with alternative approximation
algorithms in the literature, and finally its performance is practically
assessed over a benchmark case study.
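A minimal sketch of a Fitted Value Iteration backup for the reach-avoid value function is given below, assuming a hypothetical one-step sampler `simulate(x, u, size)`, membership tests `in_goal` / `in_safe`, and a polynomial ridge regressor from scikit-learn; the article's a priori and sample-based error bounds depend on choices (function class, sample sizes) that this sketch does not model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def fitted_value_iteration(sample_states, actions, simulate, in_goal, in_safe,
                           horizon, n_noise=50):
    """Sample-based approximation of the finite-horizon reach-avoid value
    V_k(x): the maximal probability of reaching the goal set within k steps
    while avoiding the unsafe states.

    sample_states:          array of shape (n_samples, dim) of sampled states
    simulate(x, u, size):   draws `size` successor states of x under input u
    in_goal(x), in_safe(x): boolean membership tests for the goal / safe sets
    """
    def new_regressor():
        # Function class used to fit each value function over the samples
        return make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1e-3))

    V_next = None  # stands for V_{k+1}; None encodes the terminal case
    for _ in range(horizon):
        targets = []
        for x in sample_states:
            if in_goal(x):
                targets.append(1.0)    # goal already reached
            elif not in_safe(x):
                targets.append(0.0)    # unsafe: specification violated
            else:
                # Bellman backup: best expected continuation value over inputs,
                # with the expectation estimated by Monte Carlo sampling.
                best = 0.0
                for u in actions:
                    succ = simulate(x, u, n_noise)
                    if V_next is None:
                        vals = np.array([float(in_goal(s)) for s in succ])
                    else:
                        vals = np.clip(V_next.predict(succ), 0.0, 1.0)
                    best = max(best, vals.mean())
                targets.append(best)
        V_next = new_regressor().fit(np.asarray(sample_states), np.asarray(targets))
    return V_next  # regressor approximating V_0 over the sampled region
```

A corresponding controller can be recovered by repeating the inner maximization at run time with the fitted value function, selecting the input that attains the best estimated continuation value.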
Temporal Logic Control of POMDPs via Label-based Stochastic Simulation Relations
The synthesis of controllers guaranteeing linear temporal logic specifications on partially observable Markov decision processes (POMDPs) via their belief models causes computational issues due to the continuous spaces. In this work, we construct a finite-state abstraction on which a control policy is synthesized and refined back to the original belief model. We introduce a new notion of label-based approximate stochastic simulation to quantify the deviation between belief models. We develop a robust synthesis methodology that yields a lower bound on the satisfaction probability, by compensating for deviations a priori, and that utilizes a less conservative control refinement.
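The belief model referred to above is the Bayes filter over the hidden state. A minimal update step for a finite-state POMDP is sketched below; the matrix layout of `T` and `O` is an assumed encoding, not the paper's notation.

```python
import numpy as np

def belief_update(belief, T, O, action, observation):
    """One Bayes-filter step for a finite-state POMDP (the continuous-space
    belief model over which abstraction and control refinement operate).

    belief: current distribution over hidden states, shape (n,)
    T[a]:   transition matrix under action a, T[a][s, s'] = P(s' | s, a)
    O[a]:   observation matrix, O[a][s', o] = P(o | s', a)
    """
    predicted = belief @ T[action]                    # prediction step
    updated = predicted * O[action][:, observation]   # correct by likelihood
    norm = updated.sum()
    if norm == 0.0:
        raise ValueError("observation has zero probability under this belief")
    return updated / norm
```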