We study the problem of finite-horizon probabilistic invariance for
discrete-time Markov processes over general (uncountable) state spaces. We
compute discrete-time, finite-state Markov chains as formal abstractions of
general Markov processes. Our abstraction differs from existing approaches in
two ways. First, we exploit the structure of the underlying Markov process to
compute the abstraction separately for each dimension. Second, we employ
dynamic Bayesian networks (DBN) as compact representations of the abstraction.
In contrast, existing approaches represent and store the (exponentially large)
Markov chain explicitly, which leads to heavy memory requirements that, in our
experiments, limit their applicability to models of less than half the
dimension our approach handles. We show how to construct a DBN abstraction of a Markov process
satisfying an independence assumption on the driving process noise. We compute
a guaranteed bound on the abstraction error with respect to the probabilistic
invariance property; the dimension-dependent abstraction yields error bounds
tighter than those of existing approaches. Additionally, we show how factor graphs
and the sum-product algorithm for DBNs can be used to solve the finite-horizon
probabilistic invariance problem. Together, DBN-based representations and
algorithms can be significantly more efficient than explicit representations of
Markov chains for abstracting and model checking structured Markov processes.

Comment: Accepted in the 26th Conference on Concurrency Theory.
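As a minimal sketch of the idea behind the factorized abstraction, consider a two-dimensional finite-state abstraction in which the process noise is independent across dimensions, so the joint transition kernel factorizes into per-dimension conditionals. The example below (all names, sizes, and kernels are our own illustrative assumptions, not the paper's construction) computes a finite-horizon probabilistic invariance value by a sum-product-style contraction that never materializes the exponentially large joint Markov chain.

```python
import numpy as np

# Hypothetical 2-D abstraction with m discretization levels per dimension.
# Independence of the driving noise across dimensions lets the joint kernel
# factorize as T(s' | s) = T1(s1' | s) * T2(s2' | s), so the full
# (m^2 x m^2) Markov-chain matrix is never built or stored.

rng = np.random.default_rng(0)
m = 10  # abstraction levels per dimension

def random_kernel(shape):
    """Random conditional distribution table, normalized over the last axis."""
    t = rng.random(shape)
    return t / t.sum(axis=-1, keepdims=True)

# T_i[s1, s2, s_i'] = P(next state of dimension i | current joint state)
T1 = random_kernel((m, m, m))
T2 = random_kernel((m, m, m))

# Safe set A as an indicator over the joint state space (here: a box).
A = np.zeros((m, m))
A[2:8, 2:8] = 1.0

def invariance_probability(T1, T2, A, horizon):
    """Backward value iteration V_k(s) = 1_A(s) * E[V_{k-1}(s') | s].

    The einsum is a sum-product contraction: V is contracted with the
    per-dimension factors T1 and T2 directly, instead of multiplying by
    an explicitly stored joint transition matrix.
    """
    V = A.copy()
    for _ in range(horizon):
        V = A * np.einsum('abi,abj,ij->ab', T1, T2, V)
    return V

V = invariance_probability(T1, T2, A, horizon=5)
print(V.max())  # best-case probability of remaining in A over the horizon
```

One can check that this factorized recursion agrees with explicit value iteration over the joint kernel; the factorized form only stores tables of size m^3 per dimension rather than the m^2-by-m^2 joint matrix.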