Automated Sensitivity Analysis for Probabilistic Loops
We present an exact approach to analyze and quantify the sensitivity of
higher moments of probabilistic loops with symbolic parameters, polynomial
arithmetic and potentially uncountable state spaces. Our approach integrates
methods from symbolic computation, probability theory, and static analysis in
order to automatically capture sensitivity information about probabilistic
loops. Sensitivity information allows us to formally establish how value
distributions of probabilistic loop variables influence the functional behavior
of loops, which can in particular be helpful when choosing values of loop
variables in order to ensure efficient/expected computations. Our work uses
algebraic techniques to model higher moments of loop variables via linear
recurrence equations and introduces the notion of sensitivity recurrences. We
show that sensitivity recurrences precisely model loop sensitivities, even in
cases where the moments of loop variables do not satisfy a system of linear
recurrences. As such, we enlarge the class of probabilistic loops for which
sensitivity analysis is feasible. We demonstrate the success of our approach
by analyzing the sensitivities of probabilistic loops.
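To make the idea concrete, here is a minimal sketch (our toy example, not the paper's algorithm): for the loop "x := x + p with probability q", the first moment satisfies the linear recurrence E[x_{k+1}] = E[x_k] + q*p, and differentiating that recurrence with respect to the symbolic parameter p yields a sensitivity recurrence s_{k+1} = s_k + q:

```python
from fractions import Fraction

def first_moment(p, q, n):
    """E[x_n] for the loop "x := x + p with probability q", x_0 = 0.

    The moment satisfies the linear recurrence E[x_{k+1}] = E[x_k] + q*p.
    """
    e = Fraction(0)
    for _ in range(n):
        e += q * p
    return e

def sensitivity_wrt_p(q, n):
    """d/dp E[x_n]: differentiating the recurrence gives s_{k+1} = s_k + q."""
    s = Fraction(0)
    for _ in range(n):
        s += q
    return s
```

Because the moment is linear in p in this toy, a finite-difference check, (first_moment(p + h, q, n) - first_moment(p, q, n)) / h, agrees exactly with the recurrence-based sensitivity.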
Algebraic Reasoning for Probabilistic Action Systems and While-Loops
Back and von Wright have developed algebraic laws for reasoning about loops in the refinement calculus. We extend their work to reasoning about probabilistic loops in the probabilistic refinement calculus. We apply our algebraic reasoning to derive transformation rules for probabilistic action systems and probabilistic while-loops; in particular, we focus on developing data refinement rules for these two constructs. Our extension is interesting since some well-known transformation rules that are applicable to standard programs are not applicable to probabilistic ones: we identify some of these important differences and develop alternative rules where possible. In particular, our probabilistic action system and while-loop data refinement rules are new: they differ from the non-probabilistic rules.
A Weakest Pre-Expectation Semantics for Mixed-Sign Expectations
We present a weakest-precondition-style calculus for reasoning about the
expected values (pre-expectations) of \emph{mixed-sign unbounded} random
variables after execution of a probabilistic program. The semantics of a
while-loop is well-defined as the limit of iteratively applying a functional to
a zero-element just as in the traditional weakest pre-expectation calculus,
even though a standard least fixed point argument is not applicable in this
context. A striking feature of our semantics is that it is always well-defined,
even if the expected values do not exist. We show that the calculus is sound
and allows for compositional reasoning, and we present an invariant-based
approach for reasoning about pre-expectations of loops.
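As a toy illustration of this fixed-point construction (our example; the loop and numbers are not from the abstract): for the loop "while c: x := x + 1; c :~ Bernoulli(1/2)" with post-expectation x, iterating the characteristic functional from the zero element converges to x + 2 in states where c holds. Since every iterate is affine in x, we can track just its coefficients:

```python
# Kleene iteration for the loop "while c: x := x + 1; c :~ Bernoulli(1/2)"
# with post-expectation x; each iterate has the affine form f(c, x) = a[c]*x + b[c]
a = {True: 0.0, False: 0.0}  # the zero element
b = {True: 0.0, False: 0.0}
for _ in range(200):
    # Phi(f)(False, x) = x                              (guard fails: loop exits)
    # Phi(f)(True,  x) = 0.5 * f(True, x + 1) + 0.5 * f(False, x + 1)
    a_true = 0.5 * a[True] + 0.5 * a[False]
    b_true = 0.5 * (a[True] + b[True]) + 0.5 * (a[False] + b[False])
    a = {True: a_true, False: 1.0}
    b = {True: b_true, False: 0.0}
# the iterates converge to f(True, x) = x + 2 and f(False, x) = x
```

The limit matches the direct calculation: from a state with c set, the loop runs a geometric number of iterations with mean 2, each adding 1 to x.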
Active Sampling-based Binary Verification of Dynamical Systems
Nonlinear, adaptive, or otherwise complex control techniques are increasingly
relied upon to ensure the safety of systems operating in uncertain
environments. However, the nonlinearity of the resulting closed-loop system
complicates verification that the system does in fact satisfy those
requirements at all possible operating conditions. While analytical proof-based
techniques and finite abstractions can be used to provably verify the
closed-loop system's response at different operating conditions, they often
produce conservative approximations due to restrictive assumptions and are
difficult to construct in many applications. In contrast, popular statistical
verification techniques relax the restrictions and instead rely upon
simulations to construct statistical or probabilistic guarantees. This work
presents a data-driven statistical verification procedure that instead
constructs statistical learning models from simulated training data to separate
the set of possible perturbations into "safe" and "unsafe" subsets. Binary
evaluations of closed-loop system requirement satisfaction at various
realizations of the uncertainties are obtained through temporal logic
robustness metrics, which are then used to construct predictive models of
requirement satisfaction over the full set of possible uncertainties. As the
accuracy of these predictive statistical models is inherently coupled to the
quality of the training data, an active learning algorithm selects additional
sample points in order to maximize the expected change in the data-driven model
and thus, indirectly, minimize the prediction error. Various case studies
demonstrate the closed-loop verification procedure and highlight improvements
in prediction error over both existing analytical and statistical verification
techniques.
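A minimal one-dimensional sketch of the active-sampling idea (our toy; the `robustness` function and the neighbour-disagreement sampler are illustrative stand-ins, not the paper's algorithm): label each perturbation safe or unsafe by the sign of a robustness metric, then repeatedly simulate at the point where neighbouring labels disagree, i.e. where the data-driven model is most uncertain:

```python
def robustness(theta):
    # hypothetical temporal-logic robustness: positive means requirement satisfied
    return 1.0 - theta * theta

def active_verify(lo=0.0, hi=2.0, budget=30):
    # labelled samples: perturbation value -> "safe" (True) or "unsafe" (False)
    samples = {lo: robustness(lo) > 0, hi: robustness(hi) > 0}
    for _ in range(budget):
        pts = sorted(samples)
        # find neighbours with disagreeing labels: the predictive model is
        # most uncertain between them, so sampling there is most informative
        pair = None
        for left, right in zip(pts, pts[1:]):
            if samples[left] != samples[right]:
                pair = (left, right)
        if pair is None:
            break  # labels already consistent everywhere
        mid = 0.5 * (pair[0] + pair[1])
        samples[mid] = robustness(mid) > 0  # run one more "simulation"
    safe = max(t for t, ok in samples.items() if ok)
    unsafe = min(t for t, ok in samples.items() if not ok)
    return 0.5 * (safe + unsafe)  # estimated safe/unsafe boundary
```

With a monotone robustness signal this heuristic reduces to bisection; the statistical models in the paper target multi-dimensional, non-monotone uncertainty sets.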
Learning Probabilistic Termination Proofs
We present the first machine learning approach to the termination analysis
of probabilistic programs. Ranking supermartingales (RSMs) prove that
probabilistic programs halt, in expectation, within a finite number of steps.
While previously RSMs were directly synthesised from source code,
our method learns them from sampled execution traces.
We introduce the neural ranking supermartingale:
we let a neural network fit
an RSM over execution traces and then we
verify it over the source code using satisfiability modulo theories (SMT);
if the latter step produces a counterexample, we generate from it
new sample traces and repeat learning in a
counterexample-guided inductive synthesis loop,
until the SMT solver confirms the validity of the RSM.
The result is thus a sound witness of probabilistic termination.
Our learning strategy is agnostic to the source code and its
verification counterpart supports the widest range of probabilistic
single-loop programs that any existing tool can handle to date.
We demonstrate the efficacy of our method over a range of benchmarks that
include linear and polynomial programs with discrete, continuous, state-dependent,
multi-variate, hierarchical distributions, and distributions with
undefined moments.
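A drastically simplified sketch of the counterexample-guided loop (our toy: a linear template stands in for the neural network, and exhaustive checking of a bounded state space stands in for the SMT call): for the loop "while x > 0: x := x - 1 w.p. 2/3, else x := x + 1", a candidate V(x) = c*x is refined until its expected one-step drift is at most -eps at every guard state:

```python
from fractions import Fraction as F

P_DOWN, P_UP = F(2, 3), F(1, 3)

def expected_next_V(c, x):
    # E[V(x')] after one iteration of "x := x - 1 w.p. 2/3, else x := x + 1"
    return P_DOWN * c * (x - 1) + P_UP * c * (x + 1)

def cegis_rsm(eps=F(1, 2), xmax=100, rounds=50):
    c = eps  # learner's initial candidate slope for V(x) = c*x
    for _ in range(rounds):
        # "verifier": check the ranking-supermartingale condition
        # E[V(x')] <= V(x) - eps on every guard state (stand-in for SMT)
        cex = [x for x in range(1, xmax)
               if expected_next_V(c, x) > c * x - eps]
        if not cex:
            return c  # V(x) = c*x is a sound witness of termination
        c *= 2  # counterexample found: steepen the candidate and retry
    return None
```

Here the drift of V(x) = c*x is exactly -c/3, so the loop accepts the first candidate with c >= 3*eps; the paper's method instead fits the candidate to sampled traces and discharges the check with an SMT solver.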