Incremental Static Analysis of Probabilistic Programs
Probabilistic models are used successfully in a wide range of fields, including machine
learning, data mining, pattern recognition, and robotics. Probabilistic programming
languages are designed to express probabilistic models in high-level programming
languages and to conduct automatic inference to compute posterior distributions.
A key obstacle to the wider adoption of probabilistic programming languages
in practice is that general-purpose efficient inference is computationally difficult.
This thesis aims to improve the efficiency of inference through incremental analysis,
while preserving precision when a probabilistic program undergoes small changes.
For small changes to probabilistic knowledge (i.e., prior probability distributions
and observations), the probabilistic model represented by a probabilistic
program evolves. In this thesis, we first present Icpp, a new data-flow-based
incremental inference approach. By capturing the probabilistic
dependence of each data-flow fact and updating changed probabilities
sparsely, Icpp can incrementally compute new posterior distributions and
thus enable previously computed results to be reused.
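To make the idea concrete, the sketch below shows one plausible shape of sparse, dependency-driven re-computation over data-flow facts: each fact caches its distribution and only facts transitively affected by a changed prior are re-evaluated. The node structure, the toy prior/posterior model, and all names are illustrative assumptions rather than Icpp's actual implementation.

```python
# Minimal sketch (assumed, not Icpp's code): sparse incremental re-computation
# over a dependency graph of data-flow facts, each caching a distribution.

from collections import deque

class Node:
    def __init__(self, name, compute, deps=()):
        self.name = name          # data-flow fact identifier
        self.compute = compute    # function: list of input dists -> output dist
        self.deps = list(deps)    # predecessor facts this fact depends on
        self.value = None         # cached distribution

    def eval(self):
        return self.compute([d.value for d in self.deps])

def full_analysis(nodes_in_topo_order):
    """Non-incremental pass: evaluate every fact once, in dependency order."""
    for n in nodes_in_topo_order:
        n.value = n.eval()

def incremental_update(changed, successors):
    """Re-evaluate only facts transitively affected by the changed nodes."""
    work = deque(changed)
    while work:
        n = work.popleft()
        new_value = n.eval()
        if new_value != n.value:                 # probability actually changed
            n.value = new_value
            work.extend(successors.get(n, ()))   # propagate sparsely

# Tiny example: prior -> likelihood-weighted posterior (discrete, normalized).
def normalize(d):
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

prior = Node("prior", lambda _: {"rain": 0.2, "dry": 0.8})
posterior = Node("posterior",
                 lambda ins: normalize({k: v * {"rain": 0.9, "dry": 0.1}[k]
                                        for k, v in ins[0].items()}),
                 deps=[prior])
succ = {prior: [posterior]}

full_analysis([prior, posterior])
# A small change to the prior triggers a sparse update of dependent facts only.
prior.compute = lambda _: {"rain": 0.3, "dry": 0.7}
incremental_update([prior], succ)
print(posterior.value)
```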
For small changes to the observed array data upon which probabilistic models
are conditioned, the probabilistic models themselves remain unchanged. In this thesis, we
also present ISymb, which is a novel incremental symbolic inference framework.
By conducting an intra-procedural path-sensitive analysis, except for a "meets-over-all-paths" analysis within an iteration of a loop (conditioned on some
observed array data), ISymb captures the probability distribution for each
path and only recomputes the probability distributions for the affected paths.
Further, ISymb enables precision-preserving incremental symbolic inference
to run significantly faster than its non-incremental counterpart.
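A minimal sketch of the underlying idea, per-path probability caching with selective recomputation, appears below; the toy path model, the read-set bookkeeping, and all names are assumptions for illustration and do not reflect ISymb's actual symbolic machinery.

```python
# Minimal sketch (assumed): cache a probability per path and recompute only
# the paths that read a changed element of the observed array.

def path_prob(path_id, obs):
    """Toy per-path probability: path i is conditioned on obs[i]."""
    return 0.5 if obs[path_id] > 0 else 0.0

class PathCache:
    def __init__(self, num_paths, obs):
        self.obs = list(obs)
        self.reads = {i: {i} for i in range(num_paths)}   # path -> array indices it reads
        self.probs = {i: path_prob(i, self.obs) for i in range(num_paths)}

    def update_obs(self, index, value):
        """Change one observed element; recompute only the affected paths."""
        self.obs[index] = value
        for p, idxs in self.reads.items():
            if index in idxs:
                self.probs[p] = path_prob(p, self.obs)

    def posterior_mass(self):
        return sum(self.probs.values())

cache = PathCache(num_paths=4, obs=[1, 0, 2, 3])
print(cache.posterior_mass())     # initial analysis over all paths
cache.update_obs(1, 5)            # small change: only path 1 is recomputed
print(cache.posterior_mass())
```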
In this thesis, we evaluate both Icpp and ISymb against the state-of-the-art
data-flow-based inference and symbolic inference, respectively. The results demonstrate
that both Icpp and ISymb meet their design goals. For example, Icpp
succeeds in making data-flow-based incremental inference possible in probabilistic
programs when some probabilistic knowledge undergoes small yet frequent changes.
Additionally, ISymb enables symbolic inference to perform one or two orders of
magnitude faster than non-incremental inference when some observed array data undergoes small changes.
Synthesizing Program Input Grammars
We present an algorithm for synthesizing a context-free grammar encoding the
language of valid program inputs from a set of input examples and blackbox
access to the program. Our algorithm addresses shortcomings of existing grammar
inference algorithms, which both severely overgeneralize and are prohibitively
slow. Our implementation, GLADE, leverages the grammar synthesized by our
algorithm to fuzz test programs with structured inputs. We show that GLADE
substantially increases the incremental coverage on valid inputs compared to
two baseline fuzzers.
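To illustrate the fuzzing side of this pipeline, the sketch below samples structured inputs from a hand-written toy grammar and feeds them to a blackbox program; the grammar, the sampler, and the example program under test are assumptions for illustration, not GLADE's synthesized output or code.

```python
# Minimal sketch (assumed): sample inputs from a context-free grammar and
# feed them to a blackbox program under test.

import random

# Grammar: nonterminal -> list of alternative right-hand sides (tuples of symbols).
GRAMMAR = {
    "expr": [("term", "+", "expr"), ("term",)],
    "term": [("num",), ("(", "expr", ")")],
    "num":  [("0",), ("1",), ("2",)],
}

def sample(symbol, depth=0, max_depth=8):
    if symbol not in GRAMMAR:                 # terminal symbol
        return symbol
    alts = GRAMMAR[symbol]
    if depth >= max_depth:                    # bias toward short productions to terminate
        alts = [min(alts, key=len)]
    rhs = random.choice(alts)
    return "".join(sample(s, depth + 1, max_depth) for s in rhs)

def fuzz(program_under_test, trials=100):
    """Feed grammar-derived inputs to a blackbox program and collect failures."""
    failures = []
    for _ in range(trials):
        inp = sample("expr")
        try:
            program_under_test(inp)
        except Exception as exc:
            failures.append((inp, exc))
    return failures

# Example blackbox: Python's own expression evaluator as the program under test.
print(fuzz(lambda s: eval(s))[:3])
```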
Liveness of Randomised Parameterised Systems under Arbitrary Schedulers (Technical Report)
We consider the problem of verifying liveness for systems with a finite, but
unbounded, number of processes, commonly known as parameterised systems.
Typical examples of such systems include distributed protocols (e.g. for the
dining philosopher problem). Unlike the case of verifying safety, proving
liveness is still considered extremely challenging, especially in the presence
of randomness in the system. In this paper we consider liveness under arbitrary
(including unfair) schedulers, which is often considered a desirable property
in the literature of self-stabilising systems. We introduce an automatic method
of proving liveness for randomised parameterised systems under arbitrary
schedulers. Viewing liveness as a two-player reachability game (between
Scheduler and Process), our method is a CEGAR approach that synthesises a
progress relation for Process that can be symbolically represented as a
finite-state automaton. The method is incremental and exploits both
Angluin-style L*-learning and SAT-solvers. Our experiments show that our
algorithm is able to prove liveness automatically for well-known randomised
distributed protocols, including Lehmann-Rabin Randomised Dining Philosopher
Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon
Protocol). To the best of our knowledge, this is the first fully-automatic
method that can prove liveness for randomised protocols. Comment: Full version of CAV'16 paper.
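A minimal sketch of the CEGAR control structure described above is given below: a learner proposes a candidate progress relation from accumulated examples, a checker searches for a Scheduler counterexample, and counterexamples refine the next proposal. The learner and checker here are stubs chosen purely for illustration; the paper's method instantiates them with Angluin-style L*-learning of finite automata and SAT-based checks.

```python
# Minimal sketch (assumed): the CEGAR loop shape, with stub learner/checker.

def cegar(initial_examples, learn, check, max_rounds=100):
    examples = list(initial_examples)
    for _ in range(max_rounds):
        candidate = learn(examples)          # propose a candidate progress relation
        cex = check(candidate)               # look for a Scheduler counterexample
        if cex is None:
            return candidate                 # proof of liveness found
        examples.append(cex)                 # refine with the counterexample
    return None                              # gave up within the round budget

# Toy instantiation: "relations" are numbers, counterexamples are larger numbers.
def toy_learn(examples):
    return max(examples, default=0)          # smallest candidate covering all examples

def toy_check(candidate):
    return candidate + 1 if candidate < 5 else None   # stop refining at 5

print(cegar([0], toy_learn, toy_check))      # -> 5
```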
Bounded Expectations: Resource Analysis for Probabilistic Programs
This paper presents a new static analysis for deriving upper bounds on the
expected resource consumption of probabilistic programs. The analysis is fully
automatic and derives symbolic bounds that are multivariate polynomials of the
inputs. The new technique combines manual state-of-the-art reasoning techniques
for probabilistic programs with an effective method for automatic
resource-bound analysis of deterministic programs. It can be seen as both, an
extension of automatic amortized resource analysis (AARA) to probabilistic
programs and an automation of manual reasoning for probabilistic programs that
is based on weakest preconditions. As a result, bound inference can be reduced
to off-the-shelf LP solving in many cases and automatically-derived bounds can
be interactively extended with standard program logics if the automation fails.
Building on existing work, the soundness of the analysis is proved with respect
to an operational semantics that is based on Markov decision processes. The
effectiveness of the technique is demonstrated with a prototype implementation
that is used to automatically analyze 39 challenging probabilistic programs and
randomized algorithms. Experimental results indicate that the derived constant
factors in the bounds are very precise and even optimal for many programs.
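As a small illustration of what an expected-cost bound for a probabilistic loop looks like, the sketch below compares a hand-derived linear bound with a Monte Carlo estimate for a toy program that decrements its input with probability 3/4 per iteration. The program and the bound are assumptions for illustration and are not taken from the paper's benchmark set.

```python
# Minimal sketch (assumed): a toy probabilistic loop and its expected-cost bound.

import random

def prob_loop(x):
    """while x > 0: with probability 3/4 do x -= 1; cost 1 per iteration."""
    cost = 0
    while x > 0:
        cost += 1
        if random.random() < 0.75:
            x -= 1
    return cost

def expected_cost_bound(x):
    # Each decrement takes 4/3 iterations in expectation, so total is (4/3) * x.
    return (4.0 / 3.0) * x

x0 = 10
samples = [prob_loop(x0) for _ in range(20000)]
empirical = sum(samples) / len(samples)
print(f"empirical expected cost ~ {empirical:.2f}, derived bound = {expected_cost_bound(x0):.2f}")
```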
Transfer Function Synthesis without Quantifier Elimination
Traditionally, transfer functions have been designed manually for each
operation in a program, instruction by instruction. In such a setting, a
transfer function describes the semantics of a single instruction, detailing
how a given abstract input state is mapped to an abstract output state. The net
effect of a sequence of instructions, a basic block, can then be calculated by
composing the transfer functions of the constituent instructions. However,
precision can be improved by applying a single transfer function that captures
the semantics of the block as a whole. Since blocks are program-dependent, this
approach necessitates automation. There has thus been growing interest in
computing transfer functions automatically, most notably using techniques based
on quantifier elimination. Although conceptually elegant, quantifier
elimination inevitably induces a computational bottleneck, which limits the
applicability of these methods to small blocks. This paper contributes a method
for calculating transfer functions that finesses quantifier elimination
altogether, and can thus be seen as a response to this problem. The
practicality of the method is demonstrated by generating transfer functions for
input and output states that are described by linear template constraints,
which include intervals and octagons. Comment: 37 pages, extended version of ESOP 2011 paper.
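The toy sketch below shows, under an interval template, why a transfer function for a block as a whole can be more precise than composing per-instruction transfers: for the block y = x; z = x - y, instruction-wise interval reasoning forgets that y equals x, while the block-level transfer yields the exact result. The block and the code are illustrative assumptions; the paper synthesizes such block-level transfers automatically without quantifier elimination.

```python
# Minimal sketch (assumed): per-instruction vs block-level interval transfer
# for the basic block  y = x; z = x - y.

def sub_interval(a, b):
    """Interval subtraction: [a0,a1] - [b0,b1] = [a0-b1, a1-b0]."""
    return (a[0] - b[1], a[1] - b[0])

def per_instruction(x):
    y = x                       # y := x  (interval copy)
    z = sub_interval(x, y)      # z := x - y, the relation y = x is lost
    return z

def block_level(x):
    # Transfer function for the whole block: since y = x inside the block,
    # z = x - x = 0 regardless of the input interval for x.
    return (0, 0)

x = (0, 10)
print("instruction-wise:", per_instruction(x))   # (-10, 10)
print("block-level:     ", block_level(x))       # (0, 0)
```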
Software tools for the cognitive development of autonomous robots
Robotic systems are evolving towards higher degrees of autonomy. This paper reviews the cognitive tools currently available for the fulfilment of abstract or long-term goals, as well as for learning and modifying robot behaviour.