Automated Experiment Design for Data-Efficient Verification of Parametric Markov Decision Processes
We present a new method for statistical verification of quantitative
properties over a partially unknown system with actions, utilising a
parameterised model (in this work, a parametric Markov decision process) and
data collected from experiments performed on the underlying system. We obtain
the confidence that the underlying system satisfies a given property, and show
that the method uses data efficiently and thus is robust to the amount of data
available. These characteristics are achieved by, firstly, exploiting parameter
synthesis to establish a feasible set of parameters for which the underlying
system will satisfy the property; secondly, actively synthesising
experiments to increase the amount of information in the collected data that is
relevant to the property; and, finally, propagating this information over the
model parameters, obtaining a confidence that reflects our belief whether or
not the system parameters lie in the feasible set, thereby solving the
verification problem.
Comment: QEST 2017, 18 pages, 7 figures
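The confidence computation described above can be illustrated with a minimal Bayesian sketch. This is not the paper's actual algorithm (which works over a full parametric MDP); it assumes a single unknown parameter p, a Beta(1,1) prior updated from Bernoulli experiment outcomes, and a feasible interval already obtained from parameter synthesis. The confidence is then the posterior mass inside that interval:

```python
import random

def confidence_in_feasible_set(successes, failures, feasible, samples=20000, seed=0):
    """Monte-Carlo estimate of the posterior probability that the unknown
    parameter p lies in the feasible interval, under a Beta(1,1) prior
    updated with Bernoulli experiment outcomes."""
    rng = random.Random(seed)
    lo, hi = feasible
    a, b = 1 + successes, 1 + failures          # Beta posterior parameters
    hits = sum(lo <= rng.betavariate(a, b) <= hi for _ in range(samples))
    return hits / samples

# Suppose parameter synthesis established that the property holds
# whenever p >= 0.7, and 18 of 20 experiments on the system succeeded.
conf = confidence_in_feasible_set(18, 2, (0.7, 1.0))
```

Active experiment synthesis, in this simplified picture, would choose the experiments that shrink the posterior most near the feasible-set boundary.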
Reinforcement Learning and Data-Generation for Syntax-Guided Synthesis
Program synthesis is the task of automatically generating code based on a
specification. In Syntax-Guided Synthesis (SyGuS) this specification is a
combination of a syntactic template and a logical formula, and the result is
guaranteed to satisfy both.
We present a reinforcement-learning guided algorithm for SyGuS which uses
Monte-Carlo Tree Search (MCTS) to search the space of candidate solutions. Our
algorithm learns policy and value functions which, combined with the upper
confidence bound for trees, allow it to balance exploration and exploitation. A
common challenge in applying machine learning approaches to syntax-guided
synthesis is the scarcity of training data. To address this, we present a
method for automatically generating training data for SyGuS based on
anti-unification of existing first-order satisfiability problems, which we use
to train our MCTS policy. We implement and evaluate this setup and demonstrate
that learned policy and value improve the synthesis performance over a baseline
by over 26 percentage points in the training and testing sets. Our tool
outperforms the state-of-the-art solver cvc5 on the training set and performs
comparably in terms of the total number of problems solved on the testing set
(solving 23% of the benchmarks on which cvc5 fails). We make our data set
publicly available to enable further application of machine learning methods
to the SyGuS problem.
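The search described above relies on the upper confidence bound for trees (UCT) to balance exploration and exploitation. A minimal sketch of the selection step follows; it shows plain UCT only, without the learned policy and value priors the paper combines it with, and the candidate expansions are hypothetical:

```python
import math

def uct_select(children, exploration=1.4):
    """Pick the child maximising the UCT score: empirical value plus an
    exploration bonus that shrinks as a child is visited more often.
    `children` maps an action to (total_value, visit_count)."""
    total_visits = sum(n for _, n in children.values())
    def score(stats):
        value, visits = stats
        if visits == 0:
            return float("inf")          # always try unvisited actions first
        return value / visits + exploration * math.sqrt(math.log(total_visits) / visits)
    return max(children, key=lambda a: score(children[a]))

# Three hypothetical grammar expansions with (total value, visit count):
stats = {"x + y": (6.0, 10), "ite(b, x, y)": (3.0, 4), "0": (0.0, 0)}
```

In a PUCT-style variant, the learned policy would scale each child's exploration bonus and the learned value would replace rollouts.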
Satisfiability and Synthesis Modulo Oracles
In classic program synthesis algorithms, such as counterexample-guided
inductive synthesis (CEGIS), the algorithms alternate between a synthesis phase
and an oracle (verification) phase. Many synthesis algorithms use a white-box
oracle based on satisfiability modulo theory (SMT) solvers to provide
counterexamples. But what if a white-box oracle is either not available or not
easy to work with? We present a framework for solving a general class of
oracle-guided synthesis problems which we term synthesis modulo oracles. In
this setting, oracles may be black boxes with a query-response interface
defined by the synthesis problem. As a necessary component of this framework,
we also formalize the problem of satisfiability modulo theories and oracles,
and present an algorithm for solving this problem. We implement a prototype
solver for satisfiability and synthesis modulo oracles and demonstrate that, by
using oracles that execute functions not easily modeled as SMT constraints,
such as recursive functions, or oracles that incorporate compilation and
execution of code, satisfiability modulo theories and oracles (SMTO) and
synthesis modulo oracles (SyMO) are able to solve problems beyond the
abilities of standard SMT and synthesis solvers.
Comment: 12 pages, 8 figures
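The setting above can be sketched generically: the synthesiser interacts with the oracle only through a query-response interface, so the oracle may run arbitrary code. The loop, function names, and toy instance below are illustrative assumptions, not the paper's SyMO implementation:

```python
import random

def synthesize_modulo_oracle(candidates, oracle):
    """Generic oracle-guided loop: propose candidates in order, query the
    black-box oracle, and cache returned counterexamples to prune cheaply.
    The oracle returns (True, None) to accept a candidate, or
    (False, (args, expected)) to reject it with a counterexample."""
    counterexamples = []
    for candidate in candidates:
        if any(candidate(*args) != out for args, out in counterexamples):
            continue                       # refuted without an oracle query
        ok, cex = oracle(candidate)
        if ok:
            return candidate
        counterexamples.append(cex)
    return None

def make_execution_oracle(reference, trials=200, seed=0):
    """A black-box oracle that *executes* a reference function -- something
    not easily modelled as SMT constraints -- instead of reasoning about it."""
    rng = random.Random(seed)
    tests = [(rng.randint(-10, 10), rng.randint(-10, 10)) for _ in range(trials)]
    def oracle(candidate):
        for x, y in tests:
            if candidate(x, y) != reference(x, y):
                return False, ((x, y), reference(x, y))
        return True, None
    return oracle

# Toy pool of guesses for max(x, y):
pool = [lambda x, y: x, lambda x, y: y, lambda x, y: x + y, lambda x, y: x if x > y else y]
found = synthesize_modulo_oracle(pool, make_execution_oracle(max))
```

The point of the abstraction is that `oracle` could just as well compile and run generated code; the loop never inspects how the response was produced.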
Gradient Descent over Metagrammars for Syntax-Guided Synthesis
The performance of a syntax-guided synthesis algorithm is highly dependent on
the provision of a good syntactic template, or grammar. Provision of such a
template is often left to the user to do manually, though in the absence of
such a grammar, state-of-the-art solvers will provide their own default
grammar, which depends on the signature of the target program to be
synthesized. In this work, we speculate that this default grammar could be improved
upon substantially. We build sets of rules, or metagrammars, for constructing
grammars, and perform a gradient descent over these metagrammars aiming to find
a metagrammar that solves more benchmarks, and solves them faster on average. We show that the
resulting metagrammar enables CVC4 to solve 26% more benchmarks than the
default grammar within a 300s time-out, and that metagrammars learnt from tens
of benchmarks generalize to hundreds of benchmarks.
Comment: 5 pages, SYNT 202
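The descent over metagrammars is a discrete search, so "gradient descent" here amounts to local moves that are kept whenever the objective improves. A minimal sketch with a hypothetical scoring stub (the paper's objective instead runs CVC4 over benchmark suites, combining benchmarks solved and solving time):

```python
def descend(rules, score, max_rounds=10):
    """Greedy coordinate descent over rule subsets: toggle one rule at a
    time and keep the change whenever the score improves. `score` stands
    in for evaluating a solver over a benchmark suite."""
    current = set(rules)
    best = score(current)
    for _ in range(max_rounds):
        improved = False
        for rule in rules:
            trial = current ^ {rule}          # toggle the rule's membership
            s = score(trial)
            if s > best:
                current, best, improved = trial, s, True
        if not improved:
            break                             # local optimum reached
    return current, best

# Hypothetical scoring stub: rewards arithmetic rules and penalises
# bitvector rules, mimicking a suite dominated by arithmetic problems.
ALL_RULES = ["add", "ite", "bvand", "bvxor", "const"]
def toy_score(rules):
    weights = {"add": 3, "ite": 2, "const": 1, "bvand": -1, "bvxor": -2}
    return sum(weights.get(r, 0) for r in rules)

best_rules, best_score = descend(ALL_RULES, toy_score)
```

With the real objective, each `score` call is expensive (a full benchmark run), which is why learning from tens of benchmarks and generalising to hundreds matters.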
MedleySolver: Online SMT Algorithm Selection
Satisfiability modulo theories (SMT) solvers implement a wide range of optimizations that are often tailored to a particular class of problems, and that differ significantly between solvers. As a result, one solver may solve a query quickly while another might be flummoxed completely. Predicting the performance of a given solver is difficult for users of SMT-driven applications, particularly when the problems they have to solve do not fall neatly into a well-understood category. In this paper, we propose an online algorithm selection framework for SMT called MedleySolver that predicts the relative performance of a set of SMT solvers on a given query, distributes time amongst the solvers, and deploys the solvers in sequence until a solution is obtained. We evaluate MedleySolver against the best available alternative, an offline learning technique, in terms of pure performance and practical usability for a typical SMT user. We find that with no prior training, MedleySolver solves 93.9% of the queries solved by the virtual best solver selector, achieving 59.8% of the PAR-2 score of the most successful individual solver, which solves 87.3% of the queries. For comparison, the best available alternative takes longer to train than MedleySolver takes to solve our entire set of 2000 queries.
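The sequential-deployment idea above can be sketched as a simple bandit: keep a per-solver success estimate, order solvers by it with occasional exploration, and run them in turn under a shared budget. The solver names and runner interface below are hypothetical stand-ins for real SMT solver invocations, and the even time split simplifies the paper's predicted per-solver timeouts:

```python
import random

class OnlineSelector:
    """Minimal online algorithm-selection sketch in the spirit of
    MedleySolver: maintain a running success estimate per solver, order
    solvers by that estimate with epsilon-greedy exploration, and deploy
    them in sequence until one solves the query."""
    def __init__(self, solvers, epsilon=0.1, seed=0):
        self.solvers = list(solvers)
        self.stats = {s: [1, 2] for s in self.solvers}  # [successes, trials], optimistic prior
        self.eps = epsilon
        self.rng = random.Random(seed)

    def order(self):
        if self.rng.random() < self.eps:                # explore: random order
            shuffled = self.solvers[:]
            self.rng.shuffle(shuffled)
            return shuffled
        return sorted(self.solvers,                     # exploit: best estimate first
                      key=lambda s: self.stats[s][0] / self.stats[s][1],
                      reverse=True)

    def solve(self, query, run, budget=4.0):
        """`run(solver, query, timeout)` models invoking a solver and must
        return (solved, elapsed_seconds)."""
        share = budget / len(self.solvers)              # even split for simplicity
        for solver in self.order():
            solved, _elapsed = run(solver, query, share)
            self.stats[solver][1] += 1
            if solved:
                self.stats[solver][0] += 1
                return solver
        return None

# Hypothetical runner: pretend only "z3" ever succeeds.
sel = OnlineSelector(["cvc5", "z3", "yices"], epsilon=0.0)
winner = sel.solve("query-1", lambda s, q, t: (s == "z3", 0.1))
```

With no prior training the estimates start uniform, and the ordering sharpens as queries are processed, which mirrors the online, training-free setting evaluated in the paper.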