
    Bridging boolean and quantitative synthesis using smoothed proof search

    We present a new technique for parameter synthesis under boolean and quantitative objectives. The input to the technique is a "sketch" --- a program with missing numerical parameters --- and a probabilistic assumption about the program's inputs. The goal is to automatically synthesize values for the parameters such that the resulting program satisfies: (1) a boolean specification, which states that the program must meet certain assertions, and (2) a quantitative specification, which assigns a real-valued rating to every program and which the synthesizer is expected to optimize. Our method --- called smoothed proof search --- reduces this task to a sequence of unconstrained smooth optimization problems that are then solved numerically. By iteratively solving these problems, we obtain parameter values that get closer and closer to meeting the boolean specification; in the limit, we obtain values that provably meet the specification. The approximations are computed using a new notion of smoothing for program abstractions, where an abstract transformer is approximated by a function that is continuous according to a metric over abstract states. We present a prototype implementation of our synthesis procedure, and experimental results on two benchmarks from the embedded control domain. The experiments demonstrate the benefits of smoothed proof search over an approach that does not meet the boolean and quantitative synthesis goals simultaneously. (National Science Foundation (U.S.), NSF Award #1162076.)
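    As a rough illustration of the reduction described above, the following minimal sketch replaces a boolean assertion with a smooth penalty whose sharpness grows across a sequence of unconstrained optimization problems. The one-parameter program, assertion, and cost below are hypothetical, and the paper's abstraction-based smoothing is not modeled; only the "sequence of smooth problems" idea is shown.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        inputs = rng.uniform(-1.0, 1.0, size=200)        # probabilistic assumption on inputs

        def program(theta, x):
            return theta[0] * x                          # toy sketch with one numerical hole

        def margin(theta, x):
            # assertion: |program(theta, x)| <= 0.5; a positive margin means it holds
            return 0.5 - np.abs(program(theta, x))

        def smoothed_objective(theta, beta):
            m = margin(theta, inputs)
            violation = 1.0 / (1.0 + np.exp(beta * m))   # smooth stand-in for the 0/1 check [m < 0]
            cost = (theta[0] - 1.0) ** 2                 # quantitative rating to be minimized
            return violation.mean() + 0.1 * cost

        theta = np.array([0.0])
        for beta in (1.0, 4.0, 16.0, 64.0):              # sharpen the approximation each round
            theta = minimize(lambda t: smoothed_objective(t, beta), theta).x
        print("synthesized parameter:", theta)

    Each round reuses the previous solution as the starting point, mirroring the iterative refinement of parameter values toward the boolean specification described in the abstract.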

    Computer aided synthesis: a game theoretic approach

    In this invited contribution, we give a comprehensive introduction to game theory as applied to computer-aided synthesis. In this context, we present some classical results on two-player zero-sum games and then on multi-player non-zero-sum games. The simple case of one-player games is strongly related to automata theory on infinite words. Throughout the article, we focus on general approaches to solving the studied problems, and we provide several illustrative examples as well as intuitions behind the proofs. (Comment: Invited contribution for the conference "Developments in Language Theory", DLT 2017.)
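    One of the classical two-player zero-sum results referred to above is that reachability games on finite graphs are solved by the attractor construction. A minimal sketch on an invented example graph follows; the vertex names, ownership, and edges are hypothetical.

        def attractor(vertices, owner, edges, target):
            """owner[v] in {0, 1}; edges[v] lists successors; target is the goal set."""
            win = set(target)
            changed = True
            while changed:
                changed = False
                for v in vertices:
                    if v in win:
                        continue
                    succ = edges[v]
                    if owner[v] == 0 and any(s in win for s in succ):
                        win.add(v)                       # player 0 can choose a move into win
                        changed = True
                    elif owner[v] == 1 and succ and all(s in win for s in succ):
                        win.add(v)                       # player 1 cannot avoid moving into win
                        changed = True
            return win

        # toy game: player 0 owns a and c, the target is {d}
        vertices = ["a", "b", "c", "d"]
        owner = {"a": 0, "b": 1, "c": 0, "d": 1}
        edges = {"a": ["b", "c"], "b": ["a", "d"], "c": ["d"], "d": ["d"]}
        print(attractor(vertices, owner, edges, {"d"}))  # player 0 wins from every vertex here

    Player 0 wins the reachability game from exactly the vertices in the returned set; parity and mean-payoff objectives require more involved fixpoints, but the game-graph setup is the same.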

    Minimizing Expected Cost Under Hard Boolean Constraints, with Applications to Quantitative Synthesis

    In Boolean synthesis, we are given an LTL specification, and the goal is to construct a transducer that realizes it against an adversarial environment. Often, a specification contains both Boolean requirements that should be satisfied against an adversarial environment, and multi-valued components that refer to the quality of the satisfaction and whose expected cost we would like to minimize with respect to a probabilistic environment. In this work we study, for the first time, mean-payoff games in which the system aims at minimizing the expected cost against a probabilistic environment, while surely satisfying an ω-regular condition against an adversarial environment. We consider the cases in which the ω-regular condition is given as a parity objective or by an LTL formula. We show that, in general, optimal strategies need not exist, and moreover that the limit value cannot be approximated by finite-memory strategies. We thus focus on computing the limit value, and give tight complexity bounds for synthesizing ε-optimal strategies, for both finite-memory and infinite-memory strategies. We show that our game naturally arises in various contexts of synthesis with Boolean and multi-valued objectives. Beyond direct applications in synthesis with costs and rewards attached to certain behaviors, it allows us to compute the minimal sensing cost of ω-regular specifications -- a measure of quality in which we look for a transducer that minimizes the expected number of signals that are read from the input.
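    Setting the adversarial (parity/LTL) side condition aside, the probabilistic half of the objective above is an expected mean-cost minimization; on a fully probabilistic arena it can be approximated by plain value iteration. A hypothetical two-state example (states, costs, and distributions are invented, and the sure ω-regular constraint from the paper is ignored):

        import numpy as np

        # per state: list of actions, each a (cost, distribution over states) pair
        mdp = {
            0: [(2.0, [0.5, 0.5]), (4.0, [1.0, 0.0])],
            1: [(1.0, [0.0, 1.0]), (3.0, [0.9, 0.1])],
        }

        steps = 5000
        v = np.zeros(len(mdp))
        for _ in range(steps):                           # n-step minimal expected total cost
            v = np.array([min(cost + np.dot(dist, v) for cost, dist in mdp[s]) for s in mdp])
        print("approximate minimal expected mean cost from state 0:", v[0] / steps)

    Dividing the n-step total cost by n approximates the long-run expected mean cost; the paper's setting additionally requires the controller to surely satisfy the ω-regular condition against the adversary, which this sketch does not handle.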

    Probabilistic Model Checking for Energy Analysis in Software Product Lines

    In a software product line (SPL), a collection of software products is defined by their commonalities in terms of features rather than by specifying all products explicitly one by one. Several verification techniques have been adapted to establish temporal properties of SPLs. Symbolic and family-based model checking have proven successful in tackling the combinatorial blow-up that arises when reasoning about several feature combinations. However, most formal verification approaches for SPLs presented in the literature focus on static SPLs, where the features of a product are fixed and cannot be changed at runtime. This is in contrast to dynamic SPLs, which allow the feature combinations of a product to be adapted dynamically after deployment. The main contribution of the paper is a compositional modeling framework for dynamic SPLs, which supports probabilistic and nondeterministic choices and allows for quantitative analysis. We specify the feature changes during runtime within an automata-based coordination component, enabling reasoning about strategies for triggering dynamic feature changes that optimize various quantitative objectives, e.g., energy or monetary costs and reliability. For our framework there is a natural and conceptually simple translation into the input language of the prominent probabilistic model checker PRISM. This facilitates the application of PRISM's powerful symbolic engine to the operational behavior of dynamic SPLs and their family-based analysis against various quantitative queries. We demonstrate the feasibility of our approach with a case study of an energy-aware bonding network device. (Comment: 14 pages, 11 figures.)
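    The kind of quantitative query such a model is checked against can be illustrated with a tiny hand-rolled example; the "powersave" feature, costs, and probabilities below are invented and not taken from the paper's case study. A controller nondeterministically keeps or toggles the feature while job completion is probabilistic, and we compute the minimal expected energy to drain a small workload, the sort of analysis PRISM would perform symbolically on the translated model.

        # state = (jobs_left, powersave); in each step the controller keeps or toggles the feature
        def step(state, toggle):
            jobs, ps = state
            ps2 = 1 - ps if toggle else ps
            cost = 1.0 if ps2 else 3.0                   # energy per step (hypothetical)
            done = 0.4 if ps2 else 0.9                   # powersave finishes jobs more slowly
            return cost, [(done, (jobs - 1, ps2)), (1 - done, (jobs, ps2))]

        states = [(j, p) for j in range(4) for p in (0, 1)]
        value = {s: 0.0 for s in states}
        for _ in range(200):                             # value iteration to (near) convergence
            new = {}
            for s in states:
                if s[0] == 0:
                    new[s] = 0.0                         # workload finished, no more energy spent
                else:
                    new[s] = min(cost + sum(p * value[t] for p, t in dist)
                                 for cost, dist in (step(s, tog) for tog in (False, True)))
            value = new
        print("minimal expected energy from (3 jobs, powersave off):", round(value[(3, 0)], 2))

    The minimum over the controller's choices and the weighted sum over the probabilistic outcomes mirror the nondeterministic and probabilistic choices the framework combines.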