    Distilling programs for verification

    In this paper, we show how our program transformation algorithm, called distillation, can be used not only for the optimisation of programs but also to facilitate program verification. Using the distillation algorithm, programs are transformed into a specialised form in which functions are tail recursive and very few intermediate structures are created. We then show how properties of this specialised form of program can be easily verified by the application of inductive proof rules. We therefore argue that the distillation algorithm is an ideal candidate for inclusion within compilers, as it facilitates the two goals of program optimisation and verification.
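    To make the target shape concrete, here is a hand-written before/after sketch of the form distillation aims to produce: a definition that builds an intermediate structure and an equivalent tail-recursive, accumulator-passing one. The paper works on a functional language; the Python rendering, the function names, and the example are illustrative assumptions, not output of the distillation algorithm.

```python
# Hand-written sketch of the *shape* of residual program that distillation
# aims to produce; it is not the distillation algorithm itself, and the paper
# works on a functional language rather than Python.

# Before: a composition that builds an intermediate list of squares.
def sum_of_squares(xs):
    squares = [x * x for x in xs]        # intermediate structure
    return sum(squares)

# After: a single accumulator-passing pass in tail-recursive form -- no
# intermediate list is built, and the recursive call is in tail position,
# the form to which inductive proof rules can be applied directly.
# (Python does not optimise tail calls; only the form matters here.)
def sum_of_squares_distilled(xs, acc=0):
    if not xs:
        return acc
    return sum_of_squares_distilled(xs[1:], acc + xs[0] * xs[0])

assert sum_of_squares([1, 2, 3]) == sum_of_squares_distilled([1, 2, 3]) == 14
```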

    Verifying Temporal Properties of Reactive Systems by Transformation

    We show how program transformation techniques can be used for the verification of both safety and liveness properties of reactive systems. In particular, we show how the program transformation technique distillation can be used to transform reactive systems specified in a functional language into a simplified form that can subsequently be analysed to verify temporal properties of the systems. Example systems intended to model mutual exclusion are analysed using these techniques with respect to both safety (mutual exclusion) and liveness (non-starvation), with the errors they contain being correctly identified.
    Comment: In Proceedings VPT 2015, arXiv:1512.02215. This work was supported, in part, by Science Foundation Ireland grant 10/CE/I1855 to Lero - the Irish Software Engineering Research Centre (www.lero.ie), and by the School of Computing, Dublin City University.
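    For a sense of what the mutual-exclusion safety property being checked looks like, the sketch below exhaustively explores the reachable states of a deliberately broken two-process protocol and reports the violating state. This is a generic explicit-state illustration with invented names and an invented protocol; the paper itself establishes such properties by transforming the system with distillation, not by brute-force exploration.

```python
# Toy illustration only: explicit-state exploration of a deliberately broken
# two-process mutual-exclusion protocol.  The paper establishes such safety
# properties by transforming the system with distillation, not by this kind
# of brute-force search; every name here and the protocol itself are invented.

from collections import deque

def update(state, i, new_proc, new_flag):
    procs = list(state[:2])
    procs[i] = new_proc
    return (procs[0], procs[1], new_flag)

def successors(state):
    p0, p1, flag = state
    nexts = []
    for i, proc in enumerate((p0, p1)):
        if proc == "idle":
            nexts.append(update(state, i, "checking", flag))
        elif proc == "checking" and flag == 0:
            # Bug: checking the flag and setting it are separate steps, so both
            # processes can pass the check before either one sets the flag.
            nexts.append(update(state, i, "entering", flag))
        elif proc == "entering":
            nexts.append(update(state, i, "critical", 1))
        elif proc == "critical":
            nexts.append(update(state, i, "idle", 0))
    return nexts

def violates_mutex(state):
    return state[0] == "critical" and state[1] == "critical"

# Breadth-first exploration of the reachable state space.
seen, frontier, bad = set(), deque([("idle", "idle", 0)]), None
while frontier:
    s = frontier.popleft()
    if s in seen:
        continue
    seen.add(s)
    if violates_mutex(s):
        bad = s
        break
    frontier.extend(successors(s))

if bad:
    print("mutual exclusion violated in state:", bad)
else:
    print("mutual exclusion holds on all reachable states")
```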

    What Works at Scale? Distilling the Critical Success Factors for Scaling Up Rural Sanitation

    This paper is based on the Knowledge Sharing Forum of the same name. It examines the conditions for success in sanitation programs and the strategies that lead to robust implementation in various countries.

    Verifiable Reinforcement Learning via Policy Extraction

    While deep reinforcement learning has successfully solved many challenging control tasks, its real-world applicability has been limited by the inability to ensure the safety of learned policies. We propose an approach to verifiable reinforcement learning by training decision tree policies, which can represent complex policies (since they are nonparametric), yet can be efficiently verified using existing techniques (since they are highly structured). The challenge is that decision tree policies are difficult to train. We propose VIPER, an algorithm that combines ideas from model compression and imitation learning to learn decision tree policies guided by a DNN policy (called the oracle) and its Q-function, and show that it substantially outperforms two baselines. We use VIPER to (i) learn a provably robust decision tree policy for a variant of Atari Pong with a symbolic state space, (ii) learn a decision tree policy for a toy game based on Pong that provably never loses, and (iii) learn a provably stable decision tree policy for cart-pole. In each case, the decision tree policy achieves performance equal to that of the original DNN policy.
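    As a rough illustration of the general policy-extraction idea, the sketch below clones an invented oracle policy into a shallow decision tree on oracle-visited states. This is plain behavioural cloning, not VIPER itself (VIPER additionally weights samples by the oracle's Q-function and iterates DAgger-style); the toy dynamics, the oracle, and all names are stand-ins.

```python
# Rough sketch of distilling an "oracle" policy into a decision tree by
# imitation.  This is plain behavioural cloning on oracle-visited states, not
# VIPER itself (VIPER also reweights states by the oracle's Q-function and
# iterates DAgger-style); the toy dynamics and the oracle are invented.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def oracle_policy(state):
    # Stand-in for a trained DNN policy: push the system back toward the origin.
    position, velocity = state
    return int(position + 0.5 * velocity < 0)          # discrete action 0 or 1

def step(state, action, dt=0.05):
    # Toy dynamics standing in for an environment such as cart-pole.
    position, velocity = state
    force = 1.0 if action == 1 else -1.0
    velocity = velocity + force * dt
    return np.array([position + velocity * dt, velocity])

# 1. Roll out the oracle to collect (state, oracle action) pairs.
rng = np.random.default_rng(0)
states, actions = [], []
for _ in range(200):
    s = rng.uniform(-1.0, 1.0, size=2)
    for _ in range(50):
        a = oracle_policy(s)
        states.append(s)
        actions.append(a)
        s = step(s, a)

# 2. Fit a shallow decision tree to imitate the oracle on the visited states.
tree = DecisionTreeClassifier(max_depth=4).fit(np.array(states), np.array(actions))

# 3. The tree is the verifiable artefact: each leaf is an axis-aligned region
#    of state space mapped to one action, so its behaviour can be analysed exactly.
agreement = tree.score(np.array(states), np.array(actions))
print(f"tree agrees with the oracle on {agreement:.1%} of visited states")
```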