
    Flexible and Configurable Verification Policies with Omnibus

    The three main assertion-based verification approaches are run-time assertion checking (RAC), extended static checking (ESC) and full formal verification (FFV). Each approach offers a different balance between rigour and ease of use, making them appropriate in different situations. Our goal is to explore the use of these approaches together in a flexible way, enabling an application to be broken down into parts with different reliability requirements and a different verification approach used in each part. We explain the benefits of using the approaches together, present a set of guidelines to avoid potential conflicts and give an overview of how the Omnibus IDE supports the full range of assertion-based verification approaches within a single tool.
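
The lightest of the three approaches, run-time assertion checking, can be illustrated with a small contract decorator in Python. This is only a sketch of the general RAC idea, not the Omnibus notation; the `contract` decorator and `sqrt_newton` example are hypothetical.

```python
import functools

def contract(pre=None, post=None):
    """Attach pre/post-conditions that are checked at run time (RAC-style)."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} violated"
            return result
        return wrapper
    return decorate

@contract(pre=lambda x: x >= 0, post=lambda r, x: abs(r * r - x) < 1e-9)
def sqrt_newton(x):
    """Square root by Newton's method; the contract is checked on every call."""
    guess = x or 1.0
    for _ in range(50):
        guess = 0.5 * (guess + x / guess)
    return guess
```

ESC and FFV would instead try to discharge the same pre/post-conditions statically, which is where the trade-off between rigour and ease of use arises.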

    PyFrac: A planar 3D hydraulic fracture simulator

    Fluid-driven fractures propagate in the upper earth crust either naturally or in response to engineered fluid injections. The quantitative prediction of their evolution is critical in order to better understand their dynamics as well as to optimize their creation. We present a Python implementation of an open-source hydraulic fracture propagation simulator based on the implicit level set algorithm originally developed by Peirce & Detournay (2008) -- "An implicit level set method for modeling hydraulically driven fractures". Comp. Meth. Appl. Mech. Engng, (33-40):2858--2885. This algorithm couples a finite discretization of the fracture with the near-tip asymptotic solutions of a steadily propagating semi-infinite hydraulic fracture. This makes it possible to resolve the multi-scale processes governing hydraulic fracture growth accurately, even with relatively coarse meshes. We present an overview of the mathematical formulation, the numerical scheme and the details of our implementation. A series of problems, including a radial hydraulic fracture verification benchmark, the propagation of a height-contained hydraulic fracture, the lateral spreading of a magmatic dyke and the handling of fracture closure, is presented to demonstrate the capabilities, accuracy and robustness of the implemented algorithm.
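
The core trick of an implicit level set scheme can be sketched in a few lines: rather than resolving the tip singularity on the mesh, a known tip asymptote is inverted to estimate the distance from a near-tip cell to the fracture front. The sketch below uses only the toughness-dominated asymptote w = (K'/E')·sqrt(l); PyFrac itself blends several asymptotic regimes, and the parameter values in the test are illustrative.

```python
def front_distance_toughness(w, K_prime, E_prime):
    """Invert the toughness-dominated tip asymptote w = (K'/E') * sqrt(l)
    to estimate the distance l from a near-tip cell with opening w to the
    fracture front.  Locating the front this way is what lets the scheme
    stay accurate on relatively coarse meshes."""
    return (w * E_prime / K_prime) ** 2
```

In the full algorithm this inversion is performed in the cells adjacent to the front at every time step, and the recovered distances define the level set from which the front is reconstructed.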

    Proving theorems by program transformation

    In this paper we present an overview of the unfold/fold proof method, a method for proving theorems about programs based on program transformation. As a metalanguage for specifying programs and program properties we adopt constraint logic programming (CLP), and we present a set of transformation rules (including the familiar unfolding and folding rules) which preserve the semantics of CLP programs. Then we show how program transformation strategies can be used, similarly to theorem-proving tactics, for guiding the application of the transformation rules and inferring the properties to be proved. We work out three examples: (i) the proof of predicate equivalences, applied to the verification of equality between CCS processes; (ii) the proof of first-order formulas via an extension of the quantifier elimination method; and (iii) the proof of temporal properties of infinite-state concurrent systems, using a transformation strategy that performs program specialization.
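
The unfolding rule at the heart of the method can be sketched for the propositional case: an atom in a clause body is replaced by the bodies of the clauses defining it, one new clause per definition. This is a simplified illustration (clauses as `(head, body)` pairs, no unification or constraints), not the CLP rule set of the paper.

```python
def unfold(clause, atom, program):
    """One propositional unfold step: replace `atom` in the body of
    `clause` by the body of each program clause whose head is `atom`.
    Clauses are (head, body_list) pairs; unification is omitted."""
    head, body = clause
    if atom not in body:
        return [clause]
    i = body.index(atom)
    return [(head, body[:i] + defn_body + body[i + 1:])
            for defn_head, defn_body in program if defn_head == atom]
```

For example, unfolding `q` in `p :- q, t` against the program `{q :- r. q :- s.}` yields the two clauses `p :- r, t` and `p :- s, t`, which is exactly the step a transformation strategy would schedule.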

    Towards Safer Smart Contracts: A Survey of Languages and Verification Methods

    With a market capitalisation of over USD 205 billion in just under ten years, public distributed ledgers have experienced significant adoption. Apart from novel consensus mechanisms, their success is also attributable to smart contracts. These programs allow distrusting parties to enter agreements that are executed autonomously. However, implementation issues in smart contracts have caused severe losses to the users of such contracts. Significant efforts have been made to improve their security by introducing new programming languages and advanced verification methods. We provide a survey of those efforts in two parts. First, we introduce several smart contract languages, focussing on security features. To that end, we present an overview concerning paradigm, type, instruction set, semantics, and metering. Second, we examine verification tools and methods for smart contracts and distributed ledgers. Accordingly, we introduce their verification approach, level of automation, coverage, and supported languages. Last, we present future research directions including formal semantics, verified compilers, and automated verification.

    The Political and Economic Costs of a Fully Verifiable Kyoto Protocol

    Until now, policy makers and researchers have considered the problem of uncertainty and verification to be of minor importance for the Kyoto process. However, the first studies that recently appeared on uncertainty estimation of carbon accounting reveal that uncertainties of the reported emissions on the country level are large. In an environment of such large uncertainties, verification of emission reductions must be viewed as a crucial mechanism to secure the very functioning of the Protocol. There are at least four reasons why verification is important: (1) The political cost of no-verification is potentially very high. Under no-verification, in 2012 we will have little trust in our knowledge of (a) what we did, and (b) who did what between 1990 and 2010. (2) The Kyoto Protocol requires verifiability for, inter alia, trade (Article 17); hence overall country emissions must be verifiable. (3) Non-verifiable emission reduction claims could lead to misconduct, putting the entire market process in danger. The reasons for this are: asymmetric gains from biased reporting could lead to market disintegration; Kyoto provides perverse incentives to preserve and enlarge the "shadow carbon economy"; and uncertainty of supply of emission reductions leads to less predictable market conditions and economic efficiency losses. (4) Scientific proof of the true environmental benefits of the Protocol is at least delayed. Since the issue of uncertainty has been ignored for a long time, the institutional basis for verification is still very weak. Currently, the institutional set-up is such that we face a situation where there are no rules and instruments to secure verifiable emission reduction claims, and a sufficiently strong and independent body to police uncertainties has not been installed. In this paper, we provide a set of tools to strategically deal with the problem of uncertainty and verification under the Kyoto Protocol. We do this by providing an overview of the instruments to deal with verification (no-, trend-, level- and top-down/bottom-up verification under PCA and FCA); computing cost scenarios for those instruments under various flexibility scenarios; and providing a short discussion on practical steps and crucial decisions that lead to a more verifiable Protocol.

    Estimation of distribution parameters as a tool for model-based system engineering and model identification

    The estimation of the parameters of a probability distribution (e.g., moments) plays an important role both in model-based system engineering (e.g., analysis and verification through Statistical Model Checking (SMC)) and in the identification of parameters of predictive models (e.g., systems biology, social networks). The contribution of this PhD thesis is both on the algorithm side and on the modeling side. On the algorithm side, we review a set of Monte Carlo-based Statistical Model Checking tools and algorithms for the verification of Cyber-Physical Systems, and we provide selection criteria for the verification problem at hand. Furthermore, we present an efficient Monte Carlo-based algorithm to estimate the expected value of a multivariate random variable when marginal density functions are not known. We prove the correctness of our algorithm, we give an upper bound and a lower bound on its complexity and we present experimental results confirming our evaluations. On the modeling side, we present a mechanistic and identifiable model to predict, at the level of a single node and of a set of nodes, the expected value of the retweeting rate of a message inside a social network at a certain time. Our model parameters are random variables, whose distribution parameters are estimated from an available dataset. We experimentally show that our model reliably predicts both the qualitative and the quantitative time behavior of retweeting rates. This is confirmed by the high correlation between the predicted and the observed data. These results enable a simulation-based analysis of the behavior of a user, or of a set of users, inside a network.
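
The general shape of Monte Carlo expected-value estimation with a statistical guarantee can be sketched as follows. This is a generic (epsilon, delta) scheme for a bounded random variable using the Hoeffding sample bound, not the thesis's own algorithm; the function names are illustrative.

```python
import math
import random

def mc_expected_value(sample, eps, delta, lo=0.0, hi=1.0):
    """Estimate E[X] for a random variable bounded in [lo, hi] to within
    +/- eps with probability >= 1 - delta.  The Hoeffding inequality gives
    the sufficient sample size n >= (hi - lo)^2 * ln(2/delta) / (2 * eps^2);
    only a sampler is needed, no knowledge of the density functions."""
    n = math.ceil((hi - lo) ** 2 * math.log(2.0 / delta) / (2.0 * eps ** 2))
    return sum(sample() for _ in range(n)) / n
```

SMC tools apply the same pattern to system traces: `sample` runs one simulation and returns a bounded observation, and the bound converts the desired accuracy and confidence into a number of runs.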

    Integrating bottom-up and top-down reasoning in COLAB

    The knowledge compilation laboratory COLAB integrates declarative knowledge representation formalisms, providing source-to-source and source-to-code compilers for various knowledge types. Its architecture separates taxonomical and assertional knowledge. The assertional component consists of a constraint system and a rule system, which supports bottom-up and top-down reasoning with Horn clauses. Two approaches for forward reasoning have been implemented. The first, set-oriented approach uses a fixpoint computation. It allows top-down verification of selected premises. Goal-directed bottom-up reasoning is achieved by a magic-set transformation of the rules with respect to a goal. The second, tuple-oriented approach reasons forward to derive the consequences of an explicitly given set of facts. This is achieved by a transformation of the rules into top-down executable Horn clauses. The paper gives an overview of the various forward reasoning approaches, their compilation into an abstract machine and their integration into the COLAB shell.
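
The set-oriented fixpoint computation described above can be sketched for propositional Horn rules: every rule whose body is already derived fires, and the process repeats until no new facts appear. This naive version (no magic sets, no abstract machine) is an illustration of the principle, not COLAB's implementation.

```python
def fixpoint(facts, rules):
    """Naive bottom-up (forward) reasoning for propositional Horn rules:
    repeatedly fire every rule (head, body) whose body atoms have all been
    derived, until the set of derived facts stops growing."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived
```

The magic-set transformation mentioned in the abstract restricts this computation to facts relevant to a given goal, making the bottom-up evaluation goal-directed.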