
    Conditional Model Checking

    Software model checking, as an undecidable problem, has three possible outcomes: (1) the program satisfies the specification, (2) the program does not satisfy the specification, and (3) the model checker fails. The third outcome usually manifests itself in a space-out, time-out, or one component of the verification tool giving up; in all of these failing cases, significant computation is performed by the verification tool before the failure, but no result is reported. We propose to reformulate the model-checking problem as follows, in order to have the verification tool report a summary of the performed work even in case of failure: given a program and a specification, the model checker returns a condition P ---usually a state predicate--- such that the program satisfies the specification under the condition P ---that is, as long as the program does not leave states in which P is satisfied. We are of course interested in model checkers that return conditions P that are as weak as possible. Instead of outcome (1), the model checker will return P = true; instead of (2), the condition P will describe the part of the state space that satisfies the specification; and in case (3), the condition P can summarize the work that has been performed by the model checker before space-out, time-out, or giving up. If complete verification is necessary, then a different verification method or tool may be used to focus on the states that violate the condition. We give such conditions as input to a conditional model checker, such that the verification problem is restricted to the part of the state space that satisfies the condition. Our experiments show that repeated application of conditional model checkers, using different conditions, can significantly improve the verification results, state-space coverage, and performance. Comment: 14 pages, 8 figures, 3 tables.
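
    As a rough illustration of this workflow (not the authors' tool interface; the Result type, the verdict strings, and the already_verified parameter are assumptions), a driver could chain conditional verifiers so that each receives the condition established so far and only has to cover the remaining states; the point is only that partial work is preserved as a condition rather than discarded.

        # Hypothetical driver for chaining conditional model checkers; the
        # verifier callbacks and the condition encoding are illustrative only.
        from dataclasses import dataclass

        @dataclass
        class Result:
            verdict: str     # "true", "false", or "unknown"
            condition: str   # predicate P: the spec holds while P is satisfied

        def run_conditionally(verifiers, program, spec, budget):
            condition = "false"                      # nothing verified yet
            for verify in verifiers:
                result = verify(program, spec, already_verified=condition, budget=budget)
                if result.verdict == "false":
                    return result                    # genuine violation found
                condition = result.condition         # verified region grows toward "true"
                if condition == "true":
                    return Result("true", "true")    # outcome (1): fully verified
            # Budget exhausted: report the accumulated condition instead of nothing.
            return Result("unknown", condition)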

    Incremental Quantitative Analysis on Dynamic Costs

    In quantitative program analysis, values are assigned to execution traces to represent a quality measure. Such analyses cover important applications, e.g., resource usage. Examining all traces is well known to be intractable, and therefore traditional algorithms reason over an over-approximated set. Typically, inaccuracy arises due to the inclusion of infeasible paths in this set; thus path-sensitivity is one cure. However, there is another reason for the inaccuracy: the cost model, i.e., the way in which the analysis of each trace is quantified, is dynamic. That is, the cost of a trace depends on the context in which the trace is executed. Thus the goal of accurate analysis, already challenged by path-sensitivity, is now further challenged by context-sensitivity. In this paper, we address the problem of quantitative analysis defined over a dynamic cost model. Our algorithm is an "anytime" algorithm: it generates an answer quickly, but if the analysis resource budget allows, it progressively produces better solutions via refinement iterations. The result of each iteration remains sound, and the sequence of results converges to an exact analysis when given an unlimited resource budget. In order to be scalable, our algorithm is designed to be incremental. Finally, we give evidence that a new level of practicality is achieved by an evaluation on a realistic collection of benchmarks.
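
    A minimal sketch of such an anytime loop (the abstraction object and its methods upper_bound/refine_step/is_exact are hypothetical placeholders, not the paper's API) might look like this:

        # Anytime worst-case-cost analysis: keep a sound over-approximation and
        # tighten it until the resource budget runs out.
        import time

        def anytime_worst_case_cost(abstraction, budget_seconds):
            deadline = time.monotonic() + budget_seconds
            bound = abstraction.upper_bound()        # sound but possibly coarse
            while time.monotonic() < deadline and not abstraction.is_exact():
                # e.g. rule out an infeasible path, or split a calling context
                # whose dynamic cost differs from the merged approximation
                abstraction = abstraction.refine_step()
                bound = min(bound, abstraction.upper_bound())   # never gets worse
            return bound   # sound at every point; exact given unlimited budget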

    The continuum limit of loop quantum gravity - a framework for solving the theory

    The construction of a continuum limit for the dynamics of loop quantum gravity is unavoidable to complete the theory. We explain that such a construction is equivalent to obtaining the continuum physical Hilbert space, which encodes the solutions of the theory. We present iterative coarse graining methods to construct physical states in a truncation scheme and explain in which sense this scheme represents a renormalization flow. We comment on the role of diffeomorphism symmetry as an indicator for the continuum limit. Comment: draft chapter for a volume edited by A. Ashtekar and J. Pullin, to be published in the World Scientific series "100 Years of General Relativity"; v2: small changes and updated references.
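
    Purely as a generic illustration of such a refinement scheme (not the chapter's specific construction), a continuum Hilbert space can be phrased as an inductive limit of Hilbert spaces attached to discretizations $b$, with embedding maps required to be consistent:

        % Generic inductive-limit construction; \mathcal{H}_b lives on discretization b
        % and \iota_{b b'} embeds coarser states into finer ones (b \prec b').
        \[
          \iota_{b' b''} \circ \iota_{b b'} \;=\; \iota_{b b''}
          \quad (b \prec b' \prec b''),
          \qquad
          \mathcal{H}_{\mathrm{cont}} \;=\; \varinjlim_{b}\, \mathcal{H}_b .
        \]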

    Refining Existential Properties in Separation Logic Analyses

    In separation logic program analyses, tractability is generally achieved by restricting invariants to a finite abstract domain. As this domain cannot vary, loss of information can cause failure even when verification is possible in the underlying logic. In this paper, we propose a CEGAR-like method for detecting spurious failures and avoiding them by refining the abstract domain. Our approach is geared towards discovering existential properties, e.g. "list contains value x". To diagnose failures, we use abduction, a technique for inferring command preconditions. Our method works backwards from an error, identifying necessary information lost by abstraction, and refining the forward analysis to avoid the error. We define domains for several classes of existential properties, and show their effectiveness on case studies adapted from Redis, Azureus and FreeRTOS.
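
    A rough sketch of the CEGAR-style loop described above (the forward_analysis and abduce callbacks and the domain interface are hypothetical placeholders, not the paper's implementation):

        # Abstraction-refinement loop: run the forward separation-logic analysis,
        # and on failure use abduction on the error trace to recover the dropped
        # existential fact (e.g. "list contains value x") and extend the domain.
        def analyse_with_refinement(program, spec, domain,
                                    forward_analysis, abduce, max_refinements=10):
            for _ in range(max_refinements):
                result = forward_analysis(program, spec, domain)
                if result.verified:
                    return "verified"
                missing = abduce(result.error_trace, domain)   # work backwards from the error
                if missing is None:
                    return "genuine failure"                   # not a spurious, abstraction-induced error
                domain = domain.extended_with(missing)         # refine and retry
            return "unknown"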

    Context-Updates Analysis and Refinement in Chisel

    This paper presents the context-updates synthesis component of Chisel--a tool that synthesizes a program slicer directly from a given algebraic specification of a programming language operational semantics. (By context-updates we understand programming language constructs such as goto instructions or function calls.) The context-updates synthesis follows two directions: an over-approximating phase that extracts a set of potential context-update constructs, and an under-approximating phase that refines the results of the first step by testing the behaviour of the context-update constructs produced in the previous phase. We use two experimental semantics that cover two types of language paradigms (high-level imperative and low-level assembly languages), and we conduct the tests on standard benchmarks used in avionics. Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854).
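
    The two phases can be pictured with a small pipeline sketch (the callbacks extract_candidates and exhibits_context_update are assumed names, and treating a single confirming test as sufficient is a simplification):

        # Over-approximate, then refine by testing: phase 1 collects every construct
        # that might update the control-flow context; phase 2 keeps only those whose
        # observed behaviour on the test programs confirms it.
        def synthesize_context_updates(semantics, test_programs,
                                       extract_candidates, exhibits_context_update):
            candidates = extract_candidates(semantics)          # phase 1: over-approximation
            return {c for c in candidates                       # phase 2: under-approximation
                    if any(exhibits_context_update(c, p) for p in test_programs)}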

    What's the Over/Under? Probabilistic Bounds on Information Leakage

    Quantitative information flow (QIF) is concerned with measuring how much of a secret is leaked to an adversary who observes the result of a computation that uses it. Prior work has shown that QIF techniques based on abstract interpretation with probabilistic polyhedra can be used to analyze the worst-case leakage of a query, on-line, to determine whether that query can be safely answered. While this approach can provide precise estimates, it does not scale well. This paper shows how to solve the scalability problem by augmenting the baseline technique with sampling and symbolic execution. We prove that our approach never underestimates a query's leakage (it is sound), and detailed experimental results show that we can match the precision of the baseline technique but with orders of magnitude better performance.
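
    For intuition only (this is the textbook min-entropy-leakage calculation, not the paper's polyhedra-plus-sampling machinery): if an analysis reports a sound upper bound on the posterior Bayes vulnerability, the leakage derived from it can only err on the safe side.

        # Min-entropy leakage computed from an over-approximated posterior vulnerability.
        import math

        def leakage_upper_bound(prior, posterior_vulnerability_upper):
            """prior: dict secret -> probability; the second argument is a sound
            upper bound on the adversary's one-try guessing probability after
            observing the query's output."""
            v_prior = max(prior.values())            # best one-try guess a priori
            return math.log2(posterior_vulnerability_upper / v_prior)

        # Uniform prior over 4 secrets, posterior vulnerability bounded by 0.5:
        print(leakage_upper_bound({s: 0.25 for s in "abcd"}, 0.5))   # 1.0 bit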

    Verification Artifacts in Cooperative Verification: Survey and Unifying Component Framework

    The goal of cooperative verification is to combine verification approaches in such a way that they work together to verify a system model. In particular, cooperative verifiers provide exchangeable information (verification artifacts) to other verifiers or consume such information from other verifiers, with the goal of increasing the overall effectiveness and efficiency of the verification process. This paper first gives an overview of approaches for leveraging the strengths of different techniques, algorithms, and tools in order to increase the power and abilities of the state of the art in software verification. Second, we specifically outline cooperative verification approaches and discuss their employed verification artifacts. We formalize all artifacts in a uniform way, thereby fixing their semantics and providing verifiers with a precise meaning of the exchanged information. Comment: 22 pages, 12 figures.
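
    A toy reading of the artifact-exchange idea (the Artifact fields and the verifier signature are assumptions for illustration, not the paper's formalization):

        # One verifier produces an artifact; a second verifier consumes it and
        # continues the work instead of starting from scratch.
        from dataclasses import dataclass

        @dataclass
        class Artifact:
            kind: str      # e.g. "violation witness", "correctness witness", "condition"
            content: str   # serialized payload, e.g. an automaton or a predicate

        def cooperate(producer, consumer, program, spec):
            verdict, artifact = producer(program, spec, artifact=None)
            if verdict != "unknown":
                return verdict                       # the first tool already decided
            verdict, _ = consumer(program, spec, artifact=artifact)
            return verdict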

    Renormalization of symmetry restricted spin foam models with curvature in the asymptotic regime

    We study the renormalization group flow of the Euclidean Engle-Pereira-Rovelli-Livine and Freidel-Krasnov (EPRL-FK) spin foam model in the large-$j$ limit. The vertex amplitude is deformed to include a cosmological constant term. The state sum is reduced to describe a foliated spacetime whose spatial slices are flat, isotropic and homogeneous. The model admits a non-vanishing extrinsic curvature, whereas the scale factor can expand or contract at successive time steps. The reduction of degrees of freedom allows a numerical evaluation of certain geometric observables on coarser and finer discretizations. Their comparison defines the renormalization group (RG) flow of the model in the parameters $(\alpha,\Lambda,G)$. We first consider the projection of the RG flow along the $\alpha$ direction, which shows a UV-attractive fixed point. Then, we extend our analysis to two- and three-dimensional parameter spaces. Most notably, we find indications of a fixed point in the $(\alpha,\Lambda,G)$ space showing one repulsive and two attractive directions. Comment: 24 pages, 13 figures; v2: references added, corrected counting of attractive/repulsive directions in RG flow; v3: matching published version (revised plots and more thorough discussion of background-independent RG flow).
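
    Schematically (and only as an illustration of the general procedure, not the paper's precise prescription), comparing a geometric observable $\mathcal{O}$ on a finer and a coarser discretization defines the flow of the couplings:

        % Coarse/fine matching condition defining a discretization-based RG step.
        \[
          \langle \mathcal{O} \rangle_{\text{coarse}}(\alpha',\Lambda',G')
          \;\overset{!}{=}\;
          \langle \mathcal{O} \rangle_{\text{fine}}(\alpha,\Lambda,G)
          \quad\Longrightarrow\quad
          (\alpha,\Lambda,G) \;\longmapsto\; (\alpha',\Lambda',G') .
        \]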

    Symbolic Verification of Cache Side-channel Freedom

    Cache timing attacks allow third-party observers to retrieve sensitive information from program executions. But is it possible to automatically check the vulnerability of a program against cache timing attacks and then automatically shield program executions against these attacks? For a given program, a cache configuration and an attack model, our CACHEFIX framework either verifies the cache side-channel freedom of the program or synthesizes a series of patches to ensure cache side-channel freedom during program execution. At the core of our framework is a novel symbolic verification technique based on automated abstraction refinement of cache semantics. The power of such a framework is to allow symbolic reasoning over counterexample traces and to combine it with runtime monitoring for eliminating cache side channels during program execution. Our evaluation with routines from the OpenSSL, libfixedtimefixedpoint, GDK and FourQlib libraries reveals that our CACHEFIX approach (dis)proves cache side-channel freedom within an average of 75 seconds. Besides, in all except one case, CACHEFIX synthesizes all patches within 20 minutes to ensure cache side-channel freedom of the respective routines during execution.
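
    As a toy, brute-force counterpart to the symbolic check (the cache-model callback count_misses and the explicit enumeration of inputs are simplifications; CACHEFIX reasons symbolically instead):

        # A program is side-channel free w.r.t. a miss-count attacker iff, for every
        # public input, the number of cache misses is the same for all secrets.
        def cache_side_channel_free(program, secrets, publics, count_misses):
            for pub in publics:
                observations = {count_misses(program(sec, pub)) for sec in secrets}
                if len(observations) > 1:
                    return False      # two secrets yield distinguishable timings
            return True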

    Soft Contract Verification for Higher-Order Stateful Programs

    Software contracts allow programmers to state rich program properties using the full expressive power of an object language. However, since they are enforced at runtime, monitoring contracts imposes significant overhead and delays error discovery. So contract verification aims to guarantee all or most of these properties ahead of time, enabling valuable optimizations and yielding a more general assurance of correctness. Existing methods for static contract verification satisfy the needs of more restricted target languages, but fail to address the challenges unique to languages conjoining untyped, dynamic programming, higher-order functions, modularity, and statefulness. Our approach tackles all these features at once, in the context of the full Racket system---a mature environment for stateful, higher-order, multi-paradigm programming with or without types. Evaluating our method using a set of both pure and stateful benchmarks, we are able to verify 99.94% of checks statically (all but 28 of 49,861). Stateful, higher-order functions pose significant challenges for static contract verification in particular. In the presence of these features, a modular analysis must permit code from the current module to escape permanently to an opaque context (unspecified code from outside the current module) that may be stateful and therefore store a reference to the escaped closure. Also, contracts themselves, being predicates written in unrestricted Racket, may exhibit stateful behavior; a sound approach must be robust to contracts which are arbitrarily expressive and interwoven with the code they monitor. In this paper, we present and evaluate our solution based on higher-order symbolic execution, explain the techniques we used to address such thorny issues, formalize a notion of behavioral approximation, and use it to provide a mechanized proof of soundness. Comment: ACM SIGPLAN Symposium on Principles of Programming Languages (POPL).
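
    The escaping-closure problem the abstract mentions can be shown with a small example (in Python rather than Racket): once a stateful closure reaches unspecified code, a modular analysis must assume that code may keep it and call it again later.

        def make_counter():
            n = 0
            def counter():
                nonlocal n
                n += 1
                return n
            return counter

        def opaque_context(f):
            # Stand-in for unknown external code: it may store f and re-invoke it.
            opaque_context.saved = f
            return f()

        c = make_counter()
        opaque_context(c)               # the opaque context observes 1 ...
        print(opaque_context.saved())   # ... and may call the escaped closure again: 2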