    Trustworthy Refactoring via Decomposition and Schemes: A Complex Case Study

    Widely used, complex code refactoring tools lack solid reasoning about the correctness of the transformations they implement, while interest in provably correct refactoring is ever increasing, since only formal verification can provide true confidence in applying tool-automated refactoring to industrial-scale code. Using our strategic-rewriting-based refactoring specification language, we present the decomposition of a complex transformation into smaller steps that can be expressed as instances of refactoring schemes, and we then demonstrate the semi-automatic formal verification of the components based on a theoretical understanding of the semantics of the programming language. The extensible and verifiable refactoring definitions can be executed in our interpreter, built on top of a static analyser framework.
    Comment: In Proceedings VPT 2017, arXiv:1708.0688
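    To make the idea of a refactoring step expressed as a rewrite rule concrete, here is a minimal sketch in plain Python, not the paper's specification language; the expression shapes and the rule are hypothetical, and a real refactoring scheme would also carry its side conditions.

```python
# Illustrative sketch only: one rewrite rule applied bottom-up over an expression
# tree, in the spirit of a refactoring expressed as an instance of a rewrite scheme.
# The AST encoding and the rule are hypothetical.

def rewrite_add_zero(node):
    """One rewrite rule: e + 0  ==>  e (semantics-preserving for integers)."""
    if isinstance(node, tuple) and node[0] == "add" and node[2] == 0:
        return node[1]
    return node

def apply_bottom_up(rule, node):
    """Apply the rule everywhere, children first (a simple traversal strategy)."""
    if isinstance(node, tuple):
        node = (node[0],) + tuple(apply_bottom_up(rule, c) for c in node[1:])
    return rule(node)

expr = ("mul", ("add", ("var", "x"), 0), ("add", 2, 0))
print(apply_bottom_up(rewrite_add_zero, expr))  # ('mul', ('var', 'x'), 2)
```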

    Model-Checking Process Equivalences

    Process equivalences are formal methods that relate programs and systems which, informally, behave in the same way. Since there is no unique notion of what it means for two dynamic systems to display the same behaviour, there is a multitude of formal process equivalences, ranging from bisimulation to trace equivalence, categorised in the linear-time branching-time spectrum. We present a logical framework based on an expressive modal fixpoint logic which is capable of defining many process equivalence relations: for each such equivalence there is a fixed formula which is satisfied by a pair of processes if and only if they are equivalent with respect to this relation. We explain how to do model checking, even symbolically, for a significant fragment of this logic that captures many process equivalences. This allows model checking technology to be used for process equivalence checking. We show how partial evaluation can be used to obtain decision procedures for process equivalences from the generic model checking scheme.
    Comment: In Proceedings GandALF 2012, arXiv:1210.202
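    As a concrete illustration of one point on the linear-time branching-time spectrum, the following sketch computes strong bisimilarity on a small labelled transition system as a naive greatest fixpoint; it is plain Python with hypothetical names, not the paper's encoding of equivalences as modal fixpoint-logic formulas.

```python
# Illustrative sketch only: strong bisimilarity as a greatest fixpoint over a small
# labelled transition system (naive refinement, exponential-ish but fine for toys).

def bisimilar(states, trans, p, q):
    """trans: set of (source, label, target) triples."""
    rel = {(s, t) for s in states for t in states}        # start from the full relation

    def ok(s, t, r):
        # every move of s must be matched by t within r, and vice versa
        return (all(any((s2, t2) in r for (t1, a2, t2) in trans if t1 == t and a2 == a)
                    for (s1, a, s2) in trans if s1 == s) and
                all(any((s2, t2) in r for (s1, a2, s2) in trans if s1 == s and a2 == a)
                    for (t1, a, t2) in trans if t1 == t))

    changed = True
    while changed:                                         # refine until a fixpoint
        keep = {(s, t) for (s, t) in rel if ok(s, t, rel)}
        changed = keep != rel
        rel = keep
    return (p, q) in rel

# p = a.(b + c)   vs   q = a.b + a.c : same traces {a, ab, ac}, different branching.
states = {"p", "p1", "q", "q1", "q2", "stop"}
trans = {("p", "a", "p1"), ("p1", "b", "stop"), ("p1", "c", "stop"),
         ("q", "a", "q1"), ("q1", "b", "stop"),
         ("q", "a", "q2"), ("q2", "c", "stop")}
print(bisimilar(states, trans, "p", "q"))   # False: trace-equivalent but not bisimilar
```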

    Four Lessons in Versatility or How Query Languages Adapt to the Web

    Exposing not only human-centered information but also machine-processable data on the Web is one of the commonalities of recent Web trends. It has enabled a new kind of applications and businesses where data is used in ways not foreseen by the data providers. Yet this exposition has fractured the Web into islands of data, each in a different Web format: some providers choose XML, others RDF, yet others JSON or OWL for their data, even in similar domains. This fracturing stifles innovation, as application builders have to cope not with one Web stack (e.g., XML technology) but with several, each of considerable complexity. With Xcerpt we have developed a rule- and pattern-based query language that aims to shield application builders from much of this complexity: in a single query language, XML and RDF data can be accessed, processed, combined, and re-published. Though the need for combined access to XML and RDF data has been recognized in previous work (including the W3C’s GRDDL), our approach differs in four main aspects: (1) we provide a single language (rather than two separate or embedded languages), thus minimizing the conceptual overhead of dealing with disparate data formats; (2) both the declarative (logic-based) and the operational semantics are unified in that they apply to querying XML and RDF in the same way; (3) we show that the resulting query language can be implemented by reusing traditional database technology, if desirable; (4) nevertheless, we also give a unified evaluation approach based on interval labelings of graphs that is at least as fast as existing approaches for tree-shaped XML data, yet provides linear-time and linear-space querying also for many RDF graphs. We believe that Web query languages are the right tool for declarative data access in Web applications and that Xcerpt is a significant step towards more convenient, yet highly efficient, data access in a “Web of Data”.
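    The interval-labeling idea mentioned above can be illustrated on tree-shaped data: one traversal assigns each node a (pre, post) interval, after which ancestor tests are constant-time comparisons. The sketch below is the textbook version of this labeling with a hypothetical node structure, not Xcerpt's actual evaluation algorithm.

```python
# Illustrative sketch only: (pre, post) interval labeling of a tree, the textbook
# idea behind interval-based evaluation of structural queries on tree-shaped data.

def label(node, counter=None, labels=None):
    """Assign each node an interval [pre, post]; descendants nest strictly inside it."""
    if counter is None:
        counter, labels = [0], {}
    counter[0] += 1
    pre = counter[0]
    for child in node.get("children", []):
        label(child, counter, labels)
    counter[0] += 1
    labels[node["id"]] = (pre, counter[0])
    return labels

def is_ancestor(labels, a, d):
    """a is an ancestor of d iff d's interval nests strictly inside a's."""
    (pa, qa), (pd, qd) = labels[a], labels[d]
    return pa < pd and qd < qa

doc = {"id": "book", "children": [
    {"id": "title"},
    {"id": "chapter", "children": [{"id": "section"}]},
]}
labels = label(doc)
print(is_ancestor(labels, "book", "section"))    # True
print(is_ancestor(labels, "title", "section"))   # False
```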

    A Semantic Similarity Measure for Expressive Description Logics

    A totally semantic measure is presented which is able to calculate a similarity value between concept descriptions, between a concept description and an individual, or between individuals, expressed in an expressive description logic. It is applicable to symbolic descriptions although it uses a numeric approach for the calculation. Since Description Logics constitute the theoretical framework for ontological knowledge representation and reasoning, the proposed measure can be effectively used for agglomerative and divisive clustering tasks applied to the Semantic Web domain.
    Comment: 13 pages; appeared at CILC 2005, Convegno Italiano di Logica Computazionale; also available at http://www.disp.uniroma2.it/CILC2005/downloads/papers/15.dAmato_CILC05.pd
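    To make the contrast with syntactic measures concrete, the following sketch computes a purely extensional similarity (Jaccard overlap of concept extensions under one interpretation); it is a generic stand-in with hypothetical concept names, not the measure defined in the paper.

```python
# Illustrative sketch only: a purely extensional (hence "semantic") similarity,
# here plain Jaccard overlap of concept extensions. NOT the paper's measure.

def jaccard_similarity(ext_c, ext_d):
    """Similarity of two concepts given their extensions (sets of individuals)."""
    if not ext_c and not ext_d:
        return 1.0
    return len(ext_c & ext_d) / len(ext_c | ext_d)

# Hypothetical extensions as they might be retrieved from a reasoner / ABox.
student   = {"ann", "bob", "carl"}
employee  = {"bob", "carl", "dana"}
professor = {"dana"}

print(jaccard_similarity(student, employee))    # 0.5  (2 shared of 4 total)
print(jaccard_similarity(student, professor))   # 0.0  (disjoint extensions)
```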

    Set-Based Pre-Processing for Points-To Analysis

    We present set-based pre-analysis: a virtually universal optimization technique for flow-insensitive points-to analysis. Points-to analysis computes a static abstraction of how object values flow through a program’s variables. Set-based pre-analysis relies on the observation that much of this reasoning can take place at the set level rather than the value level. Computing constraints at the set level results in significant optimization opportunities: we can rewrite the input program into a simplified form with the same essential points-to properties. This rewrite results in removing both local variables and instructions, thus simplifying the subsequent value-based points-to computation. Effectively, set-based pre-analysis puts the program in a normal form optimized for points-to analysis. Compared to other techniques for off-line optimization of points-to analyses in the literature, the new elements of our approach are the ability to eliminate statements, and not just variables, as well as its modularity: set-based pre-analysis can be performed on the input just once, e.g., allowing the pre-optimization of libraries that are subsequently reused many times and for different analyses. In experiments with Java programs, set-based pre-analysis eliminates 30% of the program’s local variables and 30% or more of computed context-sensitive points-to facts, over a wide set of benchmarks and analyses, resulting in a 20% average speedup (max: 110%, median: 18%).
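    A much-simplified illustration of the general idea: if a variable is defined by exactly one copy statement, its points-to set must equal that of its source, so both the variable and the copy instruction can be removed before any value-level points-to reasoning. The sketch below uses a hypothetical toy program representation and is not the paper's algorithm.

```python
# Illustrative sketch only: eliminate variables defined by a single copy statement,
# rewriting the program before a value-level points-to analysis runs.
# (Assumes no copy cycles, for brevity.)

def pre_simplify(stmts):
    """stmts: list of ('new', x, site) or ('copy', x, y), meaning x = new / x = y."""
    defs = {}
    for s in stmts:
        defs.setdefault(s[1], []).append(s)

    # Map each copy-only variable to its (transitive) source variable.
    alias = {}
    for v, ds in defs.items():
        if len(ds) == 1 and ds[0][0] == "copy":
            alias[v] = ds[0][2]

    def resolve(v):
        while v in alias:
            v = alias[v]
        return v

    simplified = []
    for s in stmts:
        if s[0] == "copy" and s[1] in alias:
            continue                                  # the copy statement disappears
        if s[0] == "copy":
            simplified.append(("copy", s[1], resolve(s[2])))
        else:
            simplified.append(s)
    return simplified

prog = [("new", "a", "A1"), ("copy", "b", "a"), ("copy", "c", "b"), ("new", "c", "A2")]
print(pre_simplify(prog))
# [('new', 'a', 'A1'), ('copy', 'c', 'a'), ('new', 'c', 'A2')]
```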

    Data re-engineering using formal transformations

    This thesis presents and analyses a solution to the problem of formally re-engineering program data structures, allowing new representations of a program to be developed. The work is based around Ward's theory of program transformations, which uses a Wide Spectrum Language, WSL, whose semantics were specially developed for use in proofs of program transformations. The re-engineered code exhibits equivalent functionality to the original but differs in the degree of data abstraction and representation. Previous transformational re-engineering work has concentrated upon control flow restructuring, which has highlighted a lack of support for data restructuring in the maintainer's tool-set. Problems have been encountered during program transformation due to the lack of support for data re-engineering: a lack of strict data semantics and manipulation capabilities has left the maintainer unable to produce optimally re-engineered solutions. It has also hindered the migration of programs into other languages, because it has not been possible to convert data structures into an appropriate form in the target language. The main contribution of the thesis is the Data Re-Engineering and Abstraction Mechanism (DREAM), which allows theories about type equivalence to be represented and used in a re-engineering environment. DREAM is based around the technique of "ghosting", a way of introducing different representations of data, which provides the theoretical underpinning of the changes applied to the program. A second major contribution is the introduction of data typing into the WSL language. This allows DREAM to be integrated into the existing transformation theories within WSL. These theoretical extensions of the original work have been shown to be practically viable by implementation within a prototype transformation tool, the Maintainer's Assistant. The extended tool has been used to re-engineer heavily modified, commercial legacy code. The results have shown that useful re-engineering work can be performed and that DREAM integrates well with existing control flow transformations.
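    A minimal sketch of the ghost-variable idea described above: a new representation of the data is introduced alongside the old one, kept in step under a coupling invariant, and once every use has been switched over the old representation (and the checks) can be dropped. This is illustrative Python with hypothetical names, not WSL or the Maintainer's Assistant.

```python
# Illustrative sketch only: "ghosting" a list of boolean flags with a bit-set
# representation, checking the coupling invariant at every update.

flags = []          # old representation: list of 0/1 flags
flag_bits = 0       # ghost: the same data as a bit set

def invariant():
    """Coupling invariant: the bit set encodes exactly the list of flags."""
    return (flag_bits >> len(flags) == 0 and
            all(((flag_bits >> i) & 1) == f for i, f in enumerate(flags)))

def set_flag(i, value):
    global flag_bits
    while len(flags) <= i:              # old representation updated as before
        flags.append(0)
    flags[i] = 1 if value else 0
    if value:                           # ghost representation updated in parallel
        flag_bits |= (1 << i)
    else:
        flag_bits &= ~(1 << i)
    assert invariant()                  # both representations must agree

set_flag(3, True)
set_flag(1, True)
set_flag(3, False)
print(flags, bin(flag_bits))            # [0, 1, 0, 0] 0b10
```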

    Verifying Monadic Second-Order Properties of Graph Programs

    The core challenge in a Hoare- or Dijkstra-style proof system for graph programs lies in defining a weakest liberal precondition construction with respect to a rule and a postcondition. Previous work addressing this has focused on assertion languages for first-order properties, which are unable to express important global properties of graphs such as acyclicity, connectedness, or the existence of paths. In this paper, we extend the nested graph conditions of Habel, Pennemann, and Rensink to make them equivalent in expressive power to monadic second-order logic on graphs. We present a weakest liberal precondition construction for these assertions, and demonstrate its use in verifying non-local correctness specifications of graph programs in the sense of Habel et al.
    Comment: Extended version of a paper to appear at ICGT 201
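    As a standard textbook illustration of why set quantification is needed for such global properties (this is generic MSO notation, not the paper's nested-condition syntax), reachability in a directed graph with edge relation E, and acyclicity via it, can be written as:

\[
\mathrm{reach}(x,y) \;\equiv\; \forall X\,\bigl(\,(x \in X \;\wedge\; \forall u\,\forall v\,(u \in X \wedge E(u,v) \rightarrow v \in X)) \;\rightarrow\; y \in X\,\bigr)
\]
\[
\mathrm{acyclic} \;\equiv\; \neg\,\exists x\,\exists y\,\bigl(E(x,y) \wedge \mathrm{reach}(y,x)\bigr)
\]

    No first-order formula can define reach over arbitrary graphs, which is why first-order assertion languages cannot state acyclicity or path existence.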

    Dagstuhl News January - December 2002

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic