    Recovering Grammar Relationships for the Java Language Specification

    Grammar convergence is a method that helps discover relationships between different grammars of the same language or of different language versions. The key element of the method is the operational, transformation-based representation of those relationships. The input grammars submitted for convergence are transformed until they are structurally equal. The transformations are composed from primitive operators; properties of these operators and of the composed chains provide quantitative and qualitative insight into the relationships between the grammars at hand. We describe a refined method for grammar convergence, and we use it in a major study in which we recover the relationships between all the grammars that occur in the different versions of the Java Language Specification (JLS). The relationships are represented as grammar transformation chains that capture all accidental or intended differences between the JLS grammars. The method is mechanized and driven by nominal and structural differences between pairs of grammars that are subject to asymmetric, binary convergence steps. We present the underlying operator suite for grammar transformation in detail, and we illustrate the suite with many examples of transformations on the JLS grammars. We also describe the extraction effort that was needed to make the JLS grammars amenable to automated processing. We include substantial metadata about the convergence process for the JLS so that the effort becomes reproducible and transparent.
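
    The abstract's primitive transformation operators can be made concrete with a small sketch. The Haskell fragment below is our own illustration, not code from the JLS study: it assumes a toy grammar representation and shows two typical operators, renaming a nonterminal (a nominal difference) and unfolding a nonterminal's definition (a structural difference), of the kind that would be chained until two grammars become structurally equal.

        -- Minimal sketch of grammar transformation operators over a toy
        -- grammar representation; the operator suite in the paper is richer.
        data Symbol = T String | N String deriving (Eq, Show)  -- terminal / nonterminal
        type Production = (String, [Symbol])                   -- lhs -> rhs
        type Grammar    = [Production]

        -- Primitive operator: rename a nonterminal everywhere (nominal difference).
        rename :: String -> String -> Grammar -> Grammar
        rename old new = map (\(lhs, rhs) -> (sub lhs, map subSym rhs))
          where
            sub s = if s == old then new else s
            subSym (N s) = N (sub s)
            subSym t     = t

        -- Primitive operator: unfold a nonterminal, i.e. inline its single
        -- definition at every use site (structural difference).
        unfold :: String -> Grammar -> Grammar
        unfold n g = [ (lhs, concatMap inline rhs) | (lhs, rhs) <- g, lhs /= n ]
          where
            defs = [ rhs | (lhs, rhs) <- g, lhs == n ]
            inline (N s) | s == n, [d] <- defs = d
            inline sym                         = [sym]

        -- Convergence target: structural equality of the transformed grammars
        -- (the real check also normalizes ordering and naming).
        converged :: Grammar -> Grammar -> Bool
        converged = (==)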

    Program Derivation by Correctness Enhancements

    Relative correctness is the property of a program to be more-correct than another program with respect to a given specification. Among the many properties of relative correctness, the one we found most intriguing is that program P' refines program P if and only if P' is more-correct than P with respect to any specification. This inspires us to reconsider program derivation by successive refinements: each step of this process mandates that we transform a program P into a program P' that refines P, i.e. P' is more-correct than P with respect to any specification. This raises the question: why should we want to make P' more-correct than P with respect to any specification, when we only have to satisfy specification R? In this paper, we discuss a process of program derivation that replaces the traditional sequence of refinement-based, correctness-preserving transformations starting from specification R with a sequence of relative-correctness-based, correctness-enhancing transformations starting from abort.
    Comment: In Proceedings Refine'15, arXiv:1606.0134
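
    For readers who have not seen relative correctness before, the following is a hedged sketch of how it is commonly formalized in the relational calculus used by this line of work; the notation is ours and the paper's exact definitions may differ.

        % Sketch of relative correctness in relational terms (our notation).
        % R is a specification, P and P' are the relations computed by two programs,
        % L is the universal relation; XL denotes the domain of relation X.
        \[
          P' \text{ is more-correct than } P \text{ w.r.t.\ } R
          \;\iff\; (R \cap P)L \subseteq (R \cap P')L
        \]
        % i.e. P' satisfies R on (at least) every input on which P does.
        \[
          P' \text{ refines } P
          \;\iff\; \forall R:\; P' \text{ is more-correct than } P \text{ w.r.t.\ } R
        \]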

    The Knowledge-Based Software Assistant: Beyond CASE

    This paper outlines the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but they take different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers on a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can also be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, these two problems occur together – data classes are lacking in functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data classes and god classes. The technique has been evaluated in a controlled study on two large open-source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
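
    To give a flavour of what heuristic smell detection can look like, here is a minimal sketch in Haskell. The metrics, thresholds and names are illustrative assumptions on our part; they are not the rules used by the tool described above or by Marinescu.

        -- Illustrative metrics-based smell heuristics; thresholds are assumed.
        data ClassInfo = ClassInfo
          { className         :: String
          , methodCount       :: Int     -- number of methods
          , accessorCount     :: Int     -- getters/setters only
          , foreignDataAccess :: Int     -- accesses to other classes' fields
          , cohesion          :: Double  -- 0.0 (none) .. 1.0 (high)
          }

        -- A "god class" heuristic: large, low cohesion, pulls in foreign data.
        isGodClass :: ClassInfo -> Bool
        isGodClass c = methodCount c > 20
                    && cohesion c < 0.3
                    && foreignDataAccess c > 5

        -- A "data class" heuristic: almost nothing but accessors.
        isDataClass :: ClassInfo -> Bool
        isDataClass c = methodCount c > 0
                     && fromIntegral (accessorCount c) / fromIntegral (methodCount c)
                          > (0.8 :: Double)

        detectSmells :: [ClassInfo] -> [(String, String)]
        detectSmells cs =
          [ (className c, smell)
          | c <- cs
          , smell <- [ "god class"  | isGodClass  c ] ++ [ "data class" | isDataClass c ]
          ]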

    Requirements traceability in model-driven development: Applying model and transformation conformance

    The variety of design artifacts (models) produced in a model-driven design process results in an intricate relationship between requirements and the various models. This paper proposes a methodological framework that simplifies the management of this relationship and thereby helps in assessing the quality of models, realizations and transformation specifications. Our framework is a basis for understanding requirements traceability in model-driven development, as well as for the design of tools that support requirements traceability in model-driven development processes. We propose a notion of conformance between application models, which reduces the effort needed for assessment activities. We discuss how this notion of conformance can be integrated with model transformations.
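
    As a rough illustration of what a conformance check between an abstract model and a more detailed one might involve, consider the following Haskell sketch. The model representation and the conformance rule are our assumptions for illustration, not the definitions proposed in the paper.

        import qualified Data.Set as Set

        -- Toy models: named elements plus directed relations between them.
        data Model = Model
          { elements  :: Set.Set String
          , relations :: Set.Set (String, String)
          } deriving Show

        -- Toy conformance rule: a refined model conforms to an abstract model
        -- if it preserves all of its elements and relations (it may add more).
        conformsTo :: Model -> Model -> Bool
        refined `conformsTo` abstract =
             elements  abstract `Set.isSubsetOf` elements  refined
          && relations abstract `Set.isSubsetOf` relations refined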

    Trustworthy Refactoring via Decomposition and Schemes: A Complex Case Study

    Widely used, complex code refactoring tools lack solid reasoning about the correctness of the transformations they implement, whilst interest in proven-correct refactoring is ever increasing, as only formal verification can provide true confidence in applying tool-automated refactoring to industrial-scale code. Using our strategic-rewriting-based refactoring specification language, we present the decomposition of a complex transformation into smaller steps that can be expressed as instances of refactoring schemes, and then we demonstrate the semi-automatic formal verification of the components based on a theoretical understanding of the semantics of the programming language. The extensible and verifiable refactoring definitions can be executed in our interpreter built on top of a static analyser framework.
    Comment: In Proceedings VPT 2017, arXiv:1708.0688
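
    The idea of building a complex refactoring from smaller, individually verifiable steps can be sketched as follows. The tiny AST, the step names and the composition operator are illustrative assumptions of ours, not the paper's refactoring specification language.

        -- Sketch: a refactoring is a partial AST-to-AST function; a complex
        -- refactoring is a chain of small steps, each verifiable in isolation.
        data Expr = Var String | Lam String Expr | App Expr Expr
          deriving (Eq, Show)

        type Step = Expr -> Maybe Expr  -- a small, locally verifiable rewrite

        -- Example step: rename a bound variable (assumes the new name is fresh).
        renameBound :: String -> String -> Step
        renameBound old new (Lam x body)
          | x == old = Just (Lam new (substVar old new body))
        renameBound _ _ _ = Nothing

        substVar :: String -> String -> Expr -> Expr
        substVar old new (Var x)   = Var (if x == old then new else x)
        substVar old new (Lam x b) = Lam x (if x == old then b else substVar old new b)
        substVar old new (App f a) = App (substVar old new f) (substVar old new a)

        -- Composition: the complex refactoring succeeds only if every step applies.
        compose :: [Step] -> Step
        compose steps e = foldl (\acc s -> acc >>= s) (Just e) steps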

    Branch-coverage testability transformation for unstructured programs

    Test data generation by hand is a tedious, expensive and error-prone activity, yet testing is a vital part of the development process. Several techniques have been proposed to automate the generation of test data, but all of these are hindered by the presence of unstructured control flow. This paper addresses the problem using testability transformation. Testability transformation does not preserve the traditional meaning of the program; rather, it preserves test-adequate sets of input data. This requires new equivalence relations which, in turn, entail novel proof obligations. The paper illustrates this using the branch-coverage adequacy criterion and develops a branch-adequacy equivalence relation and a testability transformation for restructuring. It then presents a proof that the transformation preserves branch adequacy.
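
    The notion of preserving test-adequate sets can be made a little more concrete with a sketch. In the Haskell fragment below, the representation of branches, tests and coverage is our own simplification: the point is only that a restructuring transformation is acceptable when any test set that achieves branch coverage on the transformed program also achieves it on the original.

        import qualified Data.Set as Set

        type Branch = String
        type Test   = String

        -- Branches exercised by a test (in practice obtained by instrumentation).
        type Coverage = Test -> Set.Set Branch

        -- A test set is branch-adequate if together its tests cover every branch.
        branchAdequate :: Set.Set Branch -> Coverage -> Set.Set Test -> Bool
        branchAdequate allBranches cov tests =
          allBranches `Set.isSubsetOf` Set.unions (map cov (Set.toList tests))

        -- The transformation preserves branch adequacy for a given test set when
        -- adequacy on the transformed program implies adequacy on the original.
        preservesAdequacy :: (Set.Set Branch, Coverage)  -- original program
                          -> (Set.Set Branch, Coverage)  -- transformed program
                          -> Set.Set Test -> Bool
        preservesAdequacy (origBs, origCov) (transBs, transCov) tests =
          not (branchAdequate transBs transCov tests)
            || branchAdequate origBs origCov tests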

    The knowledge-based software assistant

    This report discusses where the Knowledge-Based Software Assistant (KBSA) stands now, four years after the initial report, and describes what the Rome Air Development Center expects at the end of the first contract iteration. What the second and third contract iterations will look like is also characterized.

    Views, Program Transformations, and the Evolutivity Problem in a Functional Language

    We report on our experience of supporting multiple views of programs to address the tyranny of the dominant decomposition in a functional setting. We consider two possible architectures in Haskell for the classical example of the expression problem. We show how the Haskell Refactorer can be used to transform one view into the other, and back again. That transformation is automated, and we discuss how the Haskell Refactorer has been adapted to support this automated transformation. Finally, we compare our implementation of views with some of the literature.
    Comment: 19 pages
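
    The two views discussed in the abstract correspond to the two classic decompositions of the expression problem. The following minimal Haskell illustration is our own, not the paper's code: the first view groups definitions by operation over a single data type, while the second groups them by construct, using one type class per operation.

        -- View 1 (data-oriented): one data type, operations as ordinary functions.
        data Expr = Lit Int | Add Expr Expr

        eval :: Expr -> Int
        eval (Lit n)   = n
        eval (Add l r) = eval l + eval r

        pretty :: Expr -> String
        pretty (Lit n)   = show n
        pretty (Add l r) = "(" ++ pretty l ++ " + " ++ pretty r ++ ")"

        -- View 2 (operation-oriented): one type per construct, one class per
        -- operation, so adding a construct no longer touches existing code.
        class Eval   a where eval'   :: a -> Int
        class Pretty a where pretty' :: a -> String

        data Lit'     = Lit' Int
        data Add' l r = Add' l r

        instance Eval Lit' where eval' (Lit' n) = n
        instance (Eval l, Eval r) => Eval (Add' l r) where
          eval' (Add' l r) = eval' l + eval' r

        instance Pretty Lit' where pretty' (Lit' n) = show n
        instance (Pretty l, Pretty r) => Pretty (Add' l r) where
          pretty' (Add' l r) = "(" ++ pretty' l ++ " + " ++ pretty' r ++ ")"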

    Invertible Program Restructurings for Continuing Modular Maintenance

    When one chooses a main axis of structural decomposition for a piece of software, such as a function- or data-oriented decomposition, the other axes become secondary, which can be harmful when one of these secondary axes later becomes of main importance. This is called the tyranny of the dominant decomposition. In the context of modular extension, this problem is known as the Expression Problem and has found many solutions, but few solutions have been proposed in the larger context of modular maintenance. We solve the tyranny of the dominant decomposition in maintenance with invertible program transformations. We illustrate this on the typical Expression Problem example. We also report our experiments with Java and Haskell programs and discuss the open problems with our approach.
    Comment: 6 pages, Early Research Achievements Track; 16th European Conference on Software Maintenance and Reengineering (CSMR 2012), Szeged, Hungary (2012)
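
    One way to picture an invertible restructuring is as a change of the dominant axis in the familiar constructs-versus-operations matrix. The Haskell sketch below is our own illustration, with assumed names: the restructuring is a transposition of that matrix, and transposing twice gives back the original, which is exactly the invertibility needed to switch decompositions during maintenance and switch back.

        import Data.List (transpose)

        -- A program's definitions arranged as a matrix: one row per data
        -- variant, one column per operation. The dominant decomposition is
        -- simply the choice of grouping by rows or by columns.
        type Definition  = String
        type ByData      = [[Definition]]  -- grouped by data variant
        type ByOperation = [[Definition]]  -- grouped by operation

        toOperationView :: ByData -> ByOperation
        toOperationView = transpose

        toDataView :: ByOperation -> ByData
        toDataView = transpose

        -- For a rectangular matrix, toDataView . toOperationView == id, so the
        -- dominant axis can be switched and switched back without losing code.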