30 research outputs found

    Augmenting Source Code Lines with Sample Variable Values

    Source code is inherently abstract, which makes it difficult to understand. Activities such as debugging can reveal concrete runtime details, including the values of variables, but they require that a developer explicitly request these data for a specific moment of execution. We present a simple approach, RuntimeSamp, which collects sample variable values during a programmer's normal executions of a program. These values are then displayed in an ambient way at the end of each line in the source code editor. We discuss the questions that must be answered for this approach to be usable in practice, such as how to record the values efficiently and when to display them. We provide partial answers to these questions and suggest future research directions.
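
    As an illustration of the recording step, the following minimal sketch (our own, not the authors' implementation) uses Python's sys.settrace to sample local variable values as each source line is reached, keyed by file and line number; an editor plugin could then render the latest sample at the end of each line:

        import sys
        from collections import defaultdict

        # (filename, line number) -> most recently sampled local variables
        samples = defaultdict(dict)

        def tracer(frame, event, arg):
            if event == "line":
                key = (frame.f_code.co_filename, frame.f_lineno)
                samples[key] = dict(frame.f_locals)  # shallow snapshot
            return tracer

        def demo(x):
            y = x * 2
            return y + 1

        sys.settrace(tracer)
        demo(21)
        sys.settrace(None)

        for (filename, lineno), variables in sorted(samples.items()):
            print(f"{filename}:{lineno}  # {variables}")

    A production recorder would also need to bound the sampling overhead and decide which of many observed values to keep, which is exactly the kind of question the paper raises.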

    Generating Method Documentation Using Concrete Values from Executions

    There exist multiple automated approaches to source code documentation generation. They often describe methods in abstract terms, using the words contained in the static source code or code excerpts from repositories. In this paper, we introduce DynamiDoc, a simple yet effective automated documentation approach based on dynamic analysis. It traces the program being executed and records string representations of concrete argument values, the return value, and the target object's state before and after each method execution. For every method of interest, it then generates documentation sentences containing examples, such as "When called on [3, 1.2] with element = 3, the object changed to [1.2]". A qualitative evaluation is performed, listing the advantages and shortcomings of the approach.
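
    The example sentence in the abstract suggests a straightforward realization. The sketch below (hypothetical names, not the DynamiDoc implementation) wraps a method, records the receiver's string representation before and after the call together with the argument values, and emits a documentation sentence in the same style:

        import functools

        def document_calls(method):
            @functools.wraps(method)
            def wrapper(self, *args, **kwargs):
                before = repr(self)
                result = method(self, *args, **kwargs)
                after = repr(self)
                arg_text = ", ".join(
                    [repr(a) for a in args]
                    + [f"{k} = {v!r}" for k, v in kwargs.items()]
                )
                if before != after:
                    print(f"When called on {before} with {arg_text}, "
                          f"the object changed to {after}.")
                else:
                    print(f"When called on {before} with {arg_text}, "
                          f"it returned {result!r}.")
                return result
            return wrapper

        class Bag(list):
            @document_calls
            def remove_element(self, element):
                self.remove(element)

        Bag([3, 1.2]).remove_element(element=3)
        # When called on [3, 1.2] with element = 3, the object changed to [1.2].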

    Doctor of Philosophy

    A modern software system is a composition of parts that are themselves highly complex: operating systems, middleware, libraries, servers, and so on. In principle, compositionality of interfaces means that we can understand any given module independently of the internal workings of other parts. In practice, however, abstractions are leaky, and with every generation, modern software systems grow in complexity. Traditional ways of understanding failures, explaining anomalous executions, and analyzing performance are reaching their limits in the face of emergent behavior, unrepeatability, cross-component execution, software aging, and adversarial changes to the system at run time. Deterministic systems analysis has the potential to change the way we analyze and debug software systems. Recorded once, the execution of the system becomes an independent artifact that can be analyzed offline. The availability of the complete system state, the guaranteed behavior of re-execution, and the absence of limitations on the run-time complexity of analysis collectively enable deep, iterative, and automatic exploration of the dynamic properties of the system. This work creates a foundation for making deterministic replay a ubiquitous system analysis tool. It defines design and engineering principles for building fast and practical replay machines capable of capturing the complete execution of an entire operating system with an overhead of a few percent, on a realistic workload, and with minimal installation costs. To enable an intuitive interface for constructing replay analysis tools, this work implements a powerful virtual machine introspection layer that lets an analysis algorithm be programmed against the state of the recorded system in the familiar terms of source-level variable and type names. To support performance analysis, the replay engine provides a faithful performance model of the original execution during replay.
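
    The core record/replay idea in the abstract can be conveyed in a few lines. The toy sketch below (all names hypothetical, and nothing like a whole-OS replay machine) logs every nondeterministic input during a live run and feeds the log back during replay, so re-execution is exactly repeatable:

        import random

        class RecordReplay:
            def __init__(self, log=None):
                self.recording = log is None
                self.log = [] if log is None else list(log)

            def nondet(self, produce):
                # Record mode: consult the real source of nondeterminism
                # and log the outcome. Replay mode: return the logged
                # value, making re-execution deterministic.
                if self.recording:
                    value = produce()
                    self.log.append(value)
                    return value
                return self.log.pop(0)

        def workload(env):
            return [env.nondet(lambda: random.randint(0, 99)) for _ in range(3)]

        recorder = RecordReplay()
        original = workload(recorder)

        replayer = RecordReplay(log=recorder.log)
        assert workload(replayer) == original  # the replay reproduces the run

    A real replay machine must capture every source of nondeterminism visible to the guest (interrupts, I/O, timestamps) at low overhead, which is where the engineering principles described in the dissertation come in.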

    From scripts to specifications: the evolution of a flight software testing effort

    The research described in this publication was carried out at the Jet Propulsion Laboratory.

    Dealing with Variability in Process-aware Information Systems: Language Requirements, Features, and Existing Proposals

    The increasing adoption of Process-aware Information Systems (PAISs), together with the variability of Business Processes (BPs) across different application contexts, has resulted in large process model repositories with collections of related process model variants. To reduce both costs and the occurrence of errors, the explicit management of variability throughout the BP lifecycle becomes crucial. Several proposals for dealing with BP variability exist in the literature. However, the lack of a method for their systematic comparison makes it difficult to select the one that best meets current needs. To close this gap, this work presents an evaluation framework for analyzing and comparing the variability support provided by existing proposals developed in the context of BP variability. The framework encompasses a set of language requirements as well as a set of variability support features. While the language requirements allow assessing the expressiveness required to explicitly represent variability across different process perspectives, the variability support features reflect the tool support required to properly cover that expressiveness. The framework has been derived from an in-depth analysis of several large real-world process scenarios, an extensive literature review, and an analysis of existing PAISs. In this vein, it helps to understand BP variability along the BP lifecycle. In addition, it supports PAIS engineers in deciding which of the existing BP variability proposals best meets their needs.
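
    To make the notion of a process model variant concrete, here is a toy sketch (our own illustration; it follows none of the surveyed proposals in particular) of a configurable process: a base activity sequence with variation points that are resolved differently per application context:

        from dataclasses import dataclass, field

        @dataclass
        class VariationPoint:
            name: str
            options: dict  # context -> list of activities to insert

        @dataclass
        class ConfigurableProcess:
            base: list
            points: dict = field(default_factory=dict)  # index -> VariationPoint

            def configure(self, context):
                variant = []
                for i, activity in enumerate(self.base):
                    variant.append(activity)
                    point = self.points.get(i)
                    if point is not None:
                        variant.extend(point.options.get(context, []))
                return variant

        claims = ConfigurableProcess(
            base=["register claim", "assess claim", "settle claim"],
            points={1: VariationPoint("review", {
                "high-value": ["expert review"],
                "standard": [],
            })},
        )
        print(claims.configure("high-value"))
        # ['register claim', 'assess claim', 'expert review', 'settle claim']

    The framework's language requirements characterize how expressive such variation points must be (e.g., across control flow, data, and resources), while its support features characterize the tooling needed to configure and maintain the variants.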