28 research outputs found

    A model for inter-module analysis and optimizing compilation

    Recent research into the implementation of logic programming languages has demonstrated that global program analysis can be used to speed up execution by an order of magnitude. However, currently such global program analysis requires the program to be analysed as a whole: separate compilation of modules is not supported. We describe and empirically evaluate a simple model for extending global program analysis to support separate compilation of modules. Importantly, our model supports context-sensitive program analysis and multi-variant specialization of procedures in the modules.
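
    To make the idea concrete, here is a minimal Python sketch of the kind of inter-module registry such a model implies, not the paper's actual design: each module exports, per procedure, the abstract success pattern inferred for each abstract call pattern, and analysing one module consults the entries exported by the others instead of re-analysing the whole program. The class, method names, and pattern representation are assumptions for illustration; one (procedure, call pattern) entry per specialized variant is what supports multi-variant specialization.

        # Hypothetical sketch, not the paper's implementation: cached
        # per-module analysis results keyed by (procedure, call pattern).
        class InterModuleRegistry:
            def __init__(self):
                # module name -> {(procedure, call pattern): success pattern}
                self.exported = {}

            def lookup(self, module, proc, call_pattern):
                # None means the module has not been analysed for this call
                # pattern yet; the analyser must then assume the worst case.
                return self.exported.get(module, {}).get((proc, call_pattern))

            def record(self, module, proc, call_pattern, success_pattern):
                table = self.exported.setdefault(module, {})
                changed = table.get((proc, call_pattern)) != success_pattern
                table[(proc, call_pattern)] = success_pattern
                # A changed entry means modules calling this variant
                # must be re-analysed on a later pass.
                return changed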

    Principal typings and type inference

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 115-120) and index. By Trevor Jim.

    Safeness of make-based incremental recompilation

    State of the Art Smart Grid Laboratories - A Survey about Software Use: RTLabOS D1.2

    Purely top-down software rebuilding

    Software rebuilding is the process of deriving a deployable software system from its primitive source objects. A build tool helps maintain consistency between the derived objects and source objects by ensuring that all necessary build steps are re-executed in the correct order after a set of changes is made to the source objects. It is imperative that derived objects accurately represent the source objects from which they were supposedly constructed; otherwise, subsequent testing and quality assurance is invalidated. This thesis aims to advance the state of the art in tool support for automated software rebuilding. It surveys the body of background work, lays out a set of design considerations for build tools, and examines areas where current tools are limited. It examines the properties of a next-generation tool concept, redo, conceived by D. J. Bernstein; redo is novel because it employs a purely top-down approach to software rebuilding that promises to be simpler, more flexible, and more reliable than current approaches. The details of a redo prototype written by the author of this thesis are explained, including the central algorithms and data structures. Lastly, the redo prototype is evaluated on some sample software systems with respect to migration effort between build tools as well as size, complexity, and performance aspects of the resulting build systems.
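
    The central idea lends itself to a short sketch. The following is a deliberately simplified Python illustration of purely top-down rebuilding in the spirit of redo, not the thesis prototype: real redo discovers dependencies dynamically while each .do script runs and records them in a database, whereas here each target's dependencies are listed statically for brevity. The rules table and function names are assumptions for illustration.

        import os

        rules = {}       # target -> (dependencies, build function); sources absent
        checked = set()  # targets already brought up to date in this run

        def mtime(path):
            return os.path.getmtime(path) if os.path.exists(path) else -1.0

        def redo_ifchange(target):
            # Top-down: start from the requested target and recurse into its
            # dependencies, so only that target's subgraph is ever examined.
            if target in checked:
                return
            checked.add(target)
            if target not in rules:
                return  # a primitive source object: never rebuilt
            deps, build = rules[target]
            for dep in deps:
                redo_ifchange(dep)  # dependencies are brought up to date first
            if mtime(target) < 0 or any(mtime(d) >= mtime(target) for d in deps):
                build()  # re-execute this build step only when needed

    Asking for the top-level target then transitively re-executes exactly the out-of-date steps, in dependency order.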

    Incremental compilation and deployment for OutSystems Platform

    OutSystems Platform is used to develop, deploy, and maintain enterprise web and mobile web applications. Applications are developed through a visual domain-specific language, in an integrated development environment, and compiled to a standard stack of web technologies. At the platform’s core, a compiler and a deployment service transform the visual model into a running web application. As applications grow, compilation and deployment times increase as well, impacting the developer’s productivity. In the previous model, a full application was the only compilation and deployment unit: when the developer published an application, even if only a very small aspect of it had changed, the application would be fully compiled and deployed. Our goal is to reduce compilation and deployment times for the most common use case, in which the developer performs small changes to an application before compiling and deploying it. We modified the OutSystems Platform to support a new incremental compilation and deployment model that reuses previous computations as much as possible in order to improve performance. In our approach, the full application is broken down into smaller compilation and deployment units, increasing what can be cached and reused. We also observed that this finer-grained model would benefit from parallel execution, so we created a task-driven scheduler that executes compilation and deployment tasks in parallel. Our benchmarks show a substantial improvement in compilation and deployment times for the aforementioned development scenario.
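
    As an illustration of the two mechanisms the abstract describes, caching per compilation unit and parallel task execution, here is a minimal Python sketch. It is an assumption-laden toy, not OutSystems’ implementation: units are keyed by a content hash so an unchanged unit is served from the cache, and independent units compile concurrently.

        import hashlib
        from concurrent.futures import ThreadPoolExecutor

        cache = {}  # content hash -> compiled artifact

        def compile_unit(name, source_text):
            key = hashlib.sha256(source_text.encode()).hexdigest()
            if key in cache:
                return cache[key]  # unchanged unit: reuse the previous computation
            artifact = f"compiled({name})"  # stands in for real code generation
            cache[key] = artifact
            return artifact

        def compile_application(units):
            # Units with no mutual dependencies can be compiled in parallel;
            # a real scheduler would also order dependent tasks.
            with ThreadPoolExecutor() as pool:
                futures = {name: pool.submit(compile_unit, name, src)
                           for name, src in units.items()}
                return {name: f.result() for name, f in futures.items()}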

    Doctor of Philosophy

    Almost all high performance computing applications are written in MPI, which will continue to be the case for at least the next several years. Given the huge and growing importance of MPI, and the size and sophistication of MPI codes, scalable and incisive MPI debugging tools are essential. Existing MPI debugging tools have, despite their strengths, many glaring deficiencies, especially when it comes to debugging in the presence of nondeterminism-related bugs, which are bugs that do not always show up during testing. These bugs usually become manifest when the systems are ported to different platforms for production runs. This dissertation focuses on the problem of developing scalable dynamic verification tools for MPI programs that can provide a coverage guarantee over the space of MPI nondeterminism. That is, the tools should be able to detect different outcomes of nondeterministic events in an MPI program and enforce all those different outcomes through repeated executions of the program with the same test harness. We propose to achieve the coverage guarantee by introducing efficient distributed causality tracking protocols that are based on the matches-before order. The matches-before order is introduced to address the shortcomings of the Lamport happens-before order [40], which is not sufficient to capture causality for MPI program executions due to the complexity of the MPI semantics. The two protocols we propose are the Lazy Lamport Clocks Protocol (LLCP) and the Lazy Vector Clocks Protocol (LVCP). LLCP provides good scalability with a small possibility of missing potential outcomes of nondeterministic events, while LVCP provides a full coverage guarantee with a scalability tradeoff. In practice, we show through our experiments that LLCP provides the same coverage as LVCP. This thesis makes the following contributions:
    • The MPI matches-before order, which captures the causality between MPI events in an MPI execution.
    • Two distributed causality tracking protocols for MPI programs that rely on the matches-before order.
    • A Distributed Analyzer for MPI programs (DAMPI), which implements the two aforementioned protocols to provide scalable and modular dynamic verification for MPI programs.
    • Scalability enhancement through algorithmic improvements for ISP, a dynamic verifier for MPI programs.
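
    For intuition about the clock-based causality tracking the abstract builds on, here is a generic vector-clock sketch in Python. It illustrates the classical mechanism that LVCP refines, not the DAMPI protocols themselves; the class and method names are assumptions for illustration. Each process stamps its events with a vector clock, piggybacks the clock on sends, and merges it on receives, after which the stamps of any two events can be compared for causal ordering.

        class ProcessClock:
            def __init__(self, rank, nprocs):
                self.rank = rank
                self.clock = [0] * nprocs

            def tick(self):
                self.clock[self.rank] += 1

            def on_send(self):
                self.tick()
                return list(self.clock)  # piggyback a copy on the outgoing message

            def on_receive(self, piggybacked):
                # Component-wise max merges the sender's history, then the
                # local entry ticks for the receive event itself.
                self.clock = [max(a, b) for a, b in zip(self.clock, piggybacked)]
                self.tick()

        def causally_precedes(c1, c2):
            # True if the event stamped c1 causally precedes the one stamped c2.
            return all(a <= b for a, b in zip(c1, c2)) and c1 != c2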