
    Contract-Based General-Purpose GPU Programming

    Using GPUs as general-purpose processors has revolutionized parallel computing by offering, for a large and growing set of algorithms, massive data parallelism on desktop machines. An obstacle to widespread adoption, however, is the difficulty of programming them and the low-level control of the hardware required to achieve good performance. This paper presents a programming library, SafeGPU, that aims to strike a balance between programmer productivity and performance by making GPU data-parallel operations accessible from within a classical object-oriented programming language. The solution is integrated with the design-by-contract approach, which increases confidence in functional program correctness by embedding executable program specifications into the program text. We show that our library leads to modular and maintainable code that is accessible to GPGPU non-experts, while providing performance comparable to hand-written CUDA code. Furthermore, runtime contract checking turns out to be feasible, as the contracts can be executed on the GPU.
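
    As a rough illustration of the contract-based, object-oriented style such a library targets, the C++ sketch below wraps an element-wise vector operation in precondition and postcondition checks. The GpuVector type and its plus operation are hypothetical stand-ins, not the SafeGPU API; in the real library both the loop body and the contract checks would be dispatched to the GPU.

        // Minimal sketch of a contract-checked data-parallel operation
        // (illustrative only; GpuVector is not a SafeGPU type).
        #include <cassert>
        #include <cstddef>
        #include <vector>

        struct GpuVector {
            std::vector<double> data;                      // device buffer in a real library
            std::size_t size() const { return data.size(); }

            GpuVector plus(const GpuVector& other) const {
                assert(size() == other.size());            // precondition: matching lengths
                GpuVector result{std::vector<double>(size())};
                for (std::size_t i = 0; i < size(); ++i)   // would be a GPU kernel launch
                    result.data[i] = data[i] + other.data[i];
                assert(result.size() == size());           // postcondition, itself data-parallel
                return result;
            }
        };

        int main() {
            GpuVector a{{1.0, 2.0, 3.0}}, b{{4.0, 5.0, 6.0}};
            GpuVector c = a.plus(b);
            assert(c.data[2] == 9.0);
            return 0;
        }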

    Optimized Temporal Monitors for SystemC

    SystemC is a modeling language built as an extension of C++. Its growing popularity and the increasing complexity of designs have motivated research efforts aimed at the verification of SystemC models using assertion-based verification (ABV), where the designer asserts properties that capture the design intent in a formal language such as PSL or SVA. The model can then be verified against the properties using runtime or formal verification techniques. In this paper we focus on the automated generation of runtime monitors from temporal properties. We aim to minimize runtime overhead, rather than monitor size or monitor-generation time. We identify four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding. We conduct extensive experimentation and identify a combination of settings that offers the best performance in terms of runtime overhead.
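
    To make the flavor of such monitors concrete, the sketch below hand-encodes an automaton for the PSL-style property "always (req -> next grant)" as a plain C++ class. The state names and encoding are illustrative assumptions, not the paper's generated monitors; in a SystemC model, step() would be driven from a monitor process sensitive to the clock.

        // Automaton-encoded runtime monitor for "always (req -> next grant)".
        // Illustrative sketch only; a generated monitor would choose its state
        // encoding and alphabet representation to minimize runtime overhead.
        #include <cstdio>

        class ReqGrantMonitor {
            enum State { IDLE, EXPECT_GRANT, FAILED } state = IDLE;
        public:
            void step(bool req, bool grant) {              // one sample per clock cycle
                switch (state) {
                case IDLE:         state = req ? EXPECT_GRANT : IDLE; break;
                case EXPECT_GRANT: state = grant ? (req ? EXPECT_GRANT : IDLE) : FAILED; break;
                case FAILED:       break;                  // absorbing error state
                }
            }
            bool violated() const { return state == FAILED; }
        };

        int main() {
            ReqGrantMonitor m;
            bool req[]   = {true,  false, true,  false};
            bool grant[] = {false, true,  false, false};   // second request never granted
            for (int i = 0; i < 4; ++i) m.step(req[i], grant[i]);
            std::printf("property %s\n", m.violated() ? "violated" : "holds so far");
            return 0;
        }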

    Differentially Testing Soundness and Precision of Program Analyzers

    Over the last decades, numerous program analyzers have been developed in both academia and industry. Despite their abundance, however, there is currently no systematic way of comparing the effectiveness of different analyzers on arbitrary code. In this paper, we present the first automated technique for differentially testing the soundness and precision of program analyzers. We used our technique to compare six mature, state-of-the-art analyzers on tens of thousands of automatically generated benchmarks. Our technique detected soundness and precision issues in most analyzers, and we evaluated the implications of these issues for both designers and users of program analyzers.
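
    At the core of such a technique is a verdict comparison: each generated benchmark carries a known ground truth for its assertion, an analyzer that declares a failing assertion safe has a soundness issue, and one that cannot prove a holding assertion is imprecise. The C++ sketch below illustrates only that comparison logic; the Analyzer type, verdict set, and stub results are assumptions, not the paper's toolchain.

        // Simplified verdict comparison for differential testing of analyzers.
        #include <cstdio>
        #include <functional>
        #include <string>
        #include <vector>

        enum class Verdict { Safe, Unsafe, Unknown };

        struct Analyzer {
            std::string name;
            std::function<Verdict(const std::string&)> run;   // would invoke the real tool
        };

        int main() {
            // One benchmark whose assertion is known to be violable (ground truth).
            std::string benchmark = "x = input(); assert(x != 0);";
            bool assertionHolds = false;

            std::vector<Analyzer> analyzers = {
                {"alpha", [](const std::string&) { return Verdict::Unsafe;  }},
                {"beta",  [](const std::string&) { return Verdict::Safe;    }},  // unsound here
                {"gamma", [](const std::string&) { return Verdict::Unknown; }},
            };

            for (const auto& a : analyzers) {
                Verdict v = a.run(benchmark);
                if (!assertionHolds && v == Verdict::Safe)
                    std::printf("%s: soundness issue (claims safe, assertion can fail)\n", a.name.c_str());
                else if (assertionHolds && v != Verdict::Safe)
                    std::printf("%s: precision issue (cannot prove a holding assertion)\n", a.name.c_str());
                else
                    std::printf("%s: consistent with ground truth\n", a.name.c_str());
            }
            return 0;
        }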

    ART-Ada design project, phase 2

    Interest in deploying expert systems in Ada has increased. An Ada-based expert system tool called ART-Ada is described, which was built to support research into the language and methodological issues of expert systems in Ada. ART-Ada allows applications of an existing expert system tool called ART-IM (Automated Reasoning Tool for Information Management) to be deployed in various Ada environments. ART-IM, a C-based expert system tool, is used to generate Ada source code, which is compiled and linked with an Ada-based inference engine to produce an Ada executable image. ART-Ada is being used to implement several expert systems for NASA's Space Station Freedom Program and the U.S. Air Force.

    Static versus Dynamic Verification in Why3, Frama-C and SPARK 2014

    Why3 is an environment for static verification, generic in the sense that it is used as an intermediate tool by different front-ends for the verification of Java, C, or Ada programs. Yet the choices made when designing the specification languages provided by those front-ends differ significantly, in particular with respect to the executability of specifications. We review these differences and the issues that result from these choices. We emphasize the specific feature of ghost code, which turns out to be extremely useful for both static and dynamic verification. We also present techniques, combining static and dynamic features, that help users understand why static verification fails.
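
    As a small, executable illustration of the ghost-code idea, the C++ sketch below keeps a specification-only snapshot of a variable and uses it in a postcondition check. The GHOST macro and names are assumptions for this sketch; in Frama-C/ACSL or SPARK, ghost code is written in dedicated annotations and can serve both a runtime checker and a prover.

        // Ghost state used only to express and check a contract (illustrative sketch).
        #include <cassert>

        #define GHOST(stmt) stmt      // define to nothing to compile the ghost code away

        // deposit() updates a balance; the ghost snapshot of the old balance lets the
        // postcondition "balance == old(balance) + amount" be stated and checked.
        void deposit(long& balance, long amount) {
            assert(amount >= 0);                        // precondition
            GHOST(long ghost_old_balance = balance;)    // specification-only snapshot
            balance += amount;
            GHOST(assert(balance == ghost_old_balance + amount);)  // postcondition
        }

        int main() {
            long balance = 100;
            deposit(balance, 25);
            assert(balance == 125);
            return 0;
        }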

    Ant: A Framework for Increasing the Efficiency of Sequential Debugging Techniques with Parallel Programs

    Bugs in sequential programs cost the software industry billions of dollars in lost productivity each year. Even if simple parallel programming models are created, they will not reduce the number of sequential bugs in parallel programs below that of sequential programs. It can be argued that the complexity of current parallel programming models may increase the number of sequential bugs in parallel programs, because they distract the programmer from the core logic of the program. Tools exist that identify statements related to sequential bugs and allow those bugs to be located and fixed more quickly; they will continue to be useful for parallel programs. Many of these debugging tools require runtime monitoring of program points of interest, and the overhead of this monitoring is usually very high. We propose Ant, a framework that increases the efficiency of sequential debugging techniques when used with parallel programs. The Ant framework takes two different strategies depending on whether the program to be debugged is a distributed-memory program or a shared-memory program. For MPI programs, the Ant compiler analyzes the program and identifies two types of code regions: those that all processes execute and those that only a subset of the processes execute. For shared-memory Pthreads programs, Ant uses a combination of static and dynamic analyses to determine similar parts of the program executing in parallel and the number of threads executing those parts. The programs are instrumented with calls to Ant runtime libraries and debugging libraries based on the Ant compiler's static analysis results. Relative to a naive port of a debugging tool (C-DIDUCE, in our case), Ant's technique, by exploiting the application's parallelism, reduces the monitoring overhead by up to 15.85 times (and on average 9.23 times) for MPI programs executing with 32 processes, and by up to 18.14 times (and on average 8.73 times) for Pthreads programs executing with 8 threads, while maintaining high accuracy.
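
    A much simplified C++/MPI illustration of the distributed-memory strategy follows; the monitor_event() hook and the region classification shown are assumptions standing in for the Ant compiler's analysis and the underlying debugging library, not Ant's actual interfaces.

        // Monitoring a region executed identically by all processes on one rank only,
        // while rank-dependent regions are monitored everywhere (illustrative sketch).
        #include <mpi.h>
        #include <cstdio>

        static void monitor_event(const char* site, int value) {
            std::printf("monitor: %s = %d\n", site, value);    // stand-in for a debugging hook
        }

        int main(int argc, char** argv) {
            MPI_Init(&argc, &argv);
            int rank = 0;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            // Region classified as executed identically by all processes:
            // monitor on rank 0 only, so overhead does not grow with process count.
            int shared_config = 42;
            if (rank == 0) monitor_event("shared_config", shared_config);

            // Region classified as rank-dependent: each process monitors its own data.
            int local_work = rank * 10;
            monitor_event("local_work", local_work);

            MPI_Finalize();
            return 0;
        }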

    Compiler-Aided Methodology for Low Overhead On-line Testing

    Reliability is emerging as an important design criterion in modern systems due to increasing transient fault rates. Hardware fault-tolerance techniques, commonly used to address this, introduce high design costs. As an alternative, software Signature-Monitoring (SM) schemes based on compiler assertions are an efficient method for control-flow error detection. Existing SM techniques do not consider application-specific information, causing unnecessary overheads. In this paper, compile-time Control-Flow Graph (CFG) topology analysis is used to place best-suited assertions at optimal locations in the assembly code to reduce overheads. Our evaluation with representative workloads shows increased fault coverage with overheads close to those of Assertion-based Control-Flow Correction (ACFC), the method with the lowest overhead. Compared to ACFC, our technique improves (on average) fault coverage by 17%, performance overhead by 5%, and power consumption by 3%, with equal code-size overhead.
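
    The underlying signature-monitoring idea can be sketched in C++ as follows; the XOR encoding, signature values, and check placement here are hand-written illustrations, whereas the paper's compiler derives them from CFG topology analysis and places the best-suited assertions automatically.

        // Signature monitoring: each basic block carries a static signature, the
        // running signature is XOR-updated on entry, and a mismatch at a check
        // point signals a control-flow error (illustrative sketch).
        #include <cstdint>
        #include <cstdio>
        #include <cstdlib>

        static std::uint32_t runtime_sig = 0;

        static void enter_block(std::uint32_t diff) { runtime_sig ^= diff; }
        static void check_block(std::uint32_t expected) {
            if (runtime_sig != expected) {              // control-flow error detected
                std::fprintf(stderr, "CFE: sig=%u expected=%u\n", runtime_sig, expected);
                std::abort();
            }
        }

        int main() {
            // Block A: signature 0x1.
            enter_block(0x1); check_block(0x1);
            // ... block A body ...

            // Block B follows A: signature 0x3, so the XOR update is 0x1 ^ 0x3 = 0x2.
            enter_block(0x2); check_block(0x3);
            // ... block B body ...

            std::puts("control flow consistent");
            return 0;
        }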