
    A BMC-Formulation for the Scheduling Problem in Highly Constrained Hardware Systems

    This paper describes a novel application of SAT-based Bounded Model Checking (BMC) to hardware scheduling problems. First, it introduces a new model for control-dependent systems, in which alternative executions (producing "tree-like" scheduling traces) are managed as concurrent systems whose alternative behaviors are followed in parallel. This enables standard BMC techniques, producing solutions made up of single paths connecting initial and terminal states. Second, it discusses the main problem arising from this choice, namely rewriting resource bounds so that they take into account the artificial concurrency introduced for controlled behaviors. Third, it exploits SAT-based Bounded Model Checking as a verification technique mostly oriented to bug hunting and counter-example extraction. To take resource constraints into account, both modifying the SAT solver and adding extra clauses are considered. Preliminary experimental results comparing our SAT-based approach to state-of-the-art BDD-based techniques are finally presented.
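
    To make the flavour of such an encoding concrete, the following is a minimal sketch, not the paper's encoding, of scheduling as a bounded reachability query, with the resource bound added as extra at-most-k clauses. The toy data-flow graph, the variable names, and the use of the z3 Python bindings are assumptions made only for illustration.

    from z3 import Bool, Solver, Or, Implies, Not, AtMost, sat, is_true

    deps = {"mul1": [], "mul2": [], "add1": ["mul1", "mul2"]}  # invented toy data-flow DAG
    resources = 1                                              # at most one operation per step

    def schedule(bound):
        """Return an op -> time-step map of length <= bound, or None if unsatisfiable."""
        s = Solver()
        x = {(op, t): Bool(f"{op}@{t}") for op in deps for t in range(bound)}
        for op, pres in deps.items():
            s.add(Or([x[op, t] for t in range(bound)]))            # scheduled at least once...
            s.add(AtMost(*[x[op, t] for t in range(bound)], 1))    # ...and at most once
            if pres:
                s.add(Not(x[op, 0]))                               # dependent ops cannot run first
            for pre in pres:
                for t in range(1, bound):                          # predecessors run strictly earlier
                    s.add(Implies(x[op, t], Or([x[pre, u] for u in range(t)])))
        for t in range(bound):                                     # resource bound as extra clauses
            s.add(AtMost(*[x[op, t] for op in deps], resources))
        if s.check() == sat:
            m = s.model()
            return {op: t for (op, t) in x if is_true(m.evaluate(x[op, t]))}
        return None

    # BMC-style search: increase the bound until a satisfying schedule is found.
    for k in range(1, 6):
        result = schedule(k)
        if result is not None:
            print(f"schedule of length {k}:", result)
            break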

    Scheduling Optimisations for SPIN to Minimise Buffer Requirements in Synchronous Data Flow

    Synchronous Data Flow (SDF) graphs have a simple and elegant semantics (essentially linear algebra), which makes SDF graphs eminently suitable as a vehicle for studying scheduling optimisations. We extend related work on using SPIN to experiment with scheduling optimisations aimed at minimising buffer requirements. We show that, for a benchmark of commonly used case studies, the performance of our SPIN-based scheduler is comparable to that of state-of-the-art research tools. The key to success is using the semantics of SDF to prove when even unsound and/or incomplete optimisations are justified. The main benefit of our approach lies in gaining deep insight into the optimisations at relatively low cost.
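
    As an illustration of the "essentially linear algebra" semantics the abstract refers to, the sketch below computes the repetition vector of a toy SDF graph from the balance equations q[u]*prod = q[v]*cons. The graph, the actor names, and the Python realisation are invented for illustration; they are not the paper's benchmark or tool.

    from fractions import Fraction
    from math import lcm                 # assumes Python 3.9+
    from collections import deque

    # edges: (producer, consumer, tokens produced per firing, tokens consumed per firing)
    edges = [("A", "B", 2, 3), ("B", "C", 1, 2)]
    actors = {a for e in edges for a in e[:2]}

    def repetition_vector(actors, edges):
        """Smallest integer firing counts satisfying every edge's balance equation."""
        q = {next(iter(actors)): Fraction(1)}
        adj = {}
        for u, v, p, c in edges:
            adj.setdefault(u, []).append((v, Fraction(p, c)))   # q[v] = q[u] * p/c
            adj.setdefault(v, []).append((u, Fraction(c, p)))   # q[u] = q[v] * c/p
        work = deque(q)
        while work:
            u = work.popleft()
            for v, ratio in adj.get(u, []):
                if v not in q:
                    q[v] = q[u] * ratio
                    work.append(v)
                elif q[v] != q[u] * ratio:
                    raise ValueError("inconsistent SDF graph: no valid schedule exists")
        scale = lcm(*(r.denominator for r in q.values()))       # clear denominators
        return {a: int(r * scale) for a, r in q.items()}

    print(repetition_vector(actors, edges))   # -> A: 3, B: 2, C: 1 for this toy graph

    Buffer requirements then depend on the order in which these firings are interleaved, which is the search space the SPIN-based scheduler explores.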

    Formal Analysis and Verification of Max-Plus Linear Systems

    Max-Plus Linear (MPL) systems are an algebraic formalism with practical applications in transportation networks, manufacturing, and biological systems. In this paper, we investigate the problem of automatically analyzing the properties of MPL systems, taking into account both structural properties, such as the transient and the cyclicity, and the open problem of user-defined temporal properties. We propose Time-Difference LTL (TDLTL), a logic that encompasses the delays between the discrete-time events governed by an MPL system, and characterize the problem of model checking TDLTL over MPL. We first consider a framework based on the verification of infinite-state transition systems and propose an approach based on an encoding into model checking. Then, we leverage the specific features of MPL systems to devise a highly optimized, combinational approach based on Satisfiability Modulo Theories (SMT). We experimentally evaluate the proposed approaches on a large set of benchmarks. The results show that the proposed approach substantially outperforms the state-of-the-art competitors in expressiveness and effectiveness, and demonstrate the superiority of the combinational approach over the reduction to model checking.
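
    For readers unfamiliar with the formalism, the following minimal sketch simulates the MPL dynamics x(k+1) = A ⊗ x(k), where (A ⊗ x)_i = max_j (A_ij + x_j). The 2x2 matrix is an invented example, chosen so that the per-event increments settle, after a short transient, into a period-2 pattern around the max-plus eigenvalue (the maximum cycle mean, 4.5 here); this is the kind of transient/cyclicity structure the paper analyses, though the code is only an illustration of the dynamics, not of the verification approach.

    import math

    EPS = -math.inf                      # the max-plus "zero" element
    A = [[3.0, 7.0],
         [2.0, 4.0]]                     # invented 2x2 MPL system matrix
    x = [0.0, 0.0]                       # initial condition x(0)

    def maxplus_apply(A, x):
        """Max-plus matrix-vector product: (max, +) replaces the usual (+, *)."""
        return [max(a_ij + x_j for a_ij, x_j in zip(row, x)) for row in A]

    # Iterate the system and print the per-event increments x(k+1) - x(k);
    # after the transient they alternate between [4, 5] and [5, 4].
    for k in range(8):
        x_next = maxplus_apply(A, x)
        print(k, x, [xn - xo for xn, xo in zip(x_next, x)])
        x = x_next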

    Prediction of Parallel Speed-ups for Las Vegas Algorithms

    We propose a probabilistic model for the parallel execution of Las Vegas algorithms, i.e., randomized algorithms whose runtime may vary from one execution to another, even with the same input. The model aims at predicting parallel performance (i.e., speedups) by analyzing the runtime distribution of the sequential runs of the algorithm. We then study in practice the case of a particular Las Vegas algorithm for combinatorial optimization on three classical problems, and compare the model with an actual parallel implementation on up to 256 cores. We show that the prediction can be accurate, matching the actual speedups very well up to 100 parallel cores and then deviating by about 20% up to 256 cores.
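
    The core of such a model can be phrased as an order-statistics computation: if n cores run independent copies of the algorithm, the parallel runtime is the minimum of n i.i.d. draws from the sequential runtime distribution, so the predicted speedup is E[T] / E[min(T_1, ..., T_n)]. The sketch below estimates this curve by Monte Carlo from synthetic runtimes; it illustrates the idea only and is not the paper's model, data, or fitting procedure.

    import random

    random.seed(0)
    sequential_runs = [random.expovariate(1.0) for _ in range(1000)]   # synthetic stand-in data

    def predicted_speedup(runs, cores, trials=10000):
        """Monte-Carlo estimate of E[T] / E[min of `cores` i.i.d. draws from `runs`]."""
        mean_seq = sum(runs) / len(runs)
        mean_par = sum(min(random.choices(runs, k=cores)) for _ in range(trials)) / trials
        return mean_seq / mean_par

    for n in (1, 4, 16, 64, 256):
        print(f"{n:4d} cores: predicted speedup ~ {predicted_speedup(sequential_runs, n):.1f}")

    Here random.choices samples with replacement, i.e., the measured sequential runs are treated as an empirical runtime distribution.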

    Automated test generation for software components

    This report presents a method, developed as part of the LIME project (Lightweight formal Methods for distributed component-based Embedded systems), for generating test cases for software components. The main technique employed in this work is dynamic symbolic execution, where a program is executed both concretely and symbolically at the same time. During an execution, a set of symbolic constraints is collected that describes the input values which will force the program to follow an as-yet unexplored execution path. By solving the collected constraints, new input values are obtained, allowing each test run to exercise different behaviour of the program. Based on the methods developed in this work, a tool has been implemented that can be used for automated testing of sequential Java programs.
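
    The sketch below is a hand-rolled illustration of that loop, not the LIME tool itself: run the program with concrete inputs, record the symbolic branch conditions taken, then negate a condition and ask a constraint solver for inputs that reach the unexplored path. The toy program, the manual recording of conditions (standing in for automatic instrumentation), and the use of the z3 Python bindings are all assumptions for illustration.

    from z3 import Int, Solver, Not, sat

    A, B = Int("a"), Int("b")                      # symbolic counterparts of the two inputs

    def program(a, b, trace):
        """Toy program under test; each branch appends the symbolic condition it took."""
        if a > b:
            trace.append(A > B)
            if a - b == 7:
                trace.append(A - B == 7)
                return "bug reached"
            trace.append(Not(A - B == 7))
            return "ok"
        trace.append(Not(A > B))
        return "ok"

    seen = {(0, 0)}
    worklist = [(0, 0)]                            # start from arbitrary concrete inputs
    while worklist:
        a, b = worklist.pop()
        trace = []
        print(f"run({a}, {b}) ->", program(a, b, trace))
        for i in range(len(trace)):                # flip the i-th branch condition
            s = Solver()
            s.add(*trace[:i], Not(trace[i]))
            if s.check() == sat:
                m = s.model()
                new = (m.eval(A, model_completion=True).as_long(),
                       m.eval(B, model_completion=True).as_long())
                if new not in seen:                # schedule the unexplored path
                    seen.add(new)
                    worklist.append(new)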

    Specifications and programs for computer software validation

    Three software products developed during the study are reported: (1) the FORTRAN Automatic Code Evaluation System, (2) the Specification Language System, and (3) the Array Index Validation System.

    Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines

    Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude. Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values. We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high-performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers.
    Funding: United States Department of Energy (Award DE-SC0005288); National Science Foundation (Grant 0964004); Intel Corporation; Cognex Corporation; Adobe Systems.
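
    The parallelism/locality/recomputation tradeoff can be illustrated outside Halide itself. The NumPy sketch below computes the classic two-stage blur either breadth-first (the whole intermediate stage is materialised: no redundant work, but poor locality) or in row tiles that recompute the overlapping intermediate rows (redundant work, better locality, and each tile is independent and thus parallelisable). The code is an invented illustration, not Halide or its schedule language.

    import numpy as np

    def blur_breadth_first(inp):
        """Compute the entire horizontal pass, then the entire vertical pass."""
        bx = (inp[:, :-2] + inp[:, 1:-1] + inp[:, 2:]) / 3
        return (bx[:-2, :] + bx[1:-1, :] + bx[2:, :]) / 3

    def blur_tiled(inp, tile=32):
        """Process independent row tiles, recomputing the 2 overlapping bx rows per tile."""
        h, w = inp.shape[0] - 2, inp.shape[1] - 2
        out = np.empty((h, w), dtype=inp.dtype)
        for y0 in range(0, h, tile):
            y1 = min(y0 + tile, h)
            window = inp[y0:y1 + 2, :]                              # 2 extra input rows per tile
            bx = (window[:, :-2] + window[:, 1:-1] + window[:, 2:]) / 3
            out[y0:y1, :] = (bx[:-2, :] + bx[1:-1, :] + bx[2:, :]) / 3
        return out

    img = np.random.rand(256, 256).astype(np.float32)
    assert np.allclose(blur_breadth_first(img), blur_tiled(img))

    A schedule in the sense of the paper names one concrete point in this space (tile sizes, where each stage is computed and stored, which loops are parallelised or vectorised) without changing what the pipeline computes.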