40 research outputs found

    Automatic Generation of Schedulings for Improving the Test Coverage of Systems-on-a-Chip

    SystemC is becoming a de facto standard for the early simulation of Systems-on-a-Chip (SoCs). It is a parallel language with a scheduler. Testing a SoC written in SystemC implies executing it for some well-chosen data. We are bound to use a particular deterministic implementation of the scheduler, whose specification is non-deterministic. Consequently, we may fail to discover bugs that would have appeared under another valid implementation of the scheduler. Current methods for testing SoCs concentrate on the generation of inputs and do not address this problem at all. We assume that the selection of relevant data is already done, and we generate several schedulings allowed by the scheduler specification. We use dynamic partial-order reduction techniques to avoid generating two schedulings that have the same effect on the system's behavior. Exploring alternative schedulings during testing is a way of guaranteeing that the SoC description, and in particular the embedded software, is scheduler-independent, hence more robust. The technique extends to the exploration of other non-fully-specified aspects of SoC descriptions, such as timing.
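
    To make the pruning idea concrete, here is a small illustrative Python sketch (not the authors' tool, which targets SystemC/C++; the two processes and their variable accesses are invented): it enumerates every interleaving of the processes and then keeps one representative per class of schedulings that only reorder non-conflicting steps. Dynamic partial-order reduction aims at the same reduction without ever generating the redundant interleavings in the first place.

        # Toy model: each process is a fixed sequence of steps; a step records which
        # shared variable it touches and whether it reads ("r") or writes ("w") it.
        def step(proc, idx, var, mode):
            return (proc, idx, var, mode)

        def conflict(a, b):
            # Steps conflict when they touch the same variable and at least one writes.
            return a[2] == b[2] and "w" in (a[3], b[3])

        def interleavings(procs):
            """Yield every interleaving that respects each process's program order."""
            if all(not steps for steps in procs.values()):
                yield ()
                return
            for name, steps in procs.items():
                if steps:
                    rest = {n: (s[1:] if n == name else s) for n, s in procs.items()}
                    for tail in interleavings(rest):
                        yield (steps[0],) + tail

        def trace_key(schedule):
            # Two schedulings have the same effect iff every pair of conflicting steps
            # occurs in the same relative order; DPOR prunes the search so that ideally
            # only one schedule per such class is generated, instead of filtering afterwards.
            return tuple((a, b) for i, a in enumerate(schedule)
                         for b in schedule[i + 1:] if conflict(a, b))

        procs = {
            "P1": [step("P1", 0, "x", "w"), step("P1", 1, "y", "r")],
            "P2": [step("P2", 0, "x", "r"), step("P2", 1, "z", "w")],
        }

        all_scheds = list(interleavings(procs))
        distinct = {trace_key(s): s for s in all_scheds}
        print(f"{len(distinct)} of {len(all_scheds)} interleavings have distinct effects")

    On this two-process example only 2 of the 6 interleavings can differ in effect, because the only conflict is the write/read pair on x.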

    The synchronous languages 12 years later

    Dynamic Assertion-Based Verification for SystemC

    SystemC has emerged as a de facto standard modeling language for hardware and embedded systems. However, the current standard does not provide support for temporal specifications. Specifically, SystemC lacks a mechanism for sampling the state of the model at different types of temporal resolutions, for observing the internal state of modules, and for integrating monitors efficiently into the model's execution. This work presents a novel framework for specifying and efficiently monitoring temporal assertions of SystemC models that removes these restrictions. This work introduces new specification language primitives that (1) expose the inner state of the SystemC kernel in a principled way, (2) allow for very fine control over the temporal resolution, and (3) allow sampling at arbitrary locations in the user code. An efficient modular monitoring framework presented here allows the integration of monitors into the execution of the model, while at the same time incurring low overhead and allowing for easy adoption. Instrumentation of the user code is automated using Aspect-Oriented Programming techniques, thereby allowing the integration of user-code-level sample points into the monitoring framework. While most related approaches optimize the size of the monitors, this work focuses on minimizing the runtime overhead of the monitors. Different encoding configurations are identified and evaluated empirically using monitors synthesized from a large benchmark of random and pattern temporal specifications. The framework and approaches described in this dissertation allow the adoption of assertion-based verification for SystemC models written using various levels of abstraction, from system level to register-transfer level. An advantage of this work is that many existing specification languages can be adopted to use the specification primitives described here, and the framework can easily be integrated into existing implementations of SystemC.
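
    As a rough illustration of what such a runtime monitor looks like (a Python stand-in, not the SystemC framework described above; the property, signal names and bound are invented), the sketch below checks a bounded-response assertion over a stream of explicit sample() calls, which is where control over temporal resolution and user-code sample points comes in.

        # A small state machine checking "every req is followed by an ack within
        # `bound` samples"; sample() is called wherever the chosen temporal
        # resolution dictates (delta cycle, timed cycle, or a user-code sample point).
        class BoundedResponseMonitor:
            def __init__(self, bound):
                self.bound = bound
                self.pending = None   # samples elapsed since an unanswered req, or None
                self.failed = False

            def sample(self, state):
                """Feed one observation; `state` maps signal names to current values."""
                if self.failed:
                    return
                if self.pending is not None:
                    if state.get("ack"):
                        self.pending = None
                    else:
                        self.pending += 1
                        if self.pending > self.bound:
                            self.failed = True
                            return
                if state.get("req") and self.pending is None:
                    self.pending = 0

        monitor = BoundedResponseMonitor(bound=3)
        trace = [{"req": 1}, {}, {}, {"ack": 1}, {"req": 1}, {}, {}, {}, {}]
        for state in trace:
            monitor.sample(state)   # in SystemC this call would sit at each sample point
        print("property violated" if monitor.failed else "property holds so far")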

    Software testing: test suite compilation and execution optimizations

    The requirements and responsibilities assumed by software have increasingly rendered it large and complex. Testing to ensure that software meets all its requirements and is free from failures is a difficult and time-consuming task that necessitates the use of large test suites containing many test cases. The time needed to compile and execute large test suites has become prohibitive. Current optimization techniques aim to reduce the test suite size by removing redundant test cases. However, as systems become larger, the number of essential test cases is still very large and affects the software life-cycle. In this thesis, we explore techniques for reducing the compilation and execution time of test suites without removing any test cases or changing the computing infrastructure. All of our proposed techniques can be used in conjunction with existing test suite optimisations.
    1. For test suite compilation, we propose a data transformation that reduces the number of instructions in the test code, which in turn reduces compilation time. Using two well-known compilers, GCC and Clang, we conduct empirical evaluations using subject programs from industry-standard benchmarks and an industry-provided program. We evaluate compilation speedup, execution time, scalability and correctness of the proposed test code transformation.
    2. For test suite execution, we propose a novel approach to improve instruction locality across test case executions. Our approach measures the distance between test case executions (the number of different instructions) and then schedules the test cases for execution so that the distance between neighboring test cases is minimised. We empirically evaluate our approach with 20 subject programs and test suites from the SIR repository, the EEMBC suite and LLVM Symbolizer, comparing execution times and cache misses for orderings produced by our approach against a traditional ordering maximising coverage and against random permutations. We also assess the overhead of the algorithms in generating orderings that optimise instruction cache locality.
    3. In our final contribution, we target the execution time of heterogeneous test suites and assess the effect of device-based test case scheduling. We propose a test case scheduling algorithm which improves the load balancing between multiple devices of a heterogeneous system in an attempt to reduce the overall test suite execution time. We conduct an empirical evaluation on a large-scale industrial test suite, developed by Codeplay Software, targeting implementations of the SYCL standard.
    The outcome of our research can be summarized as follows:
    1. Our data transformation approach resulted in significant compilation speedups in the range of 1.3× to 69×. Our experiments show that the gains in compilation time allow significantly more test cases to be included in test suites, improving the scalability of test code compilation.
    2. Our instruction-based test case scheduling algorithms achieved a maximum execution speedup of 29.48%. Performance gains were considerable for programs and test suites where the average number of different instructions executed between test cases was high.
    3. Finally, we found that a maximum speed-up of 25.42% is achieved by our device-based test scheduling algorithm when compared to parallel test case execution of a heterogeneous test suite without test scheduling.
    Our proposed techniques significantly reduce the compilation as well as the execution time of test suites without eliminating any test cases or upgrading computing infrastructure. Our data transformation results in faster test code compilation, while our test case scheduling algorithms achieve significant speed-ups for programs executing on single-CPU, multi-CPU as well as heterogeneous architectures. As systems get more complex, they require frequent and extensive testing. Our techniques provide safe and efficient means of compiling and executing test suites which, in combination with existing test suite optimisations, can significantly reduce the cost of software testing.
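
    A minimal sketch of the locality-driven ordering behind contribution 2 (the test names and instruction sets below are invented; in the thesis the distances come from profiled executions): greedily chain test cases so that each one differs from its predecessor in as few executed instructions as possible.

        def distance(a, b):
            # Number of instructions that differ between the two executions
            # (symmetric difference of the executed-instruction sets).
            return len(a ^ b)

        def locality_order(tests):
            """tests: dict mapping test name -> set of executed instruction addresses."""
            remaining = dict(tests)
            name, insns = next(iter(remaining.items()))   # arbitrary starting test
            order = [name]
            del remaining[name]
            while remaining:
                # Pick the test whose execution differs least from the previous one.
                name, insns = min(remaining.items(), key=lambda kv: distance(insns, kv[1]))
                order.append(name)
                del remaining[name]
            return order

        tests = {
            "t_small_input": {0x10, 0x14, 0x18},
            "t_error_path":  {0x10, 0x40, 0x44},
            "t_large_input": {0x10, 0x14, 0x18, 0x1c},
        }
        print(locality_order(tests))

    Contribution 3 replaces this single-machine ordering with a load-balancing assignment of test cases across the devices of a heterogeneous system.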

    Combining Model Checking and Testing

    Model checking and testing have a lot in common. Over the last two decades, significant progress has been made on how to broaden the scope of model checking from finite-state abstractions to actual software implementations. One way to do this consists of adapting model checking into a form of systematic testing that is applicable to industrial-size software. This chapter presents an overview of this strand of software model checking.
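
    A toy illustration of the "model checking as systematic testing" idea (the transition system and the property below are invented): instead of running a single execution, systematically enumerate every reachable state up to a depth bound and check a safety property in each one.

        from collections import deque

        def successors(state):
            # Hypothetical system: two counters that can each be incremented up to 3.
            x, y = state
            return [s for s in ((x + 1, y), (x, y + 1)) if s[0] <= 3 and s[1] <= 3]

        def safe(state):
            return sum(state) != 5          # the safety property under test (made up)

        def explore(initial, depth_bound=10):
            # Breadth-first, bounded exploration of the reachable state space.
            seen, frontier = {initial}, deque([(initial, 0)])
            while frontier:
                state, depth = frontier.popleft()
                if not safe(state):
                    return f"counterexample reached: {state}"
                if depth < depth_bound:
                    for nxt in successors(state):
                        if nxt not in seen:
                            seen.add(nxt)
                            frontier.append((nxt, depth + 1))
            return "no violation within the bound"

        print(explore((0, 0)))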

    Analysis and Approximation of Optimal Co-Scheduling on CMP

    In recent years, increasing design complexity and the problems of power and heat dissipation have caused a shift in processor technology toward Chip Multiprocessors (CMP). In the CMP architecture, it is common that multiple cores share some on-chip cache. This sharing may cause cache thrashing and contention among co-running jobs. Job co-scheduling is an approach to tackling the problem by assigning jobs to cores appropriately so that contention and the consequent performance degradation are minimized. This dissertation aims to tackle two of the most prominent challenges in job co-scheduling.
    The first challenge is the computational complexity of determining optimal job co-schedules. This dissertation presents one of the first systematic analyses of the complexity of job co-scheduling. Besides proving the NP-completeness of job co-scheduling, it introduces a set of algorithms, based on graph theory and Integer/Linear Programming, for computing optimal co-schedules or their lower bounds in scenarios with or without job migrations. For complex cases, it empirically demonstrates the feasibility of approximating optimal schedules effectively by proposing several heuristics-based algorithms. These discoveries facilitate the assessment of job co-schedulers by providing necessary baselines, and offer insights for the development of practical co-scheduling systems.
    The second challenge resides in predicting the performance of processes co-running on a shared cache. This dissertation explores the influence of co-runners, program inputs, and cache configurations on co-run performance prediction. Through a sequence of formal analyses, we derive an analytical co-run locality model, uncovering the inherent statistical connections between the data references of programs' single runs and their co-run locality. The model offers theoretical insights for co-run locality analysis and leads to a lightweight approach for fast prediction of shared cache performance. We demonstrate the effectiveness of the model in enabling proactive job co-scheduling.
    Together, the findings along these two dimensions open up many new opportunities for cache management on modern CMPs by laying the foundation for job co-scheduling and significantly enhancing the understanding of data locality and cache sharing.
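
    The kind of heuristic the dissertation compares against its optimal baselines can be sketched as follows (the job names and degradation numbers are invented): greedily pair jobs onto two-core chips with a shared cache, always co-scheduling the cheapest remaining pair.

        from itertools import combinations

        # degradation[(a, b)] = predicted combined slowdown when a and b share a cache
        degradation = {
            ("mcf", "milc"): 0.42, ("mcf", "gcc"): 0.18, ("mcf", "bzip2"): 0.15,
            ("milc", "gcc"): 0.25, ("milc", "bzip2"): 0.30, ("gcc", "bzip2"): 0.05,
        }

        def cost(a, b):
            # Look the pair up in either order.
            return degradation.get((a, b), degradation.get((b, a)))

        def greedy_coschedule(jobs):
            """Repeatedly co-schedule the cheapest remaining pair of jobs."""
            remaining = set(jobs)
            pairs = []
            while len(remaining) > 1:
                a, b = min(combinations(remaining, 2), key=lambda p: cost(*p))
                pairs.append((a, b))
                remaining -= {a, b}
            return pairs

        schedule = greedy_coschedule(["mcf", "milc", "gcc", "bzip2"])
        print(schedule, "total degradation:", sum(cost(a, b) for a, b in schedule))

    Even on this toy matrix the greedy answer (total degradation 0.47) is not optimal; pairing mcf with bzip2 and milc with gcc costs 0.40, which is exactly why optimal baselines such as minimum-weight matchings or the ILP formulations matter.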

    Reducing the Complexity of Heterogeneous Computing: A Unified Approach for Application Development and Runtime Optimization

    Heterogeneous systems with accelerators promise considerable performance improvements at a lower cost than homogeneous CPU-only systems. However, to benefit from this potential, considerable work is required from developers to integrate accelerators efficiently into an application. This work contributes a new framework, implemented with an online-learning runtime system, that simplifies development and makes applications more portable, efficient and reliable across different systems.
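
    Purely as an illustration of what an online-learning runtime decision can look like (the device names, timings and the epsilon-greedy policy are assumptions, not the framework's actual design): pick a device for each task based on the runtimes observed so far, while still exploring occasionally.

        import random

        class OnlineDeviceSelector:
            def __init__(self, devices, epsilon=0.1):
                self.epsilon = epsilon
                self.totals = {d: 0.0 for d in devices}   # accumulated runtime per device
                self.counts = {d: 0 for d in devices}

            def choose(self):
                untried = [d for d, c in self.counts.items() if c == 0]
                if untried:
                    return untried[0]
                if random.random() < self.epsilon:        # keep exploring occasionally
                    return random.choice(list(self.counts))
                # Otherwise exploit: device with the lowest average observed runtime.
                return min(self.counts, key=lambda d: self.totals[d] / self.counts[d])

            def record(self, device, runtime):
                self.totals[device] += runtime
                self.counts[device] += 1

        selector = OnlineDeviceSelector(["cpu", "gpu"])
        for _ in range(20):
            device = selector.choose()
            runtime = {"cpu": 4.0, "gpu": 1.5}[device]    # pretend measurement
            selector.record(device, runtime)
        print({d: round(selector.totals[d] / selector.counts[d], 2) for d in selector.counts})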

    Model Checking and Model-Based Testing : Improving Their Feasibility by Lazy Techniques, Parallelization, and Other Optimizations

    This thesis focuses on the lightweight formal method of model-based testing for checking safety properties, and derives a new and more feasible approach. For liveness properties, dynamic testing is impossible, so feasibility is increased by specializing in an important class of properties, livelock freedom, and deriving a more feasible model checking algorithm for it. All mentioned improvements are substantiated by experiments.
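
    For the liveness part, livelock freedom means that no reachable cycle consists solely of non-progress actions. The sketch below (a hypothetical toy transition system; it does not reproduce the thesis's lazy or parallel optimizations) shows what the underlying check amounts to.

        def reachable(transitions, init):
            # Standard worklist reachability over all transitions.
            seen, stack = {init}, [init]
            while stack:
                for _, dst in transitions.get(stack.pop(), []):
                    if dst not in seen:
                        seen.add(dst)
                        stack.append(dst)
            return seen

        def has_nonprogress_cycle(transitions, init, progress_actions):
            states = reachable(transitions, init)
            # Keep only non-progress edges between reachable states, then look for a cycle.
            graph = {s: [d for a, d in transitions.get(s, [])
                         if d in states and a not in progress_actions] for s in states}
            WHITE, GREY, BLACK = 0, 1, 2
            colour = {s: WHITE for s in states}

            def dfs(s):
                colour[s] = GREY
                for d in graph[s]:
                    if colour[d] == GREY or (colour[d] == WHITE and dfs(d)):
                        return True          # back edge: cycle of non-progress actions
                colour[s] = BLACK
                return False

            return any(colour[s] == WHITE and dfs(s) for s in states)

        # states 1..3; transition lists are (action, destination) pairs
        transitions = {1: [("work", 2)], 2: [("idle", 3)], 3: [("idle", 2)]}
        print("livelock" if has_nonprogress_cycle(transitions, 1, {"work"}) else "livelock-free")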