
    GPU Based Acceleration of SystemC and Transaction Level Models for MPSOC Simulation

    With the increasing number of cores on a chip, the complexity of modeling hardware using virtual prototypes is increasing rapidly. Typical SOCs today have multiprocessors connected through a bus or NOC architecture, which can be modeled using the SystemC framework. SystemC is a popular language used for early design exploration and performance analysis of complex embedded systems. TLM2.0, an extension of SystemC, is increasingly used in MPSOC designs for simulating loosely and approximately timed transaction level models. The OSCI reference kernel which implements the SystemC library runs on a single thread, severely limiting simulation speed. Previous works have used the computational power of multi-core systems and GPUs, which can run multiple threads simultaneously, to speed up the simulation. Multi-core simulations are not as effective in cases where thread runtime is low, because synchronization overhead becomes comparable to thread runtime. Modern GPUs can run thousands of threads at a time and have shown good results for synthesizable designs in recent efforts. However, these works are limited to synthesizable subsets of SystemC models and do not support timed events for process communication. In this research work, a methodology is proposed for accelerating timed, event-based SystemC TLM2.0 models on a GPU-based kernel, which maps SystemC processes to CUDA threads on the GPU, providing high data-level parallelism. This work aims to provide a scalable solution for simulating large MPSOC designs, facilitating early design exploration and performance analysis. Experiments have shown that the proposed technique provides a speed-up of the order of 100x for typical MPSOC designs.
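
    As a rough illustration of the process-to-thread mapping described above, the following C++ sketch keeps per-process state in a flat array and steps every process that shares the current timestamp; in the proposed approach this step function would run as a CUDA kernel with one GPU thread per SystemC process. The state layout and process behavior here are hypothetical, not taken from the paper.

    #include <cstdio>
    #include <vector>

    struct ProcessState { double nextEventTime; int pc; };  // per-process state

    // "Kernel body": advance one process by one timed event.
    void stepProcess(ProcessState& p, double now) {
        if (p.nextEventTime == now) {
            ++p.pc;                        // placeholder for the process's behavior
            p.nextEventTime = now + 10.0;  // schedule the process's next timed event
        }
    }

    int main() {
        std::vector<ProcessState> procs(4, {0.0, 0});
        double now = 0.0;
        for (int step = 0; step < 3; ++step) {
            // On the GPU, all processes sharing the current timestamp would be
            // stepped concurrently, one CUDA thread per process.
            for (ProcessState& p : procs) stepProcess(p, now);
            now += 10.0;  // advance to the next earliest event time
        }
        std::printf("process 0 executed %d events\n", procs[0].pc);
    }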

    Performance Analysis for Multi-Core Multi-Mode Systems with Shared Resources - Methods and Application to AUTOSAR -

    In order to implement multi-core systems for single-mode and multi-mode real-time applications, as can be found in modern automobiles, their development process requires appropriate methods and tools for timing and performance verification. In this context, this thesis first proposes novel approaches for the analysis of worst-case blocking times and response times for single-mode real-time applications that share resources in partitioned multi-core systems. For this purpose, a compositional performance analysis methodology is adopted and extended to take into account the contention of tasks on the processor cores and on the shared resources under different combinations of processor scheduling policies and shared-resource arbitration strategies. Of particular practical relevance is the compatibility of the proposed analysis methods with the specifications of the automotive AUTOSAR standard, which defines the combination of (1) preemptive, non-preemptive and cooperative core-local scheduling with (2) lock-based arbitration of core-local shared resources and spinlock-based arbitration of inter-core shared resources. Further, this thesis proposes novel timing analysis solutions for multi-mode distributed real-time systems. For such systems, the settling time of a mode change, called the mode change transition latency, is identified as an important system parameter that has previously been neglected. This thesis contributes a novel analysis algorithm which gives a maximum bound on each mode change transition latency of multi-mode distributed applications. Knowing the settling time of each mode change, the impact of multiple mode changes and of possible overload situations can be handled in the early development phases of real-time systems. Finally, an approach for safely handling shared resources across mode changes is presented and a corresponding timing analysis method is contributed. The new analysis solution combines modeling and analysis elements of the multi-core and multi-mode analysis solutions and focuses on the specification of the AUTOSAR standard. This enables system designers to handle the timing behavior of more complex systems in which the problems of mode management, multi-core scheduling and shared-resource arbitration coexist. The applicability and usefulness of the contributed analysis solutions are highlighted by experimental evaluations, which are enabled by the implementation of the proposed analysis methods in a performance analysis tool framework.
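
    The blocking- and response-time analysis sketched in this abstract can be illustrated with a deliberately simplified model. The C++ sketch below runs a textbook-style fixed-point response-time iteration with a simple additive spin-blocking term; the task parameters are hypothetical and the thesis's compositional method is considerably more general.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Task { double C; double T; };  // WCET and period of a higher-priority task

    // Worst-case spin blocking: each remote core may hold the lock once, for at
    // most the longest critical section on that core.
    double spinBlocking(const std::vector<double>& remoteCriticalSections) {
        double b = 0.0;
        for (double cs : remoteCriticalSections) b += cs;
        return b;
    }

    // Classic fixed-point iteration: R = C + B + interference from higher priorities.
    double responseTime(double C, double B, const std::vector<Task>& hp) {
        double R = C + B;
        for (;;) {
            double next = C + B;
            for (const Task& t : hp) next += std::ceil(R / t.T) * t.C;
            if (next == R) return R;       // converged
            R = next;
            if (R > 1000.0) return R;      // treat as unschedulable in this toy setting
        }
    }

    int main() {
        std::vector<Task> hp = {{1.0, 10.0}, {2.0, 25.0}};  // higher-priority tasks
        double B = spinBlocking({0.5, 0.7});                // two remote cores
        std::printf("worst-case response time: %.2f\n", responseTime(5.0, B, hp));
    }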

    Formal and Informal Methods for Multi-Core Design Space Exploration

    We propose a tool-supported methodology for design-space exploration for embedded systems. It provides means to define high-level models of applications and multi-processor architectures and to evaluate the performance of different deployment (mapping, scheduling) strategies while taking uncertainty into account. We argue that this extension of the scope of formal verification is important for the viability of the domain. (Comment: In Proceedings QAPL 2014, arXiv:1406.156)
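
    A minimal sketch of the kind of deployment exploration the abstract describes, assuming a toy additive load model in which uncertainty is reduced to per-task WCET upper bounds; the paper's models and evaluation are far richer.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::vector<double> wcet = {4.0, 2.5, 3.0, 1.5};  // per-task WCET upper bounds
        const int procs = 2;
        const int n = static_cast<int>(wcet.size());

        double best = 1e9;
        std::vector<int> bestMap;

        // Encode each mapping as an n-digit base-`procs` number and enumerate all.
        int total = 1;
        for (int i = 0; i < n; ++i) total *= procs;
        for (int code = 0; code < total; ++code) {
            std::vector<double> load(procs, 0.0);
            std::vector<int> map(n);
            int c = code;
            for (int i = 0; i < n; ++i) {
                map[i] = c % procs;
                c /= procs;
                load[map[i]] += wcet[i];
            }
            double makespan = *std::max_element(load.begin(), load.end());
            if (makespan < best) { best = makespan; bestMap = map; }
        }
        std::printf("best worst-case makespan: %.1f\n", best);
        for (int i = 0; i < n; ++i)
            std::printf("task %d -> processor %d\n", i, bestMap[i]);
    }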

    Interval simulation: raising the level of abstraction in architectural simulation

    Detailed architectural simulators suffer from a long development cycle and extremely long evaluation times. This longstanding problem is further exacerbated in the multi-core processor era. Existing solutions address the simulation problem by either sampling the simulated instruction stream or by mapping the simulation models on FPGAs; these approaches achieve substantial simulation speedups while simulating performance in a cycle-accurate manner. This paper proposes interval simulation, which takes a completely different approach: interval simulation raises the level of abstraction and replaces the core-level cycle-accurate simulation model by a mechanistic analytical model. The analytical model estimates core-level performance by analyzing intervals, or the timing between two miss events (branch mispredictions and TLB/cache misses); the miss events are determined through simulation of the memory hierarchy, cache coherence protocol, interconnection network and branch predictor. By raising the level of abstraction, interval simulation reduces both development time and evaluation time. Our experimental results using the SPEC CPU2000 and PARSEC benchmark suites and the M5 multi-core simulator show good accuracy up to eight cores (average error of 4.6% and max error of 11% for the multi-threaded full-system workloads), while achieving a one order of magnitude simulation speedup compared to cycle-accurate simulation. Moreover, interval simulation is easy to implement: our implementation of the mechanistic analytical model comprises only around one thousand lines of code. Its high accuracy, fast simulation speed and ease-of-use make interval simulation a useful complement to the architect's toolbox for exploring system-level and high-level micro-architecture trade-offs.
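
    The core idea lends itself to a toy model: total cycles are the sum, over intervals, of the time to dispatch each interval's instructions at the effective dispatch width plus the penalty of the miss event that ends it. The following C++ sketch uses hypothetical numbers and omits the paper's overlap and second-order corrections.

    #include <cstdio>
    #include <vector>

    struct Interval {
        long long instructions;  // instructions dispatched in this interval
        double missPenalty;      // stall cycles for the miss event ending it
    };

    double estimateCycles(const std::vector<Interval>& intervals, double dispatchWidth) {
        double cycles = 0.0;
        for (const Interval& iv : intervals) {
            cycles += iv.instructions / dispatchWidth;  // smooth dispatch between misses
            cycles += iv.missPenalty;                   // stall at the miss event
        }
        return cycles;
    }

    int main() {
        std::vector<Interval> trace = {
            {500, 20.0},   // interval ending in an L2 miss (hypothetical penalty)
            {120, 14.0},   // interval ending in a branch misprediction
            {900, 200.0},  // interval ending in a DRAM access
        };
        std::printf("estimated cycles: %.0f\n", estimateCycles(trace, 4.0));
    }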

    Power aware early design stage hardware software co-optimization

    Co-optimizing hardware and software can lead to substantial performance and energy benefits, and is becoming an increasingly important design paradigm. In scientific computing, power constraints increasingly necessitate the return to specialized chips such as Intel's MIC or IBM's Blue Gene architectures. To enable hardware/software co-design in early stages of the design cycle, we propose a simulation infrastructure methodology that combines high-abstraction performance simulation using Sniper with power modeling using McPAT and custom DRAM power models. Sniper/McPAT is fast, with a simulation speed of around 2 MIPS on an 8-core host machine, because it uses analytical modeling to abstract away core performance during multi-core simulation. We demonstrate Sniper/McPAT's accuracy through validation against real hardware; we report average performance and power prediction errors of 22.1% and 8.3%, respectively, for a set of SPEComp benchmarks.
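
    The flavor of such power modeling can be shown with a small activity-based sketch: dynamic energy is accumulated from per-event energies scaled by event counts coming out of the performance simulation. The per-event energies and counts below are hypothetical and do not reflect McPAT's actual models.

    #include <cstdio>

    struct CoreActivity {
        long long cycles, instructions, l2Misses, dramAccesses;
    };

    double averagePowerWatts(const CoreActivity& a, double freqHz) {
        const double staticW = 1.2;    // leakage power, hypothetical
        const double eInstr = 0.3e-9;  // joules per instruction, hypothetical
        const double eL2 = 2.0e-9, eDram = 15.0e-9;
        double seconds = a.cycles / freqHz;
        double dynamicJ = a.instructions * eInstr + a.l2Misses * eL2
                        + a.dramAccesses * eDram;
        return staticW + dynamicJ / seconds;
    }

    int main() {
        CoreActivity a{2'000'000'000LL, 3'000'000'000LL, 20'000'000LL, 5'000'000LL};
        std::printf("average core power: %.2f W\n", averagePowerWatts(a, 2.0e9));
    }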

    RPPM : Rapid Performance Prediction of Multithreaded workloads on multicore processors

    Analytical performance modeling is a useful complement to detailed cycle-level simulation for quickly exploring the design space in an early design stage. Mechanistic analytical modeling is particularly interesting as it provides deep insight and does not require expensive offline profiling, as empirical modeling does. Previous work in mechanistic analytical modeling, unfortunately, is limited to single-threaded applications running on single-core processors. This work proposes RPPM, a mechanistic analytical performance model for multi-threaded applications on multicore hardware. RPPM collects microarchitecture-independent characteristics of a multi-threaded workload to predict performance on a previously unseen multicore architecture. The profile needs to be collected only once to predict a range of processor architectures. We evaluate RPPM's accuracy against simulation and report a performance prediction error of 11.2% on average (23% max). We demonstrate RPPM's usefulness for conducting design space exploration experiments as well as for analyzing parallel application performance.
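
    A toy version of predicting multi-threaded runtime from per-thread profiles: within each parallel phase, a thread's time is its instruction count times a predicted CPI divided by frequency, and a barrier ends the phase when the slowest thread arrives. This is a hypothetical simplification, not RPPM's actual formulation.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct ThreadPhase { long long instructions; double predictedCpi; };

    double phaseSeconds(const std::vector<ThreadPhase>& threads, double freqHz) {
        double worst = 0.0;
        for (const ThreadPhase& t : threads)
            worst = std::max(worst, t.instructions * t.predictedCpi / freqHz);
        return worst;  // barrier: all threads wait for the slowest
    }

    int main() {
        std::vector<std::vector<ThreadPhase>> phases = {
            {{1'000'000'000LL, 1.1}, {900'000'000LL, 1.4}},  // phase 1, two threads
            {{500'000'000LL, 0.9}, {650'000'000LL, 1.0}},    // phase 2
        };
        double total = 0.0;
        for (const auto& p : phases) total += phaseSeconds(p, 2.5e9);
        std::printf("predicted runtime: %.3f s\n", total);
    }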

    ASCR/HEP Exascale Requirements Review Report

    This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows. 1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude, and in some cases greater, than that available currently. 2) The growth rate of data produced by simulations is overwhelming the current ability, of both facilities and researchers, to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. 3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. 4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. 5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To best use ASCR HPC resources the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) an ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems. (Comment: 77 pages, 13 figures; draft report, subject to further revision)

    Efficient Simulation of Structural Faults for the Reliability Evaluation at System-Level

    In recent technology nodes, reliability is considered a part of the standard design flow at all levels of embedded system design. While techniques that use only low-level models at gate and register-transfer level offer high accuracy, they are too inefficient to consider the overall application of the embedded system. Multi-level models with high abstraction are essential to efficiently evaluate the impact of physical defects on the system. This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system. A case study of a system consisting of hardware and software for image compression and data encryption is presented, and the method is compared to a standard gate/RT mixed-level approach.
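
    The idea of evaluating structural faults at transaction level can be sketched as a saboteur that perturbs a transaction payload wherever an active fault maps to it. The interface below is hypothetical and stands in for a TLM generic payload; it is not the paper's actual framework.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Fault { size_t byteOffset; uint8_t bitMask; bool active; };

    // Stand-in for a TLM b_transport payload.
    struct Payload { std::vector<uint8_t> data; };

    void injectFaults(Payload& p, const std::vector<Fault>& faults) {
        for (const Fault& f : faults)
            if (f.active && f.byteOffset < p.data.size())
                p.data[f.byteOffset] ^= f.bitMask;  // flip the faulty bit in the payload
    }

    int main() {
        Payload p{{0x00, 0xFF, 0x0F}};
        std::vector<Fault> faults = {{1, 0x01, true}};  // flip bit 0 of byte 1
        injectFaults(p, faults);
        std::printf("byte 1 after injection: 0x%02X\n", p.data[1]);  // prints 0xFE
    }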
    • 

    corecore