169 research outputs found

    An Evolutionary Algorithm to Optimize Log/Restore Operations within Optimistic Simulation Platforms

    In this work we address state recoverability in advanced optimistic simulation systems by proposing an evolutionary algorithm that optimizes, at run-time, the parameters associated with state log/restore activities. Optimization takes place by adaptively selecting, for each simulation object, both (i) the best-suited log mode (incremental vs. non-incremental) and (ii) the corresponding optimal value of the log interval. Our performance optimization approach allows us to cope indirectly with hidden effects (e.g., locality) as well as cross-object effects due to the variation of log/restore parameters across different simulation objects (e.g., rollback thrashing). Neither of these effects is captured by literature solutions based on analytical models of the overhead associated with log/restore tasks. In more detail, our evolutionary algorithm dynamically adjusts the log/restore parameters of distinct simulation objects as a whole, steering them toward a well-suited configuration. In this way, we prevent the negative performance effects of biasing the optimization toward individual simulation objects, which may yield reduced gains (or even a decrease) in performance precisely because of the aforementioned hidden and/or cross-object phenomena. We also present an application-transparent implementation of the evolutionary algorithm within the ROme OpTimistic Simulator (ROOT-Sim), an open-source, general-purpose simulation environment designed according to the optimistic synchronization paradigm.
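    The per-object selection of (log mode, log interval) can be sketched as a simple evolutionary loop. The cost model below is a synthetic stand-in (in the actual system, fitness would come from measured run-time performance), and all parameter names and values are illustrative assumptions, not the paper's implementation:

```python
import random

# Synthetic per-object cost model (illustrative assumption): incremental
# logs are cheap to take but costlier to restore from, and vice versa.
def cost(mode, interval):
    log_cost = (1.0 if mode == "incremental" else 4.0) / interval
    restore_cost = (3.0 if mode == "incremental" else 1.0) * 0.1 * interval
    return log_cost + restore_cost

# A configuration assigns a (mode, interval) pair to every simulation
# object; scoring the configuration as a whole mirrors the paper's
# global (rather than per-object) optimization.
def fitness(config):
    return -sum(cost(mode, interval) for mode, interval in config)

def mutate(config):
    new = list(config)
    k = random.randrange(len(new))
    mode, interval = new[k]
    if random.random() < 0.5:
        mode = "non-incremental" if mode == "incremental" else "incremental"
    else:
        interval = max(1, interval + random.choice([-1, 1]))
    new[k] = (mode, interval)
    return new

def evolve(n_objects=8, pop_size=20, generations=100):
    pop = [[(random.choice(["incremental", "non-incremental"]),
             random.randint(1, 20)) for _ in range(n_objects)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

    At run-time the fitness evaluation would be replaced by observed throughput, so the loop keeps adapting as workload phases change.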

    A load-sharing architecture for high performance optimistic simulations on multi-core machines

    In Parallel Discrete Event Simulation (PDES), the simulation model is partitioned into a set of distinct Logical Processes (LPs) which are allowed to execute simulation events concurrently. In this work we present an innovative approach to load sharing on multi-core/multiprocessor machines, targeted at the optimistic PDES paradigm, where LPs are speculatively allowed to process simulation events with no preventive verification of causal consistency, and actual consistency violations (if any) are recovered via rollback techniques. In our approach, each simulation kernel instance, in charge of hosting and executing a specific set of LPs, runs a set of worker threads which can be dynamically activated/deactivated on the basis of a distributed algorithm. The latter relies in turn on an analytical model that indicates how to reassign processor/core usage across the kernels in order to handle the simulation workload as efficiently as possible. We also present a real implementation of our load-sharing architecture within the ROme OpTimistic Simulator (ROOT-Sim), an open-source C-based simulation platform implemented according to the PDES paradigm and the optimistic synchronization approach. Experimental results assessing the validity of our proposal are presented as well.
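    As a rough illustration of the reassignment step, the helper below splits a fixed number of cores across kernel instances in proportion to their observed workload. The proportional rule is a stand-in assumption for the paper's analytical model, which accounts for more than raw event counts:

```python
def reassign_cores(workloads, total_cores):
    """Split total_cores across kernel instances in proportion to their
    event workload (a stand-in for the paper's analytical model), giving
    each kernel at least one worker thread. Assumes total_cores is at
    least the number of kernels."""
    total = sum(workloads)
    shares = [max(1, round(total_cores * w / total)) for w in workloads]
    # Fix rounding drift so the shares sum exactly to total_cores.
    while sum(shares) > total_cores:
        shares[shares.index(max(shares))] -= 1
    while sum(shares) < total_cores:
        shares[shares.index(min(shares))] += 1
    return shares
```

    Each kernel would then activate or deactivate worker threads to match its assigned share, without any global stop-the-world phase.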

    ORCHESTRA: an asynchronous non-blocking distributed GVT algorithm

    Taking advantage of the high computing capabilities of modern distributed architectures is fundamental to running large-scale simulation models based on the Parallel Discrete Event Simulation (PDES) paradigm. In particular, by exploiting clusters of modern multi-core architectures it is possible to efficiently overcome both the power wall and the memory wall. This is even more the case when relying on the speculative Time Warp simulation protocol. Nevertheless, to ensure the correctness of the simulation, a form of coordination such as the computation of the Global Virtual Time (GVT) is fundamental. To increase the scalability of this mandatory synchronization, we present in this paper a coordination algorithm for clusters of share-everything multi-core simulation platforms which is both wait-free and asynchronous. The nature of this protocol allows any computing node to carry on simulation activities while the global agreement is reached.
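    The quantity being agreed upon is simple to state, even though computing it without blocking is the hard part: GVT is the minimum over every node's local virtual time and every message still in transit. A minimal sketch of that reduction (the wait-free, asynchronous collection of these values is where the protocol's actual contribution lies):

```python
def compute_gvt(local_minima, in_transit):
    """Compute the Global Virtual Time: a lower bound below which no
    rollback can ever occur, taken as the minimum over each node's
    local virtual time and the timestamp of every in-transit message.
    In an asynchronous protocol each node publishes its entry without
    blocking, and this reduction runs concurrently with event
    processing."""
    transit_min = min(in_transit, default=float("inf"))
    return min(min(local_minima), transit_min)
```

    Once a new GVT is known, every node can reclaim checkpoints and processed events older than it (fossil collection) without waiting for the others.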

    Optimizing memory management for optimistic simulation with reinforcement learning

    Simulation is a powerful technique to explore complex scenarios and analyze systems across a wide range of disciplines. To allow for efficient exploitation of the available computing power, speculative Time Warp-based Parallel Discrete Event Simulation is widely recognized as a viable solution. In this context, the rollback operation is a fundamental building block to support a correct execution even when causality inconsistencies materialize a posteriori. If this operation is supported via checkpoint/restore strategies, memory management plays a fundamental role in ensuring high performance of the simulation run. With few exceptions, adaptive protocols targeting memory management for Time Warp-based simulations have mostly been based on pre-defined analytic models of the system, expressed as closed-form functions that map the system's state to control parameters. The underlying assumption is that the model itself is optimal. In this paper, we present an approach that exploits reinforcement learning techniques. Rather than assuming an optimal control strategy, we seek to find the optimal strategy through parameter exploration. A value function that captures the history of system feedback is used, and no a-priori knowledge of the system is required. An experimental assessment of the viability of our proposal is also provided for a mobile cellular system simulation.
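    A minimal sketch of the idea: an epsilon-greedy agent keeps a value estimate per candidate checkpoint interval and updates it from observed feedback alone, with no analytic model of the system. The cost function here is a synthetic stand-in, and all constants are illustrative assumptions rather than values from the paper:

```python
import random

ACTIONS = [1, 2, 4, 8, 16]  # candidate checkpoint intervals, in events

# Synthetic environment feedback (illustrative assumption): checkpointing
# overhead shrinks with larger intervals, while coast-forward re-execution
# after a rollback grows with them.
def simulated_cost(interval):
    return 10.0 / interval + 0.5 * interval + random.gauss(0.0, 0.1)

def learn(episodes=5000, epsilon=0.2, alpha=0.1):
    # Value function over actions, estimated purely from run-time feedback.
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:
            a = random.choice(ACTIONS)      # explore
        else:
            a = max(q, key=q.get)           # exploit current estimate
        reward = -simulated_cost(a)         # lower cost -> higher reward
        q[a] += alpha * (reward - q[a])     # running value update
    return max(q, key=q.get)

best_interval = learn()
```

    Because the value function only tracks observed rewards, the same loop keeps working if the true cost curve drifts during the run, which is exactly where fixed closed-form models fall short.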

    Adaptive techniques for scalable optimistic parallel discrete event simulation

    Discrete Event Simulation (DES) can be an important tool across various domains such as Engineering, Military, Biology, High Performance Computing, and many others. Interacting systems in these domains can be simulated with a high degree of fidelity and accuracy. Furthermore, DES simulations do not rely on a global time step: simulated entities are only updated at the discrete points in virtual time at which events occur. The simulation engine handles simulation logic and event scheduling, while the models written by domain experts need only focus on model-specific logic. As models grow in size and complexity, running simulations in parallel becomes an attractive option. However, a number of issues need to be addressed in order to effectively run DES simulations in parallel in a distributed environment. The issue of how to synchronize Parallel DES (PDES) simulations has been addressed in a number of ways, using various types of either conservative or optimistic protocols. Optimistic synchronization has shown several benefits over conservative synchronization, but it is also more complex and brings with it some unique challenges. Two of these challenges are synchronizing event execution across distributed processes, and maintaining high accuracy in the speculative execution of events. This thesis aims to address these challenges in order to make optimistic simulations even more effective and reliable. Specifically, it explores a variety of Global Virtual Time (GVT) algorithms in an attempt to lower synchronization costs, while utilizing other techniques such as dynamic load balancing to maintain high event execution efficiency and keep work balanced across execution units. Most importantly, these techniques aim to make the simulator robust and adaptive, allowing it to work effectively for a variety of models with different characteristics and irregularities.
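    A tiny sketch of the kind of adaptive trigger such techniques rely on: monitor speculative efficiency (committed vs. processed events) and load imbalance, and rebalance when either degrades. The thresholds and the decision rule are illustrative assumptions, not values from the thesis:

```python
def should_migrate(committed, processed, loads, threshold=0.8, imbalance=1.5):
    """Trigger LP migration when speculative efficiency (the fraction of
    processed events that are eventually committed rather than rolled
    back) drops below `threshold`, or when the busiest rank carries more
    than `imbalance` times the average load."""
    efficiency = committed / processed if processed else 1.0
    avg_load = sum(loads) / len(loads)
    return efficiency < threshold or max(loads) > imbalance * avg_load
```

    Keeping both signals in the rule matters: a perfectly balanced run can still waste most of its work on rollbacks, and an efficient run can still leave cores idle.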

    Parallel algorithms for simulating continuous time Markov chains

    We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
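    For reference, the uniformization construction itself: given a generator matrix Q, pick a rate Λ ≥ max over i of |q_ii|; jump epochs then form a Poisson(Λ) process, and at each epoch the state moves according to the discrete-time chain P = I + Q/Λ. A minimal single-chain sketch (the example chain and parameters below are illustrative, not from the paper):

```python
import random

def uniformized_step(state, Q, lam):
    """One step of the uniformized DTMC P = I + Q/lam. Self-loops
    ("pseudo-events") occur with probability 1 + q_ss/lam, which is
    what lets every chain share a common Poisson event clock."""
    probs = [Q[state][j] / lam for j in range(len(Q))]
    probs[state] += 1.0
    r, acc = random.random(), 0.0
    for j, p in enumerate(probs):
        acc += p
        if r < acc:
            return j
    return state

def simulate(Q, x0, t_end, lam=None):
    """Simulate a CTMC on [0, t_end] via uniformization: draw jump
    epochs from a Poisson(lam) process and apply one uniformized DTMC
    step at each epoch. Returns the state at time t_end."""
    lam = lam or max(-Q[i][i] for i in range(len(Q)))
    t, state = 0.0, x0
    while True:
        t += random.expovariate(lam)  # next Poisson epoch
        if t > t_end:
            return state
        state = uniformized_step(state, Q, lam)
```

    The shared Poisson clock is what makes the technique attractive for parallel simulation: event times can be pre-generated and agreed upon across processors independently of the state trajectory.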

    POSE: getting over grainsize in parallel discrete event simulation

    Parallel discrete event simulations (PDES) encompass a broad range of analytical simulations. Their utility lies in their ability to model a system and provide information about its behavior in a timely manner. Current PDES methods provide limited performance improvements over sequential simulation. Many logical models for applications have fine granularity, making them challenging to parallelize. In POSE, we examine the overhead required for optimistically synchronizing events. We have designed an object model based on the concept of virtualization and new adaptive optimistic methods to improve the performance of fine-grained PDES applications. These novel approaches exploit the speculative nature of optimistic protocols to improve single-processor parallel performance over sequential performance and achieve scalability for previously hard-to-parallelize fine-grained simulations.

    Experiments in distributed memory time warp


    Area virtual time
