Time-Sharing Time Warp via Lightweight Operating System Support
The order according to which the different tasks are carried out within a Time Warp platform has a direct impact on performance, since event processing is speculative and thus subject to rollback. It is commonly recognized that not-yet-executed events with lower timestamps should be given higher CPU-scheduling priority, since this helps keep the number of rollbacks low. However, common Time Warp platforms execute events as atomic actions, so control is returned to the underlying simulation platform only at the end of the current event-processing routine. In other words, CPU scheduling of events resembles classical batch multitasking, which is known not to react promptly to variations in the priority of pending tasks (e.g., those associated with the injection of new events into the system). In this article we present the design and implementation of a time-sharing Time Warp platform, to be run on multi-core machines, in which the platform-level software regains control periodically (at fine-grain periods) and can preempt any ongoing event-processing activity in favor of dispatching (along the same thread) any other event that turns out to have higher priority. Our proposal is based on an ad-hoc kernel module for Linux, which implements a fine-grain timer-interrupt mechanism with lightweight management, is fully integrated with the modern top/bottom-half timer-interrupt architecture of Linux, and induces no bias in relative CPU-usage planning across Time Warp vs non-Time Warp threads running on the machine. Our time-sharing architecture has been integrated within the open-source ROOT-Sim optimistic simulation package, and we also report experimental data for an assessment of our proposal.
Transparent multi-core speculative parallelization of DES models with event and cross-state dependencies
In this article we tackle transparent parallelization of Discrete Event Simulation (DES) models to be run on top of multi-core machines according to speculative schemes. The innovation of our proposal is that we consider a more general programming and execution model than the one targeted by state-of-the-art PDES platforms, in which the state portion accessible while processing an event at a specific simulation object is not limited to the actual object state or to shared global variables. Rather, the simulation object is allowed to access (and alter) the state of any other object, causing what we term a cross-state dependency. We note that this model exactly complies with typical (easy-to-manage) sequential-style DES programming, where a (dynamically allocated) state portion of object A can be accessed by object B in read or write mode (or both), e.g., by passing a pointer to B as the payload of a scheduled simulation event. However, while read/write memory accesses performed in a sequential run are always guaranteed to observe (and give rise to) a consistent snapshot of the state of the simulation model, consistency is not automatically guaranteed in case of parallel, concurrent execution of simulation objects with cross-state dependencies. We cope with this consistency issue, and with its application-transparent support, in the context of parallel optimistic executions. This is achieved by introducing an advanced memory-management architecture, able to efficiently detect read/write accesses by concurrent objects to any object state in an application-transparent manner, together with advanced synchronization mechanisms that exploit the parallelism of the underlying multi-core architecture while transparently handling both cross-state and traditional event-based dependencies.
Our proposal targets Linux and has been integrated with the ROOT-Sim open source optimistic simulation platform, although its design principles, and most parts of the developed software, are of general relevance. Copyright 2014 ACM
A fine-grain time-sharing Time Warp system
Although Parallel Discrete Event Simulation (PDES) platforms relying on the Time Warp (optimistic) synchronization
protocol already allow for exploiting parallelism, several techniques have been proposed to
further favor performance. Among them we can mention optimized approaches for state restore, as well as
techniques for load balancing or (dynamically) controlling the speculation degree, the latter being specifically
targeted at reducing the incidence of causality errors leading to waste of computation. However, in
state-of-the-art Time Warp systems, event processing is not preemptable, which may prevent prompt
reaction to the injection of higher-priority (i.e., lower-timestamp) events. Delaying the processing
of these events may, in turn, give rise to higher incidence of incorrect speculation. In this article we present
the design and realization of a fine-grain time-sharing Time Warp system, to be run on multi-core Linux
machines, which makes systematic use of event preemption in order to dynamically reassign the CPU to
higher priority events/tasks. Our proposal is based on a truly dual mode execution, application vs platform,
which includes a timer-interrupt based support for bringing control back to platform mode for possible CPU
reassignment according to very fine grain periods. The latter facility is offered by an ad-hoc timer-interrupt
management module for Linux, which we release, together with the overall time-sharing support, within the
open source ROOT-Sim platform. An experimental assessment based on the classical PHOLD benchmark and
two real world models is presented, which shows how our proposal effectively leads to the reduction of the
incidence of causality errors, as compared to traditional Time Warp, especially when running with higher
degrees of parallelism.
Techniques for Transparent Parallelization of Discrete Event Simulation Models
Simulation is a powerful technique to represent the evolution of real-world phenomena
or systems over time. It has been extensively used in different research
fields (from medicine and biology to economics and disaster rescue) to study
the behaviour of complex systems during their evolution (symbiotic simulation)
or before their actual realization (what-if analysis).
A traditional way to achieve high performance simulations is the employment
of Parallel Discrete Event Simulation (PDES) techniques, which are based
on the partitioning of the simulation model into Logical Processes (LPs) that
can execute events in parallel on different CPUs and/or different CPU cores,
and rely on synchronization mechanisms to achieve causally consistent execution
of simulation events. As it is well recognized, the optimistic synchronization
approach, namely the Time Warp protocol, which is based on rollback for recovering from
possible timestamp-order violations due to the absence of block-until-safe
policies for event processing, is likely to favour speedup in general application/
architectural contexts.
However, the optimistic PDES paradigm implicitly relies on a programming
model that shifts from traditional sequential-style programming, given
that there is no notion of global address space (fully accessible while processing
events at any LP). Furthermore, there is the underlying assumption that the
code associated with event handlers cannot execute unrecoverable operations
given their speculative processing nature. Nevertheless, even though no unrecoverable
action is ever executed by event handlers, a means to actually undo
their actions when required needs to be devised and implemented within the software
stack.
On the other hand, sequential-style programming is an easy paradigm for
the development of simulation code, given that it does not require the programmer
to reason about memory partitioning (and therefore message passing) and
speculative (concurrent) processing of the application.
In this thesis, we present methodological and technical innovations which
show how it is possible, by developing innovative runtime mechanisms, to
allow a programmer to implement a simulation model in a fully sequential way
and to have the underlying simulation framework execute it in parallel according
to speculative processing techniques. Some of the approaches we provide are
applicable to both shared- and distributed-memory systems, while others are
specifically tailored to multi/many-core architectures.
While developing these supports, we will clearly show the performance
effect of these solutions, which nevertheless turns out to be negligible,
allowing a fruitful exploitation of the available computing power. In the end,
we will highlight the clear benefits these solutions bring to the programming model.
Software Supports for Event Preemptive Rollback in Optimistic Parallel Simulation on Myrinet Clusters
Optimistic synchronization protocols for parallel discrete event simulation employ rollback techniques to ensure causally consistent execution of simulation events. Although event preemptive rollback (i.e. rollback based on timely event execution interruption upon the arrival of a message revealing a causality inconsistency) is recognized as an approach for increasing the performance and tackling run-time anomalies of this type of synchronization, the lack of adequate functionalities at the level of general purpose communication layers typically prevents any effective implementation of event preemptive rollback operations.