
    Parallel and Distributed Simulation of Discrete Event Systems

    The achievements attained in accelerating the simulation of the dynamics of complex discrete event systems using parallel or distributed multiprocessing environments are comprehensively presented. While parallel discrete event simulation (DES) governs the evolution of the system over simulated time in an iterative, SIMD-like way, distributed DES spatially decomposes the event structure underlying the system and executes event occurrences in spatial subregions by logical processes (LPs), usually assigned to different (physical) processing elements. Synchronization protocols are necessary in this approach to avoid timing inconsistencies and to guarantee the preservation of event causalities across LPs. Included in the survey are discussions of the sources and levels of parallelism, synchronous vs. asynchronous simulation, and the principles of LP simulation. In the context of conservative LP simulation (Chandy/Misra/Bryant), deadlock avoidance and deadlock detection/recovery strategies, Conservative Time Windows, and the Carrier Null-Message protocol are presented. Related to optimistic LP simulation (Time Warp), Optimistic Time Windows, memory management, GVT computation, probabilistic optimism control, and adaptive schemes are investigated. (Also cross-referenced as UMIACS-TR-94-100.)
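
    The conservative synchronization scheme surveyed above can be summarized in a short sketch. The following C fragment is a minimal illustration only: the types and the hooks pending_events, next_event_ts, process_next_event and send_null_message are hypothetical, not taken from any cited system. It shows an LP that processes events only up to the minimum clock of its input channels and then advertises a lower bound on its future output timestamps via null messages, which is the essence of Chandy/Misra/Bryant deadlock avoidance.

        /* Minimal sketch of a conservative (Chandy/Misra/Bryant) LP loop.
         * All names below are illustrative; the queue/messaging hooks are
         * assumed to be provided by the hosting simulation engine.        */
        #include <float.h>

        #define N_IN  4                    /* input channels of this LP    */
        #define N_OUT 4                    /* output channels of this LP   */

        typedef struct { double clock; } channel_t;  /* last timestamp seen */

        typedef struct {
            channel_t in[N_IN];
            channel_t out[N_OUT];
            double    lookahead;   /* minimum delay this LP adds to output  */
            double    lvt;         /* local virtual time                    */
        } lp_t;

        /* Hypothetical engine hooks. */
        extern int    pending_events(const lp_t *lp);
        extern double next_event_ts(const lp_t *lp);
        extern void   process_next_event(lp_t *lp);
        extern void   send_null_message(channel_t *out, double lower_bound);

        /* The smallest input-channel clock bounds all future arrivals. */
        static double safe_horizon(const lp_t *lp)
        {
            double h = DBL_MAX;
            for (int i = 0; i < N_IN; i++)
                if (lp->in[i].clock < h)
                    h = lp->in[i].clock;
            return h;
        }

        void lp_step(lp_t *lp)
        {
            double horizon = safe_horizon(lp);

            /* Process only events that no later arrival can invalidate. */
            while (pending_events(lp) && next_event_ts(lp) <= horizon) {
                lp->lvt = next_event_ts(lp);
                process_next_event(lp);
            }

            /* Deadlock avoidance: even without real output, promise that no
             * future message will carry a timestamp below lvt + lookahead. */
            for (int i = 0; i < N_OUT; i++)
                send_null_message(&lp->out[i], lp->lvt + lp->lookahead);
        }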

    Optimizing simulation on shared-memory platforms: The smart cities case

    Modern advancements in computing architectures have been accompanied by new emergent paradigms to run Parallel Discrete Event Simulation models efficiently. Indeed, many new paradigms to effectively use the available underlying hardware have been proposed in the literature. Among these, the Share-Everything paradigm targets massively parallel shared-memory machines, supporting speculative simulation while taking into account the limits and benefits of this family of architectures. Previous results have shown how this paradigm outperforms traditional speculative strategies (such as data-separated Time Warp systems) whenever the granularity of executed events is small. In this paper, we show the performance implications of this simulation-engine organization when the simulation models have a variable granularity. To this end, we have selected a traffic model tailored for smart-city-oriented simulation. Our assessment illustrates the effects of the various tuning parameters related to the approach, opening the way to a deeper understanding of this innovative paradigm.
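
    As a rough illustration of the Share-Everything idea discussed above (and only of the idea: the code below is a hypothetical sketch, not the engine described in the paper, which relies on more sophisticated shared data structures), every worker thread draws the globally lowest-timestamp event from a single shared pool rather than from a per-thread partition, which is why the cost of each fetch relative to event granularity matters.

        /* Hypothetical sketch: a single shared, timestamp-ordered event pool
         * consumed by all worker threads.                                   */
        #include <pthread.h>
        #include <stddef.h>

        typedef struct event {
            double        ts;     /* simulation timestamp        */
            int           lp;     /* destination logical process */
            struct event *next;   /* next event in ts order      */
        } event_t;

        static event_t        *shared_pool;   /* head = lowest timestamp */
        static pthread_mutex_t pool_mtx = PTHREAD_MUTEX_INITIALIZER;

        /* Hypothetical model-level handler. */
        extern void execute_event(int lp, double ts);

        static event_t *fetch_lowest(void)
        {
            pthread_mutex_lock(&pool_mtx);
            event_t *e = shared_pool;
            if (e != NULL)
                shared_pool = e->next;
            pthread_mutex_unlock(&pool_mtx);
            return e;
        }

        void *worker(void *arg)
        {
            (void)arg;
            /* With fine-grained events the fetch overhead dominates event
             * execution, which is where this organization pays off or not. */
            for (event_t *e = fetch_lowest(); e != NULL; e = fetch_lowest())
                execute_event(e->lp, e->ts);
            return NULL;
        }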

    Efficient Parallel Statistical Model Checking of Biochemical Networks

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic terms. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions out of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm for generating execution samples; however, three key aspects improve its efficiency. First, the sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimation for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
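
    For concreteness, the confidence-interval step can be illustrated with the textbook Wilson score interval; the specific variant adopted in the paper is not reproduced here, and the sample figures in main() are made up.

        /* Wilson score interval for the estimated probability that sampled
         * executions satisfy a property: k successes out of n samples.     */
        #include <math.h>
        #include <stdio.h>

        static void wilson_interval(unsigned k, unsigned n, double z,
                                    double *lo, double *hi)
        {
            double p      = (double)k / n;
            double z2     = z * z;
            double denom  = 1.0 + z2 / n;
            double center = (p + z2 / (2.0 * n)) / denom;
            double half   = (z / denom)
                          * sqrt(p * (1.0 - p) / n + z2 / (4.0 * n * n));
            *lo = center - half;
            *hi = center + half;
        }

        int main(void)
        {
            /* e.g., 812 of 1000 sampled trajectories satisfied the property */
            double lo, hi;
            wilson_interval(812, 1000, 1.96, &lo, &hi);   /* 95% confidence */
            printf("P in [%.4f, %.4f]\n", lo, hi);
            return 0;
        }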

    Autonomic State Management for Optimistic Simulation Platforms

    We present the design and implementation of an autonomic state manager (ASM) tailored for integration within optimistic parallel discrete event simulation (PDES) environments based on the C programming language and the Executable and Linkable Format (ELF), and developed for execution on x86-64 architectures. With ASM, the state of any logical process (LP), namely the individual (concurrent) simulation unit being part of the simulation model, is allowed to be scattered over dynamically allocated memory chunks managed via the standard API (e.g., malloc/free). Also, the application programmer is not required to provide any serialization/deserialization module in order to take a checkpoint of the LP state, or to restore it in case a causality error occurs during the optimistic run, nor to provide indications on which portions of the state are updated by event processing, so as to allow incremental checkpointing. All these tasks are handled by ASM in a fully transparent manner via (A) runtime identification (with chunk-level granularity) of the memory map associated with the LP state, and (B) runtime tracking of the memory updates occurring within chunks belonging to the dynamic memory map. The coexistence of the incremental and non-incremental log/restore modes is achieved via dual versions of the same application code, transparently generated by ASM via compile/link-time facilities. Also, the dynamic selection of the best-suited log/restore mode is actuated by ASM on the basis of an innovative modeling/optimization approach which takes into account the stability of each operating mode with respect to variations of the model/environmental execution parameters.
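
    The chunk-level idea behind ASM can be conveyed by a small, self-contained sketch. The code below is illustrative only: ASM intercepts the standard malloc/free API and tracks memory updates transparently via compile/link-time instrumentation, whereas here registration and dirty-marking are explicit, and the names (lp_state_map_t, lp_alloc, take_checkpoint) are hypothetical.

        /* Sketch of chunk-level logging for an LP whose state lives in
         * dynamically allocated memory.                                   */
        #include <stdlib.h>
        #include <string.h>

        typedef struct lp_chunk {
            void            *addr;
            size_t           size;
            int              dirty;   /* set when event processing writes it */
            struct lp_chunk *next;
        } lp_chunk_t;

        typedef struct { lp_chunk_t *chunks; } lp_state_map_t;

        /* Allocation wrapper: every chunk of LP state enters the memory map. */
        void *lp_alloc(lp_state_map_t *m, size_t size)
        {
            lp_chunk_t *c = malloc(sizeof(*c));
            c->addr   = malloc(size);
            c->size   = size;
            c->dirty  = 1;
            c->next   = m->chunks;
            m->chunks = c;
            return c->addr;
        }

        /* Incremental checkpoint: copy only chunks touched since the last
         * one into log_buf (assumed large enough); returns bytes written. */
        size_t take_checkpoint(lp_state_map_t *m, void *log_buf)
        {
            char *out = log_buf;
            for (lp_chunk_t *c = m->chunks; c != NULL; c = c->next) {
                if (!c->dirty)
                    continue;
                memcpy(out, c->addr, c->size);
                out += c->size;
                c->dirty = 0;
            }
            return (size_t)(out - (char *)log_buf);
        }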

    Integrating Symbolic and Neural Processing in a Self-Organizing Architecture for Pattern Recognition and Prediction

    British Petroleum (89A-1204); Defense Advanced Research Projects Agency (N00014-92-J-4015); National Science Foundation (IRI-90-00530); Office of Naval Research (N00014-91-J-4100); Air Force Office of Scientific Research (F49620-92-J-0225)

    Thermodynamic Conditions in Quenching Chamber of Low Voltage Circuit Breaker

    This work studies the processes that accompany the extinction of a high-current electric arc inside the quenching chamber of a low-voltage circuit breaker. It is focused on the computation of fluid dynamics and of the thermal field in the vicinity of the electric arc. The work further describes the influence of the spacing and the shapes of the metal plates in the chamber on the aerodynamic conditions inside it. Another objective achieved by this work is to provide information about the influence of the position of the electric arc on the thermodynamic conditions inside the chamber. This is important especially when the arc is drawn into the chamber by other forces, e.g., electromagnetic ones, and changes its shape and position during this drawing-in process. To solve the task as simply and yet as effectively as possible, a software tool was developed specifically for computational fluid dynamics (CFD) based on the finite volume method (FVM). Compared to the more widespread finite element method (FEM), this method is better suited for CFD, mainly because the overhead of computing a single iteration is lower than with other numerical methods. Another advantage of this software solution is its modularity and extensibility: the whole software concept is built on plug-ins, so the computational core can be reused for other numerical analyses, e.g., structural, electromagnetic, etc., the only requirement being to write a solver plug-in (e.g., a finite element solver) for the respective analysis. Since the software is designed as a multi-threaded application, it fully exploits the performance of current multi-core processors. This property becomes even more evident when the computation is moved from the CPU to the GPU: current high-end graphics cards have tens to hundreds of computing cores and work with much faster memory than the CPU, so the simulation performance increases severalfold.
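
    To make the FVM-versus-FEM remark concrete (this is a generic textbook example, not the thesis software), a finite-volume iteration reduces to a flux balance over each cell, as in the following explicit step for 1D heat diffusion:

        /* One explicit finite-volume step for 1D heat diffusion:
         * dT/dt = alpha * d2T/dx2, insulated boundaries assumed.
         * Stability of the explicit scheme requires dt <= dx*dx/(2*alpha). */
        #define N_CELLS 100

        void fvm_diffusion_step(double T[N_CELLS], double alpha,
                                double dx, double dt)
        {
            double flux[N_CELLS + 1];          /* diffusive flux at each face */
            flux[0] = flux[N_CELLS] = 0.0;     /* no flux through boundaries  */

            for (int f = 1; f < N_CELLS; f++)  /* flux across interior faces  */
                flux[f] = -alpha * (T[f] - T[f - 1]) / dx;

            for (int i = 0; i < N_CELLS; i++)  /* net flux balance per cell   */
                T[i] -= dt * (flux[i + 1] - flux[i]) / dx;
        }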

    A load-sharing architecture for high performance optimistic simulations on multi-core machines

    In Parallel Discrete Event Simulation (PDES), the simulation model is partitioned into a set of distinct Logical Processes (LPs) which are allowed to concurrently execute simulation events. In this work we present an innovative approach to load-sharing on multi-core/multiprocessor machines, targeted at the optimistic PDES paradigm, where LPs are speculatively allowed to process simulation events with no preventive verification of causal consistency, and actual consistency violations (if any) are recovered via rollback techniques. In our approach, each simulation kernel instance, in charge of hosting and executing a specific set of LPs, runs a set of worker threads, which can be dynamically activated/deactivated on the basis of a distributed algorithm. The latter relies in turn on an analytical model that provides indications on how to reassign processor/core usage across the kernels in order to handle the simulation workload as efficiently as possible. We also present a real implementation of our load-sharing architecture within the ROme OpTimistic Simulator (ROOT-Sim), an open-source, C-based simulation platform implemented according to the PDES paradigm and the optimistic synchronization approach. Experimental results assessing the validity of our proposal are presented as well.
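
    The paper's distributed algorithm and analytical model are not reproduced here, but the core reassignment step can be sketched as a simple proportional-share policy: each kernel instance gets a number of worker threads proportional to its recently observed workload, with rounding leftovers handed to the busiest kernel. All names and the policy itself are illustrative assumptions.

        /* Proportional worker-thread reassignment across kernel instances. */
        #include <math.h>

        #define N_KERNELS 4

        /* workload[k]: e.g., CPU demand observed for the LPs hosted by
         * kernel k in the last window; workers[k]: worker threads that
         * kernel k should keep active in the next window.                */
        void reassign_workers(const double workload[N_KERNELS],
                              int total_cores, int workers[N_KERNELS])
        {
            double total = 0.0;
            for (int k = 0; k < N_KERNELS; k++)
                total += workload[k];

            int assigned = 0;
            for (int k = 0; k < N_KERNELS; k++) {
                double share = (total > 0.0) ? workload[k] / total
                                             : 1.0 / N_KERNELS;
                workers[k] = (int)floor(total_cores * share);
                assigned  += workers[k];
            }

            /* Cores lost to rounding go to the most loaded kernel; a kernel
             * with no recent workload keeps no active workers for now.    */
            int busiest = 0;
            for (int k = 1; k < N_KERNELS; k++)
                if (workload[k] > workload[busiest])
                    busiest = k;
            workers[busiest] += total_cores - assigned;
        }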

    A fine-grain time-sharing Time Warp system

    Although Parallel Discrete Event Simulation (PDES) platforms relying on the Time Warp (optimistic) synchronization protocol already allow for exploiting parallelism, several techniques have been proposed to further improve performance. Among them we can mention optimized approaches for state restore, as well as techniques for load balancing or for (dynamically) controlling the speculation degree, the latter being specifically targeted at reducing the incidence of causality errors that lead to wasted computation. However, in state-of-the-art Time Warp systems, event processing is not preemptable, which may prevent promptly reacting to the injection of higher-priority (i.e., lower-timestamp) events. Delaying the processing of these events may, in turn, give rise to a higher incidence of incorrect speculation. In this article we present the design and realization of a fine-grain time-sharing Time Warp system, to be run on multi-core Linux machines, which makes systematic use of event preemption in order to dynamically reassign the CPU to higher-priority events/tasks. Our proposal is based on a truly dual-mode execution, application vs. platform, which includes timer-interrupt-based support for bringing control back to platform mode for possible CPU reassignment at very fine-grained periods. The latter facility is offered by an ad-hoc timer-interrupt management module for Linux, which we release, together with the overall time-sharing support, within the open-source ROOT-Sim platform. An experimental assessment based on the classical PHOLD benchmark and two real-world models is presented, showing how our proposal effectively reduces the incidence of causality errors compared to traditional Time Warp, especially when running with higher degrees of parallelism.
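
    The actual ROOT-Sim support relies on an ad-hoc kernel-level timer-interrupt module, which is not shown here; the following portable, user-level approximation (using setitimer() and the hypothetical engine hooks lowest_pending_ts() and yield_to_platform()) only conveys the control-flow idea: a periodic tick brings execution back to platform mode, where the current event can be preempted if a lower-timestamp one is pending.

        /* User-level sketch of fine-grain time-sharing via a periodic timer. */
        #include <signal.h>
        #include <string.h>
        #include <sys/time.h>

        static volatile sig_atomic_t platform_tick = 0;

        static void on_tick(int sig) { (void)sig; platform_tick = 1; }

        /* Hypothetical engine hooks. */
        extern double lowest_pending_ts(void);
        extern void   yield_to_platform(void);

        void install_fine_grain_timer(long period_us)
        {
            struct sigaction sa;
            memset(&sa, 0, sizeof(sa));
            sa.sa_handler = on_tick;
            sigaction(SIGALRM, &sa, NULL);

            struct itimerval it;
            it.it_interval.tv_sec  = 0;
            it.it_interval.tv_usec = period_us;
            it.it_value = it.it_interval;
            setitimer(ITIMER_REAL, &it, NULL);
        }

        /* Called at safe points while processing an event with timestamp
         * current_ts: if the timer fired and a lower-timestamp event is
         * pending, hand the CPU back to the platform.                    */
        void maybe_preempt(double current_ts)
        {
            if (!platform_tick)
                return;
            platform_tick = 0;
            if (lowest_pending_ts() < current_ts)
                yield_to_platform();
        }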