
    Going through Rough Times: from Non-Equilibrium Surface Growth to Algorithmic Scalability

    Efficient and faithful parallel simulation of large asynchronous systems is a challenging computational problem. It requires using the concept of local simulated times and a synchronization scheme. We study the scalability of massively parallel algorithms for discrete-event simulations which employ conservative synchronization to enforce causality. We do this by looking at the simulated time horizon as a complex evolving system, and we identify its universal characteristics. We find that the time horizon for the conservative parallel discrete-event simulation scheme exhibits Kardar-Parisi-Zhang-like kinetic roughening. This implies that the algorithm is asymptotically scalable in the sense that the average progress rate of the simulation approaches a non-zero constant. It also implies, however, that there are diverging memory requirements associated with such schemes. Comment: to appear in the Proceedings of the MRS, Fall 200
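
    The kinetic-roughening picture described above can be reproduced with a few lines of code. Below is a minimal sketch (not the authors' implementation) of the simple update rule analysed in this line of work: each processing element (PE) on a ring advances its local virtual time only when it does not run ahead of either neighbour, by an exponentially distributed event time. All parameter names are illustrative.

```python
import numpy as np

def conservative_pdes_horizon(num_pes=1000, steps=10_000, seed=0):
    """Toy model of the virtual time horizon in conservative PDES.

    A PE may advance its local simulated time tau[i] only if it is not
    ahead of its two ring neighbours (the causality constraint); the
    increment is an exponential random event time.  Returns the mean
    utilization (fraction of PEs advancing per step) and the final
    width (spread) of the time horizon.
    """
    rng = np.random.default_rng(seed)
    tau = np.zeros(num_pes)
    utilization = np.empty(steps)
    for t in range(steps):
        left, right = np.roll(tau, 1), np.roll(tau, -1)
        can_advance = (tau <= left) & (tau <= right)       # local minima may move
        tau = tau + can_advance * rng.exponential(1.0, size=num_pes)
        utilization[t] = can_advance.mean()
    width = np.sqrt(np.mean((tau - tau.mean()) ** 2))      # KPZ-like roughness
    return utilization.mean(), width

if __name__ == "__main__":
    u, w = conservative_pdes_horizon()
    print(f"mean utilization ~ {u:.3f}, final width ~ {w:.1f}")
```

    In this toy model the utilization settles at a nonzero constant (the asymptotic scalability claimed above), while the width of the time horizon keeps growing with system size, which is the diverging memory requirement the abstract points to.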

    The treatment of time in distributed simulation

    Simulation is one of the most important tools for analysing, designing, and operating complex processes and systems. It allows us to proceed by trial and error in order to understand a system and describe a problem, so it is of great interest to make simulation easy and practical to use. The advent of parallel processors and languages helps simulation studies. A recent trend is distributed simulation, often called distributed discrete-event simulation, because it has great potential for speed-up. This thesis will survey discrete-event simulation and examine one particular algorithm. It will first survey simulation in general and then distributed simulation. Distributed simulation has broadly two mechanisms, conservative and optimistic, which treat time differently; we will look into both. Finally, we will examine the conservative mechanism on a network of transputers using Occam, and conclude with the results of the experiments and a perspective on distributed simulation.
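
    As a concrete illustration of how the conservative mechanism treats time, here is a minimal sketch of a logical process (LP) that only executes events up to the minimum clock value seen on its input channels, in the spirit of Chandy-Misra-Bryant style synchronization. The class and method names are illustrative and not taken from the thesis.

```python
import heapq
import itertools

class ConservativeLP:
    """A logical process that never violates causality: an event is
    executed only when its timestamp does not exceed the smallest clock
    announced on any input channel (the 'safe time')."""

    def __init__(self, name, input_channels):
        self.name = name
        self.clock = 0.0
        self.pending = []                                  # (timestamp, seq, event)
        self.channel_clock = {c: 0.0 for c in input_channels}
        self._seq = itertools.count()                      # heap tie-breaker

    def receive(self, channel, timestamp, event):
        # Messages arrive on each channel in timestamp order, so the channel
        # clock is a lower bound on anything still to come on that channel.
        self.channel_clock[channel] = max(self.channel_clock[channel], timestamp)
        heapq.heappush(self.pending, (timestamp, next(self._seq), event))

    def safe_time(self):
        return min(self.channel_clock.values())

    def process_safe_events(self, handler):
        # Conservative rule: execute only events that can no longer be
        # preceded by a message on some lagging input channel.
        while self.pending and self.pending[0][0] <= self.safe_time():
            self.clock, _, event = heapq.heappop(self.pending)
            handler(self.clock, event)
```

    When no pending event is safe the LP simply blocks, which is why conservative schemes need null messages or deadlock handling; the optimistic mechanism instead executes events speculatively and rolls back when a straggler message arrives.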

    A New Mathematical Model for Optimizing the Performance of Parallel and Discrete Event Simulation Systems

    The null message algorithm is an important conservative time-management protocol in parallel discrete-event simulation systems: it provides synchronization between the distributed computers and can both avoid and resolve deadlock. However, the excessive generation of null messages prevents the widespread use of this algorithm. This excess results from an improper choice of critical parameters such as the frequency of transmission and the lookahead values. If the generation of null messages could be minimized, most parallel discrete-event simulation systems would be likely to take advantage of the algorithm to gain increased system throughput and minimal transmission delays. In this paper, a new mathematical model for optimizing the performance of parallel and distributed simulation systems is proposed. The proposed model utilizes various optimization techniques, such as variance of null message elimination, to improve the performance of parallel and distributed simulation systems. For the simulation experiments, we consider both uniform and non-uniform distributions of lookahead values across the multiple output lines of an LP. Our experimental verification demonstrates that an optimized null message algorithm (NMA) offers better scalability in parallel discrete-event simulation systems when it is used with a proper selection of these critical parameters.
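
    To make the two critical parameters concrete, here is a minimal sketch (not the paper's model) of null message generation with a per-line lookahead and a simple throttle on how often guarantees are refreshed. All names are illustrative.

```python
def null_messages(clock, output_lines, lookahead, last_promise, min_gain=0.0):
    """A null message on an output line is a promise that the sending LP
    will emit no event on that line earlier than clock + lookahead.
    `lookahead` maps each line to its (possibly non-uniform) lookahead,
    `last_promise` remembers the last guarantee sent per line, and
    `min_gain` caricatures the transmission-frequency knob: larger
    values (or larger lookaheads) mean fewer null messages, at the cost
    of neighbours waiting longer before they can advance."""
    messages = []
    for line in output_lines:
        guarantee = clock + lookahead.get(line, 0.0)
        if guarantee - last_promise.get(line, float("-inf")) > min_gain:
            messages.append((line, guarantee))             # timestamp only, no payload
            last_promise[line] = guarantee
    return messages

# Example: non-uniform lookahead across the two output lines of one LP.
promises = {}
print(null_messages(clock=5.0, output_lines=["a", "b"],
                    lookahead={"a": 1.0, "b": 4.0}, last_promise=promises))
```

    The optimization problem the abstract describes is essentially choosing these knobs so that enough guarantees flow to keep neighbouring LPs busy while the null message traffic itself stays small.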

    Suppressing Roughness of Virtual Times in Parallel Discrete-Event Simulations

    In a parallel discrete-event simulation (PDES) scheme, tasks are distributed among processing elements (PEs), whose progress is controlled by a synchronization scheme. For lattice systems with short-range interactions, the progress of the conservative PDES scheme is governed by the Kardar-Parisi-Zhang equation from the theory of non-equilibrium surface growth. Although the simulated (virtual) times of the PEs progress at a nonzero rate, their standard deviation (spread) diverges with the number of PEs, hindering efficient data collection. We show that weak random interactions among the PEs can make this spread nondivergent. The PEs then progress at a nonzero, near-uniform rate without requiring global synchronizations.
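
    A minimal way to see the effect is to extend the conservative ring model sketched under the first entry: give every PE one fixed, randomly chosen partner and make it occasionally wait for that partner too. The exact interaction rule used in the paper may differ; this sketch only assumes a weak, occasional pairwise check.

```python
import numpy as np

def small_world_pdes_horizon(num_pes=1000, steps=10_000, p=0.1, seed=0):
    """Conservative ring model with weak random couplings: with
    probability p a PE must also not be ahead of one quenched random
    partner before it may advance.  Returns the mean utilization and
    the final spread of the virtual times."""
    rng = np.random.default_rng(seed)
    tau = np.zeros(num_pes)
    partner = rng.permutation(num_pes)                     # one random link per PE
    utilization = np.empty(steps)
    for t in range(steps):
        left, right = np.roll(tau, 1), np.roll(tau, -1)
        ok = (tau <= left) & (tau <= right)
        check = rng.random(num_pes) < p                    # weak, occasional coupling
        ok &= ~check | (tau <= tau[partner])
        tau = tau + ok * rng.exponential(1.0, size=num_pes)
        utilization[t] = ok.mean()
    spread = np.sqrt(np.mean((tau - tau.mean()) ** 2))
    return utilization.mean(), spread
```

    In this toy version the progress rate stays nonzero while the spread saturates instead of growing with the number of PEs, mirroring the nondivergent spread described in the abstract.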

    Facilitating the analysis of a UK national blood service supply chain using distributed simulation

    In an attempt to investigate blood-unit ordering policies, researchers have created a discrete-event model of the UK National Blood Service (NBS) supply chain in the Southampton area of the UK. The model was created using Simul8, a commercial off-the-shelf discrete-event simulation package (CSP). However, as more hospitals were added to the model, it was discovered that the time needed to perform a single simulation run increased severely. It has been claimed that distributed simulation, a technique that uses the resources of many computers to execute a simulation model, can reduce simulation runtime. Further, an emerging standardized approach exists that supports distributed simulation with CSPs. These CSP Interoperability (CSPI) standards are compatible with the IEEE 1516 High Level Architecture (HLA), the de facto interoperability standard for distributed simulation. To investigate whether distributed simulation can reduce the execution time of the NBS supply chain simulation, this paper presents experiences of creating a distributed version of the CSP Simul8 according to the CSPI/HLA standards. It shows that the distributed version of the simulation does indeed run faster once the model reaches a certain size. Further, we argue that understanding the relationship between model features and performance is key. This is illustrated by experimentation with two different protocol implementations, one using Time Advance Request (TAR) and one using Next Event Request (NER). Our contribution is therefore the demonstration that distributed simulation is a useful technique for the timely execution of supply chain simulations of this type, and that careful analysis of model features can further increase performance.
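
    The TAR/NER distinction mentioned above boils down to how a federate asks for simulated time: in fixed steps, or by jumping to the next event. The sketch below is purely schematic Python, not the HLA RTI API, and the helper names are invented for illustration.

```python
def tar_style_grants(event_times, step, horizon):
    """Time Advance Request pattern: the federate requests fixed-size
    advances, so a sparse event stream still costs one grant per step."""
    grants, now = [], 0.0
    pending = sorted(event_times)
    while now < horizon:
        now = min(now + step, horizon)
        grants.append(now)
        while pending and pending[0] <= now:
            pending.pop(0)                                 # events processed at this grant
    return grants

def ner_style_grants(event_times, horizon):
    """Next Event Request pattern: the federate jumps straight to the
    timestamp of the next event, so the number of grants tracks the
    number of events rather than the number of time steps."""
    return [t for t in sorted(event_times) if t <= horizon]

if __name__ == "__main__":
    events = [3.7, 40.2, 95.5]
    print(len(tar_style_grants(events, step=1.0, horizon=100.0)), "TAR grants")
    print(len(ner_style_grants(events, horizon=100.0)), "NER grants")
```

    Which pattern is cheaper depends on model features such as how dense the events are relative to the time step, which is exactly the kind of relationship the paper argues must be understood to obtain good performance.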

    A methodology for the decomposition of discrete event models for parallel simulation

    Parallel simulation has raised the possibility of performing high-speed simulation. However, when attempting to link the requirements of parallel simulation with the discrete-event simulation used in commercial areas such as manufacturing, a major problem arises: the decomposition of the simulation into a series of concurrently executing objects. Using the activity cycle diagram simulation technique as an illustrative example, this paper suggests a solution to this decomposition problem. This is discussed within the context of providing a conceptually seamless methodology for translating simulation models into a form which can exploit the benefits of parallel computing.
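
    For illustration only, here is one naive way to express the decomposition problem in code: treat each activity of an activity cycle diagram as a candidate concurrently executing object and derive message channels from the queues the activities share. This is a hypothetical sketch of the problem, not the methodology the paper proposes.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    inputs: list       # queues the activity takes entities from
    outputs: list      # queues it releases entities into

@dataclass
class LogicalProcess:
    name: str
    activities: list = field(default_factory=list)

def decompose_by_activity(activities):
    """Naive decomposition: every activity becomes its own logical
    process, and every queue shared by activities placed in different
    processes becomes a message channel between them."""
    lps = [LogicalProcess(a.name, [a]) for a in activities]
    channels = set()
    for src in activities:
        for dst in activities:
            if src is not dst and set(src.outputs) & set(dst.inputs):
                channels.add((src.name, dst.name))
    return lps, sorted(channels)

if __name__ == "__main__":
    # Toy manufacturing cycle: machine -> inspect -> (rework or ship).
    acd = [
        Activity("machine", ["raw"], ["machined"]),
        Activity("inspect", ["machined"], ["good", "reject"]),
        Activity("rework",  ["reject"], ["machined"]),
    ]
    lps, channels = decompose_by_activity(acd)
    print([lp.name for lp in lps], channels)
```

    Even this trivial partition exposes the difficulty the abstract raises: the granularity of the objects and the traffic on the induced channels determine whether the parallel version is worthwhile.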