143 research outputs found

    Simulation Software as a Service and Service-Oriented Simulation Experiment

    Simulation software is increasingly used across domains for system analysis and behavior prediction. Traditionally, researchers and field experts need access to the computers that host the simulation software in order to carry out simulation experiments. With recent advances in cloud computing and Software as a Service (SaaS), a new paradigm is emerging in which simulation software is offered as services that are composed with one another and dynamically influence each other to carry out service-oriented simulation experiments over the Internet. This service-oriented paradigm raises new research challenges in composing multiple simulation services in a meaningful and correct way. To systematically support simulation software as a service (SimSaaS) and service-oriented simulation experiments, we propose a layered framework with five layers: an infrastructure layer, a simulation execution engine layer, a simulation service layer, a simulation experiment layer, and a graphical user interface layer. Within this framework, we provide a specification for both the simulation experiment and the individual simulation services involved. Such a formal specification supports systematic composition of simulation services as well as automatic deployment of composed services for carrying out simulation experiments. Building on this specification, we identify the problem of mismatched time granularity and event granularity when composing simulation services at the pragmatic level, and develop four types of granularity handling agents to be associated with the couplings between services. The ultimate goal is standardized, automated approaches for simulation service composition in the emerging service-oriented computing environment. Finally, to make service-oriented simulation more efficient, we develop a profile-based partitioning method that captures a system's dynamic behavior as a profile and uses it to guide spatial partitioning for more efficient parallel simulation. The work in this dissertation is developed within the application context of wildfire spread simulation, and its effectiveness is demonstrated in that application.
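
    The coupling-level granularity handling described above can be illustrated with a small sketch. The names below (TimeGranularityAgent, on_fine_event) and the aggregation semantics are illustrative assumptions only, not the dissertation's actual interfaces; the sketch merely shows how an adapter placed on a coupling could down-sample events from a fine-grained producer service to a coarser-grained consumer.

        # Hypothetical time-granularity handling agent placed on the coupling
        # between two simulation services that use different time steps.
        class TimeGranularityAgent:
            def __init__(self, fine_dt, coarse_dt, reduce=max):
                assert coarse_dt % fine_dt == 0, "steps must be commensurate"
                self.ratio = coarse_dt // fine_dt
                self.reduce = reduce      # how buffered fine events are merged
                self.buffer = []

            def on_fine_event(self, value):
                """Called once per fine step; returns a coarse event when due."""
                self.buffer.append(value)
                if len(self.buffer) == self.ratio:
                    merged = self.reduce(self.buffer)
                    self.buffer.clear()
                    return merged         # forward this to the coarse service
                return None               # nothing to forward yet

        # Example: a 1 s fire-spread service feeding a 60 s weather service.
        agent = TimeGranularityAgent(fine_dt=1, coarse_dt=60, reduce=max)
        for t in range(120):
            out = agent.on_fine_event(value=t % 7)
            if out is not None:
                print(f"t={t + 1:3d} s -> coarse event {out}")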

    Migrating to a real-time distributed parallel simulator architecture

    The South African National Defence Force (SANDF) currently requires a system-of-systems simulation capability to support the different phases of a Ground Based Air Defence System (GBADS) acquisition program. A non-distributed, fast-as-possible simulator and its architectural predecessors, developed by the Council for Scientific and Industrial Research (CSIR), were able to provide the required capability during the concept and definition phases of the acquisition life cycle. The non-distributed simulator implements a 100 Hz logical time Discrete Time System Specification (DTSS) in support of the existing models. However, real-time simulation execution has become a prioritised requirement to support the development phase of the acquisition life cycle. This dissertation is about the ongoing migration of the non-distributed simulator to a practical simulation architecture that supports the real-time requirement. The simulator simulates a synthetic environment inhabited by interacting GBAD systems and hostile airborne targets. The non-distributed simulator was parallelised across multiple Commodity Off The Shelf (COTS) PC nodes connected by a commercial Gigabit Ethernet infrastructure. Since model reuse was important for cost effectiveness, it was decided to reuse all the existing models by retaining their 100 Hz logical time DTSSs. The large-scale, event-based High Level Architecture (HLA), an IEEE standard for large-scale distributed simulation interoperability, had been identified as the most suitable distribution and parallelisation technology. However, two categories of risk in migrating directly to the HLA were identified, and the choice was made, with motivations, to mitigate them by developing a specialised custom distributed architecture. This dissertation describes and analyses the custom discrete-time, distributed, peer-to-peer, message-passing architecture built by the author in support of the parallelised simulator requirements, and reports on empirical studies of its performance and flexibility. The architecture is shown to be a suitable and cost-effective distributed simulator architecture, supporting a speed-up of three to four times through parallelisation of the 100 Hz logical time DTSS. This distributed architecture is currently in use and working as expected, but it results in a parallelisation speed-up ceiling irrespective of the number of distributed processors. In addition, a hybrid discrete-time/discrete-event modelling approach and simulator is proposed that lowers the distributed communication and time synchronisation overhead, improving the scalability of the discrete-time simulator while still economically reusing the existing models. The proposed hybrid architecture was implemented and its real-time performance analysed. The hybrid architecture is found to support a parallelisation speed-up that is not bounded but linearly related to the number of distributed processors, up to at least the 11 processing nodes available for experimentation. Dissertation (MSc), University of Pretoria, 2009. Computer Science.
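
    As a rough illustration of the lockstep, discrete-time, peer-to-peer message-passing scheme described above, the sketch below runs two in-process "nodes" that publish their state every 10 ms logical tick and block until they have received the same tick from every peer. The class names and wiring are assumptions made for illustration; the actual architecture distributes the nodes across COTS PCs on Gigabit Ethernet and supports more than two peers, which requires per-tick message buffering omitted here.

        import threading
        from queue import Queue

        DT = 0.01  # 100 Hz logical time step

        class Node(threading.Thread):
            def __init__(self, name, n_ticks):
                super().__init__()
                self.name, self.n_ticks = name, n_ticks
                self.inbox = Queue()
                self.peers = []           # wired up after construction
                self.state = 0.0

            def run(self):
                for tick in range(self.n_ticks):
                    self.state += DT                    # advance local 100 Hz DTSS model
                    for peer in self.peers:             # publish this tick's state
                        peer.inbox.put((tick, self.name, self.state))
                    for _ in self.peers:                # barrier: wait for every peer's tick
                        msg_tick, _, _ = self.inbox.get()
                        assert msg_tick == tick         # holds for two nodes (FIFO per link)

        a, b = Node("A", 300), Node("B", 300)   # 300 ticks = 3 s of logical time
        a.peers, b.peers = [b], [a]
        a.start(); b.start(); a.join(); b.join()
        print(f"logical time reached: {a.state:.2f} s")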

    IJMSSC

    DEVS is a sound Modeling and Simulation (M&S) framework that describes a model in a modular and hierarchical way. It comes with an abstract simulation algorithm that defines its operational semantics, and many variants of this algorithm have been proposed by DEVS researchers. Yet the proper interpretation and analysis of the computational complexity of these approaches have not been systematically addressed. As systems become larger and more complex, the efficiency of DEVS simulation algorithms in terms of time complexity becomes a major issue, so it is necessary to devise a method for computing this complexity. This paper proposes a generic method to address this issue, taking advantage of the recursion embedded in the triggered-by-message principle of the DEVS simulation protocol. The applicability of the method is shown through the complexity analysis of various DEVS simulation algorithms.
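
    As an illustration of the kind of recurrence such a recursion-based analysis yields (illustrative notation, not the paper's exact formulation), the cost of handling one simulation message at a node of the abstract simulator hierarchy can be written as

        T(\text{atomic } a)  = c_a
        T(\text{coupled } N) = c_N + \sum_{d \in \mathrm{act}(N)} T(d)

    where act(N) denotes the children to which the coordinator of N forwards the message. For a flat coupled model with n atomic components this unfolds to O(n) work per message sweep; for a balanced hierarchy of depth k and branching factor b it adds O((b^k - 1)/(b - 1)) coordinator visits on top of the b^k leaf transitions.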

    A Quantised State Systems Approach Towards Declarative Autonomous Control


    xDEVS: A toolkit for interoperable modeling and simulation of formal discrete event systems

    Employing Modeling and Simulation (M&S) extensively to analyze and develop complex systems is the norm today, and the use of robust M&S formalisms and rigorous methodologies is essential to deal with complexity. Among them, the Discrete Event System Specification (DEVS) provides a solid framework for modeling the structural, behavioral, and informational aspects of any complex system. This gives several advantages for analyzing and designing complex systems: completeness, verifiability, extensibility, and maintainability. The DEVS formalism has been implemented in many programming languages and is executable on multiple platforms. In this paper, we describe the features of an M&S framework called xDEVS that builds upon the prevalent DEVS Application Programming Interface (API) for both the modeling and simulation layers, promoting interoperability between existing platform-specific (C++, Java, Python) DEVS implementations. Additionally, the framework can simulate the same model using sequential, parallel, or distributed architectures. The M&S engine has been reinforced with several strategies to improve performance, as well as tools for model analysis and verification. Finally, xDEVS also helps systems engineers apply the model-based systems engineering (MBSE), model-driven engineering (MDE), and model-driven systems engineering (MDSE) paradigms. We highlight the features of the proposed xDEVS framework with multiple examples and case studies illustrating the rigor and diversity of application domains it can support.
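
    For readers unfamiliar with the API style such DEVS toolkits expose, the following minimal sketch shows a DEVS-style atomic model (a periodic job generator) and a trivial root coordinator. It is written against a hypothetical interface chosen for brevity; the actual xDEVS class and method names in C++, Java, or Python differ.

        # Hypothetical, minimal DEVS-style atomic model (not the actual xDEVS API):
        # a generator that emits a "job" every `period` time units.
        class Generator:
            def __init__(self, period):
                self.period = period
                self.sigma = period       # time until the next internal event
                self.count = 0

            def ta(self):                 # time advance function
                return self.sigma

            def lambdaf(self):            # output function (called before delta_int)
                return ("job", self.count)

            def delta_int(self):          # internal transition
                self.count += 1
                self.sigma = self.period

            def delta_ext(self, e, msg):  # external transition (e.g. a "stop" input)
                if msg == "stop":
                    self.sigma = float("inf")
                else:
                    self.sigma -= e       # keep the pending schedule

        # A tiny root coordinator loop for a single atomic model.
        def simulate(model, until):
            t = 0.0
            while t + model.ta() <= until:
                t += model.ta()
                print(f"t={t:4.1f}  output={model.lambdaf()}")
                model.delta_int()

        simulate(Generator(period=2.0), until=10.0)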

    MECSYCO: a Multi-agent DEVS Wrapping Platform for the Co-simulation of Complex Systems

    Most modeling and simulation (M&S) questions about complex systems require taking several points of view into account simultaneously. Phenomena evolving at different scales and at different levels of resolution have to be considered, and expert skills belonging to different scientific fields are needed. The challenges are then to reconcile these heterogeneous points of view, and to integrate each domain's tools (formalisms and simulation software) within the rigorous framework of the M&S process. To address this issue, we propose the specification of the MECSYCO co-simulation middleware. MECSYCO relies on the universality of the DEVS formalism to integrate models written in different formalisms. This integration is based on a wrapping strategy that makes models implemented in different simulation software interoperable. The middleware performs the co-simulation in a parallel, decentralized, and distributable fashion thanks to its modular multi-agent architecture. We detail how MECSYCO performs hybrid co-simulations by integrating, in a generic way, already-implemented continuous models thanks to the FMI standard, the DEV&DESS formalism, and the QSS method. The DEVS wrapping of FMI that we propose is not restricted to MECSYCO but can be performed in any DEVS-based platform. We show the modularity and genericity of our approach through the iterative M&S of a smart heating system. Compared to other works in the literature, our proposal is generic thanks to the strong foundations of DEVS and the unifying features of the FMI standard, while being fully specified from the concepts down to their implementations.
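
    The DEVS wrapping of an FMI co-simulation component can be pictured with the sketch below. The FMU interface used here (set, get, do_step) is a simplified stand-in, not the FMI C API or the MECSYCO wrapper classes: the point is only that the FMU's fixed communication step maps onto DEVS internal transitions, while incoming DEVS events are written onto FMU input variables in the external transition.

        # Toy stand-in for an FMU: a first-order room-heating model with the
        # (assumed) set/get/do_step interface used by the wrapper below.
        class ToyFmu:
            def __init__(self):
                self.vars = {"T_room": 15.0, "P_heater": 0.0}
            def set(self, name, value):
                self.vars[name] = value
            def get(self, name):
                return self.vars[name]
            def do_step(self, t, h):
                T, P = self.vars["T_room"], self.vars["P_heater"]
                self.vars["T_room"] = T + h * (0.1 * P - 0.05 * (T - 10.0))

        # DEVS-style wrapper: one internal transition per FMU communication step.
        class FmuWrapper:
            def __init__(self, fmu, comm_step):
                self.fmu, self.h = fmu, comm_step
                self.t, self.sigma = 0.0, comm_step
            def ta(self):
                return self.sigma
            def lambdaf(self):
                return {"T_room": self.fmu.get("T_room")}
            def delta_int(self):
                self.fmu.do_step(self.t, self.h)    # advance the continuous model
                self.t += self.h
                self.sigma = self.h
            def delta_ext(self, e, inputs):
                for name, value in inputs.items():  # map DEVS events to FMU inputs
                    self.fmu.set(name, value)
                self.sigma -= e                     # keep the pending communication point

        wrapper = FmuWrapper(ToyFmu(), comm_step=1.0)
        wrapper.delta_ext(0.0, {"P_heater": 5.0})   # heater switched on at t=0
        for _ in range(5):
            wrapper.delta_int()
        print(wrapper.t, wrapper.lambdaf())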

    Design of medium access control techniques for cooperative wireless networks

    Ph.D. (Doctor of Philosophy)

    Techniques for Transparent Parallelization of Discrete Event Simulation Models

    Simulation is a powerful technique for representing the evolution of real-world phenomena or systems over time. It has been used extensively in different research fields (from medicine to biology, economics, and disaster rescue) to study the behaviour of complex systems during their evolution (symbiotic simulation) or before their actual realization (what-if analysis). A traditional way to achieve high-performance simulation is to employ Parallel Discrete Event Simulation (PDES) techniques, which are based on partitioning the simulation model into Logical Processes (LPs) that can execute events in parallel on different CPUs and/or different CPU cores, relying on synchronization mechanisms to achieve causally consistent execution of simulation events. As is well recognized, the optimistic synchronization approach, namely the Time Warp protocol, which recovers possible timestamp-order violations through rollback rather than block-until-safe policies for event processing, is likely to favour speedup in general application/architectural contexts. However, the optimistic PDES paradigm implicitly relies on a programming model that shifts away from traditional sequential-style programming, given that there is no notion of a global address space fully accessible while processing events at any LP. Furthermore, there is the underlying assumption that the code associated with event handlers cannot execute unrecoverable operations, given their speculative processing nature. Even so, although no unrecoverable action is ever executed by event handlers, a means to undo an action when requested must be devised and implemented within the software stack. On the other hand, sequential-style programming is an easy paradigm for developing simulation code, since it does not require the programmer to reason about memory partitioning (and therefore message passing) or speculative (concurrent) processing of the application. In this thesis, we present methodological and technical innovations showing how, through innovative runtime mechanisms, a programmer can implement a simulation model in a fully sequential way and have the underlying simulation framework execute it in parallel using speculative processing techniques. Some of the approaches we provide are applicable to both shared- and distributed-memory systems, while others are specifically tailored to multi/many-core architectures. While developing these supports, we clearly show the performance effect of these solutions, which turns out to be negligible, allowing fruitful exploitation of the available computing power. Finally, we highlight the clear benefits for the programming model.
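
    The rollback machinery that optimistic synchronisation relies on can be sketched in a few lines. The sketch below is a deliberately simplified, hypothetical single-LP view (no anti-messages, no GVT-based fossil collection, and rolled-back events are not re-enqueued); it only shows the core idea of checkpointing state before each speculative event and restoring it when a straggler arrives.

        import copy

        class LogicalProcess:
            def __init__(self, handler, state):
                self.handler = handler      # user event handler: (state, event) -> None
                self.state = state
                self.lvt = 0.0              # local virtual time
                self.processed = []         # [(timestamp, event, state saved BEFORE the event)]

            def receive(self, ts, event):
                if ts < self.lvt:
                    self.rollback(ts)       # straggler: undo speculative work
                self.processed.append((ts, event, copy.deepcopy(self.state)))
                self.handler(self.state, event)
                self.lvt = ts

            def rollback(self, ts):
                # Pop every event processed at or after the straggler's timestamp
                # and restore the checkpoint taken just before the oldest of them.
                while self.processed and self.processed[-1][0] >= ts:
                    _, _, saved = self.processed.pop()
                    self.state = saved
                self.lvt = self.processed[-1][0] if self.processed else 0.0

        # Example: a counter; the straggler at t=5.0 rolls back the t=7.0 event.
        lp = LogicalProcess(lambda s, ev: s.__setitem__("n", s["n"] + ev), {"n": 0})
        for ts, ev in [(1.0, 1), (3.0, 1), (7.0, 1), (5.0, 1)]:
            lp.receive(ts, ev)
        print(lp.lvt, lp.state)   # 5.0 {'n': 3}  (the t=7.0 event was undone)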