
    Running real time distributed simulations under Linux and CERTI

    This paper presents experiments and results on enforcing real-time distributed simulations in accordance with the High Level Architecture (HLA). Simulations were run using CERTI, an open source middleware, as the Run Time Infrastructure (RTI). Models were distributed across computers running various versions of the 2.6 Linux kernel. The studies and experiments relied on a real case study: the simulation of an "in formation" flight of observation satellites. This case study raises real application needs in real-time distributed simulation as well as realistic configurations of simulators and models. Two simulations of the "in formation" flight of satellites were studied. The work consisted of modeling the behaviour of the simulators and running these models using various kernel and middleware mechanisms and services. Time measurements were performed for each test, giving results on the ability of the simulation to meet its real-time requirements.
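
    The kind of end-to-end timing check described above can be illustrated with a small, self-contained sketch (not the paper's code, and independent of CERTI): each simulated step is timed against its real-time budget, overruns are counted, and the loop otherwise sleeps to pace the simulation to wall-clock time. The model stand-in and step length are hypothetical.

```python
import time

def measure_real_time(step_fn, dt, steps):
    """Count how many steps of simulated length dt fail to complete within dt of wall-clock time."""
    overruns = 0
    for _ in range(steps):
        start = time.perf_counter()
        step_fn(dt)                       # advance the model by one step (placeholder)
        elapsed = time.perf_counter() - start
        if elapsed > dt:
            overruns += 1                 # missed the real-time deadline for this step
        else:
            time.sleep(dt - elapsed)      # pace the simulation to real time
    return overruns

if __name__ == "__main__":
    # Trivial stand-in model: a fixed computation per 10 ms step.
    missed = measure_real_time(lambda dt: sum(range(10_000)), dt=0.01, steps=100)
    print(f"missed deadlines: {missed}/100")
```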

    Time granularity impact on propagation of disruptions in a system-of-systems simulation of infrastructure and business networks

    The system-of-systems (SoS) approach is often used for simulating disruptions to business and infrastructure networks, allowing several models to be integrated into one simulation. However, the integration is frequently challenging because each system is designed individually with different characteristics, such as time granularity. Understanding the impact of time granularity on the propagation of disruptions between businesses and infrastructure systems, and finding the appropriate granularity for the SoS simulation, remain major challenges. To tackle these challenges, we explore how time granularity, recovery time, and disruption size affect the propagation of disruptions between the constituent systems of an SoS simulation. We developed a High Level Architecture (HLA) simulation of three networks and performed a series of simulation experiments. Our results revealed that time granularity, and especially recovery time, have a large impact on the propagation of disruptions. Consequently, we developed a model for selecting an appropriate time granularity for an SoS simulation based on expected recovery time. Our simulation experiments show that time granularity should be less than 1.13 of the expected recovery time. We identified areas for future research centered on extending the experimental factor space. Comment: 26 pages, 11 figures, 2 tables. Submitted to the International Journal of Environmental Research and Public Health, Special Issue on Cascading Disaster Modelling and Prevention.
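
    The reported selection rule lends itself to a one-line check. The sketch below is an illustration only: the threshold value is taken from the abstract, while the function and example numbers are hypothetical and not the authors' model.

```python
THRESHOLD = 1.13  # reported ratio of time granularity to expected recovery time

def granularity_ok(time_step, expected_recovery_time):
    """True if the SoS time step satisfies the reported rule:
    time granularity should be less than 1.13 of the expected recovery time."""
    return time_step < THRESHOLD * expected_recovery_time

if __name__ == "__main__":
    # Hypothetical numbers: a 1-hour step against a 2-hour expected recovery time.
    print(granularity_ok(time_step=1.0, expected_recovery_time=2.0))  # True
    print(granularity_ok(time_step=3.0, expected_recovery_time=2.0))  # False
```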

    Bridging the gap: a standards-based approach to OR/MS distributed simulation

    Pre-print version. Final version published in ACM Transactions on Modeling and Computer Simulation (TOMACS); available online at http://tomacs.acm.org/. In Operations Research and Management Science (OR/MS), Discrete Event Simulation (DES) models are typically created using commercial simulation packages such as Simul8™ and SLX™. A DES model represents the processes associated with a system of interest; but, in cases where the underlying system is large and/or logically divided, the system may be conceptualized as several sub-systems. These sub-systems may belong to multiple stakeholders, and creating an all-encompassing DES model may be difficult for reasons such as concerns among intra- and inter-organizational stakeholders with regard to data/information sharing (e.g., security and privacy). Furthermore, issues such as model composability, data transfer/access problems, and execution speed may also make a single-model approach problematic. A potential solution is to create or reuse well-defined DES models, each modeling the processes associated with one sub-system, and to use distributed simulation techniques to execute the models as a unified whole. Although this approach holds great promise, there are technical barriers. One such barrier is the lack of common ground between distributed simulation developers and simulation practitioners. In an attempt to bridge this gap, this paper reports on the outcome of an international standardization effort, the SISO-STD-006-2010 Standard for Commercial-Off-The-Shelf Simulation Package Interoperability Reference Models (IRMs). The standard facilitates the capture of interoperability requirements at a modeling level rather than a technical level and enables simulation practitioners and vendors to properly specify the interoperability requirements of a distributed simulation in their own terms. Two distributed simulation examples are given to illustrate the use of IRMs.

    Optimistic synchronization in the HLA 1516.1-2010: interoperably challenged

    Time Management can be considered one of the key achievements of the High Level Architecture for Modeling and Simulation (HLA). While HLA's time management is supposed to offer unique support for heterogeneous time advancement schemes, its practical use is often limited to conservative time advancement (e.g. using services such as nextMessageRequest/nextMessageRequestAvailable) or time-stepped time advancement (e.g. using services such as timeAdvanceRequest/timeAdvanceRequestAvailable). In this paper, we investigate HLA's capabilities for supporting optimistic time advancement and the interoperability between optimistic and conservative federates. The results are strikingly disappointing. While HLA initially took off with the noble vision of federations including both optimistic and conservative federates within a single federation execution, the current implementations of two leading RTI vendors fall short of achieving this objective. They neither enable the efficient execution of federations consisting of purely optimistically synchronized federates, nor do they facilitate interoperability between optimistic and conservative federates. This paper documents the observed problems and discusses potential limitations in the IEEE HLA 1516.1-2010 specification and its interpretation by RTI vendors.
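
    For context, a conservative, time-stepped federate advances logical time roughly as sketched below. This is an illustration only: the class names and the stub RTI are hypothetical, with method names chosen to mirror the IEEE 1516-2010 services mentioned above (timeAdvanceRequest and its timeAdvanceGrant callback); it is not a vendor API or the paper's code.

```python
class StubRTIAmbassador:
    """Minimal stand-in for an HLA RTI ambassador (hypothetical, for illustration only).
    A real RTI grants a time advance only when it is safe with respect to other federates."""

    def __init__(self, federate):
        self.federate = federate

    def time_advance_request(self, t):
        # Mirrors the timeAdvanceRequest service: ask to advance to logical time t.
        # The stub grants immediately; a real RTI may first deliver time-stamp-ordered messages.
        self.federate.time_advance_grant(t)


class TimeSteppedFederate:
    """Conservative, time-stepped federate: request an advance, wait for the grant, repeat."""

    def __init__(self, step):
        self.step = step
        self.logical_time = 0.0
        self.granted = False
        self.rti = StubRTIAmbassador(self)

    def time_advance_grant(self, t):
        # Mirrors the timeAdvanceGrant callback delivered by the RTI.
        self.logical_time = t
        self.granted = True

    def run(self, end_time):
        while self.logical_time < end_time:
            self.granted = False
            self.rti.time_advance_request(self.logical_time + self.step)
            assert self.granted  # with a real RTI, the federate would block or poll here
            print(f"advanced to logical time {self.logical_time}")


if __name__ == "__main__":
    TimeSteppedFederate(step=1.0).run(end_time=3.0)
```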

    A study on Discrete Event Simulation (DES) in a High-Level Architecture (HLA) networked simulation

    This thesis investigates implementing Discrete Event Simulation (DES) concepts, using Simkit packages, in a High-Level Architecture (HLA)-networked simulation, thus addressing the sustainability of the HLA methodology into the future. Through the DES concept of predicting and anticipating when events will occur, redundant and excessive exchange of common data, such as position and sensor status, can be removed. This DES implementation considerably reduces the network load and removes data-processing incompatibility between simulations. A design involving several concepts of DES and HLA simulation led to the creation of a Simkit-based application library. Interfacing this application library with two DES models demonstrated the feasibility of DES concepts in HLA-networked simulations. A generic combat scenario modeled using this methodology successfully showed the intended advantages of the thesis. The ease of linking non-DES and non-HLA simulations to an HLA environment was enhanced using a common set of interfaces built on the resulting application library. Through a simple comparison with a traditional time-stepped real-time simulation of the same scenario configuration, it was shown that data exchange between simulations was reduced by several orders of magnitude. This freed a substantial amount of network resources for other tasks, hence improving network performance. Available at http://archive.org/details/astudyondiscrete109454958. Approved for public release; distribution is unlimited.
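
    The data-reduction idea can be sketched without Simkit or an RTI: rather than broadcasting position every time step, a sender publishes an event only when its motion changes, and receivers extrapolate position from the last published state. The minimal Python version below is illustrative only (hypothetical names and numbers), not the thesis code.

```python
class EventScheduledTrack:
    """Publish an event only when motion changes; consumers compute position on demand
    from the last published state (illustrative stand-in for per-tick position messages)."""

    def __init__(self):
        self.t0, self.x0, self.velocity = 0.0, 0.0, 0.0
        self.updates_sent = 0

    def publish(self, t, x, velocity):
        # One "motion changed" event replaces a stream of periodic position updates.
        self.t0, self.x0, self.velocity = t, x, velocity
        self.updates_sent += 1

    def position(self, t):
        # Receiver-side extrapolation between events.
        return self.x0 + self.velocity * (t - self.t0)

if __name__ == "__main__":
    track = EventScheduledTrack()
    track.publish(t=0.0, x=0.0, velocity=5.0)     # start moving
    track.publish(t=10.0, x=50.0, velocity=-2.0)  # course change
    # Two events replace e.g. 20 one-second position updates over the same interval.
    print(track.position(15.0), "updates sent:", track.updates_sent)  # 40.0, 2
```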

    The Distributed Independent-Platform Event-Driven Simulation Engine Library (DIESEL)

    The Distributed, Independent-Platform, Event-Driven Simulation Engine Library (DIESEL) is a simulation executive capable of supporting both sequential and distributed discrete-event simulations. A system-level specification is provided along with the expected behavior of each component within DIESEL. This behavioral specification of each component, together with the interconnection and interaction between the different components, provides a complete description of the DIESEL behavioral model. The model gives an application developer considerable freedom to partition the simulation model when building sequential and distributed applications, for example to balance the number of events generated across different components. It also allows a developer to modify underlying algorithms in the simulation executive without changing the overall system behavior, so long as the algorithms meet the behavioral specifications. The behavioral model is object-oriented and developed using a hierarchical approach. The model is not targeted towards any programming language or hardware platform for implementation, and the behavioral specification provides no specifics about how the model should be implemented. A complete and stable implementation of the behavioral model is provided as a proof of concept and can be used to develop commercial applications. New and independent implementations of the complete model can be developed to support specific commercial and research efforts. Specific components of the model can also be implemented by students in an educational environment, using strategies different from the ones used within the current implementation. DIESEL provides a research environment for studying different aspects of Parallel Discrete-Event Simulation, such as event management strategies, synchronization algorithms, communication mechanisms, and simulation state capture capabilities.
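
    The split between behavioral specification and implementation described here can be illustrated with a small strategy-pattern sketch: the executive depends only on the event-queue contract, so two different event-management strategies produce identical observable behavior. The interfaces below are hypothetical illustrations, not DIESEL's actual component specification.

```python
import heapq
from abc import ABC, abstractmethod

class EventQueue(ABC):
    """Behavioral contract: events come back in nondecreasing timestamp order."""
    @abstractmethod
    def insert(self, time, event): ...
    @abstractmethod
    def pop_next(self): ...
    @abstractmethod
    def empty(self): ...

class HeapQueue(EventQueue):
    def __init__(self): self._h = []
    def insert(self, time, event): heapq.heappush(self._h, (time, event))
    def pop_next(self): return heapq.heappop(self._h)
    def empty(self): return not self._h

class SortedListQueue(EventQueue):
    """A different internal algorithm that meets the same specification."""
    def __init__(self): self._l = []
    def insert(self, time, event): self._l.append((time, event)); self._l.sort()
    def pop_next(self): return self._l.pop(0)
    def empty(self): return not self._l

def run(queue, events):
    """Sequential executive: process events in timestamp order, regardless of queue strategy."""
    for t, e in events:
        queue.insert(t, e)
    processed = []
    while not queue.empty():
        processed.append(queue.pop_next())
    return processed

if __name__ == "__main__":
    events = [(3.0, "arrive"), (1.0, "start"), (2.0, "depart")]
    assert run(HeapQueue(), events) == run(SortedListQueue(), events)
    print("identical behavior with different event-management strategies")
```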

    Exploring Delphi Method Generated Synthetic Natural Environment (SNE) Visual Aesthetic Quality (VAQ) Factor Forecasts and Preferences through Conjoint Analysis of End User Assessments

    Traditional techniques used for verification, validation, and accreditation (VV&A) of Synthetic Natural Environments for military applications are time consuming, subjective, and often costly. Due to varying levels of common visual factors, Synthetic Natural Environments (SNE) vary widely in appearance and use case. Early identification of these factors in the SNE life cycle may improve its Visual Aesthetic Quality (VAQ) while reducing VV&A issues downstream and informing future development. This research explores supplementing existing VV&A techniques with the Delphi Method during the conceptualization phase of an interoperable SNE development in order to identify the level of importance of SNE VAQ factors for distributed, dissimilar simulations earlier in the life cycle. Delphi Method findings on VAQ factors drove the development of four different SNEs for a selected urban city center. The importance of VAQ factors within the SNEs was derived through Conjoint Analysis of data from a survey in which end-user participants evaluated each SNE using a design that incorporated fractional factorial screening and Graeco-Latin Squares. Research findings suggest: (1) using an online Delphi Method enables early identification of a correlated set of expertly accepted primary VAQ factors that affect overall realism and training utility in the virtual domain; (2) Conjoint Analysis improves the understanding of the significance and power of identified factors and preferences; (3) VAQ importance rankings differed between the Delphi Method and Conjoint Analysis, and the Delphi Method did not successfully predict the two-factor interactions discovered through Conjoint Analysis of the screening design; and (4) data mining of historical SNE issue reports did not identify the same level of importance of VAQ factors as did users reviewing SNE representations through Conjoint Analysis and Delphi panel expert forecasts. Limitations of the proposed technique, as well as recommendations for additional research, are provided to further refine the parameters associated with these subjective factors and to increase the efficiency and application of the proposed approach.
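
    Conjoint analysis of ratings from a factorial design is, at its core, a regression of responses on coded factor levels. The sketch below uses made-up factor names, design points, and ratings (not the study's data) to estimate main-effect part-worths and relative importances by ordinary least squares.

```python
import numpy as np

# Hypothetical two-level (-1/+1) coding for three illustrative VAQ factors
# (texture detail, lighting, geometry density) in a small fractional factorial design.
design = np.array([
    [-1, -1, -1],
    [+1, -1, +1],
    [-1, +1, +1],
    [+1, +1, -1],
], dtype=float)
ratings = np.array([3.0, 6.0, 5.0, 5.0])  # made-up end-user aesthetic ratings

# Add an intercept column and fit main effects by ordinary least squares.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
partworths = coef[1:]

# Relative importance: each factor's share of the total absolute part-worths.
importance = np.abs(partworths) / np.abs(partworths).sum()
for name, w, imp in zip(["texture", "lighting", "geometry"], partworths, importance):
    print(f"{name}: part-worth {w:+.2f}, relative importance {imp:.0%}")
```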