
    HP-CERTI: Towards a high performance, high availability open source RTI for composable simulations (04F-SIW-014)

    Composing simulations of complex systems from already existing simulation components remains a challenging issue. Motivations for composable simulation include the generation of a given federation driven by operational requirements provided "on the fly". The High Level Architecture, initially developed for designing fully distributed simulations, can be considered an interoperability standard for composing simulations from existing components. The requirements for constructing such complex simulations are quite different from those discussed for distributed simulations: although interoperability and reusability remain essential, both high performance and high availability must also be considered to fulfill the requirements of the end user. ONERA is currently designing a High Performance / High Availability HLA Run-time Infrastructure from its open source implementation of the HLA 1.3 specifications. HP-CERTI is a software package including two main components: the first one, SHM-CERTI, provides an optimized version of CERTI based on a shared memory communication scheme; the second one, Kerrighed-CERTI, allows the deployment of CERTI through the control of the Kerrighed Single System Image operating system for clusters, currently designed by IRISA. This paper describes the design of both the high performance and the high availability Run-time Infrastructures, focusing on the architecture of SHM-CERTI. This work is carried out in the context of the COCA (High Performance Distributed Simulation and Models Reuse) Project, sponsored by the DGA/STTC (Délégation Générale pour l'Armement/Service des Stratégies Techniques et des Technologies Communes) of the French Ministry of Defense.
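
    The SHM-CERTI design itself is not reproduced here, but the abstract's key idea, replacing socket-based exchanges between a federate and its local RTI component with a shared-memory segment, can be roughly sketched as follows; the channel name, message layout, and the absence of any locking are illustrative assumptions, not the actual CERTI implementation.

```python
# Minimal sketch of a shared-memory message channel of the kind SHM-CERTI
# substitutes for local sockets; names and layout are illustrative assumptions.
from multiprocessing import shared_memory
import struct

HEADER = struct.Struct("I")  # 4-byte payload length at the start of the segment

class ShmChannel:
    """One-directional federate -> local RTI component channel (no locking)."""

    def __init__(self, name: str, size: int = 4096, create: bool = False):
        self.shm = shared_memory.SharedMemory(name=name, create=create, size=size)

    def send(self, payload: bytes) -> None:
        HEADER.pack_into(self.shm.buf, 0, len(payload))
        self.shm.buf[HEADER.size:HEADER.size + len(payload)] = payload

    def receive(self) -> bytes:
        (length,) = HEADER.unpack_from(self.shm.buf, 0)
        return bytes(self.shm.buf[HEADER.size:HEADER.size + length])

    def close(self, unlink: bool = False) -> None:
        self.shm.close()
        if unlink:
            self.shm.unlink()

# Usage: the writer creates the segment, the reader attaches to it by name.
writer = ShmChannel("certi_local_channel", create=True)
writer.send(b"UPDATE_ATTRIBUTE_VALUES")
reader = ShmChannel("certi_local_channel")
print(reader.receive())
reader.close()
writer.close(unlink=True)
```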

    Managing Bandwidth and Traffic via Bundling and Filtration in Large-Scale Distributed Simulations

    Research has shown that bandwidth can be a limiting factor in the performance of distributed simulations. The Air Force's Distributed Mission Operations Center (DMOC) periodically hosts one of the largest distributed simulation events in the world. The engineers at the DMOC have dealt with the difficult problem of limited bandwidth by implementing application-level filters that process all DIS PDUs passing between the various networks connected to the exercise. This thesis examines their implemented filter and proposes two enhancements: adaptive range-based filtering and the bundling of PDUs. The goals are to reduce the number of PDUs passed by the adaptive filter and to reduce network overhead and the total amount of data transferred by maximizing packet size up to the MTU. The proposed changes were implemented, and logged data from previous events were used on a test network in order to measure the improvement from the base filter to the improved filter. The results showed that the adaptive range-based filter was effective, though minimally so, and that PDU bundling reduced the total traffic transmitted across the network by 17% to 20%.
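
    As a rough illustration of the bundling idea described above, the sketch below greedily packs serialized PDUs into datagrams that stay under the MTU; the 1472-byte payload limit and the fixed 144-byte PDU size in the example are assumptions for illustration, not values taken from the thesis.

```python
# Minimal sketch of PDU bundling up to the MTU; the payload limit below is the
# typical UDP payload on a 1500-byte Ethernet link, an illustrative assumption.
from typing import Iterable, List

MAX_BUNDLE_BYTES = 1472  # 1500-byte MTU minus IP and UDP headers

def bundle_pdus(pdus: Iterable[bytes], max_bytes: int = MAX_BUNDLE_BYTES) -> List[bytes]:
    """Greedily pack serialized PDUs into datagrams no larger than max_bytes."""
    bundles: List[bytes] = []
    current: List[bytes] = []
    current_size = 0
    for pdu in pdus:
        if current and current_size + len(pdu) > max_bytes:
            bundles.append(b"".join(current))   # flush the full bundle
            current, current_size = [], 0
        current.append(pdu)
        current_size += len(pdu)
    if current:
        bundles.append(b"".join(current))       # flush the remainder
    return bundles

# Example: 100 PDUs of 144 bytes each fit ten per datagram, so the per-PDU
# network overhead drops roughly tenfold.
pdus = [bytes(144) for _ in range(100)]
print(len(bundle_pdus(pdus)))  # -> 10
```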

    Towards an Architecture Proposal for Federation of Distributed DES Simulators

    The simulation of large and complex Discrete Event Systems (DESs) increasingly imposes more demanding and urgent requirements on two aspects accepted as critical: (1) intensive use of models of the simulated system that can be exploited in all phases of its life cycle where simulation can be used, together with methodologies for these purposes; and (2) adaptation of simulation techniques to HPC infrastructures, as a way to improve simulation efficiency and to obtain scalable simulation environments. This paper proposes a Model Driven Engineering (MDE) approach based on Petri Nets (PNs) as the formal model. The approach provides a domain-specific language based on modular PNs from which efficient distributed simulation code is generated automatically. The distributed simulator is built on top of generic PN simulation engines, each one containing a data structure representing a piece of the net and its simulation state. The simulation engine is called a simbot, and versions of it are available for different platforms. The proposed architecture allows efficient dynamic load balancing of the simulation work, because moving a PN piece only requires transferring a small number of integers representing the subnet and its state.
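
    A minimal sketch of the idea that a PN piece and its state reduce to a handful of integers might look as follows; the encoding, class name, and firing rule shown are illustrative assumptions rather than the paper's actual simbot data structure.

```python
# Minimal sketch of a Petri-net piece encoded as a few integer arrays, which is
# what makes migrating a subnet between simulation engines cheap; this encoding
# is an illustrative assumption, not the paper's data structure.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PNPiece:
    marking: List[int]                 # tokens per place
    pre: List[Tuple[int, int, int]]    # (transition, place, weight) input arcs
    post: List[Tuple[int, int, int]]   # (transition, place, weight) output arcs

    def enabled(self, t: int) -> bool:
        return all(self.marking[p] >= w for (tr, p, w) in self.pre if tr == t)

    def fire(self, t: int) -> None:
        for (tr, p, w) in self.pre:
            if tr == t:
                self.marking[p] -= w
        for (tr, p, w) in self.post:
            if tr == t:
                self.marking[p] += w

# A two-place, one-transition piece: firing t0 moves a token from p0 to p1.
piece = PNPiece(marking=[1, 0], pre=[(0, 0, 1)], post=[(0, 1, 1)])
if piece.enabled(0):
    piece.fire(0)
print(piece.marking)  # -> [0, 1]
# Migrating this piece to another engine means shipping only these integers.
```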

    Multi Objective PSO with Passive Congregation for Load Balancing Problem

    High Level Architecture (HLA) and Distributed Interactive Simulation (DIS) are commonly used standards for distributed simulation. However, HLA suffers from a resource allocation problem, and solving it requires optimized load balancing. Efficient load balancing can minimize the simulation time of an HLA federation, and this optimization can be performed using multi-objective evolutionary algorithms (MOEA). Multi-Objective Particle Swarm Optimization based on crowding distance (MOPSO-CD) is a popular MOEA method used to balance HLA load. In this research, the efficiency of MOPSO-CD is further improved by introducing the passive congregation (PC) method. Several simulation tests were run on the improved MOPSO-CD-PC method, and the results show that, in terms of the Coverage, Spacing, Number of non-dominated solutions, and Inverted Generational Distance metrics, MOPSO-CD-PC performs better than the previous MOPSO-CD algorithm. Hence, it can be a useful tool for optimizing the load balancing problem in HLA.
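
    The passive congregation idea can be sketched as an extra attraction term in the particle velocity update, as in the outline below; the coefficient values, the leader-selection step, and the function signature are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a PSO velocity update with a passive-congregation term:
# inertia + cognitive + social + attraction toward a randomly chosen swarm
# member. Coefficients and leader selection are illustrative assumptions.
import random
from typing import List

def velocity_update(v: List[float], x: List[float], pbest: List[float],
                    leader: List[float], swarm_positions: List[List[float]],
                    w: float = 0.4, c1: float = 1.0, c2: float = 1.0,
                    c3: float = 0.6) -> List[float]:
    """One velocity step per dimension of the particle."""
    congregant = random.choice(swarm_positions)  # randomly selected swarm member
    return [
        w * v[d]
        + c1 * random.random() * (pbest[d] - x[d])
        + c2 * random.random() * (leader[d] - x[d])      # leader from the archive (MOPSO-CD picks it by crowding distance)
        + c3 * random.random() * (congregant[d] - x[d])  # passive-congregation term
        for d in range(len(x))
    ]

# Example: a 3-dimensional particle pulled toward its personal best, a leader,
# and a randomly selected swarm member.
swarm = [[random.random() for _ in range(3)] for _ in range(10)]
v = velocity_update([0.0, 0.0, 0.0], swarm[0], swarm[1], swarm[2], swarm)
print(v)
```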

    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources and be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially varied specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and lead to performance degradation and service level agreement (SLA) violations. To achieve efficient scheduling, these challenges should be addressed by load balancing strategies, a problem that has been proven to be NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to PMs in infrastructure Clouds, focusing in particular on load balancing. A detailed classification of load balancing algorithms for VM placement in cloud data centers is developed, and the surveyed algorithms are categorized according to it. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight for potential future enhancements. Comment: 22 pages, 4 figures, 4 tables, in press.
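
    As a toy example of one common class of placement heuristic discussed in this line of work, the sketch below performs greedy load-balancing placement of VMs onto physical machines; the single-resource model and the capacity figures are illustrative assumptions only.

```python
# Minimal sketch of greedy load-balancing VM placement: each VM goes to the
# physical machine (PM) whose utilization stays lowest after placement.
# The single-resource model and numbers are illustrative assumptions.
from typing import List, Optional

def place_vms(vm_demands: List[int], pm_capacities: List[int]) -> List[Optional[int]]:
    used = [0] * len(pm_capacities)
    placement: List[Optional[int]] = []
    for demand in vm_demands:
        # candidate PMs that still have room for this VM
        feasible = [i for i, cap in enumerate(pm_capacities) if used[i] + demand <= cap]
        if not feasible:
            placement.append(None)  # no PM can host the VM
            continue
        # pick the PM with the lowest post-placement utilization ratio
        best = min(feasible, key=lambda i: (used[i] + demand) / pm_capacities[i])
        used[best] += demand
        placement.append(best)
    return placement

# Four VMs across two 100-unit PMs end up spread 50/50 rather than packed.
print(place_vms([30, 20, 30, 20], [100, 100]))  # -> [0, 1, 1, 0]
```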

    MODELLING & SIMULATION HYBRID WARFARE Researches, Models and Tools for Hybrid Warfare and Population Simulation

    The Hybrid Warfare phenomenon, which is the subject of the current research, has been framed by the work of Professor Agostino Bruzzone (University of Genoa) and Professor Erdal Cayirci (University of Stavanger), who in June 2016 created a dedicated Exploratory Team to inquire into the subject; the team was endorsed by the NATO Modelling & Simulation Group (a panel of the NATO Science & Technology Organization) and established with the participation of the author. The author brought his personal contribution to ET43 by introducing meaningful insights drawn from "Fight by the minutes: Time and the Art of War" (1994), written by Lieutenant Colonel US Army (Rtd.) Robert Leonhard; in that work, Leonhard extensively developed the concept that "Time", rather than the geometry of the battlefield or firepower, is the critical factor to tackle in military operations and, by extension, in Hybrid Warfare. This critical reflection on time, in both its quantitative and qualitative dimensions, in a hybrid confrontation is addressed and studied within SIMCJOH, a software built around challenges that literally impose "fighting by the minutes", echoing the core concept expressed in the eponymous work. Hybrid Warfare, which by definition and purpose aims to keep the military commitment of both aggressor and defender at its lowest, can gain enormous profit by employing a wide variety of non-military tools and turning them into weapons, as in the case of the "weaponization of mass migrations" examined in the "Dies Irae" simulation architecture. Since migration is currently a very sensitive and divisive issue among the public opinions of many European countries, cynically leveraging a humanitarian emergency caused by an exogenous, induced migration could result in a high level of political and social destabilization, which in turn favours the concurrent actions carried out by other hybrid tools. Other kinds of disruption, however, are already available in the arsenal of Hybrid Warfare, such as cyber threats and information campaigns led by troll factories for the diffusion of fake or altered news. From this perspective the author examines how the TREX (Threat network simulation for REactive eXperience) simulator is able to offer insights into a hybrid scenario characterized by an intense level of social disruption brought about by cyber-attacks and the systematic faking of news. Furthermore, the rising discipline of "Strategic Engineering", as envisaged by Professor Agostino Bruzzone, when matched with the operational requirements to be fulfilled in order to counter Hybrid Threats, brings another innovative and powerful tool into the professional luggage of the military and civilian personnel employed in the Defence and Homeland Security sectors. Hybrid is not the New War. What is new is brought by globalization paired with the transition to the information age and rising geopolitical tensions, which have put new emphasis on hybrid hostilities that manifest themselves in a contemporary way. Hybrid Warfare is a deliberate choice of an aggressor. While militarily weak nations can resort to it in order to re-balance the odds, militarily strong nations appreciate its inherent effectiveness coupled with the denial of direct responsibility, thus circumventing the rules of the International Community (IC).
In order to be successful, Hybrid Warfare should consist of a highly coordinated, sapient mix of a diverse and dynamic combination of regular forces, irregular forces (even criminal elements), cyber disruption, etc., all in order to achieve effects across the entire DIMEFIL/PMESII_PT spectrum. However, the owner of the strategy, i.e. the aggressor, by keeping the threshold of impunity as high as possible and decreasing the willingness of the defender, can maintain his Hybrid Warfare at a diplomatically feasible level; thus the model of capacity, willingness and threshold proposed by Cayirci, Bruzzone and Gunneriusson (2016) remains critical to comprehending Hybrid Warfare. Its dynamicity is able to capture the evanescent, blurring line between Hybrid Warfare and Conventional Warfare. In such a contest time is the critical factor: it is hard for the aggressor to foresee how long he can keep up such a strategy without risking either retaliation from the International Community or the depletion of resources across his own DIMEFIL/PMESII_PT spectrum. A similar discourse affects the defender: if he is not able to cope with Hybrid Threats (i.e. takes no action), time works against him; if he is, he can start to develop a counter-narrative and put physical countermeasures in place. However, this can lead, in the medium to long term, to an escalation, unforeseen by both the attacker and the defender, into a large, conventional, armed conflict. The performance of operations requiring more than kinetic effects drove the development of DIMEFIL/PMESII_PT models, which in turn drove the development of Human Social Culture Behavior (HSCB) modelling, which should stand at the core of Hybrid Warfare modelling and simulation efforts. Multi-layer models are fundamental to evaluating strategies and supporting decisions: there are currently favourable conditions to implement models of Hybrid Warfare, such as Dies Irae, SIMCJOH and TREX, in order to further develop tools and war-games for studying new tactics, executing collective training, and supporting decision making and analysis planning. The proposed approach is based on the idea of creating a mosaic made of HLA-interoperable simulators that can be combined like tiles to cover an extensive part of Hybrid Warfare, giving users an interactive and intuitive environment based on the "Modelling interoperable Simulation and Serious Game" (MS2G) approach. From this point of view, the impressive capabilities achieved by IA-CGF in human behavior modeling to support population simulation, as well as their native HLA structure, suggest adopting them as the core engine in this application field. However, it is necessary to highlight that, when modelling DIMEFIL/PMESII_PT domains, the researcher has to be aware of the bias introduced by the fact that Political and Social "science" in particular are accompanied by, and built around, value judgement. From this perspective, the models proposed by Cayirci, Bruzzone and Gunneriusson (2016) and by Balaban & Mileniczek (2018) are indeed a courageous attempt to import, into the domain of particularly poorly understood phenomena (social, political, and to a lesser degree economic - Hartley, 2016), the mathematical and statistical instruments and the methodologies employed by the pure, hard sciences.
Nevertheless, merely using the instruments and methodology of the hard sciences is not enough to attain objectivity, and it is in this respect that representations of Hybrid Warfare mechanics can meet their limit: they use, as input for the equations that represent Hybrid Warfare, not physical data observed during a scientific experiment, but rather observations of reality that implicitly and explicitly embed a value judgment, which can lead to a biased output. Such value judgement is subjective, not objective as in the mathematical and physical sciences; when this is not well understood and managed by academics and researchers, it can introduce distortions - unacceptable for the purposes of science - which could also be used to reinforce a mainstream narrative containing a so-called "truth" that lies within the boundaries of politics rather than science. These observations on the subjectivity of the social sciences versus the objectivity of the pure sciences are nothing new, but they suggest the need to examine the problem from a new perspective, less philosophical and more oriented toward practical application. The suggestion the author makes here is that the Verification and Validation process, in particular the methodology used by Professor Bruzzone in performing V&V for SIMCJOH (2016) and the one described in the Modelling & Simulation User Risk Methodology (MURM) developed by Pandolfini, Youngblood et al. (2018), could be applied to evaluate whether a bias exists and to what extent, or at least to make explicit the value judgment adopted in developing the DIMEFIL/PMESII_PT models. Such V&V research is, however, outside the scope of the present work, even though it is an offspring of it, and for this reason the author would like to make further inquiries on this particular subject in the future. The theoretical discourse around Hybrid Warfare is then completed by addressing the need to establish a new discipline, Strategic Engineering, made all the more necessary by the current political and economic environment, which allocates diminishing resources to Defense and Homeland Security (at least in Europe). However, Strategic Engineering can successfully address its challenges only when coupled with the understanding and management of the fourth dimension of military and hybrid operations, Time. For the reasons above, and as elaborated by Leonhard and extensively discussed in the present work, addressing the concerns posed by the Time dimension is necessary for success in any military or hybrid confrontation. The SIMCJOH project, examined from the above perspective, proved that the simulator is able to address the fourth dimension of military and non-military confrontation. In operations, Time is the most critical factor during execution, and this was successfully transferred into the simulator; as such, SIMCJOH can be viewed both as a training tool and as a dynamic generator of events for MEL/MIL execution during any exercise. In conclusion, the SIMCJOH project successfully faces new challenging aspects and allowed the study and development of new simulation models in order to support decision makers, Commanders and their Staffs.
Finally, the question posed by Leonhard in terms of recognition of the importance of time management in military operations - nowadays Hybrid Conflict - has not yet been answered; however, the author believes that Modelling and Simulation tools and techniques can provide the safe "tank" where innovative and advanced scientific solutions can be tested, exploiting the advantage of doing so in a synthetic environment.

    Resource-constraint And Scalable Data Distribution Management For High Level Architecture

    In this dissertation, we present an efficient algorithm, called the P-Pruning algorithm, for the data distribution management problem in High Level Architecture. High Level Architecture (HLA) presents a framework for modeling and simulation within the Department of Defense (DoD) and forms the basis of the IEEE 1516 standard. The goal of this architecture is to allow multiple simulations to interoperate and to facilitate the reuse of simulation components. Data Distribution Management (DDM) is one of the six components in HLA; it is responsible for limiting and controlling the data exchanged in a simulation and reducing the processing requirements of federates. DDM is also an important problem in the parallel and distributed computing domain, especially in large-scale distributed modeling and simulation applications, where control over the data exchanged among the simulated entities is required. We present a performance-evaluation simulation study of the P-Pruning algorithm against three techniques: the region-matching, fixed-grid, and dynamic-grid DDM algorithms. The P-Pruning algorithm is faster than these three algorithms because it avoids the quadratic computation step they involve. The simulation results show that the P-Pruning DDM algorithm uses memory more efficiently at run-time and requires fewer multicast groups than the three algorithms. To increase the scalability of the P-Pruning algorithm, we develop a resource-efficient enhancement for it and present a performance evaluation study of this resource-efficient algorithm in a memory-constrained environment. The Memory-Constraint P-Pruning algorithm deploys I/O-efficient data structures for optimized memory access at run-time. The simulation results show that the Memory-Constraint P-Pruning DDM algorithm is faster than the P-Pruning algorithm and utilizes memory at run-time more efficiently; it is suitable for high performance distributed simulation applications, as it improves the scalability of the P-Pruning algorithm by several orders of magnitude in terms of the number of federates. We analyze the computational complexity of the P-Pruning algorithm using average-case analysis. We have also extended the P-Pruning algorithm to a three-dimensional routing space. In addition, we present the P-Pruning algorithm for dynamic conditions, where the distribution of federates changes at run-time. The dynamic P-Pruning algorithm tracks the changes among federate regions and rebuilds all the affected multicast groups. We have also integrated the P-Pruning algorithm with FDK, an implementation of the HLA architecture. The integration involves the design and implementation of the communicator module for mapping federate interest regions. We provide a modular overview of the P-Pruning algorithm components and describe the functional flow for creating multicast groups during simulation. We investigate the deficiencies in the DDM implementation under FDK and suggest an approach to overcome them using the P-Pruning algorithm. We have enhanced FDK from its existing HLA 1.3 specification by using the IEEE 1516 standard for the DDM implementation. We provide the system setup instructions and communication routines for running the integrated system on a network of machines. We also describe the implementation details involved in the integration of the P-Pruning algorithm with FDK and provide the results of our experience.
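
    The P-Pruning algorithm itself is not detailed in the abstract, but the quadratic region-matching baseline it is compared against can be sketched as below: every update region is tested against every subscription region, the pairwise step that P-Pruning is designed to avoid. The one-dimensional routing space and federate names are illustrative simplifications.

```python
# Minimal sketch of the brute-force region-matching DDM baseline: O(n*m)
# pairwise overlap tests between update and subscription regions on a single
# routing-space dimension. The interval model is an illustrative simplification.
from typing import Dict, List, Tuple

Region = Tuple[float, float]  # (lower bound, upper bound) on one dimension

def overlaps(a: Region, b: Region) -> bool:
    return a[0] <= b[1] and b[0] <= a[1]

def region_matching(updates: Dict[str, Region],
                    subscriptions: Dict[str, Region]) -> Dict[str, List[str]]:
    """Map each updating federate to the subscribers whose regions overlap it."""
    groups: Dict[str, List[str]] = {}
    for updater, u_region in updates.items():
        groups[updater] = [s for s, s_region in subscriptions.items()
                           if overlaps(u_region, s_region)]
    return groups

# Each non-empty entry corresponds to a multicast group connecting an updating
# federate with its matched subscribers.
updates = {"F1": (0.0, 10.0), "F2": (20.0, 30.0)}
subscriptions = {"F3": (5.0, 25.0), "F4": (40.0, 50.0)}
print(region_matching(updates, subscriptions))  # -> {'F1': ['F3'], 'F2': ['F3']}
```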