716 research outputs found

    The Quest for Scalability and Accuracy in the Simulation of the Internet of Things: an Approach based on Multi-Level Simulation

    Full text link
    This paper presents a methodology for simulating the Internet of Things (IoT) using multi-level simulation models. With respect to conventional simulators, this approach allows us to tune the level of detail of different parts of the model without compromising the scalability of the simulation. As a use case, we have developed a two-level simulator to study the deployment of smart services over rural territories. The higher level is based on a coarse-grained, agent-based adaptive parallel and distributed simulator. When needed, this simulator spawns OMNeT++ model instances to evaluate in more detail the issues concerned with wireless communications in restricted areas of the simulated world. The performance evaluation confirms the viability of multi-level simulations for IoT environments.
    Comment: Proceedings of the IEEE/ACM International Symposium on Distributed Simulation and Real Time Applications (DS-RT 2017)
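    The core idea, a cheap coarse model everywhere with a detailed model invoked only where it matters, can be sketched in a few lines. The toy loop below is our own illustration, not the paper's GAIA/OMNeT++ implementation: fine_grained_model stands in for a spawned OMNeT++ instance, and DETAIL_THRESHOLD is a hypothetical trigger for switching levels.

```python
import random

# Toy two-level loop: coarse agent movement everywhere, with a placeholder
# "detailed" model invoked only in dense regions. Illustrative only; in the
# paper the detailed level is a spawned OMNeT++ instance.

DETAIL_THRESHOLD = 5  # hypothetical density that triggers the detailed level

def fine_grained_model(agents):
    """Stand-in for a packet-level wireless model of one region."""
    # Pretend every agent pair in the region exchanges one message.
    return len(agents) * (len(agents) - 1)

def coarse_step(agents, world_size=10):
    """One coarse step: move agents, then refine only the dense regions."""
    regions = {}
    for a in agents:
        a["pos"] = (a["pos"] + random.choice([-1, 1])) % world_size
        regions.setdefault(a["pos"], []).append(a)
    messages = 0
    for members in regions.values():
        if len(members) >= DETAIL_THRESHOLD:
            messages += fine_grained_model(members)  # drop to detailed level
        else:
            messages += len(members)                 # cheap coarse estimate
    return messages

agents = [{"id": i, "pos": random.randrange(10)} for i in range(30)]
for step in range(5):
    print(f"step {step}: {coarse_step(agents)} simulated message(s)")
```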

    Parallel Sort-Based Matching for Data Distribution Management on Shared-Memory Multiprocessors

    Full text link
    In this paper we consider the problem of identifying intersections between two sets of d-dimensional axis-parallel rectangles. This is a common problem that arises in many agent-based simulation studies, and is of central importance in the context of the High Level Architecture (HLA), where it is at the core of the Data Distribution Management (DDM) service. Several realizations of the DDM service have been proposed; however, many of them are either inefficient or inherently sequential. These are serious limitations since multicore processors are now ubiquitous, and DDM algorithms, being CPU-intensive, could benefit from additional computing power. We propose a parallel version of the Sort-Based Matching algorithm for shared-memory multiprocessors. Sort-Based Matching is one of the most efficient serial algorithms for the DDM problem, but is quite difficult to parallelize due to data dependencies. We describe the algorithm and compute its asymptotic running time; we complete the analysis by assessing its performance and scalability through extensive experiments on two commodity multicore systems based on a dual-socket Intel Xeon processor and a single-socket Intel Core i7 processor.
    Comment: Proceedings of the 21st ACM/IEEE International Symposium on Distributed Simulation and Real Time Applications (DS-RT 2017). Best Paper Award @ DS-RT 2017
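    Serial Sort-Based Matching itself is compact: project the extents onto one dimension, sort the interval endpoints, then sweep once while tracking which subscription and update regions are currently open. The sketch below is a minimal serial version for 1-D intervals; the d-dimensional case intersects the per-dimension match sets, and the paper's contribution, parallelizing the sort and the sweep, is not shown here.

```python
def sort_based_matching(subscriptions, updates):
    """Find all (subscription, update) index pairs whose 1-D extents overlap.

    Extents are (lo, hi) tuples. Endpoints are sorted once, then a single
    sweep maintains the sets of currently open extents; a pair is recorded
    when the later-opening extent of the two is opened.
    """
    OPEN, CLOSE = 0, 1  # open sorts before close, so touching extents match
    events = []
    for i, (lo, hi) in enumerate(subscriptions):
        events += [(lo, OPEN, "S", i), (hi, CLOSE, "S", i)]
    for j, (lo, hi) in enumerate(updates):
        events += [(lo, OPEN, "U", j), (hi, CLOSE, "U", j)]
    events.sort()

    open_subs, open_upds, matches = set(), set(), set()
    for _, kind, side, idx in events:
        if kind == OPEN:
            if side == "S":
                open_subs.add(idx)
                matches.update((idx, u) for u in open_upds)
            else:
                open_upds.add(idx)
                matches.update((s, idx) for s in open_subs)
        else:
            (open_subs if side == "S" else open_upds).discard(idx)
    return matches

subs = [(0, 4), (3, 9)]
upds = [(2, 5), (8, 12)]
print(sort_based_matching(subs, upds))  # {(0, 0), (1, 0), (1, 1)}
```

    Note that the data dependency mentioned in the abstract is visible even in this sketch: the sweep consumes events strictly in sorted order, which is what makes a straightforward parallel decomposition non-trivial.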

    Fault-Tolerant Adaptive Parallel and Distributed Simulation

    Full text link
    Discrete Event Simulation is a widely used technique for modeling and analyzing complex systems in many fields of science and engineering. The increasingly large size of simulation models poses a serious computational challenge, since the time needed to run a simulation can be prohibitively large. For this reason, Parallel and Distributed Simulation techniques have been proposed to take advantage of the multiple execution units found in multicore processors, clusters of workstations, and HPC systems. The current generation of HPC systems includes hundreds of thousands of computing nodes and a vast amount of ancillary components. Despite improvements in manufacturing processes, failures of some components are frequent, and the situation will get worse as larger systems are built. In this paper we describe FT-GAIA, a software-based fault-tolerant extension of the GAIA/ARTÌS parallel simulation middleware. FT-GAIA transparently replicates simulation entities and distributes them on multiple execution nodes. This allows the simulation to tolerate crash failures of computing nodes; furthermore, FT-GAIA offers some protection against Byzantine failures, since synchronization messages are replicated as well, so that the receiving entity can identify and discard corrupted messages. We provide an experimental evaluation of FT-GAIA on a running prototype. Results show that a high degree of fault tolerance can be achieved, at the cost of a moderate increase in the computational load of the execution units.
    Comment: Proceedings of the IEEE/ACM International Symposium on Distributed Simulation and Real Time Applications (DS-RT 2016)
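    The replication idea can be illustrated with a small voting receiver. This is our reconstruction of the general mechanism, not FT-GAIA's actual API: REPLICATION_DEGREE, the message fields, and the majority-vote rule are all assumptions made for the example.

```python
from collections import Counter

# Sketch: each logical entity runs R replicas and every message is sent R
# times; a receiver accepts the payload carried by the majority of copies
# and discards corrupted minority copies (hypothetical message format).

REPLICATION_DEGREE = 3  # hypothetical R

class ReplicatedReceiver:
    def __init__(self, degree=REPLICATION_DEGREE):
        self.degree = degree
        self.pending = {}  # (sender, seq) -> payload copies received so far

    def deliver(self, sender, seq, payload):
        """Collect replicated copies; return the majority payload once all
        copies have arrived, or None while still waiting."""
        copies = self.pending.setdefault((sender, seq), [])
        copies.append(payload)
        if len(copies) < self.degree:
            return None
        del self.pending[(sender, seq)]
        winner, votes = Counter(copies).most_common(1)[0]
        if votes > self.degree // 2:
            return winner  # majority agrees: accept this payload
        raise ValueError("no majority: message dropped as corrupted")

rx = ReplicatedReceiver()
for copy in ["move(3,4)", "move(3,4)", "mXve(3,4)"]:  # one corrupted copy
    result = rx.deliver("entity-7", seq=1, payload=copy)
print(result)  # "move(3,4)" survives the corrupted replica
```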

    Anonymity and Confidentiality in Secure Distributed Simulation

    Full text link
    Research on data confidentiality, integrity and availability is gaining momentum in the ICT community, due to the intrinsically insecure nature of the Internet. While many distributed systems and services are now based on secure communication protocols to avoid eavesdropping and protect confidentiality, the techniques usually employed in distributed simulations do not consider these issues at all. This is probably due to the fact that many real-world simulators rely on monolithic, offline approaches, and therefore the issues above do not apply. However, the complexity of the systems to be simulated, and the rise of distributed and cloud-based simulation, now impose the adoption of secure simulation architectures. This paper presents a solution to ensure both anonymity and confidentiality in distributed simulations. A performance evaluation based on an anonymized distributed simulator is used to quantify the performance penalty of being anonymous. The obtained results show that this is a viable solution.
    Comment: Proceedings of the IEEE/ACM International Symposium on Distributed Simulation and Real Time Applications (DS-RT 2018)
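    One ingredient of such an architecture is replacing real entity identifiers with pseudonyms that only holders of a shared session key can generate. The snippet below is a minimal sketch of that single ingredient under an assumed key-distribution step; it is not the paper's scheme, and a full solution would also protect message payloads over an encrypted transport, which is not shown.

```python
import hashlib
import hmac
import os

# Assumed setup: every trusted simulation node shares this per-session secret.
SESSION_KEY = os.urandom(32)

def pseudonym(entity_id: str) -> str:
    """Stable pseudonym: the same entity maps to the same opaque label within
    a session, but an eavesdropper without the key cannot recover or forge
    the real identifier."""
    return hmac.new(SESSION_KEY, entity_id.encode(), hashlib.sha256).hexdigest()[:16]

event = {"src": pseudonym("vehicle-42"), "dst": pseudonym("rsu-3"), "type": "POSITION"}
print(event)  # identifiers are unlinkable to real entity names without the key
```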

    Grid-enabling FIRST: Speeding up simulation applications using WinGrid

    Get PDF
    The vision of grid computing is to make computational power, storage capacity, data and applications available to users as readily as electricity and other utilities. Grid infrastructures and applications have traditionally been geared towards dedicated, centralized, high-performance clusters running on UNIX-flavour operating systems (commonly referred to as cluster-based grid computing). This can be contrasted with desktop-based grid computing, which refers to the aggregation of non-dedicated, decentralized, commodity PCs connected through a network and running (mostly) the Microsoft Windows™ operating system. Large-scale adoption of such Windows-based grid infrastructure may be facilitated by grid-enabling existing Windows applications. This paper presents the WinGrid™ approach to grid-enabling existing Windows-based commercial-off-the-shelf (COTS) simulation packages (CSPs). Through the use of a case study developed in conjunction with the Ford Motor Company, the paper demonstrates how experimentation with the CSP Witness™ and FIRST can achieve a linear speedup when WinGrid is used to harness idle PC computing resources. This, combined with the lessons learned from the case study, has encouraged us to develop Web service extensions to WinGrid. It is hoped that this will facilitate wider acceptance of WinGrid among enterprises having stringent security policies in place.
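    The master-worker pattern behind this kind of speedup, a master farming independent simulation experiments out to idle PCs, can be mimicked on a single machine with a process pool. The sketch below is our illustration, not WinGrid's interface: run_experiment is a stand-in for one COTS-simulation run, and the parameter names are invented.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def run_experiment(params):
    """Placeholder for one CSP run (e.g., one Witness scenario replication)."""
    time.sleep(0.1)  # pretend the run takes a while
    return params["replication"], sum(params["demand"])

if __name__ == "__main__":
    # Independent experiments: each can run on any idle worker.
    experiments = [{"replication": r, "demand": [r, r + 1, r + 2]} for r in range(8)]
    with ProcessPoolExecutor() as pool:  # workers scale with available cores
        for rep, result in pool.map(run_experiment, experiments):
            print(f"replication {rep}: result {result}")
```

    Because the runs share no state, throughput grows roughly with the number of workers, which is the same property that lets WinGrid approach linear speedup on harvested desktop cycles.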

    Exploring the use of local consistency measures as thresholds for dead reckoning update packet generation

    Get PDF
    Human-to-human interaction across distributed applications requires that sufficient consistency be maintained among participants in the face of network characteristics such as latency and limited bandwidth. Techniques for reducing bandwidth usage can minimize network delays by reducing network traffic and therefore better exploiting the available bandwidth. However, these approaches can induce inconsistencies that fall within the level of human perception. Dead reckoning is a well-known technique for reducing the number of update packets transmitted between participating nodes. It employs a distance threshold for deciding when to generate update packets. This paper questions the use of such a distance threshold in the context of absolute consistency, and highlights a major drawback of this technique. An alternative threshold criterion based on time and distance is examined and compared to the distance-only threshold. A drawback of this proposed technique is also identified, and a hybrid threshold criterion is then proposed. However, the trade-off between spatial and temporal inconsistency remains.
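    The threshold criteria under discussion are easy to state in code. The sketch below is our illustration of the generic dead-reckoning technique, with hypothetical threshold values and a made-up motion model: the distance-only criterion is the error test alone, and the hybrid criterion adds the staleness test.

```python
import math

# Hypothetical thresholds; the distance-only criterion is the error test
# alone, the hybrid criterion adds the elapsed-time (staleness) test.
DIST_THRESHOLD = 1.0   # metres of divergence tolerated
TIME_THRESHOLD = 2.0   # seconds between updates tolerated

def predict(last_pos, last_vel, dt):
    """Receiver-side dead reckoning: linear extrapolation of the last update."""
    return tuple(p + v * dt for p, v in zip(last_pos, last_vel))

def should_send_update(true_pos, predicted_pos, elapsed):
    """Hybrid criterion: send when spatial error or staleness exceeds bounds."""
    error = math.dist(true_pos, predicted_pos)
    return error > DIST_THRESHOLD or elapsed > TIME_THRESHOLD

# Example: the object's true path curves away from the linear prediction.
last_pos, last_vel, t_update = (0.0, 0.0), (1.0, 0.0), 0.0
for t in (0.5, 1.0, 1.5):
    true_pos = (t, 0.9 * t * t)
    guess = predict(last_pos, last_vel, t - t_update)
    if should_send_update(true_pos, guess, t - t_update):
        print(f"t={t}: update sent (error {math.dist(true_pos, guess):.2f} m)")
```

    In a full protocol the sender would transmit the new state after each triggered update and reset the prediction baseline, which is where the spatial-versus-temporal trade-off noted in the abstract arises.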