
    LUNES: Agent-based Simulation of P2P Systems (Extended Version)

    We present LUNES, an agent-based Large Unstructured NEtwork Simulator that makes it possible to simulate complex networks composed of a large number of nodes. LUNES is modular: it separates the three phases of network topology creation, protocol simulation and performance evaluation, which makes it easy to integrate external software tools into the main software architecture. The simulation of the interaction protocols among network nodes is performed via a simulation middleware that supports both the sequential and the parallel/distributed simulation approaches. In the latter case, a specific mechanism for reducing the communication overhead is used; this guarantees high levels of performance and scalability. To demonstrate the efficiency of LUNES, we test the simulator with gossip protocols executed on top of networks (representing peer-to-peer overlays) generated with different topologies. The results demonstrate the effectiveness of the proposed approach. Comment: Proceedings of the International Workshop on Modeling and Simulation of Peer-to-Peer Architectures and Systems (MOSPAS 2011). As part of the 2011 International Conference on High Performance Computing and Simulation (HPCS 2011).
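    A minimal illustrative sketch (not LUNES code) of the kind of workload the abstract describes: push-gossip dissemination over a randomly generated peer-to-peer overlay, counting the rounds needed to reach all nodes. The overlay generator, the fanout parameter and the node counts are assumptions made for the example only.

        # Sketch: push-gossip dissemination over a random overlay (illustrative only).
        import random

        def make_overlay(num_nodes, degree, seed=42):
            # Each node picks 'degree' random neighbours (a simple directed overlay).
            rng = random.Random(seed)
            return {n: rng.sample([m for m in range(num_nodes) if m != n], degree)
                    for n in range(num_nodes)}

        def push_gossip(overlay, source=0, fanout=3, seed=42):
            rng = random.Random(seed)
            informed = {source}
            frontier = {source}
            rounds = 0
            while frontier:
                next_frontier = set()
                for node in frontier:
                    # Forward the message to a random subset of neighbours (push gossip).
                    targets = rng.sample(overlay[node], min(fanout, len(overlay[node])))
                    for t in targets:
                        if t not in informed:
                            informed.add(t)
                            next_frontier.add(t)
                frontier = next_frontier
                rounds += 1
            return rounds, len(informed)

        overlay = make_overlay(num_nodes=1000, degree=8)
        rounds, covered = push_gossip(overlay)
        print(f"coverage {covered}/1000 after {rounds} rounds")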

    The Simulation Model Partitioning Problem: an Adaptive Solution Based on Self-Clustering (Extended Version)

    This paper is about partitioning in parallel and distributed simulation, that is, decomposing the simulation model into a number of components and properly allocating them on the execution units. An adaptive solution based on self-clustering, which considers both communication reduction and computational load balancing, is proposed. The implementation of the proposed mechanism is tested using a simulation model that is challenging in terms of both structure and dynamicity. Various configurations of the simulation model and of the execution environment have been considered. The obtained performance results are analyzed using a reference cost model. The results demonstrate that the proposed approach is promising and that it can reduce the simulation execution time on both parallel and distributed architectures.
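    A minimal sketch, in the spirit of the self-clustering idea described above but not taken from the paper's middleware: a greedy migration rule that moves a simulated entity toward the partition it communicates with most, unless that would overload the target. The data structures and the load threshold are assumptions made for illustration.

        # Sketch: one self-clustering migration decision (illustrative only).
        from collections import Counter

        def migration_candidate(entity_msgs, current_part, part_loads, max_load):
            """entity_msgs: destination partition ids of the entity's recent messages."""
            counts = Counter(entity_msgs)
            best_part, best_count = counts.most_common(1)[0]
            local_count = counts.get(current_part, 0)
            # Migrate only if remote traffic dominates and the target has spare capacity.
            if best_part != current_part and best_count > local_count \
                    and part_loads[best_part] + 1 <= max_load:
                return best_part
            return current_part

        part_loads = {0: 10, 1: 7}
        print(migration_candidate([1, 1, 1, 0], current_part=0,
                                  part_loads=part_loads, max_load=12))  # -> 1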

    Fault Tolerant Adaptive Parallel and Distributed Simulation through Functional Replication

    This paper presents FT-GAIA, a software-based fault-tolerant parallel and distributed simulation middleware. FT-GAIA has been designed to reliably handle Parallel And Distributed Simulation (PADS) models, which are needed to properly simulate and analyze complex systems arising in virtually any scientific or engineering field. PADS takes advantage of multiple execution units running on multicore processors, clusters of workstations or HPC systems. However, large computing systems, such as HPC systems that include hundreds of thousands of computing nodes, have to handle frequent failures of some components. To cope with this issue, FT-GAIA transparently replicates simulation entities and distributes them on multiple execution nodes. This allows the simulation to tolerate crash failures of computing nodes. Moreover, FT-GAIA offers some protection against Byzantine failures, since interaction messages among the simulated entities are replicated as well, so that the receiving entity can identify and discard corrupted messages. Results from an analytical model and from an experimental evaluation show that FT-GAIA provides a high degree of fault tolerance, at the cost of a moderate increase in the computational load of the execution units. Comment: arXiv admin note: substantial text overlap with arXiv:1606.0731
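    A minimal sketch of the replication idea the abstract mentions, not FT-GAIA code: a receiving entity gets one copy of a logical message from each functional replica and keeps the payload reported by a strict majority, discarding corrupted copies. The message format and replica count are assumptions for the example.

        # Sketch: majority voting over replicated message copies (illustrative only).
        from collections import Counter

        def accept_replicated(copies, num_replicas):
            """copies: payloads received from the replicas of one logical message."""
            if not copies:
                return None
            payload, votes = Counter(copies).most_common(1)[0]
            # A strict majority of the replicas must agree for the message to be accepted.
            return payload if votes > num_replicas // 2 else None

        print(accept_replicated(["move(3,4)", "move(3,4)", "move(9,9)"], num_replicas=3))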

    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost, without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources and to be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially different specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and lead to performance degradation and service level agreement (SLA) violations. To achieve efficient scheduling, these challenges should be addressed by load balancing strategies, a problem that has been proved to be NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to PMs in infrastructure clouds, with a special focus on load balancing. A detailed classification of load balancing algorithms for VM placement in cloud data centers is presented, and the surveyed algorithms are organized according to it. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing an insight into potential future enhancements. Comment: 22 Pages, 4 Figures, 4 Tables, in press
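    A minimal sketch of one of the classic heuristics such surveys cover (not an algorithm taken from the paper): least-loaded ("worst fit") placement, which assigns each VM to the physical machine whose post-placement utilization stays lowest, subject to capacity. The single CPU dimension and the data structures are simplifying assumptions.

        # Sketch: least-loaded VM placement on one resource dimension (illustrative only).
        def place_vm(vm_cpu, pm_used, pm_capacity):
            best_pm, best_util = None, float("inf")
            for pm, used in pm_used.items():
                if used + vm_cpu <= pm_capacity[pm]:
                    util = (used + vm_cpu) / pm_capacity[pm]
                    if util < best_util:
                        best_pm, best_util = pm, util
            if best_pm is not None:
                pm_used[best_pm] += vm_cpu
            return best_pm  # None means no PM can host the VM

        pm_used = {"pm1": 20, "pm2": 55}
        pm_capacity = {"pm1": 100, "pm2": 100}
        print(place_vm(30, pm_used, pm_capacity))  # -> "pm1"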

    Anonymity and Confidentiality in Secure Distributed Simulation

    Research on data confidentiality, integrity and availability is gaining momentum in the ICT community, due to the intrinsically insecure nature of the Internet. While many distributed systems and services are now based on secure communication protocols to avoid eavesdropping and protect confidentiality, the techniques usually employed in distributed simulation do not consider these issues at all. This is probably due to the fact that many real-world simulators rely on monolithic, offline approaches, and therefore the issues above do not apply. However, the complexity of the systems to be simulated, and the rise of distributed and cloud-based simulation, now impose the adoption of secure simulation architectures. This paper presents a solution to ensure both anonymity and confidentiality in distributed simulations. A performance evaluation based on an anonymized distributed simulator is used to quantify the performance penalty of being anonymous. The obtained results show that this is a viable solution. Comment: Proceedings of the IEEE/ACM International Symposium on Distributed Simulation and Real Time Applications (DS-RT 2018)
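    A minimal sketch of how anonymity and confidentiality could be layered onto inter-node simulation messages, not the architecture proposed in the paper: entity identifiers are replaced by keyed pseudonyms and payloads are encrypted before leaving a node. It assumes the third-party 'cryptography' package and a hypothetical shared pseudonym key.

        # Sketch: pseudonymized, encrypted simulation messages (illustrative only).
        import hmac, hashlib
        from cryptography.fernet import Fernet  # assumed third-party dependency

        PSEUDONYM_KEY = b"shared-secret-for-pseudonyms"   # hypothetical shared secret
        cipher = Fernet(Fernet.generate_key())            # payload encryption key

        def pseudonym(entity_id):
            # Deterministic keyed hash: the same entity always maps to the same
            # pseudonym, but outsiders cannot invert it without the key.
            return hmac.new(PSEUDONYM_KEY, entity_id.encode(), hashlib.sha256).hexdigest()[:16]

        def protect(sender_id, receiver_id, payload):
            return {"from": pseudonym(sender_id),
                    "to": pseudonym(receiver_id),
                    "body": cipher.encrypt(payload.encode())}

        msg = protect("entity-42", "entity-7", "state update")
        print(msg["from"], msg["to"])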

    Inference of Selection Based on Temporal Genetic Differentiation in the Study of Highly Polymorphic Multigene Families

    The co-evolutionary arms race between host immune genes and parasite virulence genes is known as Red Queen dynamics. Temporal fluctuations in allele frequencies, or the ‘turnover’ of alleles at immune genes, are concordant with predictions of the Red Queen hypothesis. Such observations are often taken as evidence of host-parasite co-evolution. Here, we use computer simulations of the Major Histocompatibility Complex (MHC) of guppies (Poecilia reticulata) to study the turnover rate of alleles (temporal genetic differentiation, G’ST). Temporal fluctuations in MHC allele frequencies can be an order of magnitude larger than the changes observed at neutral loci. Although such large fluctuations in the MHC are consistent with Red Queen dynamics, simulations show that other demographic and population genetic processes can account for this observation. These include: (1) overdominant selection, (2) fluctuating population size within a metapopulation, and (3) the number of novel MHC alleles introduced by immigrants when there are multiple duplicated genes. Synergy between these forces, combined with the migration rate and the effective population size, can drive the rapid turnover of MHC alleles. We posit that rapid allelic turnover is an inherent property of highly polymorphic multigene families and that it cannot be taken as evidence of Red Queen dynamics. Furthermore, combining temporal samples in spatial FST outlier analysis may obscure the signal of selection.
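    A minimal sketch of the differentiation measure the abstract refers to, not the study's pipeline: Nei's G_ST computed between two temporal samples, with Hedrick's standardization under the assumption that this corresponds to the paper's G'ST. Allele-frequency inputs are made up for the example.

        # Sketch: temporal G_ST between two sampling points (illustrative only).
        def g_st(freqs_t1, freqs_t2):
            """freqs_t1/freqs_t2: allele-frequency lists (same allele order, each sums to 1)."""
            hs = sum(1 - sum(p * p for p in f) for f in (freqs_t1, freqs_t2)) / 2
            mean = [(p1 + p2) / 2 for p1, p2 in zip(freqs_t1, freqs_t2)]
            ht = 1 - sum(p * p for p in mean)
            gst = (ht - hs) / ht if ht > 0 else 0.0
            k = 2  # number of temporal samples
            # Hedrick's standardization (assumed to match the paper's G'_ST).
            gst_max = ((k - 1) * (1 - hs)) / (k - 1 + hs)
            return gst, gst / gst_max if gst_max > 0 else 0.0

        print(g_st([0.7, 0.2, 0.1], [0.3, 0.5, 0.2]))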

    Distributed Hybrid Simulation of the Internet of Things and Smart Territories

    This paper deals with the use of hybrid simulation to build and compose heterogeneous simulation scenarios that can be proficiently exploited to model and represent the Internet of Things (IoT). Hybrid simulation is a methodology that combines multiple modalities of modeling/simulation: complex scenarios are decomposed into simpler ones, each simulated through a specific simulation strategy, and all these simulation building blocks are then synchronized and coordinated. This methodology is well suited to representing IoT setups, which are usually very demanding due to the heterogeneity of possible scenarios arising from the massive deployment of an enormous number of sensors and devices. We present a use case concerned with the distributed simulation of smart territories, a novel view of decentralized geographical spaces that, thanks to the use of IoT, builds ICT services to manage resources in a way that is sustainable and not harmful to the environment. Three different simulation models are combined, namely an adaptive agent-based parallel and distributed simulator, an OMNeT++-based discrete event simulator and a script-language simulator based on MATLAB. Results from a performance analysis confirm the viability of using hybrid simulation to model complex IoT scenarios. Comment: arXiv admin note: substantial text overlap with arXiv:1605.0487
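    A minimal sketch of the coordination idea, not the paper's implementation: component simulators are advanced in lock-step time windows and exchange events only at window boundaries, so no component ever receives an event from its past. The component interface, window size and event format are assumptions for the example.

        # Sketch: conservative lock-step coordination of component simulators (illustrative only).
        class ComponentSimulator:
            def __init__(self, name):
                self.name = name
                self.inbox = []

            def advance(self, t_from, t_to):
                # Placeholder for the component's own engine (agent-based, network,
                # script-based, ...); returns events destined to other components.
                return [(self.name, t_to, f"state@{t_to}")]

        def run_hybrid(components, horizon, window):
            t = 0.0
            while t < horizon:
                produced = []
                for comp in components:
                    produced.extend(comp.advance(t, t + window))
                # Deliver events only at the window boundary (conservative synchronization).
                for comp in components:
                    comp.inbox.extend(e for e in produced if e[0] != comp.name)
                t += window

        run_hybrid([ComponentSimulator("agents"), ComponentSimulator("network")],
                   horizon=10.0, window=1.0)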

    Evolutionary genetics of immunological supertypes reveals two faces of the Red Queen

    Red Queen host-parasite co-evolution can drive adaptations of immune genes by positive selection that erodes genetic variation (Red Queen Arms Race), or result in a balanced polymorphism (Red Queen Dynamics) and the long-term preservation of genetic variation (trans-species polymorphism). These two Red Queen processes are opposite extremes of the co-evolutionary spectrum. Here we show that both Red Queen processes can operate simultaneously, by analyzing the Major Histocompatibility Complex (MHC) in guppies (Poecilia reticulata and P. obscura) and swamp guppies (Micropoecilia picta). Sub-functionalization of MHC alleles into “supertypes” explains how polymorphisms persist during rapid host-parasite co-evolution. Simulations show the maintenance of supertypes as balanced polymorphisms, consistent with Red Queen Dynamics, whereas alleles within supertypes are subject to positive selection in a Red Queen Arms Race. Building on the Divergent Allele Advantage hypothesis, we show that functional aspects of allelic diversity help to elucidate the evolution of polymorphic genes involved in Red Queen co-evolution.
