22 research outputs found

    Enabling Distributed Simulation of OMNeT++ INET Models

    Get PDF
    Parallel and distributed simulation have been extensively researched for a long time. Nevertheless, many simulation models are still executed sequentially. We attribute this to the fact that many of those models are simply not capable of being executed in parallel, since they violate particular constraints. In this paper, we analyze the INET model suite, which enables network simulation in OMNeT++, with regard to parallelizability. We uncovered several issues preventing parallel execution of INET models. We analyzed those issues and developed solutions allowing INET models to be run in parallel. A case study shows the feasibility of our approach. Although there are parts of the model suite that we have not yet investigated and the performance can still be improved, the results show a parallelization speedup for most configurations. The source code of our implementation is available through our web site at code.comsys.rwth-aachen.de. Comment: Published in: A. Förster, C. Sommer, T. Steinbach, M. Wählisch (Eds.), Proc. of the 1st OMNeT++ Community Summit, Hamburg, Germany, September 2, 2014, arXiv:1409.0093, 2014
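
    As a rough illustration of the kind of restructuring such parallelization constraints call for (a generic sketch, not the paper's actual changes to INET), the following OMNeT++-style C++ fragment replaces a direct C++ method call into a peer module with a message sent over a gate; direct cross-module calls are one of the patterns that prevent partitioned, parallel execution. Module, gate, and message names are illustrative, and the gates and the delayed channel connecting them would be declared in an accompanying NED file (not shown).

        // Hypothetical sketch: modules interact via messages instead of direct method calls.
        // The channel delay declared in NED also provides the lookahead needed by
        // conservative parallel synchronization.
        #include <omnetpp.h>
        using namespace omnetpp;

        class Requester : public cSimpleModule {
          protected:
            virtual void initialize() override {
                // Instead of calling a method on the peer module directly (which breaks
                // once the peer lives in another partition), send a message over a gate.
                send(new cMessage("request"), "out");
            }
            virtual void handleMessage(cMessage *msg) override {
                EV << "received " << msg->getName() << "\n";
                delete msg;
            }
        };
        Define_Module(Requester);

        class Responder : public cSimpleModule {
          protected:
            virtual void handleMessage(cMessage *msg) override {
                delete msg;
                send(new cMessage("reply"), "out");  // answer via the network, not via a direct call
            }
        };
        Define_Module(Responder);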

    Automated optimization of discrete event simulations without knowing the model

    No full text
    Modeling and simulation is an essential element in the research and development of new concepts and products in any domain. The demand for the development of more and more complex systems drives the complexity of the simulation models as well. If such simulations are not executed efficiently, overly long execution times hamper the conduct of necessary experiments. We identified two major types of resources that must be used efficiently to reduce simulation runtime. On the one hand, multiple computing instances (e.g., CPU cores) can be used to distribute the workload and perform independent computations simultaneously. On the other hand, workload stemming from redundant computations can be avoided altogether by exploiting the presence of unused main memory to store intermediate results. We observe that, in the typical development cycle of products and simulation models, neither the time and resources nor the required expertise are available to apply sophisticated runtime optimization manually. We conclude that it is of utmost importance to investigate approaches that speed up simulation automatically. The most prevalent challenge of automating optimization is that, at the time of researching and developing the acceleration concepts and tools, the model is not yet available, and we need to assume that the model will not be implemented for a specific optimization technique. Hence, our methodologies need to be devised without the model at hand; the model can only be used by the finally implemented optimization tool at its runtime. In this thesis, we investigate how computer simulations can be automatically accelerated using each of the two optimization potentials mentioned above (multiple computing instances or available memory) without the model being provided at the time of researching the concepts and developing the tools. For the utilization of multiple computing instances, we devise methodologies to automatically derive data dependencies directly from the model implementation itself, enabling the discovery of more independent, and hence parallelizable, work items. We investigate doing so either statically, while compiling the model implementation, or dynamically, while the simulation runs. The gathered knowledge can be used not only in conservatively synchronized parallel simulations but also enables dynamically switching to more optimistic paradigms. For the utilization of available memory, we explore the opportunity to avoid redundant computations altogether. We base this on the observation that such redundancies occur frequently in many simulations and simulation parameter studies. Our approach to automatically avoiding redundant computations operates on almost arbitrary input code and is hence generally applicable, especially in the modeling and simulation domain. By combining the two optimization vectors, we unleash the full power of both at the same time. We demonstrate that our approaches are able to accelerate the simulations in a parameter study by a factor of more than 600, such that the entire study, including fixed setup and teardown efforts, can execute more than 200 times faster than without optimization. We conclude that the methodologies discussed in this thesis demonstrate the potential to speed up simulations without prior knowledge about the model to be optimized.
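
    The memory-for-computation trade-off described above amounts to automated memoization of intermediate results. The following self-contained C++ sketch illustrates only the general principle (it is not the thesis's tool, which operates on almost arbitrary input code): results of a deterministic, expensive computation are cached in main memory keyed by their inputs, so repeated evaluations across a parameter study hit the cache instead of being recomputed. Function and parameter names are placeholders.

        // Generic illustration of trading main memory for recomputation time:
        // cache results of a pure, expensive computation keyed by its inputs.
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        // Stand-in for an expensive, deterministic simulation step.
        static double expensiveStep(int scenario, int seed) {
            double acc = 0.0;
            for (int i = 0; i < 1000000; ++i)
                acc += static_cast<double>((scenario * 31 + seed + i) % 97);
            return acc;
        }

        static double memoizedStep(int scenario, int seed) {
            static std::unordered_map<std::uint64_t, double> cache;  // lives in unused main memory
            const std::uint64_t key =
                (static_cast<std::uint64_t>(scenario) << 32) | static_cast<std::uint32_t>(seed);
            auto it = cache.find(key);
            if (it != cache.end())
                return it->second;                 // redundant computation avoided
            double result = expensiveStep(scenario, seed);
            cache.emplace(key, result);
            return result;
        }

        int main() {
            // A parameter study often re-evaluates identical sub-computations.
            for (int run = 0; run < 3; ++run)
                for (int scenario = 0; scenario < 4; ++scenario)
                    std::cout << memoizedStep(scenario, 42) << "\n";  // computed once, reused afterwards
            return 0;
        }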

    Comparing the ns-3 Propagation Models

    No full text

    Runtime Efficient Event Scheduling in Multi-threaded Network Simulation

    No full text
    Developing an efficient parallel simulation framework for multiprocessor systems is hard. A primary concern is the considerable amount of parallelization overhead imposed on the event handling routines of the simulator. Besides complex event scheduling algorithms, the main sources of overhead are thread synchronization and locking of shared data. Thus, compared to sequential simulation, the overhead of parallelization may easily outweigh its performance benefits. We introduce two efficient event handling schemes based on our parallel-simulation extension Horizon for OMNeT++. First, we present a push-based event handling scheme to minimize the overhead of thread synchronization and locking. Second, we complement this scheme with a novel event scheduling algorithm that significantly reduces the overhead of parallel event scheduling. Lastly, we prove the correctness of the scheduling algorithm. Our evaluation reveals a total reduction of the event handling overhead of up to 16x.
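
    Since the push-based scheme is only described at a high level here, the following C++ sketch is a generic illustration of the idea rather than Horizon's actual implementation: a scheduler thread pushes independently executable events directly into per-worker queues, so workers never contend on a single shared event list and locking is limited to one short critical section per hand-off. All names are illustrative.

        // Generic sketch of push-based event hand-off: the scheduler pushes work to
        // per-worker queues instead of workers pulling from one shared, contended list.
        #include <condition_variable>
        #include <functional>
        #include <iostream>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        struct WorkerQueue {
            std::mutex m;
            std::condition_variable cv;
            std::queue<std::function<void()>> events;
            bool done = false;
        };

        static void workerLoop(WorkerQueue &q) {
            for (;;) {
                std::function<void()> ev;
                {
                    std::unique_lock<std::mutex> lock(q.m);
                    q.cv.wait(lock, [&] { return q.done || !q.events.empty(); });
                    if (q.events.empty()) return;   // done and fully drained
                    ev = std::move(q.events.front());
                    q.events.pop();
                }
                ev();                               // event handler runs without holding any lock
            }
        }

        int main() {
            const int numWorkers = 4;
            std::vector<WorkerQueue> queues(numWorkers);
            std::vector<std::thread> workers;
            for (int i = 0; i < numWorkers; ++i)
                workers.emplace_back(workerLoop, std::ref(queues[i]));

            // "Scheduler": push independent events round-robin to the workers.
            for (int ev = 0; ev < 16; ++ev) {
                WorkerQueue &q = queues[ev % numWorkers];
                {
                    std::lock_guard<std::mutex> lock(q.m);
                    q.events.push([ev] { std::cout << "handled event " << ev << "\n"; });
                }
                q.cv.notify_one();
            }
            for (auto &q : queues) {
                { std::lock_guard<std::mutex> lock(q.m); q.done = true; }
                q.cv.notify_one();
            }
            for (auto &t : workers) t.join();
            return 0;
        }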

    Spectrum Aware Virtual Coordinates Assignment and Routing in Multihop Cognitive Networks

    No full text
    We propose Spectrum Aware Virtual Coordinates (SAViC) for multihop cognitive radio networks (CRNs) to facilitate geographic routing. The proposed virtual coordinates (VCs) of any two secondary users reflect both the geographic distance and the opportunistic spectrum availability between them. As a result, geographic routing is able to detour around areas affected by licensed users or to cut through areas with more available spectrum. To match different spectrum occupation patterns of the primary users, two versions of SAViC are designed, based on channel utility and on the primary user's sojourn time, respectively. Simulations show that the proposed virtual coordinates enable geographic routing to achieve a high success rate of path construction. When the duty cycle on the licensed channel is heterogeneous across the network, the channel-utility-based virtual coordinates allow geographic routing to outperform a state-of-the-art geographic routing protocol by 40% in packet delivery ratio. When the channel utility is identical at every secondary node but the primary users' sojourn times differ across secondary users, SAViC based on the primary user's sojourn time achieves significantly shorter delay than other virtual coordinates.
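
    The abstract does not give the exact coordinate construction, so the sketch below illustrates only the underlying idea with an assumed metric: each hop's cost combines geometric distance with a penalty for poor spectrum availability, and greedy forwarding picks the neighbor that minimizes this combined "virtual" distance to the destination. The weighting factor and the availability model are placeholders, not SAViC's actual formulas.

        // Illustrative (not SAViC's actual metric): greedy geographic forwarding over a
        // "virtual" distance that mixes Euclidean distance with spectrum availability.
        #include <cmath>
        #include <iostream>
        #include <vector>

        struct Node {
            double x, y;
            double availability;  // fraction of time the licensed channel is free at this node (0..1)
        };

        // Assumed combined metric: geometric distance stretched where spectrum is scarce.
        static double virtualDistance(const Node &a, const Node &b, double alpha = 2.0) {
            double geo = std::hypot(a.x - b.x, a.y - b.y);
            double scarcity = 1.0 - 0.5 * (a.availability + b.availability);  // 0 = always free, 1 = never free
            return geo * (1.0 + alpha * scarcity);
        }

        // Greedy next-hop choice: the neighbor with the smallest virtual distance to the destination.
        static int nextHop(const Node &current, const std::vector<Node> &neighbors, const Node &dest) {
            int best = -1;
            double bestDist = virtualDistance(current, dest);  // only accept progress toward dest
            for (size_t i = 0; i < neighbors.size(); ++i) {
                double d = virtualDistance(neighbors[i], dest);
                if (d < bestDist) { bestDist = d; best = static_cast<int>(i); }
            }
            return best;  // -1 means no neighbor makes progress (local minimum)
        }

        int main() {
            Node current{0, 0, 0.9}, dest{10, 0, 0.9};
            std::vector<Node> neighbors{
                {3, 1, 0.2},   // geometrically closer to dest, but the channel is mostly busy
                {2, -1, 0.9},  // slightly less progress, but far better spectrum availability
            };
            std::cout << "forward to neighbor " << nextHop(current, neighbors, dest) << "\n";
            return 0;
        }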