
    Automated optimization of discrete event simulations without knowing the model

    Modeling and simulation is an essential element in the research and development of new concepts and products in any domain. The demand for ever more complex systems drives the complexity of the simulation models as well. If such simulations are not executed efficiently, overly long execution times hamper the conduct of necessary experiments. We identified two major types of resources that must be used efficiently to reduce simulation runtime. On the one hand, multiple computing instances (e.g., CPU cores) can be used to distribute the workload and perform independent computations simultaneously. On the other hand, workload stemming from redundant computations can be avoided altogether by exploiting unused main memory to store intermediate results. We observe that the development cycle of products and simulation models typically offers neither the time and resources nor the expertise required to apply sophisticated runtime optimization manually. We conclude that it is of utmost importance to investigate approaches that speed up simulation automatically. The most prevalent challenge of automating optimization is that, at the time the acceleration concepts and tools are researched and developed, the model is not yet available, and we must assume that the model will not be implemented with a specific optimization technique in mind. Hence, our methodologies need to be devised without the model at hand; the model can only be used by the finally implemented optimization tool at its runtime. In this thesis, we investigate how computer simulations can be automatically accelerated using each of the two optimization potentials mentioned above (multiple computing instances or available memory) without the model being provided at the time the concepts are researched and the tools are developed.
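The second resource type described above, trading unused main memory for redundant computation, is essentially memoization. The following sketch is purely illustrative and not the thesis tooling; `simulate_step` is a hypothetical stand-in for an expensive, deterministic simulation kernel:

```python
from functools import lru_cache

# Counts how often the "expensive" computation actually runs,
# so the effect of caching is observable.
call_count = 0

@lru_cache(maxsize=None)  # store intermediate results in main memory
def simulate_step(param: int) -> int:
    """Hypothetical deterministic simulation kernel (name is an assumption)."""
    global call_count
    call_count += 1
    return param * param  # stand-in for a costly computation

# A parameter study often revisits the same inputs; the cache
# turns repeated calls into memory lookups instead of recomputation.
results = [simulate_step(p) for p in [1, 2, 1, 2, 3]]
```

With the five calls above, only three distinct inputs are ever computed; the two repeats are served from memory. This only pays off when, as the abstract observes, redundancies occur frequently and the computation is deterministic in its inputs.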
For the utilization of multiple computing instances, we devise methodologies to automatically derive data dependencies directly from the model implementation itself, enabling the discovery of more independent, and hence parallelizable, work items. We investigate the capabilities of doing so either statically, while compiling the model implementation, or dynamically, while the simulation runs. The gathered knowledge can be used not only in conservatively synchronized parallel simulations but also enables dynamically switching to more optimistic paradigms. For the utilization of available memory, we explore the opportunity to avoid redundant computations altogether. We base this on the observation that such redundancies occur frequently in many simulations and simulation parameter studies. Our approach to automatically avoiding redundant computations operates on almost arbitrary input code and is hence generally applicable, especially in the modeling and simulation domain. By combining the two optimization vectors, we unleash the full power of both at the same time. We demonstrate that our approaches can accelerate the simulations in a parameter study by a factor of more than 600, such that the entire study, including fixed setup and teardown efforts, executes more than 200 times faster than without optimization. We conclude that the methodologies discussed in this thesis demonstrate the potential to speed up simulations without prior knowledge of the model to be optimized.
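The parallelization idea above can be sketched in miniature: once dependency analysis has established that a set of work items touches disjoint state, they can be dispatched to multiple computing instances at once. This is an illustrative sketch only (the names `run_item` and `items` are assumptions, not the thesis API), using a thread pool as a stand-in for whatever executor the tooling would use:

```python
from concurrent.futures import ThreadPoolExecutor

def run_item(item: dict) -> tuple:
    """Hypothetical work item: reads only its own state, so it is
    independent of every other item and safe to run concurrently."""
    return item["id"], item["load"] * 2  # stand-in workload

# Four work items that a dependency analysis would classify as
# pairwise independent (no shared data between them).
items = [{"id": i, "load": i} for i in range(4)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_item, items))
```

The hard part, which the thesis addresses, is deriving the independence guarantee automatically from the model implementation rather than asserting it by hand; once it is known, the dispatch itself is mechanical, and a conservative scheduler can fall back to sequential execution whenever a dependency is detected.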
