
    SimAnMo — A parallelized runtime model generator

    In this article, we present the novel features of the recent version of SimAnMo, the Simulated Annealing Modeler. The tool creates models that correlate the size of one input parameter of an application with the corresponding runtime, thus allowing SimAnMo to predict runtimes for larger input sizes. A focus lies on applications whose runtime grows exponentially with the input parameter size. Such programs are of high interest, for example, in cryptanalysis, where the practical security of traditional and post-quantum secure schemes is analyzed. However, SimAnMo also generates reliable models for the widespread case of polynomial runtime behavior and for the important case of factorial runtime growth. SimAnMo's model generation is based on a parallelized simulated annealing procedure that heuristically minimizes the costs of a model; these costs may rely on different quality metrics. Insights into SimAnMo's software design and its usage are provided. We demonstrate the quality of SimAnMo's models for different algorithms from various application fields and show that our approach also works well on ARM architectures.
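    The abstract does not include code, so the following is only a minimal sketch of the general idea: fitting an exponential runtime model by simulated annealing over measured (input size, runtime) pairs. The measurement data, the model form c * b**n, the relative-error cost, and all parameter values are illustrative assumptions and do not reflect SimAnMo's actual implementation or interfaces.

```python
import math
import random

# Hypothetical measurements: (input size n, measured runtime in seconds).
measurements = [(20, 0.011), (24, 0.19), (28, 3.1), (32, 52.0)]

def cost(params):
    """Mean relative error of the model c * b**n against the measurements."""
    c, b = params
    if c <= 0 or b <= 1:
        return float("inf")
    return sum(abs(c * b**n - t) / t for n, t in measurements) / len(measurements)

def simulated_annealing(start, steps=50_000, t0=1.0, cooling=0.9995):
    current, current_cost = start, cost(start)
    best, best_cost = current, current_cost
    temperature = t0
    for _ in range(steps):
        # Propose a small random perturbation of the model parameters.
        candidate = (current[0] * math.exp(random.gauss(0, 0.05)),
                     current[1] * math.exp(random.gauss(0, 0.01)))
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        # Always accept improvements; accept worse candidates with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current, current_cost = candidate, candidate_cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        temperature *= cooling
    return best, best_cost

(c, b), err = simulated_annealing(start=(1e-6, 1.5))
print(f"model: {c:.3e} * {b:.4f}**n   mean relative error: {err:.3%}")
# Extrapolate to an input size larger than any that was measured.
print(f"predicted runtime for n = 40: {c * b**40:.1f} s")
```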

    Load-balancing for multi-physics simulations

    In many real-world scenarios, several different kinds of physics appear together in the same system. In order to predict the behavior of such a system, they need to be combined into one multi-physics simulation. Simulations using partitioned coupling approaches have proved to be especially efficient in terms of resource usage and development costs. They divide the simulation domain into distinct subdomains based on the occurring physics and then solve these separately using single-physics solvers. This makes them well suited for execution on modern supercomputers, since they can profit from the massive parallelism available, albeit only when the available cores are distributed in accordance with the load of the single-physics solvers. Otherwise, resources are wasted and the run-time increases unnecessarily. The most commonly used approach to this problem is to estimate the load of the single-physics solvers by comparing their degrees of freedom and then scaling the number of cores accordingly. This thesis proposes a new approach based on empirical performance analysis. By employing machine learning techniques, predictive models for the run-time of the single-physics solvers are created. Based on these models, ideal core assignments are derived by solving an integer optimization problem. To generate the models, two approaches are considered: the first creates several different regression models and picks the best-fitting one, whereas the second uses neural networks to approximate the solver run-time. Both allow us to incorporate new parameters into the models in addition to the number of cores and the degrees of freedom. This enables generalization to previously unseen parameter combinations, for example, new discretization levels. For a simple test case, the regression approach successfully predicts the solver run-time with high accuracy, leading to performance improvements of over 40% compared to the old load-balancing approach. When multiple parameters are considered, the neural network approach generally outperforms the regression approach.
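    As an illustration of the described idea, the following sketch fits a simple log-linear regression model of solver run-time for each of two coupled solvers and then searches integer core splits that minimize the slower of the two concurrently running solvers. The timing samples, the solver names, the model form, and the brute-force search are illustrative assumptions, not the thesis's actual models or optimization formulation.

```python
import numpy as np

# Hypothetical timing samples per solver: (cores, degrees of freedom, runtime in s).
samples = {
    "fluid": [(2, 1e5, 80.0), (4, 1e5, 43.0), (8, 2e5, 48.0), (16, 2e5, 27.0)],
    "solid": [(2, 2e4, 30.0), (4, 2e4, 17.0), (8, 4e4, 22.0), (16, 4e4, 13.0)],
}

def fit_runtime_model(data):
    """Least-squares fit of log(t) = a + b*log(cores) + c*log(dofs)."""
    X = np.array([[1.0, np.log(p), np.log(d)] for p, d, _ in data])
    y = np.log([t for _, _, t in data])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda p, d: float(np.exp(coeffs @ [1.0, np.log(p), np.log(d)]))

models = {name: fit_runtime_model(data) for name, data in samples.items()}

def best_assignment(total_cores, dofs):
    """Exhaustively search integer core splits that minimize the slower solver."""
    best = None
    for fluid_cores in range(1, total_cores):
        solid_cores = total_cores - fluid_cores
        # With partitioned coupling the solvers run concurrently, so the coupled
        # step is dominated by the slower of the two predicted run-times.
        t = max(models["fluid"](fluid_cores, dofs["fluid"]),
                models["solid"](solid_cores, dofs["solid"]))
        if best is None or t < best[1]:
            best = ((fluid_cores, solid_cores), t)
    return best

assignment, predicted = best_assignment(total_cores=24,
                                        dofs={"fluid": 1e5, "solid": 2e4})
print(f"cores (fluid, solid): {assignment}, predicted step time: {predicted:.1f} s")
```

    The brute-force loop stands in for the integer optimization problem mentioned in the abstract; for more than two solvers or larger core budgets, an integer programming solver would be the natural replacement.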