19 research outputs found

    Novel algorithms of greedy-type for probability density estimation as well as linear and nonlinear inverse problems

    Greedy-type algorithms are a popular tool for sparse approximation. Sparse approximations of functions are beneficial for several reasons. We therefore develop greedy algorithms for two classes of problems: probability density estimation and inverse problems. The development of a greedy algorithm for density estimation was motivated by the need to implement a simulation algorithm for so-called nonwovens, a particular type of technical textile widely used in industrial applications. We propose such a simulation algorithm, which requires an estimate of the probability density of the fiber directions inside a nonwoven. These directions can be obtained from real nonwovens by a CT scan, which yields millions of data points. Using a probability density generated by the newly developed greedy algorithm reduces the computation time of the simulation algorithm from 80 days to 150 minutes, a speed-up by a factor of roughly 750, compared with a standard method for density estimation, namely kernel density estimation. For inverse problems, we introduce two generalizations of the Regularized Functional Matching Pursuit (RFMP) algorithm, a greedy algorithm for linear inverse problems. For the first generalization, called RWFMP, an improved theoretical analysis is possible. Furthermore, the RWFMP reduces the computation time of the RFMP by a factor of 10 without losing much accuracy. The second generalization is an RFMP for nonlinear inverse problems. We apply it to the nonlinear inverse gravimetric problem, which is concerned with reconstructing information about the interior of a planetary body from gravitational data. We obtain very good numerical results concerning the accuracy, the sparsity, and the interpretability of the reconstructions.
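    The greedy selection step at the heart of matching-pursuit-type methods such as RFMP can be illustrated with a short, self-contained sketch. The Python code below is a generic, plain matching pursuit on a finite dictionary; the dictionary, data, and stopping rule are illustrative assumptions, and the actual RFMP additionally incorporates a regularization term and works with the operator of the inverse problem.

```python
# Hedged sketch: a plain matching-pursuit-style greedy approximation.
# Dictionary, data, and stopping rule are illustrative only.
import numpy as np

def matching_pursuit(y, dictionary, n_iter=50, tol=1e-6):
    """Greedily approximate y as a sparse combination of dictionary columns.

    y          : (m,) data vector
    dictionary : (m, n) matrix whose columns are candidate functions
                 evaluated at the data points (assumed normalized)
    """
    residual = y.copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        # Pick the column most correlated with the current residual.
        correlations = dictionary.T @ residual
        k = np.argmax(np.abs(correlations))
        coeffs[k] += correlations[k]
        residual -= correlations[k] * dictionary[:, k]
        if np.linalg.norm(residual) < tol:
            break
    return coeffs, residual

# Toy usage: recover a sparse combination of three dictionary atoms from noisy data.
rng = np.random.default_rng(0)
D = rng.standard_normal((200, 500))
D /= np.linalg.norm(D, axis=0)  # normalize columns
y = D[:, [3, 42, 300]] @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.standard_normal(200)
coeffs, res = matching_pursuit(y, D)
print("selected atoms:", np.nonzero(np.abs(coeffs) > 1e-3)[0], "residual norm:", np.linalg.norm(res))
```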

    Statistical Parameter Selection for Clustering Persistence Diagrams

    In urgent decision-making applications, ensemble simulations are an important way to determine different outcome scenarios based on currently available data. In this paper, we analyze the output of ensemble simulations by considering so-called persistence diagrams, which are reduced representations of the original data motivated by the extraction of topological features. Based on a recently published progressive algorithm for the clustering of persistence diagrams, we determine the optimal number of clusters, and therefore the number of significantly different outcome scenarios, by minimizing established statistical score functions. Furthermore, we present a proof-of-concept prototype implementation of this statistical selection of the number of clusters and report the results of an experimental study in which the implementation was applied to real-world ensemble data sets.
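    The selection idea can be sketched in a few lines: cluster the data for each candidate number of clusters and keep the value that optimizes a statistical score. The Python sketch below uses k-means and the silhouette score on ordinary feature vectors purely as stand-ins; the paper itself clusters persistence diagrams with a progressive algorithm and minimizes its score functions, whereas the silhouette score used here is maximized.

```python
# Hedged sketch: choosing the number of clusters by scanning candidate values
# of k and keeping the one with the best statistical score. k-means and the
# silhouette score stand in for the persistence-diagram clustering of the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def select_n_clusters(X, k_min=2, k_max=10, random_state=0):
    """Return the k in [k_min, k_max] with the best silhouette score."""
    best_k, best_score = k_min, -np.inf
    for k in range(k_min, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit_predict(X)
        score = silhouette_score(X, labels)  # higher is better for the silhouette score
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score

# Toy usage: three well-separated blobs should yield k = 3.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0.0, 5.0, 10.0)])
print(select_n_clusters(X))
```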

    Supercomputing with MPI meets the Common Workflow Language standards: an experience report

    The use of standards-based workflows is still somewhat unusual among high-performance computing users. In this paper we describe the experience of using the Common Workflow Language (CWL) standards to describe the execution, in parallel, of MPI-parallelised applications. In particular, we motivate and describe the simple extension to the specification which was required, as well as our implementation of this extension within the CWL reference runner. We discuss some of the unexpected benefits, such as the simple use of HPC-oriented performance measurement tools and the interfacing of CWL software requirements with HPC module systems. We close with a request for comment from the community on how these features could be adopted within versions of the CWL standards. (Submitted to the 15th Workshop on Workflows in Support of Large-Scale Science, WORKS20.)

    The role of interactive super-computing in using HPC for urgent decision making

    Technological advances are creating exciting new opportunities that have the potential to move HPC well beyond traditional computational workloads. In this paper we focus on the potential for HPC to be instrumental in responding to disasters such as wildfires, hurricanes, extreme flooding, earthquakes, tsunamis, winter weather conditions, and accidents. Driven by the EU-funded H2020 project VESTEC, our research looks to establish HPC as a tool not only capable of simulating disasters once they have happened, but also one which is able to operate in a responsive mode, supporting disaster response teams making urgent decisions in real time. Whilst this has the potential to revolutionise disaster response, it requires the ability to drive HPC interactively, both from the user's perspective and also based upon the arrival of data. As such, interactivity is a critical component in enabling HPC to be exploited in the role of supporting disaster response teams, so that urgent decision makers can make the correct decision first time, every time.

    A Bespoke Workflow Management System for Data-Driven Urgent HPC

    In this paper we present a workflow management system which permits the kinds of data-driven workflows required by urgent computing, namely ones in which new data is integrated into the workflow as a disaster progresses in order to refine the predictions as time goes on. This allows the workflow to adapt to new data at runtime, a capability that most workflow management systems do not possess. The workflow management system was developed for the EU-funded VESTEC project, which aims to fuse HPC with real-time data to support urgent decision making. We first describe an example workflow from the VESTEC project and show why existing workflow technologies do not meet the needs of the project. We then go on to present the design of our Workflow Management System, describe how it is implemented within the VESTEC system, and provide an example of the workflow system in use for a test case.
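    A minimal sketch of the data-driven control loop described above: downstream steps are triggered by the arrival of new data rather than by a fixed, static graph. This is a generic Python illustration, not the VESTEC workflow management system; all names (run_simulation, refine_prediction, workflow_loop) are hypothetical.

```python
# Hedged sketch: a data-driven workflow loop in which newly arriving data
# re-triggers downstream steps at runtime. Generic illustration only.
import queue
import threading
import time

def run_simulation(observation):
    """Stand-in for a simulation job launched with the latest data."""
    return {"forecast": observation * 2}

def refine_prediction(state, result):
    """Fold a new simulation result into the running prediction."""
    state.append(result["forecast"])
    return state

def workflow_loop(ingest_queue, stop_event):
    state = []
    while not stop_event.is_set():
        try:
            observation = ingest_queue.get(timeout=0.5)  # wait for new sensor data
        except queue.Empty:
            continue                                     # no new data yet; keep waiting
        result = run_simulation(observation)             # data arrival drives execution
        state = refine_prediction(state, result)
        print("updated prediction:", state)

# Toy usage: feed three observations, then stop the workflow.
q, stop = queue.Queue(), threading.Event()
worker = threading.Thread(target=workflow_loop, args=(q, stop))
worker.start()
for obs in (1, 2, 3):
    q.put(obs)
    time.sleep(0.2)
stop.set()
worker.join()
```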

    Automatic Differentiation in Multibody Helicopter Simulation

    To a first approximation, helicopters can be modeled by open-loop multibody systems (MBS). For this type of MBS, the joints' degrees of freedom provide a globally valid set of minimal states. We derive the equations of motion in these minimal coordinates and observe that one has to compute Jacobian matrices of the bodies' kinematics with respect to the minimal states. Classically, these Jacobians are derived analytically from a complicated composition of coordinate transformations. In this paper, we present an alternative approach in which the arising Jacobians are computed by automatic differentiation (AD). This makes the implementation of a simulation code for open-loop MBS more efficient, less error-prone, and easier to extend. We also provide ideas on how to include flexible bodies and closed-loop parts in the MBS. To emphasize the applicability of our approach, we provide simulation results for rigid MBS helicopter models.
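    The AD idea can be illustrated with a toy kinematics function. The sketch below uses the JAX library in Python to obtain the Jacobian of a planar two-link arm's forward kinematics with respect to its joint angles; the kinematics function and link lengths are illustrative assumptions and far simpler than the helicopter multibody model.

```python
# Hedged sketch: computing a kinematic Jacobian by automatic differentiation
# instead of deriving it analytically. A planar two-link arm stands in for
# the body kinematics of the multibody model.
import jax
import jax.numpy as jnp

LINK1, LINK2 = 1.0, 0.7  # link lengths of the toy arm (illustrative)

def forward_kinematics(q):
    """End-effector position for joint angles q = (q1, q2)."""
    x = LINK1 * jnp.cos(q[0]) + LINK2 * jnp.cos(q[0] + q[1])
    y = LINK1 * jnp.sin(q[0]) + LINK2 * jnp.sin(q[0] + q[1])
    return jnp.array([x, y])

# jax.jacfwd builds the Jacobian d(position)/d(q) by forward-mode AD.
jacobian = jax.jacfwd(forward_kinematics)

q = jnp.array([0.3, 0.5])
print(forward_kinematics(q))
print(jacobian(q))  # 2x2 Jacobian, no hand-derived coordinate transformations needed
```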

    Mathematische Modelle in der Hubschraubersimulation

    Helicopter simulation requires modelling a highly dynamic system that is decomposed into several subcomponents. Each of these submodels requires careful mathematical modelling. In addition, a suitable solver for the arising differential-algebraic equations has to be selected from the available methods. This talk gives an overview of the mathematical aspects of helicopter simulation, of DLR in general, and of software engineering tools used at DLR.

    The Combination of Real-Time Data, HPC, and Interactive Visualization in the VESTEC project

    Traditionally, HPC has been used to simulate disastrous events such as wildfires, the spread of diseases, or solar outbursts after the event, typically for post-disaster analysis. However, with the increasing availability of high-velocity sensor data and computational resources, as well as the development of elaborate in-situ data analytics and visualization techniques, it is now possible to support urgent decision makers in real time using HPC infrastructure. In this talk, we present the approaches employed in the H2020 FETHPC project VESTEC (Visual Exploration and Sampling Toolkit for Extreme Computing) to tackle the challenges that arise when combining real-time data with HPC infrastructure and interactive visualization, regarding both technology and policies. Whilst the challenges are significant, so are the potential benefits of overcoming them, not only to the HPC community but also to society as a whole.