
    Algebraic Multigrid for Meshfree Methods

    This thesis deals with the development of a new Algebraic Multigrid (AMG) method for the solution of linear systems arising from Generalized Finite Difference Methods (GFDM). In particular, we consider the Finite Pointset Method (FPM), a meshfree Lagrangian method based on GFDM. Being meshfree, FPM handles moving geometries and free surfaces in a natural way and does not require the generation of a mesh before the actual simulation. In industrial use cases the linear systems often become large, so that classical linear solvers, whose convergence rate depends on the discretization size, become the bottleneck of the overall simulation run time. Multigrid methods have proven to be very efficient linear solvers in the domain of mesh-based methods: their convergence is independent of the discretization size, yielding a run time that scales only linearly with the problem size. As this thesis shows, AMG methods are therefore natural candidates for the solution of the linear systems arising in FPM, but they need to be tuned to the specific characteristics of GFDM. These include the properties of the individual matrices, which we also analyze, and the change of the matrices across time steps, which poses a greater difficulty than in mesh-based methods. We evaluate the new method on academic and real-world examples, both with a single process and with multiple (MPI) processes. The AMG methods developed in this thesis achieve a speed-up of up to 33x over the classical linear solvers and therefore allow much more accurate simulations in the future.
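    To illustrate the multigrid idea at the heart of the thesis, here is a minimal sketch of a single two-grid correction cycle in Python. This is not the AMG variant developed in the thesis; the prolongation operator P, the damped-Jacobi smoother, and all parameter values are illustrative assumptions.

        import numpy as np

        def two_grid_cycle(A, b, x, P, nu=2, omega=0.6):
            """One two-grid cycle: pre-smooth, coarse-grid correct, post-smooth.

            A: system matrix, b: right-hand side, x: current iterate,
            P: prolongation operator mapping coarse to fine unknowns (assumed given).
            """
            D_inv = 1.0 / np.diag(A)
            for _ in range(nu):                        # pre-smoothing (damped Jacobi)
                x = x + omega * D_inv * (b - A @ x)
            A_c = P.T @ A @ P                          # Galerkin coarse-level operator
            r_c = P.T @ (b - A @ x)                    # restricted residual
            x = x + P @ np.linalg.solve(A_c, r_c)      # coarse-grid correction
            for _ in range(nu):                        # post-smoothing
                x = x + omega * D_inv * (b - A @ x)
            return x

    In a full AMG solver this cycle is applied recursively (the coarse solve becomes another two-grid cycle), and P is constructed from the matrix entries alone, which is what makes the method algebraic.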

    Algorithms for Triangles, Cones & Peaks

    Three different geometric objects are at the center of this dissertation: triangles, cones, and peaks. In computational geometry, triangles are the most basic shape for planar subdivisions. In particular, Delaunay triangulations are widely used in manifold applications in engineering, geographic information systems, telecommunication networks, etc. We present two novel parallel algorithms to construct the Delaunay triangulation of a given point set. Yao graphs are geometric spanners that connect each point of a given set to its nearest neighbor in each of $k$ cones drawn around it. They are used to aid the construction of Euclidean minimum spanning trees or in wireless networks for topology control and routing. We present the first implementation of an optimal $\mathcal{O}(n \log n)$-time sweepline algorithm to construct Yao graphs. One metric to quantify the importance of a mountain peak is its isolation. Isolation measures the distance between a peak and the closest point of higher elevation. Computing this metric from high-resolution digital elevation models (DEMs) requires efficient algorithms. We present a novel sweep-plane algorithm that can calculate the isolation of all peaks on Earth in mere minutes.
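    The Yao graph definition above translates directly into a brute-force construction, sketched below in Python. This naive $\mathcal{O}(k n^2)$ version merely illustrates the definition; it is not the optimal $\mathcal{O}(n \log n)$ sweepline algorithm implemented in the dissertation.

        import math

        def yao_graph(points, k=6):
            """Connect each point to its nearest neighbor in each of k
            equal-angle cones around it (brute force over all pairs)."""
            edges = []
            for i, (px, py) in enumerate(points):
                nearest = [None] * k                   # (distance, neighbor) per cone
                for j, (qx, qy) in enumerate(points):
                    if i == j:
                        continue
                    angle = math.atan2(qy - py, qx - px) % (2 * math.pi)
                    cone = int(angle / (2 * math.pi / k))
                    dist = math.hypot(qx - px, qy - py)
                    if nearest[cone] is None or dist < nearest[cone][0]:
                        nearest[cone] = (dist, j)
                edges += [(i, n[1]) for n in nearest if n is not None]
            return edges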

    Real-Time High-Quality Image-to-Mesh Conversion for Finite Element Simulations

    Technological advances in medical imaging have enabled the acquisition of images accurately describing biological tissues. Finite Element (FE) methods on these images provide the means to simulate biological phenomena such as brain shift registration, respiratory organ motion, blood flow pressure in vessels, etc. FE methods require that the domain of the tissues be discretized by simpler geometric elements, such as triangles in two dimensions, tetrahedra in three, and pentatopes in four. This exact discretization is called a mesh. The accuracy and speed of FE methods depend on the quality and fidelity of the mesh used to describe the biological object. Elements of bad quality introduce numerical errors and slow down solver convergence, and analyses based on poor-fidelity meshes do not yield accurate results, especially near the surface. In this dissertation, we present the theory and the implementation of both a sequential and a parallel Delaunay meshing technique for 3D and, for the first time, 4D space-time domains. Our method provably guarantees that the mesh is a faithful representation of the multi-tissue domain in the topological and geometric sense. Moreover, we show that our method generates graded elements of bounded radius-edge and aspect ratio, which renders our technique suitable for Finite Element analysis. Notable features of our implementation are its speed and scalability. The single-threaded performance of our 3D code is faster than that of state-of-the-art open-source meshing tools. Experimental evaluation shows more than 82% weak scaling efficiency for up to 144 cores, reaching a rate of more than 14.3 million elements per second. This is the first 3D parallel Delaunay refinement method to achieve such performance on either distributed- or shared-memory architectures. Lastly, this dissertation is the first to develop and examine the sequential and parallel high-quality and high-fidelity meshing of general space-time 4D multi-tissue domains.
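    For reference, the radius-edge ratio mentioned in the quality guarantee is commonly defined as the circumradius of an element divided by its shortest edge, with refinement ensuring a uniform bound (the exact bound proved in the dissertation is not restated here):

        \rho(t) = \frac{R(t)}{\ell_{\min}(t)} \le \bar{\rho} \quad \text{for every element } t,

    where $R(t)$ is the circumradius and $\ell_{\min}(t)$ the shortest edge of element $t$. Bounding this ratio rules out most badly shaped elements (in 3D, all but slivers), which is why it appears alongside the aspect-ratio bound.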

    Application of General Semi-Infinite Programming to Lapidary Cutting Problems

    We consider a volume maximization problem arising in the gemstone cutting industry. The problem is formulated as a general semi-infinite program (GSIP) and solved using an interior-point method developed by Stein. It is shown that the convexity assumption needed for the convergence of the algorithm can be satisfied by appropriate modelling. Clustering techniques are used to reduce the number of container constraints, which is necessary to make the subproblems practically tractable. An iterative process consisting of GSIP optimization and adaptive refinement steps is then employed to obtain an optimal solution which is also feasible for the original problem. Some numerical results based on real-world data are also presented.
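    For readers unfamiliar with the problem class, a general semi-infinite program has the generic form below; the concrete objective and constraints of the lapidary problem are not reproduced here. Its defining feature is that the (infinite) index set of the constraints depends on the decision variable itself:

        \max_{x \in X} f(x) \quad \text{s.t.} \quad g(x, y) \le 0 \quad \text{for all } y \in Y(x).

    Read against the abstract, $x$ describes the cut design, $f(x)$ its volume, and the container constraints require every point $y$ of the designed stone, a set $Y(x)$ that moves with the design, to lie inside the rough stone.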

    Big Data Computing for Geospatial Applications

    The convergence of big data and geospatial computing has brought forth challenges and opportunities for Geographic Information Science with regard to geospatial data management, processing, analysis, modeling, and visualization. This book highlights recent advancements in integrating new computing approaches, spatial methods, and data management strategies to tackle geospatial big data challenges, while also demonstrating opportunities for using big data in geospatial applications. Crucial to the advancements highlighted in this book are the integration of computational thinking and spatial thinking and the transformation of abstract ideas and models into concrete data structures and algorithms.

    The Construction of Optimized High-Order Surface Meshes by Energy-Minimization

    Despite the increasing popularity of high-order methods in computational fluid dynamics, their application to practical problems remains challenging. In order to exploit the advantages of high-order methods on geometrically complex computational domains, coarse curved meshes, i.e. high-order representations of the geometry, are necessary. This dissertation presents a strategy for the generation of curved high-order surface meshes. The mesh generation method combines least-squares fitting with energy functionals, which approximate physical bending and stretching energies, in an incremental energy-minimizing fitting strategy. Since the energy weighting is reduced in each increment, the resulting surface representation features high accuracy, while the beneficial influence of the energy-minimization is retained. The presented method aims at enabling the utilization of the superior convergence properties of high-order methods by facilitating the construction of coarser meshes, while ensuring accuracy by allowing an arbitrary choice of the geometric approximation order. Results show surface meshes of remarkable quality, even for very coarse meshes representing complex domains, e.g. blood vessels.
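    The incremental fitting strategy described above can be summarized, in assumed generic notation (the dissertation's exact functionals are not restated), as a sequence of regularized least-squares problems

        \mathbf{c}^{(k)} = \arg\min_{\mathbf{c}} \sum_i \big\| S(\mathbf{c}; u_i) - p_i \big\|^2 + \lambda_k E(\mathbf{c}), \qquad \lambda_0 > \lambda_1 > \dots,

    where $S(\mathbf{c}; u_i)$ evaluates the high-order surface with coefficients $\mathbf{c}$ at parameter $u_i$, the $p_i$ are samples of the target geometry, and $E$ approximates the bending and stretching energies. Decreasing the weight $\lambda_k$ in each increment shifts the balance from smoothness toward geometric accuracy, as described above.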

    Proceedings of the 8th Python in Science Conference

    The SciPy conference provides a unique opportunity to learn about and affect what is happening in the realm of scientific computing with Python. Attendees have the opportunity to review the available tools and how they apply to specific problems. By providing a forum for developers to share their Python expertise with the wider commercial, academic, and research communities, this conference fosters collaboration and facilitates the sharing of software components, techniques, and a vision for high-level language use in scientific computing.

    Efficient Knowledge Extraction from Structured Data

    Knowledge extraction from structured data aims at identifying valid, novel, potentially useful, and ultimately understandable patterns in the data. The core step of this process is the application of a data mining algorithm in order to produce an enumeration of particular patterns and relationships in large databases. Clustering is one of the major data mining tasks and aims at grouping the data objects into meaningful classes (clusters) such that the similarity of objects within a cluster is maximized and the similarity of objects from different clusters is minimized. In this thesis, we advance the state-of-the-art data mining algorithms for analyzing structured data types.

    We describe the development of innovative solutions for hierarchical data mining. The EM-based hierarchical clustering method ITCH (Information-Theoretic Cluster Hierarchies) is designed to meet four different challenges: (1) to guide the hierarchical clustering algorithm to identify only meaningful and valid clusters; (2) to represent the content of each cluster in the hierarchy by an intuitive description, e.g. a probability density function; (3) to consistently handle outliers; and (4) to avoid difficult parameter settings. ITCH is built on a hierarchical variant of the information-theoretic principle of Minimum Description Length (MDL). Interpreting the hierarchical cluster structure as a statistical model of the dataset, it can be used for effective data compression by Huffman coding. Thus, the achievable compression rate induces a natural objective function for clustering, which automatically satisfies all four goals mentioned above. The genetic-based hierarchical clustering algorithm GACH (Genetic Algorithm for finding Cluster Hierarchies) overcomes the problem of getting stuck in a local optimum by a beneficial combination of genetic algorithms, information theory, and model-based clustering.

    Besides hierarchical data mining, we also contribute to more complex data structures, namely objects that consist of mixed-type attributes and skyline objects. The algorithm INTEGRATE performs integrative mining of heterogeneous data, one of the major challenges of the next decade, through a unified view of numerical and categorical information in clustering. Once more supported by the MDL principle, INTEGRATE guarantees usability on real-world data. For skyline objects we developed SkyDist, a similarity measure for comparing different skyline objects, which is a first step towards performing data mining on this kind of data structure. Applied in a recommender system, for example, SkyDist can point the user to alternative car types exhibiting a price/mileage behavior similar to that of the original query. For mining graph-structured data, we developed different approaches that are able to detect patterns in static as well as dynamic networks. We confirmed the practical feasibility of our novel approaches on large real-world case studies ranging from medical brain data to biological yeast networks.

    In the second part of this thesis, we focus on accelerating the knowledge extraction process through an intelligent adoption of Graphics Processing Units (GPUs). GPUs have evolved from simple devices for display signal preparation into powerful coprocessors that not only support typical computer graphics tasks but can also be used for general numeric and symbolic computations. As a major advantage, GPUs provide extreme parallelism combined with high memory bandwidth at low cost. We propose algorithms, called CUDA-DClust and CUDA-k-means, for computationally expensive data mining tasks like similarity search and different clustering paradigms, designed for the highly parallel environment of a GPU. We define a multi-dimensional index structure that is particularly suited to support similarity queries under the restricted programming model of a GPU, and we demonstrate the superiority of our algorithms running on the GPU over their conventional CPU counterparts in terms of efficiency.
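    To make the clustering objective above concrete, here is a minimal Lloyd-style k-means sketch in Python. It is a plain CPU illustration of the paradigm, not the thesis's CUDA-k-means; note that the distance and assignment computations are independent per point, which is exactly the parallelism a GPU implementation exploits.

        import numpy as np

        def kmeans(X, k, iters=100, seed=0):
            """Lloyd's algorithm: alternate nearest-center assignment and
            center recomputation until the centers stop moving."""
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                # squared distance of every point to every center: (n, k)
                dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
                labels = dists.argmin(axis=1)
                new_centers = np.array([
                    X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                    for j in range(k)
                ])
                if np.allclose(new_centers, centers):
                    break
                centers = new_centers
            return labels, centers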

    Systems and control: 21st Benelux meeting, 2002, March 19-21, Veldhoven, The Netherlands

    Book of abstracts.