
    A Unified Framework for Parallel Anisotropic Mesh Adaptation

    Finite-element methods are a critical component of the design and analysis procedures of many (bio-)engineering applications. Mesh adaptation is one of the most crucial of these components, since it discretizes the physics of the application at a relatively low cost to the solver. Highly scalable parallel mesh adaptation methods for High-Performance Computing (HPC) are essential to meet the ever-growing demand for higher-fidelity simulations. Moreover, the continuous growth in the complexity of HPC systems requires a systematic approach to exploit their full potential. Anisotropic mesh adaptation captures features of the solution at multiple scales while minimizing the required number of elements. However, it also introduces new challenges on top of mesh generation. In addition, the increased complexity of the targeted cases requires departing from traditional surface-constrained approaches and utilizing CAD (Computer-Aided Design) kernels. Alongside these functionality requirements is the need to take advantage of ubiquitous multi-core machines. More importantly, the parallel implementation needs to handle the ever-increasing complexity of the mesh adaptation code. In this work, we develop a parallel mesh adaptation method that utilizes a metric-based approach for generating anisotropic meshes. Moreover, we enhance our method by interfacing with a CAD kernel, thus enabling its use on complex geometries. We evaluate our method both with fixed-resolution benchmarks and within a simulation pipeline, where the resolution of the discretization increases incrementally. With the Telescopic Approach for scalable mesh generation as a guide, we propose a parallel mesh adaptation method at the node (multi-core) level that is expected to scale efficiently to the upcoming exascale machines. To facilitate an effective implementation, we introduce an abstract layer between the application and the runtime system that enables the use of task-based parallelism for concurrent mesh operations. Our evaluation indicates results comparable to state-of-the-art methods for fixed-resolution meshes, both in terms of performance and quality. The integration with an adaptive pipeline offers promising results for the capability of the proposed method to function as part of an adaptive simulation. Moreover, our abstract tasking layer allows the separation of different aspects of the implementation without any impact on the functionality of the method.
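    The abstract tasking layer is described above only at a high level; as a minimal sketch of the general idea (all names such as TaskLayer and refine_cavity are illustrative, and the thread-pool backend is an assumption rather than the thesis's actual runtime system), one could imagine something like the following:

```python
# Illustrative sketch only: a thin abstraction layer that decouples mesh
# operations from the underlying runtime, here backed by a thread pool.
from concurrent.futures import ThreadPoolExecutor, wait


class TaskLayer:
    """Abstract layer between the mesh-adaptation code and the runtime."""

    def __init__(self, workers=4):
        self._pool = ThreadPoolExecutor(max_workers=workers)

    def submit(self, operation, *args):
        # The adaptation kernels only see 'submit'; the runtime backend can
        # change (threads, tasks, ...) without touching the mesh operations.
        return self._pool.submit(operation, *args)

    def barrier(self, futures):
        # Wait for a batch of concurrent mesh operations to complete.
        wait(futures)


def refine_cavity(cavity_id):
    # Placeholder for a local mesh operation (split/collapse/swap/smooth).
    return cavity_id


if __name__ == "__main__":
    layer = TaskLayer(workers=4)
    # Independent cavities can be processed concurrently as tasks.
    futures = [layer.submit(refine_cavity, c) for c in range(16)]
    layer.barrier(futures)
```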

    An adaptive space-time boundary element method for impulsive wave propagation in elastodynamics

    Wave propagation in natural or man-made bodies is an important problem in civil, electronic, and ocean engineering. Common examples of wave problems include earthquake wave modeling, ocean wave modeling, soil-structure interaction, geological prospecting, and acoustic or radio wave diffraction. The Boundary Element Method (BEM) is a widely used numerical method for solving such problems in both science and engineering. However, conventional BEM modeling of wave problems encounters several difficulties. Firstly, the method is expensive, since influence matrices are computed at each time step and BEM solutions at every previous time step have to be stored. Secondly, if large time steps are used, inaccuracies arise in the BEM solutions; but if small time steps are used, computational costs become impractical. Thirdly, the dimensionless space-time ratio must be limited to a narrow range to produce a stable solution. In this thesis, we attack these problems by introducing adaptive schemes and mesh refinement. Instead of using uniform meshes and uniform time steps, error indicators are employed to locate high-gradient areas; mesh refinement in space-time is then used to improve the resolution in those areas only. Another strategy is to introduce the space-time concept to track moving wave fronts. In wave problems, wave fronts move in space-time, and high gradients arise both in space and in time. It is thus inadequate to refine the mesh in space only, because there are high gradients in time as well. Hence, besides a local mesh refinement scheme employed in space, local time stepping is also used to improve the accuracy and efficiency of the algorithm. This adaptive scheme is implemented in the C language and used to solve scalar and elastodynamic 2D and 3D wave propagation problems in open and closed domains. Gradient-based and resolution-based error indicators are employed to locate the moving high-gradient areas, and a spatial mesh refinement scheme together with local time stepping is used to refine those areas and achieve higher accuracy. The adaptive BEM solver is 1.4 to 1.8 times faster than the conventional BEM solver, and it is also more stable. We also parallelize the BEM solver to further improve its efficiency. Compared with the non-parallel code, a speed-up factor of four is achieved on an 8-processor Linux cluster. This suggests that substantial further gains can be obtained if a larger parallel computer is available.
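    As a rough, hypothetical illustration of how a gradient-based error indicator could drive local refinement (a generic sketch, not the thesis's actual indicator, refinement rule, or data structures):

```python
# Illustrative sketch of a gradient-based error indicator flagging elements
# for refinement near a moving wave front; not the thesis's actual scheme.
import numpy as np


def flag_for_refinement(node_values, element_nodes, element_sizes, tol=0.5):
    """Flag elements whose scaled solution jump exceeds a fraction of the maximum."""
    indicators = np.empty(len(element_nodes))
    for e, (a, b) in enumerate(element_nodes):
        # Crude gradient surrogate on a 1D boundary element: jump times sqrt(size).
        indicators[e] = abs(node_values[b] - node_values[a]) * element_sizes[e] ** 0.5
    return indicators > tol * indicators.max()


# Hypothetical usage: refine (and apply smaller local time steps on) only the
# flagged elements, leaving the smooth regions on the coarse mesh.
u = np.array([0.0, 0.0, 0.1, 0.9, 1.0, 1.0])   # nodal solution values
elems = [(i, i + 1) for i in range(5)]          # simple 1D connectivity
h = np.full(5, 0.2)                             # element sizes
print(flag_for_refinement(u, elems, h))
```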

    The Fully Nonlocal, Finite-Temperature, Adaptive 3D Quasicontinuum Method for Bridging Across Scales

    Computational modeling of metallic materials across various length and time scales has been on the rise since the advent of efficient, fast computing machines. From atomistic methods like molecular statics and dynamics at the nanoscale to continuum mechanics modeled by finite element methods at the macroscale, various techniques have been established that describe and predict the mechanics of materials. Many recent technologies, however, fall into a gap between length scales (referred to as mesoscales), with microstructural features on the order of nanometers (thereby requiring full atomistic resolution) but large representative volumes on the order of micrometers (beyond the scope of molecular dynamics). There is an urgent need to predict material behavior using scale-bridging techniques that build up from the atomic level and reach larger length and time scales. To this end, there is extensive ongoing research in building hierarchical and concurrent scale-bridging techniques to master the gap between atomistics and the continuum, but robust, adaptive schemes with finite-temperature modeling at realistic length and time scales are still missing. In this thesis, we use the quasicontinuum (QC) method, a concurrent scale-bridging technique that extends atomistic accuracy to significantly larger length scales by reducing the full atomic ensemble to a small set of representative atoms and using interpolation to recover the motion of all lattice sites where full atomistic resolution is not necessary. We develop automatic model adaptivity by adding mesh refinement and adaptive neighborhood updates to the new fully nonlocal energy-based 3D QC framework, which allows for automatic refinement to full atomistic resolution around regions of interest such as nanovoids and moving lattice defects. By comparison to molecular dynamics (MD), we show that these additions allow for a successful and computationally efficient coarse graining of atomistic ensembles while maintaining the same atomistic accuracy. We further extend the fully nonlocal QC formulation to finite temperature (termed hotQC) using the principle of maximum entropy in statistical mechanics and averaging the thermal motion of atoms to obtain a temperature-dependent free energy using numerical quadrature. This hotQC formulation implements recently developed optimal summation rules and successfully captures temperature-dependent elastic constants and thermal expansion. We report for the first time the influence of temperature on force artifacts and conclude that our novel finite-temperature adaptive nonlocal QC shows minimal force artifacts and outperforms existing formulations. We also highlight the influence of quadrature in phase space on simulation outcomes. We study 3D grain boundaries in the nonlocal hotQC framework (previously limited to single crystals) by modeling coarse-grained symmetric-tilt grain boundaries in coincidence site lattice (CSL) based bicrystals. We predict relaxed energy states of various Σ-boundaries with reasonable accuracy by comparing grain boundary energies to MD simulations, and we outline a framework for modeling polycrystalline materials that surpasses both the spatial and temporal limitations of traditional MD.
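    The core coarse-graining idea, recovering the motion of all lattice sites by interpolating from a small set of representative atoms, can be sketched in a highly simplified 1D form (the linear shape functions and all names here are illustrative assumptions, not the thesis's 3D implementation):

```python
# Illustrative 1D sketch of the QC idea: only representative atoms (repatoms)
# carry degrees of freedom; displacements of the remaining lattice sites are
# recovered by interpolation between repatoms.
import numpy as np


def interpolate_displacements(site_coords, repatom_coords, repatom_disp):
    """Linear (hat-function) interpolation of repatom displacements onto all sites."""
    return np.interp(site_coords, repatom_coords, repatom_disp)


# Full lattice of 101 sites, but only 5 repatoms hold unknowns.
sites = np.linspace(0.0, 100.0, 101)
repatoms = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
u_rep = np.array([0.0, 0.1, 0.4, 0.1, 0.0])   # repatom displacements

u_all = interpolate_displacements(sites, repatoms, u_rep)
# Model adaptivity would insert additional repatoms near a defect until the
# local mesh resolves every lattice site (full atomistic resolution).
```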

    High resolution simulations of the long-term evolution of jets from young stellar objects using parallel algorithms

    Outflows and jets are an integral part of the formation of young stars and are found to be commonplace in all regions where star formation is known to occur. Much work has been done in the development of computational fluid dynamic methods for the simulation of these outflows, in an attempt to gain a greater insight into the processes taking place in their formation. Observational data presents key characteristics of such outflows that can be used to determine the validity of any computational model. Here, we have developed a sophisticated parallelisation method for splitting up a jet simulation across a Beowulf-type computer cluster using a message-passing approach. The parallelised code allows us to run simulations for much longer and on larger domains than was possible with the original serial code. This allows us to investigate the development of some important characteristics of the computational model over large time-scales with a suitably high resolution. In particular, we investigate the behaviour of the mass-velocity and intensity-velocity relationships for molecular outflows driven by a prompt-entrainment type jet model. Up to now, simulations have indicated good agreement between these characteristics for this model and the observed behaviour of these relationships. However, the short time-scales used did not allow for an evolutionary study of the relationships, and as a result long-term simulations are deemed necessary.
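    A minimal sketch of the kind of message-passing domain decomposition described above, assuming a 1D split of the grid into slabs with ghost-layer exchange via mpi4py (the variable names and slab layout are illustrative, not the original code), might look like this:

```python
# Illustrative sketch: split the simulation grid into slabs, one per MPI rank,
# and exchange one ghost layer with each neighbour every time step.
# Requires an MPI installation and mpi4py; run e.g. with: mpirun -np 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns a local slab plus one ghost row on each side.
nx_local, ny = 64, 128
field = np.zeros((nx_local + 2, ny))
field[1:-1, :] = rank                      # dummy interior data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange boundary layers with neighbouring subdomains.
comm.Sendrecv(sendbuf=field[1, :], dest=left, sendtag=0,
              recvbuf=field[-1, :], source=right, recvtag=0)
comm.Sendrecv(sendbuf=field[-2, :], dest=right, sendtag=1,
              recvbuf=field[0, :], source=left, recvtag=1)
```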

    Bench-Ranking: A Prescriptive Analysis Method for Large Knowledge Graph Queries

    Leveraging relational Big Data (BD) processing frameworks to process large knowledge graphs has generated great interest in optimizing query performance. Modern BD systems, however, are complicated data systems whose configurations notably affect performance. Benchmarking different frameworks and configurations provides the community with best practices for achieving better performance. Most of these benchmarking efforts, however, can be classified as descriptive and diagnostic analytics, and there is no standard for comparing them based on quantitative ranking techniques. Moreover, designing mature pipelines for processing big graphs entails additional design decisions that emerge with the non-native (relational) graph processing paradigm, decisions that cannot be made automatically, e.g., the choice of relational schema, partitioning technique, and storage format. In this thesis, we discuss how our work fills this timely research gap. We first show the impact of the trade-offs among these design decisions on the replicability of BD systems' performance when querying large knowledge graphs. We also show the limitations of descriptive and diagnostic analyses of BD frameworks' performance for querying large graphs. We then investigate how to enable prescriptive analytics via ranking functions and multi-dimensional optimization techniques (called "Bench-Ranking"). This approach abstracts away the complexity of descriptive performance analysis, guiding the practitioner directly to actionable, informed decisions. https://www.ester.ee/record=b553332
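    As a rough illustration of what a ranking function over multi-dimensional benchmark results could look like (the dimensions, configurations, and mean-rank aggregation below are assumptions for illustration, not the thesis's actual Bench-Ranking criteria):

```python
# Illustrative sketch: rank each configuration within every evaluation
# dimension, then aggregate the per-dimension ranks into a single score.
from statistics import mean

# Hypothetical runtimes (seconds, lower is better) per configuration and
# per design dimension (e.g., schema, partitioning, storage format).
results = {
    "config_A": {"schema": 12.0, "partitioning": 9.0, "storage": 15.0},
    "config_B": {"schema": 10.0, "partitioning": 11.0, "storage": 9.0},
    "config_C": {"schema": 14.0, "partitioning": 8.0, "storage": 10.0},
}
dimensions = ["schema", "partitioning", "storage"]


def aggregate_ranks(results, dimensions):
    """Average each configuration's per-dimension rank (1 = best)."""
    scores = {cfg: [] for cfg in results}
    for dim in dimensions:
        ordered = sorted(results, key=lambda cfg: results[cfg][dim])
        for rank, cfg in enumerate(ordered, start=1):
            scores[cfg].append(rank)
    return sorted((mean(ranks), cfg) for cfg, ranks in scores.items())


print(aggregate_ranks(results, dimensions))   # best (lowest mean rank) first
```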

    Subject index volumes 1–92


    Discontinuous Galerkin Method Applied to Navier-Stokes Equations

    Discontinuous Galerkin (DG) finite element methods are becoming important techniques for the computational solution of many real-world problems described by differential equations. They combine many attractive features of the finite element and the finite volume methods. These methods have been successfully applied to many important PDEs arising from a wide range of applications. DG methods are highly accurate numerical methods and have considerable advantages over the classical numerical methods available in the literature. DG methods can easily handle meshes with hanging nodes, elements of various types and shapes, and local spaces of different orders. Furthermore, DG methods provide accurate and efficient simulation of physical and engineering problems, especially in settings where the solutions exhibit poor regularity. For these reasons, they have attracted the attention of many researchers working in diverse areas, from computational fluid dynamics, solid mechanics and optimal control to finance, biology and geology. In this talk, we give an overview of the main features of DG methods and their extensions. We first introduce the DG method for solving classical differential equations. Then, we extend the methods to other equations such as the Navier-Stokes equations. The Navier-Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe, and air flow around a wing.
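    As a concrete reminder of what the discretization looks like, the standard semi-discrete DG formulation for a one-dimensional scalar conservation law u_t + f(u)_x = 0 (a textbook form, not taken verbatim from the talk) reads:

```latex
% Semi-discrete DG form on a mesh of elements K_j = (x_{j-1/2}, x_{j+1/2}):
% find u_h, piecewise polynomial of degree k on each K_j, such that for every
% test function v_h in the same broken polynomial space
\begin{equation}
\int_{K_j} \partial_t u_h \, v_h \, dx
- \int_{K_j} f(u_h)\, \partial_x v_h \, dx
+ \hat{f}_{j+1/2}\, v_h\!\left(x_{j+1/2}^-\right)
- \hat{f}_{j-1/2}\, v_h\!\left(x_{j-1/2}^+\right) = 0,
\end{equation}
% where \hat{f} is a numerical flux (e.g., local Lax-Friedrichs) built from the
% interface values of the two neighbouring elements.
```

    The numerical flux is the only coupling between neighbouring elements, which is what makes hanging nodes, mixed element types, and locally varying polynomial orders straightforward to accommodate.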

    Ionization Feedback in Massive Star Formation

    Understanding the origin of high-mass stars is central to modern astrophysics. We shed light on this problem using novel radiation-hydrodynamic simulations that consistently follow the gravitational collapse of a massive molecular cloud, the subsequent build-up and fragmentation of the accretion disk surrounding the nascent star, and, for the first time, the interaction between its intense UV radiation field and the infalling material. We show that ionization feedback can neither stop protostellar mass growth nor suppress fragmentation. We present a consistent picture of the formation and evolution of H II regions that explains the observed morphology, time variability, and ages of ultracompact H II regions, solving the long-standing lifetime problem.