
    Classical and all-floating FETI methods for the simulation of arterial tissues

    High-resolution and anatomically realistic computer models of biological soft tissues play a significant role in understanding the function of cardiovascular components in health and disease. However, the computational effort to handle fine grids that resolve the geometries, as well as sophisticated tissue models, is very challenging. One way to derive a strongly scalable parallel solution algorithm is to consider finite element tearing and interconnecting (FETI) methods. In this study we propose and investigate the application of FETI methods to simulate the elastic behavior of biological soft tissues. As one particular example we choose the artery, which, like most other biological tissues, is characterized by anisotropic and nonlinear material properties. We compare two specific FETI approaches, classical and all-floating, and investigate the numerical behavior of different preconditioning techniques. Compared to classical FETI, the all-floating approach offers advantages not only in implementation but, in many cases, also in the convergence of the global iterative solution method. This behavior is illustrated with numerical examples. We present results of linear elastic simulations to show convergence rates, as expected from the theory, and results from the more sophisticated nonlinear case, where we apply a well-known anisotropic model to the realistic geometry of an artery. Although FETI methods are well suited to artery simulations, we also discuss some limitations concerning their dependence on material parameters. (Comment: 29 pages)
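
    As background, the dual interface problem that both FETI variants solve can be sketched in textbook notation (not necessarily this paper's exact symbols): the domain is torn into N subdomains, continuity is enforced by Lagrange multipliers, and the primal unknowns are eliminated.

```latex
% Subdomain equilibrium and interface continuity with multipliers \lambda:
\begin{aligned}
  K^{(s)} u^{(s)} &= f^{(s)} - {B^{(s)}}^{\top} \lambda, \qquad s = 1,\dots,N,\\
  \textstyle\sum_{s=1}^{N} B^{(s)} u^{(s)} &= 0.
\end{aligned}
% Eliminating u^{(s)} with pseudoinverses K^{(s)+} (floating subdomains have
% singular K^{(s)}) yields the dual interface problem solved iteratively:
\begin{aligned}
  F\,\lambda - G\,\alpha = d, \qquad G^{\top}\lambda &= e,\\
  F = \textstyle\sum_{s} B^{(s)} K^{(s)+} {B^{(s)}}^{\top}, \qquad
  d = \textstyle\sum_{s} B^{(s)} K^{(s)+} f^{(s)}&,
\end{aligned}
% where G collects the rigid-body modes of the floating subdomains. In
% all-floating FETI, Dirichlet conditions are also enforced via multipliers,
% so every subdomain floats and all kernels share the same rigid-body structure.
```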

    On 3-D inelastic analysis methods for hot section components (base program)

    A 3-D Inelastic Analysis Method program is described. This program consists of a series of new computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for the streamlined analysis of: (1) combustor liners, (2) turbine blades, and (3) turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. Three computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (Marc-Hot Section Technology), and BEST (Boundary Element Stress Technology), have been developed and are briefly described in this report.

    Patch-wise Quadrature of Trimmed Surfaces in Isogeometric Analysis

    This work presents an efficient quadrature rule for shell analysis fully integrated into CAD by means of Isogeometric Analysis (IGA). General CAD models may contain trimmed features such as holes, intersections, and cut-offs. IGA should therefore be able to deal with these models in order to fulfil its promise of closing the gap between design and analysis. Trimming operations violate the tensor-product structure of the underlying Non-Uniform Rational B-spline (NURBS) basis functions and of typical quadrature rules. Existing efficient patch-wise quadrature rules take the actual knot vectors into account but are determined in 1D and extended to higher dimensions by a tensor product; they are therefore not directly applicable to trimmed structures. The method proposed here extends patch-wise quadrature rules to trimmed surfaces, which significantly reduces the number of quadrature points. Geometrically linear and nonlinear benchmarks of plane, plate and shell structures are investigated. The results are compared to a standard trimming procedure, and good performance is observed.
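
    One widely used ingredient for building reduced rules on trimmed cells is moment fitting: fix a set of quadrature points in the untrimmed parametric cell and solve for weights that reproduce the exact moments of a basis over the trimmed domain. The 1D numpy sketch below is my illustration of that idea; the toy basis and "trimmed" interval are assumptions, not the paper's construction.

```python
import numpy as np

def moment_fit_weights(points, basis, moments):
    """Solve sum_j w_j p_i(x_j) = m_i for quadrature weights on fixed
    points, given target moments m_i of each basis function p_i over
    the trimmed cell."""
    A = np.array([[p(x) for x in points] for p in basis])  # (n_basis, n_pts)
    w, *_ = np.linalg.lstsq(A, moments, rcond=None)
    return w

# Toy example: integrate polynomials up to degree 3 exactly over the
# "trimmed" interval [0, 0.7] cut out of the parametric cell [0, 1],
# with points placed in the untrimmed cell.
basis = [lambda x, k=k: x**k for k in range(4)]
moments = np.array([0.7**(k + 1) / (k + 1) for k in range(4)])  # exact moments
points = np.linspace(0.05, 0.95, 4)
w = moment_fit_weights(points, basis, moments)

f = lambda x: 1.0 + 2*x + 3*x**2 - x**3
approx = sum(wj * f(xj) for wj, xj in zip(w, points))
exact = 0.7 + 0.7**2 + 0.7**3 - 0.7**4 / 4
print(approx, exact)  # agree to machine precision for cubic f
```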

    A statistical approach for fracture property realization and macroscopic failure analysis of brittle materials

    Unlike their ductile counterparts, brittle materials lack energy-dissipative mechanisms such as plastic deformation to rebalance localized stresses; brittle fracture mechanics is therefore associated with the catastrophic failure of purely brittle and quasi-brittle materials at immeasurable and measurable deformation scales, respectively. This failure, in the form of macroscale sharp cracks, is highly dependent on the composition of the material microstructure. The complexity of this relationship, and of the resulting crack patterns, is exacerbated under highly dynamic loading conditions. A robust brittle material model must account for multiscale inhomogeneity as well as the probabilistic distribution of the constituents that cause material heterogeneity and influence the complex mechanisms of the material's dynamic fracture response. Continuum-based homogenization is carried out via finite element-based micromechanical analysis of material neighborhoods geometrically described as sampling windows (i.e., statistical volume elements). These volume elements are defined such that they are representative of the material while propagating material randomness from the inherent microscale defects. Homogenization yields spatially defined elastic and fracture-related effective properties, which are used to statistically characterize the material. This spatial characterization is made possible by performing homogenization at prescribed spatial locations that collectively comprise a non-uniform spatial grid, allowing each effective material property to be mapped to an associated spatial location. Through stochastic decomposition of the empirical covariance of the sampled effective material property, the Karhunen-Loeve method is used to generate realizations of a continuous, spatially-correlated random field approximation that preserve the statistics of the material from which they are derived. Aspects of modeling both isotropic and anisotropic brittle materials from a statistical viewpoint are investigated to determine how each influences the macroscale fracture response of these materials under highly dynamic conditions. The effect of modeling a material explicitly, by representations of discrete multiscale constituents, and/or implicitly, by continuum representation of material properties, is studied to determine how each model influences the resulting fracture response. For the implicit material representations, both white noise (i.e., Weibull-based, spatially uncorrelated) and colored noise (i.e., Karhunen-Loeve, spatially correlated) random fields are employed.
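
    The Karhunen-Loeve step can be summarized concisely: decompose the covariance of the sampled effective property into eigenmodes and superpose the leading modes with independent Gaussian amplitudes. Below is a minimal numpy sketch on a 1D grid, assuming an illustrative exponential covariance kernel rather than the empirically derived covariance used in the thesis.

```python
import numpy as np

# Grid and an assumed (illustrative) exponential covariance kernel.
x = np.linspace(0.0, 1.0, 200)
ell, sigma = 0.1, 1.0                        # correlation length, std. dev.
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Spectral decomposition of the covariance matrix, modes sorted by energy.
lam, phi = np.linalg.eigh(C)
idx = np.argsort(lam)[::-1]
lam, phi = lam[idx], phi[:, idx]

# Truncate to the modes carrying ~99% of the total variance.
m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99) + 1

# One realization: field = mean + sum_k sqrt(lam_k) * phi_k * xi_k,
# with xi_k ~ N(0, 1) i.i.d.; repeated draws of xi give new realizations
# that share the prescribed covariance.
xi = np.random.default_rng(0).standard_normal(m)
field = 0.0 + phi[:, :m] @ (np.sqrt(lam[:m]) * xi)
```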

    Asynchronous Teams and Tasks in a Message Passing Environment

    As the discipline of scientific computing grows, so too does the "skills gap" between increasingly complex scientific applications and the efficient algorithms they require. Increasing demand for computational power on the march towards exascale requires innovative approaches, and closing the skills gap avoids the many pitfalls that lead to poor utilisation of resources and wasted investment. This thesis tackles two challenges: asynchronous algorithms for parallel computing and fault tolerance. First, I present a novel asynchronous task invocation methodology for Discontinuous Galerkin codes called enclave tasking. The approach modifies the parallel ordering of tasks, which allows for efficient scaling on dynamic meshes up to 756 cores. It ensures high levels of concurrency and intermixes tasks of different computational properties; critical tasks along domain boundaries are prioritised to overlap computation and communication. The second contribution is the teaMPI library, which forms teams of MPI processes that exchange consistency data through an asynchronous "heartbeat". In contrast to previous approaches, teaMPI operates fully asynchronously with reduced overhead, and it can detect individually slow or failing ranks as well as inconsistent data among replicas. Finally, I provide an outlook on how asynchronous teams using enclave tasking can be combined into an advanced team-based diffusive load balancing scheme. Both concepts are integrated into and contribute towards the ExaHyPE project, a next-generation code that solves hyperbolic equation systems on dynamically adaptive Cartesian grids.
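
    To make the heartbeat idea concrete, here is a minimal mpi4py sketch in the spirit of teaMPI, not its actual API: ranks are split into two replica teams, and each rank exchanges a timestamped checksum of its state with its partner via non-blocking messages, polling instead of blocking. The team assignment, tag, and checksum scheme are all illustrative assumptions.

```python
from mpi4py import MPI
import time, zlib

world = MPI.COMM_WORLD
team_id = world.rank % 2                      # assumed: two teams, even world size
team = world.Split(color=team_id, key=world.rank)   # team-local communicator
replica = world.rank + 1 if team_id == 0 else world.rank - 1  # partner rank

state = b"solver state at current timestep"   # placeholder application data
beat = (time.time(), zlib.crc32(state))       # timestamp + consistency checksum

send_req = world.isend(beat, dest=replica, tag=7)
recv_req = world.irecv(source=replica, tag=7)

# ... do useful computation here; poll the heartbeat rather than blocking ...
done, other = recv_req.test()                 # a real library keeps polling
if done:
    t, crc = other
    if crc != beat[1]:
        print(f"rank {world.rank}: replica state diverged")
    elif abs(beat[0] - t) > 1.0:
        print(f"rank {world.rank}: replica is running slow")
send_req.wait()
```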

    Communication-Avoiding Algorithms for a High-Performance Hyperbolic PDE Engine

    The study of waves has always been an important subject of research. Earthquakes, for example, have a direct impact on the daily lives of millions of people, while gravitational waves reveal insight into the composition and history of the Universe. Despite being tackled traditionally by different fields of physics, these phenomena have in common that they are modelled the same way mathematically: as a system of hyperbolic partial differential equations (PDEs). The ExaHyPE project ("An Exascale Hyperbolic PDE Engine") translates this similarity into a software engine that can be quickly adapted to simulate a wide range of hyperbolic PDEs. ExaHyPE's key idea is that the user only specifies the physics while the engine takes care of the parallelisation and the interplay of the underlying numerical methods. Consequently, a first simulation code for a new hyperbolic PDE can often be realised within a few hours, a task that traditionally takes weeks, months, or even years for researchers starting from scratch. My main contribution to ExaHyPE is the development of the core infrastructure. This comprises the development and implementation of ExaHyPE's solvers and adaptive mesh refinement procedures, its MPI+X parallelisation, and high-level aspects of ExaHyPE's application-tailored code generation, which allows ExaHyPE to be adapted to model many different hyperbolic PDE systems. Like any high-performance computing code, ExaHyPE has to tackle the challenges of the coming exascale computing era, notably network communication latencies and the growing memory wall. In this thesis, I propose memory-efficient realisations of ExaHyPE's solvers that avoid data movement, together with a novel task-based MPI+X parallelisation concept that hides network communication behind computation in dynamically adaptive simulations.
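
    The generic pattern for hiding communication behind computation can be sketched with a non-blocking halo exchange: start the exchange, update the interior while messages are in flight, then finish the boundary once the halos have arrived. This mpi4py sketch is my illustration of the pattern, not ExaHyPE's task-based implementation.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.rank, comm.size
left, right = (rank - 1) % size, (rank + 1) % size   # periodic 1D neighbours

u = np.random.rand(1024)                  # local patch of the solution
halo_l, halo_r = np.empty(1), np.empty(1)

# Start the halo exchange without blocking.
reqs = [
    comm.Isend(u[:1],  dest=left),   comm.Isend(u[-1:], dest=right),
    comm.Irecv(halo_l, source=left), comm.Irecv(halo_r, source=right),
]

# Interior update needs no halo data, so it overlaps with the messages.
unew = u.copy()
unew[1:-1] = 0.5 * (u[:-2] + u[2:])

# Boundary update only after the halos have arrived.
MPI.Request.Waitall(reqs)
unew[0]  = 0.5 * (halo_l[0] + u[1])
unew[-1] = 0.5 * (u[-2] + halo_r[0])
u = unew
```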

    Parallel Multiscale Contact Dynamics for Rigid Non-spherical Bodies

    The simulation of large numbers of rigid bodies of non-analytical shape or vastly varying size which collide with each other is computationally challenging. The fundamental problem is the identification of all contact points between all particles at every time step. In the Discrete Element Method (DEM), this is particularly difficult for particles of arbitrary geometry that exhibit sharp features (e.g. rock granulates). While most codes avoid non-spherical or non-analytical shapes due to the computational complexity, we introduce an iterative contact detection method for triangulated geometries. The new method improves on a naive brute-force approach that checks all possible geometric constellations of contact and thus exhibits heavy execution branching. Our iterative approach has limited branching and performs many floating-point operations per processed byte, making it suitable for modern Single Instruction Multiple Data (SIMD) CPU hardware. As only the naive brute-force approach is robust and always yields a correct solution, we propose a hybrid that combines the best of both worlds to produce fast and robust contacts. Within the DEM workflow, we furthermore propose a multilevel tree-based data structure that holds all particles in the domain in grids on multiple scales. Grids reduce the total computational complexity of the simulation. The data structure is combined with the DEM phases to form a single-touch tree-based traversal that identifies contact points between particle pairs and introduces concurrency during particle comparisons in one multiscale grid sweep. Finally, a reluctant adaptivity variant is introduced, which enables an improved time stepping scheme with larger time steps than standard adaptivity while still minimising grid administration overhead. Four different parallelisation strategies that exploit multicore architectures are discussed for this triad of methodological ingredients. Each parallelisation scheme exhibits unique behaviour depending on the grid and particle geometry at hand, and fusing them into a task-based parallelisation workflow yields promising speedups. Our work shows that new computer architectures can push the boundary of DEM computability, but only if the right data structures and algorithms are chosen.
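
    The complexity reduction that grids buy can be illustrated with a single-level broad phase: hash each particle into a uniform grid so that only particles in the same or neighbouring cells are compared, instead of all O(n^2) pairs. This sketch uses bounding spheres as a stand-in; the thesis works with triangulated geometries and multiscale grids.

```python
import numpy as np
from collections import defaultdict

def broad_phase_pairs(centres, radii, cell_size):
    """Return candidate contact pairs using a uniform grid. cell_size
    should be at least the largest bounding-sphere diameter so that
    checking the 27 surrounding cells suffices."""
    grid = defaultdict(list)
    for i, c in enumerate(centres):
        grid[tuple((c // cell_size).astype(int))].append(i)

    pairs = set()
    for (cx, cy, cz), members in grid.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                        for i in members:
                            # i < j deduplicates; spheres touch when the
                            # centre distance is below the radius sum.
                            if i < j and np.linalg.norm(
                                    centres[i] - centres[j]) <= radii[i] + radii[j]:
                                pairs.add((i, j))
    return pairs  # candidates for the expensive narrow-phase triangle checks

rng = np.random.default_rng(1)
centres = rng.random((500, 3)) * 10.0
radii = np.full(500, 0.1)
print(len(broad_phase_pairs(centres, radii, cell_size=0.5)))
```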