
    Verification of an ADER-DG method for complex dynamic rupture problems

    We present results of thorough benchmarking of an arbitrary high-order derivative discontinuous Galerkin (ADER-DG) method on unstructured meshes for advanced earthquake dynamic rupture problems. We verify the method by comparison to well-established numerical methods in a series of verification exercises, including dipping and branching fault geometries, heterogeneous initial conditions, bimaterial interfaces, and several rate-and-state friction laws. We show that the combination of meshing flexibility and high-order accuracy of the ADER-DG method makes it a competitive tool to study earthquake dynamics in geometrically complicated setups.
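
    For context, one widely used member of the rate-and-state family exercised in such benchmarks is the Dieterich-Ruina friction law with aging-law state evolution. The formulation below is a standard textbook statement; its symbols follow common convention and are not taken from the paper itself.

```latex
% Dieterich-Ruina rate-and-state friction with the aging law (standard form,
% not taken from the paper): V is slip rate, \theta the state variable,
% \mu_0 the reference friction coefficient at reference slip rate V_0,
% a and b the direct- and evolution-effect parameters, D_c the characteristic slip.
\mu(V,\theta) = \mu_0
  + a \ln\!\left(\frac{V}{V_0}\right)
  + b \ln\!\left(\frac{V_0\,\theta}{D_c}\right),
\qquad
\frac{\mathrm{d}\theta}{\mathrm{d}t} = 1 - \frac{V\,\theta}{D_c}
```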

    Scaling and Resilience in Numerical Algorithms for Exascale Computing

    The first Petascale supercomputer, the IBM Roadrunner, went online in 2008. Ten years later, the community is looking ahead to a new generation of Exascale machines. During the decade that has passed, several hundred Petascale-capable machines have been installed worldwide, yet despite the abundance of machines, applications that scale to their full size remain rare. Large clusters now routinely have 50,000+ cores, and some have several million. This extreme level of parallelism, which allows a theoretical compute capacity in excess of a million billion operations per second, turns out to be difficult to exploit in many applications of practical interest. Processors often end up spending more time waiting for synchronization, communication, and other coordinating operations to complete than actually computing. Component reliability is another challenge facing HPC developers: if even a single processor among many thousands fails, the user is forced to restart a traditional application, wasting valuable compute time. These issues collectively manifest themselves as low parallel efficiency, resulting in wasted energy and computational resources. Future performance improvements are expected to continue to come largely from increased parallelism. One may therefore expect that the difficulties currently faced when scaling applications to Petascale machines will only worsen, making it difficult for scientists to harness the full potential of Exascale computing.

    The thesis comprises two parts, each consisting of several chapters that discuss modifications of numerical algorithms to make them better suited for future Exascale machines. In the first part, the Parareal parallel-in-time integration method is considered for the scalable numerical solution of partial differential equations. We propose a new adaptive scheduler that optimizes parallel efficiency by minimizing the time-subdomain length without making the communication of time subdomains too costly. In conjunction with an appropriate preconditioner, we demonstrate that it is possible to obtain time-parallel speedup on the nonlinear shallow water equation, beyond what is possible using conventional spatial domain-decomposition techniques alone. The first part concludes with the proposal of a new method for constructing parallel-in-time integration schemes better suited for convection-dominated problems.

    In the second part, new ways of mitigating the impact of hardware failures are developed and presented. The topic is introduced with a new fault-tolerant variant of Parareal. In the chapter that follows, a C++ library for multi-level checkpointing is presented. The library uses lightweight in-memory checkpoints, protected through the use of erasure codes, to mitigate the impact of failures by decreasing the overhead of checkpointing and minimizing the compute work lost. Erasure codes have the unfortunate property that if more data blocks are lost than parity blocks were created, the data is effectively unrecoverable. The final chapter contains a preliminary study on partial information recovery for incomplete checksums. Under the assumption that some meta-knowledge exists about the structure of the encoded data, we show that the lost data may be recovered, at least partially. This result is of interest not only in HPC but also in data centers, where erasure codes are widely used to protect data efficiently.
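
    To make the parallel-in-time idea concrete, the following is a minimal Python sketch of the plain Parareal iteration with generic coarse and fine propagators. The function names and the toy decay problem are illustrative assumptions, not code from the thesis, and neither the adaptive scheduler nor any fault tolerance is shown.

```python
import numpy as np

def parareal(u0, t0, t1, n_slices, coarse, fine, n_iter):
    """Plain Parareal iteration (illustrative sketch, not the thesis code).

    coarse(u, ta, tb) and fine(u, ta, tb) both propagate the state u from
    ta to tb; coarse is cheap and applied serially, fine is expensive and
    would run in parallel across time slices in a real implementation.
    """
    ts = np.linspace(t0, t1, n_slices + 1)

    # Initial guess: one serial sweep with the coarse propagator.
    u = [u0]
    for k in range(n_slices):
        u.append(coarse(u[k], ts[k], ts[k + 1]))

    for _ in range(n_iter):
        # Fine and coarse propagation from the current iterate
        # (the fine solves are the embarrassingly parallel step).
        f = [fine(u[k], ts[k], ts[k + 1]) for k in range(n_slices)]
        g_old = [coarse(u[k], ts[k], ts[k + 1]) for k in range(n_slices)]

        # Serial correction sweep: new coarse + (fine - old coarse).
        u_new = [u0]
        for k in range(n_slices):
            g_new = coarse(u_new[k], ts[k], ts[k + 1])
            u_new.append(g_new + f[k] - g_old[k])
        u = u_new
    return u

# Toy usage: du/dt = -u, coarse = one Euler step, fine = many Euler steps.
def euler(u, ta, tb, steps):
    dt = (tb - ta) / steps
    for _ in range(steps):
        u = u + dt * (-u)
    return u

sol = parareal(np.array([1.0]), 0.0, 2.0, 8,
               coarse=lambda u, a, b: euler(u, a, b, 1),
               fine=lambda u, a, b: euler(u, a, b, 100),
               n_iter=4)
print(sol[-1], np.exp(-2.0))  # Parareal end value vs. exact solution
```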
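
    The unrecoverability property of erasure codes mentioned above can be illustrated with a single XOR parity block. This toy Python example is an assumption for illustration and not the encoding scheme used by the checkpointing library.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (toy single-parity scheme)."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]     # three data blocks
parity = xor_blocks(data)              # one parity block

# Losing one data block is fine: XOR the survivors with the parity block.
lost = 1
survivors = [blk for i, blk in enumerate(data) if i != lost]
assert xor_blocks(survivors + [parity]) == data[lost]

# Losing two data blocks with only one parity block leaves a single equation
# for two unknowns: classically the data is declared unrecoverable, which is
# what motivates the partial-recovery study in the final chapter.
```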

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. Comment: Major revision, to appear in SIAM Review.

    Modeling Megathrust Earthquakes Across Scales: One‐way Coupling From Geodynamics and Seismic Cycles to Dynamic Rupture

    Taking the full complexity of subduction zones into account is important for realistic modeling and hazard assessment of subduction zone seismicity and associated tsunamis. Studying seismicity requires numerical methods that span a large range of spatial and temporal scales. We present the first coupled framework that resolves subduction dynamics over millions of years and earthquake dynamics down to fractions of a second. Using a two‐dimensional geodynamic seismic cycle (SC) model, we model 4 million years of subduction followed by cycles of spontaneous megathrust events. At the initiation of one such SC event, we export the self‐consistent fault and surface geometry, fault stress and strength, and heterogeneous material properties to a dynamic rupture (DR) model. Coupling leads to spontaneous dynamic rupture nucleation, propagation, and arrest with the same spatial characteristics as in the SC model. It also results in a similar material‐dependent stress drop, although dynamic slip is significantly larger. The DR event shows a high degree of complexity, featuring various rupture styles and speeds, precursory phases, and fault reactivation. Compared to a coupled model with homogeneous material properties, accounting for realistic lithological contrasts doubles the maximum slip, introduces local pulse‐like rupture episodes, and relocates the peak slip from near the downdip limit of the seismogenic zone to the updip limit. When an SC splay fault is included in the DR model, the rupture prefers the splay over the shallow megathrust, although wave reflections do activate the megathrust afterward.
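
    The one‐way coupling described above is, in essence, a data hand‐off at the moment an SC event initiates. The Python sketch below illustrates such a hand‐off under stated assumptions: the field names, array layout, parameter values, and linear interpolation are hypothetical choices for illustration and do not reflect the authors' actual models or file formats.

```python
import numpy as np

# Hypothetical snapshot exported from the seismic-cycle (SC) model at event
# initiation; all field names and values are assumptions made for illustration.
sc_snapshot = {
    "fault_coords": np.linspace(0.0, 200e3, 2001),   # along-fault distance [m]
    "shear_stress": np.full(2001, 30e6),              # initial shear stress [Pa]
    "normal_stress": np.full(2001, 120e6),            # normal stress [Pa]
    "static_friction": np.full(2001, 0.6),            # static friction coefficient
}

def initialize_dynamic_rupture(dr_fault_coords, sc):
    """Interpolate SC fault state onto the dynamic-rupture (DR) fault mesh.

    One-way hand-off: the DR model receives initial stress and strength
    from the SC model but feeds nothing back.
    """
    def onto_dr(field):
        return np.interp(dr_fault_coords, sc["fault_coords"], sc[field])

    tau0 = onto_dr("shear_stress")
    sigma_n = onto_dr("normal_stress")
    mu_s = onto_dr("static_friction")
    return {
        "initial_shear_stress": tau0,
        "initial_normal_stress": sigma_n,
        "static_strength": mu_s * sigma_n,   # fault strength [Pa]
    }

dr_mesh = np.linspace(0.0, 200e3, 4001)      # finer DR fault discretization
dr_init = initialize_dynamic_rupture(dr_mesh, sc_snapshot)
```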