
    Earthquake Cycle Modelling of Multi-segmented Faults: Dynamic Rupture and Ground Motion Simulation of the 1992 M_w 7.3 Landers Earthquake

    We perform earthquake cycle simulations with the goal of studying the characteristics of source scaling relations and strong ground motions in multi-segmented fault ruptures. The 1992 M_w 7.3 Landers earthquake is chosen as a target earthquake to validate our methodology. The model includes the fault geometry for the three-segmented Landers rupture from the SCEC community fault model, extended at both ends to a total length of 200 km and limited to a depth of 15 km. We assume the faults are governed by rate-and-state (RS) friction, with a heterogeneous, correlated spatial distribution of the characteristic weakening distance Dc. Multiple earthquake cycles on this non-planar fault system are modeled with a quasi-dynamic solver based on the boundary element method, substantially accelerated by implementing a hierarchical-matrix method. The resulting seismic ruptures are recomputed using a fully-dynamic solver based on the spectral element method, with the same RS friction law. The simulated earthquakes nucleate on different sections of the fault and include events similar to the M_w 7.3 Landers earthquake. We obtain slip velocity functions, rupture times, and magnitudes that can be compared to seismological observations. The simulated ground motions are validated by comparison of simulated and recorded response spectra.
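    The rate-and-state friction law mentioned above can be sketched in a few lines. This is a minimal illustration of the standard RS formulation with the aging law; the parameter values (a, b, mu0, V0, Dc) are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Minimal sketch of rate-and-state (RS) friction with the aging law.
# All parameter values are illustrative, not taken from the paper.
def rs_friction(V, theta, a=0.010, b=0.015, mu0=0.6, V0=1e-6, Dc=0.01):
    """Friction coefficient: mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)."""
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

def aging_law(V, theta, Dc=0.01):
    """State evolution d(theta)/dt = 1 - V*theta/Dc (aging law)."""
    return 1.0 - V * theta / Dc

# At steady state theta = Dc/V, the two log terms combine to (a-b)*ln(V/V0),
# so mu reduces to mu0 when sliding at the reference velocity V0.
V_ss = 1e-6                       # slide at V = V0
theta_ss = 0.01 / V_ss            # steady-state value Dc/V
mu_ss = rs_friction(V_ss, theta_ss)
```

    With b > a (as here) the steady-state friction decreases with velocity, the velocity-weakening condition that permits spontaneous rupture nucleation in such models.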

    Comparison of two time-marching schemes for dynamic rupture simulation with a space-domain BIEM

    The boundary integral equation method (BIEM) is one of the important numerical techniques used to simulate geophysical phenomena including the dynamic propagation, nucleation, and sequences of earthquake ruptures. We studied the stability and convergence of two time-marching schemes numerically for 2-D problems under Mode I, II, and III conditions. One was a conventional method based on a piecewise-constant spatiotemporal distribution of the rate of displacement gap V (CM), and the other was a scheme slightly modified from a predictor–corrector method previously applied to a spectral BIEM (NL). In the stability analysis, we simulated the behavior of a traction-free fault under uncorrelated random distributions of initial traction. With CM, which has two numerical parameters, the growth rate of the perturbation is negative only in a parameter regime of complex shape, and the intersection of those regimes for all the modes is very restricted, as reported previously. In contrast, NL has only one parameter and yields a simpler and wider parameter regime of stability, conceivably allowing more flexible meshing on the fault. In the convergence analysis, in which a smooth problem was solved, CM resulted in a numerical error scaled as Δx^1, while NL led to a scaling of Δx^2 typically, or of Δx^1.5 under certain conditions in Mode II problems. NL requires negligible additional computational cost, and modifying the code is quite straightforward relative to CM. We therefore conclude that NL is a useful time-marching scheme with wide applicability in simulations of earthquake ruptures, although the reason for the rather complicated convergence behavior and the extension of these findings to more general conditions deserve further study.
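    The Δx^1 versus Δx^2 scaling comparison above is the standard notion of observed order of convergence, which can be estimated from errors measured at two mesh spacings. The following sketch shows the computation; the error values are illustrative, not from the paper.

```python
import math

# Estimate the observed order of convergence p, assuming error ~ C * dx^p,
# from two (error, mesh-spacing) pairs. Values below are illustrative.
def observed_order(e_coarse, e_fine, dx_coarse, dx_fine):
    return math.log(e_coarse / e_fine) / math.log(dx_coarse / dx_fine)

# A scheme whose error quarters when dx halves is second order (like NL):
p_nl = observed_order(4e-3, 1e-3, 0.2, 0.1)   # -> 2.0
# A scheme whose error only halves when dx halves is first order (like CM):
p_cm = observed_order(2e-3, 1e-3, 0.2, 0.1)   # -> 1.0
```

    In practice one computes this from runs at several resolutions and checks that the estimate stabilizes as dx decreases.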

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. (Comment: Major revision, to appear in SIAM Review.)

    Effect of Mixed Precision Computing on H-Matrix Vector Multiplication in BEM Analysis

    A hierarchical matrix (H-matrix) is an approximation technique that splits a target dense matrix into multiple submatrices and replaces a selected portion of them with low-rank approximations. The technique substantially reduces both the time and space complexity of dense matrix-vector multiplication and has hence been applied to numerous practical problems. In this paper, we aim to accelerate H-matrix vector multiplication by introducing mixed precision computing, in which we employ both binary64 (FP64) and binary32 (FP32) arithmetic operations. We propose three methods to introduce mixed precision computing into H-matrix vector multiplication and then evaluate them in a boundary element method (BEM) analysis. The numerical tests examine the effects of mixed precision computing, particularly on the required simulation time and the rate of convergence of the iterative (BiCG-STAB) linear solver. We confirm the effectiveness of the proposed methods. (Comment: Accepted manuscript for the International Conference on High Performance Computing in Asia-Pacific Region (HPCAsia2020), January 15--17, 2020, Fukuoka, Japan.)
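    The core idea, applying a low-rank block in FP32 while keeping other parts in FP64, can be sketched as follows. This is a minimal illustration of the precision trade-off, not the paper's actual block partitioning or any of its three proposed methods; the matrix sizes and the dense/low-rank split are assumptions.

```python
import numpy as np

# Sketch: one "dense" (near-field) block kept in FP64, one low-rank
# (far-field) block y = U @ (V^T @ x) stored and applied in FP32, with the
# final accumulation in FP64. Sizes and the split are illustrative.
rng = np.random.default_rng(0)
n, k = 64, 4

dense = rng.standard_normal((n, n))                   # near-field: FP64
U = rng.standard_normal((n, k)).astype(np.float32)    # low-rank factors: FP32
Vt = rng.standard_normal((k, n)).astype(np.float32)
x = rng.standard_normal(n)

# Reference: apply the same stored factors entirely in FP64.
y64 = dense @ x + U.astype(np.float64) @ (Vt.astype(np.float64) @ x)
# Mixed precision: far-field product runs in FP32, then upcasts.
y_mixed = dense @ x + (U @ (Vt @ x.astype(np.float32))).astype(np.float64)

rel_err = np.linalg.norm(y_mixed - y64) / np.linalg.norm(y64)
# rel_err sits near FP32 unit roundoff, which is why an iterative solver
# such as BiCG-STAB may need extra iterations to reach an FP64-level tolerance.
```

    The time savings come from halved memory traffic and wider SIMD throughput for the FP32 blocks; the convergence question the paper studies is whether the FP32-level perturbation slows the iterative solver.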

    Machine Learning-Based Data and Model Driven Bayesian Uncertainty Quantification of Inverse Problems for Suspended Non-structural System

    Inverse problems involve extracting the internal structure of a physical system from noisy measurement data. In many fields, Bayesian inference is used to address the ill-conditioned nature of the inverse problem by incorporating prior information through an initial distribution. In the nonparametric Bayesian framework, surrogate models such as Gaussian processes or deep neural networks are used as flexible and effective probabilistic modeling tools to overcome the curse of dimensionality and reduce computational costs. In practical systems and computer models, uncertainties can be addressed through parameter calibration, sensitivity analysis, and uncertainty quantification, leading to improved reliability and robustness of decision and control strategies based on simulation or prediction results. However, preventing overfitting in the surrogate model while incorporating reasonable prior knowledge of the embedded physics and models is a challenge. Suspended Nonstructural Systems (SNS) pose a significant challenge in the inverse problem, and research on their seismic performance and mechanical models, particularly regarding the inverse problem and uncertainty quantification, is still lacking. To address this, the author conducts full-scale shaking-table dynamic experiments, monotonic and cyclic tests, and simulations of different types of SNS to investigate their mechanical behaviors. To quantify the uncertainty of the inverse problem, the author proposes a new framework that adopts machine learning-based, data- and model-driven stochastic Gaussian process model calibration, quantifying the uncertainty via a new black-box variational inference that accounts for a geometric complexity measure, Minimum Description Length (MDL), through Bayesian inference. The framework is validated on the SNS and yields optimal generalizability and computational scalability.
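    The Gaussian-process surrogate at the heart of such calibration frameworks can be illustrated compactly. This is a generic GP regression sketch with an RBF kernel, assuming illustrative data and hyperparameters; it is not the author's calibration framework or variational inference scheme.

```python
import numpy as np

# Generic Gaussian-process surrogate sketch (RBF kernel); all data and
# hyperparameters (ell, sigma, noise) are illustrative assumptions.
def rbf(A, B, ell=0.2, sigma=1.0):
    """Squared-exponential kernel matrix between 1-D inputs A and B."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return sigma**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior_mean(X, y, Xs, noise=1e-2):
    """Posterior mean of a zero-mean GP at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))   # noisy training covariance
    return rbf(Xs, X) @ np.linalg.solve(K, y)

X = np.linspace(0.0, 1.0, 8)                 # training inputs
y = np.sin(2 * np.pi * X)                    # noiseless toy responses
mu = gp_posterior_mean(X, y, X)              # surrogate prediction
```

    In a calibration setting the GP replaces the expensive physics simulation, and the Bayesian machinery then operates on this cheap surrogate instead of the simulator itself.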

    Data-Adaptive Wavelets and Multi-Scale Singular Spectrum Analysis

    Using multi-scale ideas from wavelet analysis, we extend singular-spectrum analysis (SSA) to the study of nonstationary time series of length N whose intermittency can give rise to the divergence of their variance. SSA relies on the construction of the lag-covariance matrix C from M lagged copies of the time series over a fixed window width W to detect the regular part of the variability in that window in terms of the minimal number of oscillatory components; here W = M Dt, with Dt the time step. The proposed multi-scale SSA is a local SSA analysis within a moving window of width M <= W <= N. Multi-scale SSA varies W, while keeping a fixed W/M ratio, and uses the eigenvectors of the corresponding lag-covariance matrix C_M as data-adaptive wavelets; successive eigenvectors of C_M correspond approximately to successive derivatives of the first mother wavelet in standard wavelet analysis. Multi-scale SSA thus solves objectively the delicate problem of optimizing the analyzing wavelet in the time-frequency domain, by a suitable localization of the signal's covariance matrix. We present several examples of application to synthetic signals with fractal or power-law behavior which mimic selected features of certain climatic and geophysical time series. A real application is to the Southern Oscillation Index (SOI) monthly values for 1933-1996. Our methodology highlights an abrupt periodicity shift in the SOI near 1960. This abrupt shift between 4 and 3 years supports the Devil's staircase scenario for the El Niño/Southern Oscillation phenomenon. (Comment: 24 pages, 19 figures.)
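    The SSA building block described above, embedding the series into M lagged copies and eigendecomposing the lag-covariance matrix C_M, can be sketched as follows. The test signal and window choice are illustrative, not the paper's SOI analysis.

```python
import numpy as np

# Sketch of the SSA core: build the M x M lag-covariance matrix C_M from
# M lagged copies of the series and eigendecompose it. The synthetic
# signal and window width below are illustrative assumptions.
def ssa_eigenmodes(x, M):
    """Return eigenvalues (descending) and eigenvectors of C_M."""
    N = len(x)
    K = N - M + 1
    X = np.column_stack([x[i:i + K] for i in range(M)])  # trajectory matrix
    C = X.T @ X / K                                      # lag-covariance C_M
    w, E = np.linalg.eigh(C)                             # ascending order
    return w[::-1], E[:, ::-1]                           # leading modes first

t = np.arange(300)
noise = 0.1 * np.random.default_rng(1).standard_normal(300)
x = np.sin(2 * np.pi * t / 20) + noise                   # period-20 oscillation
w, E = ssa_eigenmodes(x, M=40)
# A single oscillation appears as a leading pair of near-equal eigenvalues
# (a sine/cosine pair), well separated from the noise floor.
```

    Multi-scale SSA repeats this analysis in moving windows while varying W = M Dt at fixed W/M, so the eigenvectors of each local C_M act as data-adaptive wavelets.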