223 research outputs found

    Scaling full seismic waveform inversions

    The main goal of this research study is to scale full seismic waveform inversions using the adjoint-state method to the data volumes that are nowadays available in seismology. Practical issues hinder the routine application of this theoretically well-understood method, largely because the tools needed to automate the highly iterative procedure in a reliable way are outdated or missing altogether. This thesis tackles these issues in three successive stages. It first introduces a modern, properly designed data processing framework that sits at the core of all subsequent developments: the ObsPy toolkit, a Python library that bridges seismology into the scientific Python ecosystem and provides seismologists with effortless I/O and a powerful signal processing library, amongst other things. The following chapter presents the Large-scale Seismic Inversion Framework, designed to handle the specific data management and organization issues arising in full seismic waveform inversions; it orchestrates the various pieces of data that accrue in the course of an iterative waveform inversion. Then the Adaptable Seismic Data Format, a new, self-describing, and scalable data format for seismology, is introduced, along with the rationale for why it is needed for full waveform inversions in particular and seismology in general. Finally, these developments are put into service to construct a novel full seismic waveform inversion model of elastic subsurface structure beneath the North American continent and the Northern Atlantic well into Europe. The spectral element method is used for the forward and adjoint simulations, coupled with windowed time-frequency phase misfit measurements. Later iterations use 72 events, all occurring after the start of the USArray project, resulting in approximately 150,000 three-component recordings that are inverted. Twenty L-BFGS iterations yield a model that produces complete seismograms in a period range between 30 and 120 seconds while comparing favorably to observed data
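
The adjoint-gradient plus L-BFGS loop described above can be illustrated on a toy linear problem (a hypothetical sketch using NumPy and SciPy, not the thesis's spectral-element machinery; `G`, `m_true`, and the quadratic misfit are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Toy least-squares waveform misfit: synthetic data d(m) = G m vs. observed data.
rng = np.random.default_rng(1)
G = rng.normal(size=(40, 3))
m_true = np.array([1.0, -2.0, 0.5])
d_obs = G @ m_true

def misfit(m):
    r = G @ m - d_obs
    return 0.5 * r @ r

def gradient(m):
    # Adjoint of the forward operator applied to the residual -- the same
    # structure the adjoint-state method exploits for wave-equation misfits.
    return G.T @ (G @ m - d_obs)

res = minimize(misfit, np.zeros(3), jac=gradient, method="L-BFGS-B")
```

The same structure (misfit, adjoint-computed gradient, quasi-Newton model update) carries over when the forward operator is a wave-equation solver instead of a matrix.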

    Development and application of the phase-screen seismic modelling code

    As a consequence of the aims of this project, this thesis is divided into two distinct sections. Initially, the computationally efficient phase-screen forward modelling technique is extended to allow investigation of non-normal ray paths. The code is developed to accommodate all diffracted and converted phases up to the critical angle, building on a geometrical construction method previously developed with a narrow-angle approximation. The new approach relies upon pre-scanning the model space to assess the complexity of each screen. The propagating wavefields are then divided as a function of horizontal wavenumber, and each subset is transformed to the spatial domain separately, carrying with it angular information. This allows both locally accurate 3D phase corrections and Zoeppritz reflection and transmission coefficients to be applied. The phase-screen code is further developed to handle simple anisotropic media. During phase-screen modelling, propagation is undertaken in the wavenumber domain, where exact expressions for anisotropic phase velocities are incorporated. Extensive testing of the enhanced phase-screen technique includes simple analytical models that justify the inclusion of multiple energy, alongside synthetic examples from models commonly used to test numerical modelling techniques. Additionally, the code is tested with real models from a producing field in a marine sedimentary location where an exhaustive range of geophysical techniques was used to constrain the VTI parameters. Secondly, the narrow-angle version of the phase-screen method is used to generate a comprehensive pre-stack seismic reflection dataset for our industrial partners. Current exploration within the European oil and gas community is heavily focused on regions where the targets for production are positioned beneath plateau basalts on the north-west European margin. These environments produce a complex seismic response due to the scattering generated by the internal composition of the basalt flows. This study generates a large subsurface volume, derived from geological mapping projects in the Hold-with-Hope region of north-east Greenland, and synthetically acquires a realistic 3-D reflection survey across it. The basalt is uniquely generated as a single random volume with distinct correlation lengths in each orthogonal direction, and a novel approach to determine seismic attenuation through basalts is developed. Initial results from this dataset are presented after careful optimisation of the modelling code and parameters
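
The wavenumber-domain propagation with thin-screen corrections that the abstract describes can be sketched for a single frequency (an assumed minimal split-step formulation in NumPy; the grid, velocities, and step size are illustrative, and none of the wide-angle, converted-phase, or anisotropic extensions are included):

```python
import numpy as np

# Minimal split-step phase-screen extrapolator for a monochromatic wavefield:
# propagate through a homogeneous background in the horizontal-wavenumber
# domain, then apply a thin-screen phase correction in the spatial domain for
# lateral velocity perturbations within the depth step.
def phase_screen_step(u, dz, freq, c0, c_slice, dx):
    omega = 2 * np.pi * freq
    kx = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)
    kz2 = (omega / c0) ** 2 - kx ** 2
    # Evanescent components (kz^2 < 0) decay rather than propagate.
    kz = np.sqrt(kz2.astype(complex))
    u_k = np.fft.fft(u) * np.exp(1j * kz * dz)        # background propagation
    screen = np.exp(1j * omega * (1.0 / c_slice - 1.0 / c0) * dz)
    return np.fft.ifft(u_k) * screen                   # thin-screen correction

nx, dx = 256, 10.0
u0 = np.zeros(nx, dtype=complex)
u0[nx // 2] = 1.0                                      # point disturbance
u = phase_screen_step(u0, dz=50.0, freq=20.0, c0=2000.0,
                      c_slice=np.full(nx, 2100.0), dx=dx)
```

Marching this step through successive depth slices yields the one-way wavefield; the thesis's pre-scanning and wavenumber-subset splitting refine exactly the crude single-screen correction shown here.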

    Least-Squares Wavelet Analysis and Its Applications in Geodesy and Geophysics

    The Least-Squares Spectral Analysis (LSSA) is a robust method of analyzing unequally spaced and non-stationary data/time series. Although this method takes into account the correlation among the sinusoidal basis functions of irregularly spaced series, its spectrum still shows spectral leakage: power/energy leaks from one spectral peak into another. An iterative method called the AntiLeakage Least-Squares Spectral Analysis (ALLSSA) is developed to attenuate the spectral leakage and is consequently used to regularize data series. In this study, the ALLSSA is applied to regularize seismic data and attenuate their random noise down to a desired level. The ALLSSA is subsequently extended to multichannel, heterogeneous, and coarsely sampled seismic and related gradient measurements intended for geophysical exploration applications that require regularized (equally spaced) data free from aliasing effects. A new and robust method of analyzing unequally spaced and non-stationary time/data series is then rigorously developed. This method, namely the Least-Squares Wavelet Analysis (LSWA), is a natural extension of the LSSA that decomposes a time series into the time-frequency domain and obtains its spectrogram. It is shown through many synthetic and experimental time/data series that the LSWA supersedes all state-of-the-art spectral analysis methods currently available, without making any assumptions about or preprocessing (editing) the time series, or applying any empirical methods that aim to adapt a time series to the analysis method. The LSWA can analyze any non-stationary and unequally spaced time series with components of low or high amplitude and frequency variability over time, including datum shifts (offsets), trends, and constituents of known forms, while taking into account the covariance matrix associated with the time series. 
A stochastic confidence-level surface for the spectrogram is rigorously derived that identifies statistically significant peaks at a chosen confidence level; this supersedes the empirical cone of influence used in the most popular continuous wavelet transform. All current state-of-the-art cross-wavelet transform and wavelet coherence analysis methods impose stringent constraints on the properties of the time series under investigation, requiring, more often than not, preprocessing of the raw measurements that may distort their content. These methods cannot generally be used to analyze unequally spaced and non-stationary time series, or even two equally spaced time series of different sampling rates, with trends and/or datum shifts and associated covariance matrices. To overcome these stringent requirements, a new method, the Least-Squares Cross-Wavelet Analysis (LSCWA), is developed along with its statistical distribution, requiring no assumptions on the series under investigation. Numerous synthetic and geoscience examples establish the LSCWA as the method of choice for rigorous coherence analysis of any experimental series
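
The core least-squares spectral idea, fitting a cosine/sine pair at each trial frequency to unequally spaced samples, can be sketched as follows (a simplified illustration only; the actual LSSA/ALLSSA additionally handles covariance matrices, known constituents, and iterative leakage suppression):

```python
import numpy as np

# Minimal least-squares spectrum for unequally spaced samples: at each trial
# frequency, fit a cosine/sine pair by least squares and record the fraction
# of the series' energy that the pair explains.
def ls_spectrum(t, y, freqs):
    y = y - y.mean()
    power = []
    for f in freqs:
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power.append((A @ coef) @ y / (y @ y))  # explained-energy fraction
    return np.array(power)

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 10, 200))            # irregular sampling times
y = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)
freqs = np.linspace(0.5, 3.0, 51)
spec = ls_spectrum(t, y, freqs)
```

No interpolation onto a regular grid is needed, which is precisely why this family of methods suits the irregular series the abstract targets; the spectrum of the example above peaks at the embedded 1.5 Hz component.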

    Seismic Waves

    The importance of seismic wave research lies not only in our ability to understand and predict earthquakes and tsunamis but also in the information it reveals about the Earth's composition and features, much as it led to the discovery of the Mohorovičić discontinuity. As our theoretical understanding of the physics behind seismic waves has grown, physical and numerical modeling have greatly advanced and now augment applied seismology for better prediction and engineering practices. This has led to novel applications such as using artificially induced shocks for exploration of the Earth's subsurface and seismic stimulation for increasing the productivity of oil wells. This book demonstrates the latest techniques and advances in seismic wave analysis, from theory, data acquisition, and interpretation to analyses, numerical simulations, and research applications. The review process was conducted with the sincere support of Drs. Hiroshi Takenaka, Yoshio Murai, Jun Matsushima, and Genti Toyokuni

    Applications of genetic algorithms to problems in seismic anisotropy


    Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems

    Advances in artificial intelligence (AI) are fueling a new paradigm of discoveries in natural sciences. Today, AI has started to advance natural sciences by improving, accelerating, and enabling our understanding of natural phenomena at a wide range of spatial and temporal scales, giving rise to a new area of research known as AI for science (AI4Science). Being an emerging research paradigm, AI4Science is unique in that it is an enormous and highly interdisciplinary area. Thus, a unified and technical treatment of this field is needed yet challenging. This work aims to provide a technically thorough account of a subarea of AI4Science; namely, AI for quantum, atomistic, and continuum systems. These areas aim at understanding the physical world from the subatomic (wavefunctions and electron density), atomic (molecules, proteins, materials, and interactions), to macro (fluids, climate, and subsurface) scales and form an important subarea of AI4Science. A unique advantage of focusing on these areas is that they largely share a common set of challenges, thereby allowing a unified and foundational treatment. A key common challenge is how to capture physics first principles, especially symmetries, in natural systems by deep learning methods. We provide an in-depth yet intuitive account of techniques to achieve equivariance to symmetry transformations. We also discuss other common technical challenges, including explainability, out-of-distribution generalization, knowledge transfer with foundation and large language models, and uncertainty quantification. To facilitate learning and education, we provide categorized lists of resources that we found to be useful. We strive to be thorough and unified and hope this initial effort may trigger more community interests and efforts to further advance AI4Science
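
The symmetry-capture challenge mentioned above can be made concrete with a minimal invariance check (a hypothetical NumPy example: a scalar function built from pairwise distances is unchanged when its input is rotated and translated, the property equivariant architectures are designed to guarantee):

```python
import numpy as np

def pairwise_energy(coords):
    # Sum of inverse pairwise distances -- depends only on the geometry of the
    # point set, so it is invariant under rotations and translations.
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(coords), k=1)
    return np.sum(1.0 / dist[iu])

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))               # 5 points in 3-D

# Random orthogonal transform via QR decomposition, plus a translation.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
t = rng.normal(size=3)

e_orig = pairwise_energy(x)
e_moved = pairwise_energy(x @ q.T + t)    # same value as e_orig
```

Building such symmetries directly into learned models, rather than hoping they are picked up from data, is the equivariance theme the survey treats in depth.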

    Intelligent numerical software for MIMD computers

    For most scientific and engineering problems simulated on computers, solving problems of computational mathematics with approximately given initial data constitutes an intermediate or final stage. Basic problems of computational mathematics include investigating and solving linear algebraic systems, evaluating eigenvalues and eigenvectors of matrices, solving systems of non-linear equations, and numerically integrating initial-value problems for systems of ordinary differential equations

    Advanced Techniques for Ground Penetrating Radar Imaging

    Ground penetrating radar (GPR) has become one of the key technologies in subsurface sensing and, in general, in non-destructive testing (NDT), since it is able to detect both metallic and nonmetallic targets. GPR for NDT has been successfully introduced in a wide range of sectors, such as mining and geology, glaciology, civil engineering and civil works, archaeology, and security and defense. In recent decades, improvements in georeferencing and positioning systems have enabled the introduction of synthetic aperture radar (SAR) techniques in GPR systems, yielding GPR–SAR systems capable of providing high-resolution microwave images. In parallel, the radiofrequency front-end of GPR systems has been optimized in terms of compactness (e.g., smaller Tx/Rx antennas) and cost. These advances, combined with improvements in autonomous platforms, such as unmanned terrestrial and aerial vehicles, have fostered new fields of application for GPR, where fast and reliable detection capabilities are demanded. In addition, processing techniques have been improved, taking advantage of the research conducted in related fields like inverse scattering and imaging. As a result, novel and robust algorithms have been developed for clutter reduction, automatic target recognition, and efficient processing of large sets of measurements to enable real-time imaging, among others. This Special Issue provides an overview of the state of the art in GPR imaging, focusing on the latest advances from both hardware and software perspectives
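
SAR-style focusing of GPR data can be sketched with a simple delay-and-sum backprojection (an illustrative NumPy example with invented geometry, velocity, and a single point scatterer; real GPR–SAR processing must additionally handle antenna patterns, dispersion, and clutter):

```python
import numpy as np

# Delay-and-sum (backprojection) imaging: each image pixel accumulates the
# recorded traces evaluated at the two-way travel time from antenna position
# to pixel, which focuses hyperbolic diffraction events back onto the target.
def backproject(traces, ant_x, t, xs, zs, v):
    image = np.zeros((zs.size, xs.size))
    for i, a in enumerate(ant_x):
        # Two-way travel time from this antenna to every pixel.
        delay = 2 * np.hypot(xs[None, :] - a, zs[:, None]) / v
        image += np.interp(delay, t, traces[i], left=0.0, right=0.0)
    return image

v = 0.1                                       # m/ns, assumed soil EM velocity
t = np.linspace(0, 100, 501)                  # time axis (ns)
ant_x = np.linspace(-2, 2, 41)                # antenna positions along track (m)
target = (0.0, 1.5)                           # scatterer at x = 0, depth 1.5 m
traces = np.zeros((ant_x.size, t.size))
for i, a in enumerate(ant_x):
    tt = 2 * np.hypot(target[0] - a, target[1]) / v
    traces[i, np.argmin(np.abs(t - tt))] = 1.0  # synthetic diffraction spike

xs = np.linspace(-1, 1, 81)
zs = np.linspace(0.5, 2.5, 81)
img = backproject(traces, ant_x, t, xs, zs, v)
```

The brightest pixel of `img` falls at the scatterer's position; the clutter-reduction and real-time algorithms surveyed in the Special Issue refine this same principle.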

    Seismic interferometry and non-linear tomography

    Seismic records contain information that allows geoscientists to make inferences about the structure and properties of the Earth’s interior. Traditionally, seismic imaging and tomography methods require wavefields to be generated and recorded by identifiable sources and receivers, and use these directly-recorded signals to create models of the Earth’s subsurface. However, in recent years the method of seismic interferometry has revolutionised earthquake seismology by allowing unrecorded signals between pairs of receivers, pairs of sources, and source-receiver pairs to be constructed as Green’s functions using either cross-correlation, convolution or deconvolution of wavefields. In all of these formulations, seismic energy is recorded and emitted by surrounding boundaries of receivers and sources, which need not be active and impulsive but may even constitute continuous, naturally-occurring seismic ambient noise. In the first part of this thesis, I provide a comprehensive overview of seismic interferometry, its background theory, and examples of its application. I then test the theory and evaluate the effects of approximations that are commonly made when the interferometric formulae are applied to real datasets. Since errors resulting from some approximations can be subtle, these tests must be performed using almost error-free synthetic data produced with an exact waveform modelling method. To make such tests challenging the method and associated code must be applicable to multiply-scattering media. I developed such a modelling code specifically for interferometric tests and applications. Since virtually no errors are introduced into the results from modelling, any difference between the true and interferometric waveforms can safely be attributed to specific origins in interferometric theory. 
I show that this is not possible when using other, previously available methods: for example, the errors introduced into waveforms synthesised by finite-difference methods due to the modelling method itself are larger than the errors incurred by some (still significant) interferometric approximations; hence that modelling method cannot be used to test these commonly applied approximations. I then discuss the ability of interferometry to redatum seismic energy in both space and time, allowing virtual seismograms to be constructed at new locations where receivers may not have been present at the time of occurrence of the associated seismic source. I present the first successful application of this method to real datasets at multiple length scales. Although the results are restricted to limited bandwidths, this study demonstrates that the technique is a powerful tool in seismologists' arsenal, paving the way for a new type of 'retrospective' seismology in which sensors may be installed at any desired location at any time, and recordings of seismic events occurring at any other time can be constructed retrospectively, even long after their energy has dissipated. Within crustal seismology, a very common application of seismic interferometry is ambient-noise tomography (ANT). ANT is an Earth imaging method that makes use of inter-station Green's functions constructed by cross-correlation of seismic ambient noise records. It is particularly useful in seismically quiescent areas where traditional tomography methods relying on local earthquake sources would fail to produce interpretable results due to the lack of available data. Once constructed, interferometric Green's functions can be analysed using standard waveform analysis techniques and inverted for subsurface structure using more or less traditional imaging methods. 
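
The cross-correlation route to inter-receiver Green's functions described above can be demonstrated in an assumed, idealised 1-D acoustic medium (all positions, velocities, and source distributions below are invented for illustration):

```python
import numpy as np

# Interferometric travel-time retrieval: noise sources are recorded at two
# receivers, and the cross-correlation of the two records peaks at the
# inter-receiver travel time, as if one receiver were a virtual source.
rng = np.random.default_rng(3)
fs, c = 100.0, 2000.0                       # sampling rate (Hz), velocity (m/s)
x1, x2 = 0.0, 400.0                         # receiver positions (m)
n = 2 ** 13
rec1 = np.zeros(n)
rec2 = np.zeros(n)
for src in np.linspace(-20000, -5000, 50):  # noise sources left of both receivers
    noise = rng.normal(size=n)
    for rec, x in ((rec1, x1), (rec2, x2)):
        lag = int(round((x - src) / c * fs))
        rec[lag:] += noise[: n - lag]       # delayed copy of the source noise

corr = np.correlate(rec2, rec1, mode="full")
lag_samples = np.argmax(corr) - (n - 1)
travel_time = lag_samples / fs              # approximates (x2 - x1) / c
```

With sources on one side only, a single causal correlation peak emerges at the 0.2 s inter-receiver travel time; fuller source coverage around the receivers yields both causal and acausal branches of the Green's function.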
In the second part of this thesis, I discuss the development and implementation of a fully non-linear inversion method which I use to perform Love-wave ANT across the British Isles. Full non-linearity is achieved by allowing both raypaths and model parametrisation to vary freely during inversion in Bayesian, Markov chain Monte Carlo tomography, the first time that this has been attempted. Since the inversion produces not one but a large ensemble of models, all of which fit the data to within the noise level, statistical moments of different orders, such as the mean or average model and the standard deviation of seismic velocity structures across the ensemble, may be calculated: while the ensemble average map provides a smooth representation of the velocity field, a measure of model uncertainty can be obtained from the standard deviation map. In a number of real-data and synthetic examples, I show that the combination of variable raypaths and model parametrisation is key to the emergence of previously unobserved, loop-like uncertainty topologies in the standard deviation maps. These uncertainty loops surround low- or high-velocity anomalies. They indicate that, while the velocity of each anomaly may be fairly well reconstructed, its exact location and size tend to remain uncertain; the loops parametrise this location uncertainty and hence constitute a fully non-linearised, Bayesian measure of spatial resolution. The uncertainty in anomaly location is shown to be due mainly to the location of the raypaths used to constrain the anomaly also being known only approximately. The emergence of loops is therefore related to the variation of raypaths with velocity structure, and hence to second- and higher-order wave physics. Thus, loops can only be observed using non-linear inversion methods such as the one described herein, which explains why these topologies have never been observed previously. 
I then present the results of fully non-linearised Love-wave group-velocity tomography of the British Isles in different frequency bands. At all of the analysed periods, the group-velocity maps show a good correlation with known geology of the region, and also robustly detect novel features. The shear-velocity structure with depth across the Irish Sea sedimentary basin is then investigated by inverting the Love-wave group-velocity maps, again fully non-linearly using Markov chain Monte Carlo inversion, showing an approximate depth to basement of 5 km. Finally, I discuss the advantages and current limitations of the fully non-linear tomography method implemented in this project, and provide guidelines and suggestions for its improvement
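
The Markov chain Monte Carlo machinery underlying this ensemble inference can be sketched with a fixed-dimension Metropolis-Hastings sampler on a toy one-parameter travel-time problem (all numbers are assumed; the thesis's method additionally lets raypaths and the parametrisation itself vary, which this sketch does not attempt):

```python
import numpy as np

# Toy Metropolis-Hastings sampler: infer a single velocity from noisy
# travel-time data, then summarise the ensemble by its mean (average model)
# and standard deviation (uncertainty), mirroring the ensemble maps above.
rng = np.random.default_rng(4)
path_len = 10000.0                       # m, assumed raypath length
v_true = 3000.0                          # m/s, assumed true velocity
sigma = 0.05                             # s, travel-time noise level
d_obs = path_len / v_true + sigma * rng.normal(size=20)

def log_likelihood(v):
    r = d_obs - path_len / v
    return -0.5 * np.sum((r / sigma) ** 2)

v, ll = 2000.0, log_likelihood(2000.0)   # deliberately poor starting model
samples = []
for _ in range(20000):
    v_prop = v + 50.0 * rng.normal()     # random-walk proposal
    if 1000.0 < v_prop < 6000.0:         # uniform prior bounds
        ll_prop = log_likelihood(v_prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis acceptance
            v, ll = v_prop, ll_prop
    samples.append(v)
posterior = np.array(samples[5000:])     # discard burn-in
```

The posterior mean recovers the true velocity while the posterior standard deviation quantifies its uncertainty; in the thesis the same accept/reject logic runs over entire velocity maps with trans-dimensional moves.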