
    How do we understand and visualize uncertainty?

    Geophysicists are often concerned with reconstructing subsurface properties from observations collected at or near the surface. For example, in seismic migration we attempt to reconstruct subsurface geometry from surface seismic recordings, and in potential-field and electromagnetic inversion, observations are used to map density or electrical-conductivity variations in geologic layers. Mathematicians call the procedure of inferring information from indirect observations an inverse problem, and such problems are common in many areas of the physical sciences. The inverse problem of inferring the subsurface from surface observations has a corresponding forward problem, which consists of determining the data that would be recorded for a given subsurface configuration. In the seismic case, forward modeling involves a method for calculating a synthetic seismogram; for gravity data, it consists of a computer code that computes the gravity field from an assumed subsurface density model. Note that forward modeling often involves assumptions about the appropriate physical relationship between the unknowns (at depth) and the observations at the surface, and all attempts to solve the problem at hand are limited by the accuracy of those assumptions. In the broadest sense, then, exploration geophysicists have been engaged in inversion since the dawn of the profession; indeed, the algorithms routinely applied in processing centers can all be viewed as procedures to invert geophysical data.
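    The forward/inverse pairing described in this abstract can be sketched with a toy gravity problem. The point-mass model, the numbers, and the grid-search inversion below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Forward problem: vertical gravity anomaly of a point-mass excess M at depth z,
#   g_z(x) = G * M * z / (x^2 + z^2)^(3/2)
# Inverse problem: recover the depth from "observed" surface data by repeated
# forward modelling (brute-force grid search, mass held fixed for simplicity).
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def forward_gravity(x, depth, mass_excess):
    """Vertical gravity anomaly of a point mass buried at (0, depth)."""
    return G * mass_excess * depth / (x**2 + depth**2) ** 1.5

# Synthetic "observations": forward-model a known subsurface and add noise.
x = np.linspace(-500.0, 500.0, 51)        # profile coordinates (m)
true_depth, true_mass = 200.0, 1.0e9      # assumed true model
rng = np.random.default_rng(0)
d_obs = forward_gravity(x, true_depth, true_mass) + rng.normal(0, 1e-9, x.size)

# Inversion as a search over candidate forward models.
depths = np.linspace(50.0, 500.0, 200)
misfit = [np.sum((forward_gravity(x, z, true_mass) - d_obs) ** 2) for z in depths]
best_depth = float(depths[int(np.argmin(misfit))])
```

    The recovered depth is accurate only insofar as the assumed physics (here, a point mass) matches reality, which is the abstract's point about forward-modeling assumptions limiting any inversion.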

    Trans-dimensional inverse problems, model comparison and the evidence

    In most geophysical inverse problems the properties of interest are parametrized using a fixed number of unknowns. In some cases arguments can be used to bound the maximum number of parameters that need to be considered. In others the number of unknowns is set at some arbitrary value and regularization is used to encourage simple, non-extravagant models. In recent times, variable or self-adaptive parametrizations have gained in popularity. Rarely, however, is the number of unknowns itself directly treated as an unknown. This situation leads to a trans-dimensional inverse problem, that is, one where the dimension of the parameter space is a variable to be solved for. This paper discusses trans-dimensional inverse problems from the Bayesian viewpoint. A particular type of Markov chain Monte Carlo (MCMC) sampling algorithm is highlighted which allows probabilistic sampling in variable-dimension spaces. A quantity termed the evidence or marginal likelihood plays a key role in this type of problem. It is shown that once evidence calculations are performed, the results of complex variable-dimension sampling algorithms can be replicated with simpler and more familiar fixed-dimensional MCMC sampling techniques. Numerical examples are used to illustrate the main points. The evidence can be difficult to calculate, especially in high-dimensional non-linear inverse problems. Nevertheless, some general strategies are discussed and analytical expressions are given for certain linear problems.
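    A minimal illustration of the evidence's role in choosing the number of unknowns (an assumed linear-Gaussian toy, not one of the paper's examples): for d = Gm + e with prior m ~ N(0, s_m² I) and noise e ~ N(0, s_e² I), the evidence has the closed form d ~ N(0, s_m² G Gᵀ + s_e² I), and comparing it across polynomial degrees performs dimension selection with purely fixed-dimensional machinery:

```python
import numpy as np

def log_evidence(d, G, s_m=1.0, s_e=0.1):
    """Closed-form log marginal likelihood of a linear-Gaussian model."""
    C = s_m**2 * G @ G.T + s_e**2 * np.eye(len(d))
    sign, logdet = np.linalg.slogdet(C)
    return -0.5 * (len(d) * np.log(2 * np.pi) + logdet
                   + d @ np.linalg.solve(C, d))

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 40)
d = 0.5 * x - 0.8 * x**2 + rng.normal(0, 0.1, x.size)  # true degree = 2

# The evidence's built-in Occam factor penalises extravagant parametrisations,
# so the preferred dimension is inferred rather than fixed in advance.
degrees = range(1, 7)
logZ = [log_evidence(d, np.vander(x, k + 1, increasing=True)) for k in degrees]
best_degree = list(degrees)[int(np.argmax(logZ))]
```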

    Automatic differentiation in geophysical inverse problems

    Automatic differentiation (AD) is the technique whereby output variables of a computer code evaluating any complicated function (e.g. the solution to a differential equation) can be differentiated with respect to the input variables. AD tools often take the form of source-to-source translators and produce computer code without the user having to derive and hand-code explicit mathematical formulae. The power of AD lies in the fact that it combines the generality of finite-difference techniques with the accuracy and efficiency of analytical derivatives, while at the same time eliminating 'human' coding errors. It also makes accurate, efficient derivative calculation possible for complex 'forward' codes where no analytical derivatives exist and finite-difference techniques are too cumbersome. AD is already having a major impact in areas such as optimization, meteorology and oceanography. It likewise has considerable potential for non-linear inverse problems in geophysics where linearization is desirable, and for sensitivity analysis of large numerical simulation codes, for example in wave propagation and geodynamic modelling. At present, however, AD tools appear to be little used in the geosciences. Here we report on experiments using a state-of-the-art AD tool to perform source-to-source code translation in a range of geoscience problems. These include calculating derivatives for Gibbs free-energy minimization, seismic receiver function inversion, and seismic ray tracing. Issues of accuracy and efficiency are discussed.
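    The core mechanism can be illustrated with a hand-rolled forward-mode AD sketch based on dual numbers (a toy for exposition, not the source-to-source tool used in the paper): every value carries its derivative, and arithmetic propagates both exactly through the chain rule, with no step-size choice and no symbolic derivation:

```python
import math

class Dual:
    """A value paired with its derivative; operations apply the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def dsin(x):
    """Elementary function with its derivative rule attached."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    # Any composition of supported operations is differentiated automatically.
    return x * x + 3.0 * dsin(x)

x0 = 2.0
out = f(Dual(x0, 1.0))                    # seed dx/dx = 1
analytic = 2 * x0 + 3 * math.cos(x0)      # hand-derived derivative as a check
```

    `out.dot` matches the analytic derivative to machine precision, which is exactly the advantage over finite differences that the abstract describes.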

    A complex ray-tracing tool for high-frequency mean-field flow interaction effects in jets

    This paper presents a complex ray-tracing tool for the calculation of high-frequency Green’s functions in 3D mean-field jet flows. For a generic problem, the ray solution suffers from three main deficiencies: multiplicity of solutions, singularities at caustics, and the determination of complex solutions. The purpose of this paper is to generalize, combine and apply existing stationary-media methods to moving-media scenarios. Multiplicities are dealt with using an equivalent two-point boundary-value problem, whilst non-uniformities at caustics are corrected using diffraction catastrophes. Complex rays are found using a combination of imaginary perturbations, an assumption of caustic stability, and analytic continuation of the receiver curve. To demonstrate this method, the ray tool is compared against a high-frequency modal solution of Lilley’s equation for an off-axis point source. This solution is representative of high-frequency source positions in real jets and is rich in caustic structures. Full utilization of the ray tool is shown to provide excellent results.

    Non-Parametric Approximations for Anisotropy Estimation in Two-dimensional Differentiable Gaussian Random Fields

    Spatially referenced data often have autocovariance functions with elliptical isolevel contours, a property known as geometric anisotropy. The anisotropy parameters include the tilt of the ellipse (orientation angle) with respect to a reference axis and the aspect ratio of the principal correlation lengths. Since these parameters are unknown a priori, sample estimates are needed to define suitable spatial models for the interpolation of incomplete data. The distribution of the anisotropy statistics is determined by a non-Gaussian sampling joint probability density. By means of analytical calculations, we derive an explicit expression for the joint probability density function of the anisotropy statistics for Gaussian, stationary and differentiable random fields. Based on this expression, we obtain an approximate joint density which we use to formulate a statistical test for isotropy. The approximate joint density is independent of the autocovariance function and provides conservative probability and confidence regions for the anisotropy parameters. We validate the theoretical analysis by means of simulations using synthetic data, and we illustrate the detection of anisotropy changes with a case study involving background radiation exposure data. The approximate joint density provides (i) a stand-alone approximate estimate of the distribution of the anisotropy statistics; (ii) informed initial values for maximum likelihood estimation; and (iii) a useful prior for Bayesian anisotropy inference.
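    A simple, assumed illustration of derivative-based anisotropy estimation (not the paper's estimator): synthesise an axis-aligned anisotropic Gaussian random field by spectral filtering of white noise, then recover the orientation and aspect ratio from the covariance of the field's gradient (the structure tensor):

```python
import numpy as np

rng = np.random.default_rng(42)
n, s0, s1 = 512, 16.0, 4.0          # grid size and correlation lengths (pixels)

# Gaussian spectral filter with different lengths along the two axes.
k0 = np.fft.fftfreq(n) * 2 * np.pi
k1 = np.fft.rfftfreq(n) * 2 * np.pi
transfer = np.exp(-0.5 * np.add.outer((k0 * s0) ** 2, (k1 * s1) ** 2))
field = np.fft.irfft2(np.fft.rfft2(rng.normal(size=(n, n))) * transfer)

# Structure tensor: covariance of the spatial gradient.
g0, g1 = np.gradient(field)
Q00, Q11 = np.var(g0), np.var(g1)
Q01 = np.mean((g0 - g0.mean()) * (g1 - g1.mean()))
S = np.array([[Q00, Q01], [Q01, Q11]])

# Smallest gradient variance lies along the long correlation axis; the
# eigenvalue ratio recovers the aspect ratio of the correlation lengths.
evals, evecs = np.linalg.eigh(S)
v_long = evecs[:, 0]                       # long-axis direction (tilt ~ 0 here)
aspect = float(np.sqrt(evals[1] / evals[0]))   # estimate of s0 / s1
```

    For this axis-aligned field the estimated aspect ratio comes out near s0/s1 = 4; a tilted ellipse would show up as a nonzero off-diagonal Q01 rotating the eigenvectors.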

    Crustal constraint through complete model space screening for diverse geophysical datasets facilitated by emulation

    Deep crustal constraint is often carried out using deterministic inverse methods, sometimes using seismic refraction, gravity and electromagnetic datasets in a complementary or “joint” scheme. With increasingly powerful parallel computer systems it is now possible to apply joint inversion schemes to derive an optimum model from diverse input data. These methods are highly effective where the uncertainty in the system is small. However, given the complex nature of these schemes it is often difficult to discern the uniqueness of the output model given the noise in the data, and the necessary regularization and weighting in the inversion process can obscure the extent to which user prejudice shapes the final result. We can rigorously address the subject of uncertainty using standard statistical tools, but these methods become less feasible if the prior model space is large or the forward simulations are computationally expensive. We present a simple Monte Carlo scheme to screen model space in a fully joint fashion, in which we replace the forward simulation with a fast and uncertainty-calibrated mathematical function, or emulator. This emulator is used as a proxy to run the very large number of models necessary to fully explore the plausible model space. We develop the method using a simple synthetic dataset, then demonstrate its use on a joint data set comprising first-arrival seismic refraction, MT and scalar gravity data over a diapiric salt body. This study demonstrates the value of a forward Monte Carlo approach (as distinct from a search-based or conventional inverse approach) in incorporating all kinds of uncertainty in the modelling process and exploring the entire model space, and shows the potential value of applying emulator technology throughout geophysics. Though the target here is relatively shallow, the methodology can be readily extended to address the whole crust.
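    The screening idea can be sketched as follows; the one-parameter "simulator", the polynomial emulator, and all tolerances are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def forward(m):
    """Stand-in for an expensive forward simulation."""
    return np.sin(3 * m) + m**2

# Train a cheap emulator on a small design of full simulator runs.
design = np.linspace(0.0, 2.0, 15)
coeffs = np.polyfit(design, forward(design), deg=6)

def emulator(m):
    return np.polyval(coeffs, m)

# Calibrate the emulator's uncertainty on held-out simulator runs.
holdout = np.linspace(0.05, 1.95, 9)
emu_err = float(np.max(np.abs(emulator(holdout) - forward(holdout))))

# Monte Carlo screening: run the emulator over the entire prior [0, 2] and
# keep every model consistent with the data, with the acceptance tolerance
# inflated by the emulator's calibration error.
rng = np.random.default_rng(7)
d_obs = forward(1.3)                      # synthetic observation
noise = 0.05
candidates = rng.uniform(0.0, 2.0, 100_000)
plausible = candidates[np.abs(emulator(candidates) - d_obs) < noise + emu_err]
```

    Because the forward function is non-monotonic, `plausible` contains several disconnected clusters of models, which is precisely the non-uniqueness a forward screening approach exposes and a single optimum model hides.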

    Transdimensional inversion of receiver functions and surface wave dispersion

    We present a novel method for joint inversion of receiver functions and surface wave dispersion data, using a transdimensional Bayesian formulation. This class of algorithm treats the number of model parameters (e.g. number of layers) as an unknown in the problem. The dimension of the model space is variable and a Markov chain Monte Carlo (MCMC) scheme is used to provide a parsimonious solution that fully quantifies the degree of knowledge one has about seismic structure (i.e. constraints on the model, resolution, and trade-offs). The level of data noise (i.e. the covariance matrix of data errors) effectively controls the information recoverable from the data, and here it naturally determines the complexity of the model (i.e. the number of model parameters). However, it is often difficult to quantify the data noise appropriately, particularly in the case of seismic waveform inversion where data errors are correlated. Here we address the issue of noise estimation using an extended Hierarchical Bayesian formulation, which allows both the variance and covariance of data noise to be treated as unknowns in the inversion. In this way it is possible to let the data infer the appropriate level of data fit. In the context of joint inversions, assessment of uncertainty for different data types becomes crucial in the evaluation of the misfit function. We show that the Hierarchical Bayes procedure is a powerful tool in this situation, because it is able to evaluate the level of information brought by different data types in the misfit, thus removing the arbitrary choice of weighting factors. After illustrating the method with synthetic tests, a real data application is shown where teleseismic receiver functions and ambient noise surface wave dispersion measurements from the WOMBAT array (South-East Australia) are jointly inverted to provide a probabilistic 1D model of shear-wave velocity beneath a given station.
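    A minimal sketch of the Hierarchical Bayes idea of treating the noise level as an unknown (a conjugate Gibbs toy for a linear problem, not the paper's waveform sampler): alternating updates of the model given the noise variance and of the noise variance given the model let the data set their own level of fit:

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_sigma = 200, 0.3
x = np.linspace(0.0, 1.0, n)
G = np.column_stack([np.ones(n), x])       # linear forward operator d = G m
m_true = np.array([1.0, -2.0])
d = G @ m_true + rng.normal(0, true_sigma, n)

GtG, Gtd = G.T @ G, G.T @ d
s2 = 1.0                                   # deliberately wrong initial variance
sigma_samples = []
for it in range(2000):
    # m | s2: Gaussian, covariance s2 * (G^T G)^-1 (flat prior on m)
    m = rng.multivariate_normal(np.linalg.solve(GtG, Gtd),
                                s2 * np.linalg.inv(GtG))
    # s2 | m: inverse gamma from the residual sum of squares (Jeffreys prior),
    # sampled as rss / chi-squared(n)
    rss = np.sum((d - G @ m) ** 2)
    s2 = rss / rng.chisquare(n)
    if it >= 500:                          # discard burn-in
        sigma_samples.append(np.sqrt(s2))
post_sigma = float(np.mean(sigma_samples))
```

    The posterior noise level converges to the true value despite the wrong starting guess, so no hand-tuned weighting of the data misfit is needed.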

    The leading digit distribution of the worldwide Illicit Financial Flows

    Benford's law states that in data sets from different phenomena, leading digits tend to be distributed logarithmically, such that numbers beginning with smaller digits occur more often than those beginning with larger ones. In particular, the law is known to hold for different types of financial data. The Illicit Financial Flows (IFFs) exiting the developing countries are frequently discussed as hidden resources which could otherwise have been properly utilized for their development. We investigate here the distribution of the leading digits in recent data on estimates of IFFs to look for a pattern as predicted by Benford's law, and establish that the frequency of occurrence of the leading digits in these estimates does closely follow the law.
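    Benford's prediction P(d) = log10(1 + 1/d) is easy to check numerically; the log-uniform synthetic sample below stands in for the IFF estimates, which are not reproduced here:

```python
import numpy as np

# Benford probabilities for leading digits 1..9.
benford = np.log10(1 + 1 / np.arange(1, 10))

# Data spread log-uniformly over several orders of magnitude follow the law
# closely; many financial quantities behave similarly.
rng = np.random.default_rng(0)
values = 10 ** rng.uniform(0, 6, 100_000)
first_digits = np.array([int(str(v)[0]) for v in values])
observed = np.array([np.mean(first_digits == d) for d in range(1, 10)])
```

    Digit 1 should appear about 30.1% of the time and digit 9 only about 4.6%, which is the signature the paper looks for in the IFF estimates.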

    Application of Surface wave methods for seismic site characterization

    Surface-wave dispersion analysis is widely used in geophysics to infer a shear-wave velocity model of the subsoil for a wide variety of applications. A shear-wave velocity model is obtained from the solution of an inverse problem based on the dispersive propagation of surface waves in vertically heterogeneous media. The analysis can be based either on active-source measurements or on seismic noise recordings. This paper discusses the most typical choices for the collection and interpretation of experimental data, providing a state-of-the-art review of the different steps involved in surface wave surveys. In particular, the different strategies for processing experimental data and for solving the inverse problem are presented, along with their advantages and disadvantages. Some issues related to the characteristics of passive surface wave data and their use in the H/V spectral ratio technique are also discussed, as additional information to be used independently or in conjunction with dispersion analysis. Finally, some recommendations for the use of surface wave methods are presented, while also outlining future trends in research on this topic.
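    The dispersion-based inverse problem can be caricatured with a deliberately crude forward model (the exponential wavelength weighting below is an assumed toy, not a real Rayleigh-wave solver): phase velocity rises from roughly 0.92 times the soft layer's shear velocity at short wavelengths toward that of the halfspace at wavelengths long relative to the layer thickness, and a grid search over layer parameters fits an observed curve:

```python
import numpy as np

def toy_dispersion(wavelength, vs1, vs2, h):
    """Assumed toy phase-velocity curve: layer (vs1, thickness h) over
    halfspace (vs2); short wavelengths sample the layer, long ones the
    halfspace."""
    w = np.exp(-wavelength / (2.0 * h))     # weight toward the shallow layer
    return 0.92 * (w * vs1 + (1.0 - w) * vs2)

# Synthetic "observed" dispersion curve with measurement noise.
wavelengths = np.linspace(5.0, 100.0, 25)   # metres
rng = np.random.default_rng(5)
c_obs = toy_dispersion(wavelengths, 200.0, 600.0, 10.0) + rng.normal(0, 2.0, 25)

# Grid-search inversion for layer velocity and thickness (halfspace fixed).
vs2_fixed = 600.0
best, best_misfit = None, np.inf
for vs1 in np.arange(100.0, 301.0, 10.0):
    for h in np.arange(2.0, 30.1, 1.0):
        c = toy_dispersion(wavelengths, vs1, vs2_fixed, h)
        misfit = float(np.sum((c - c_obs) ** 2))
        if misfit < best_misfit:
            best, best_misfit = (vs1, h), misfit
best_vs1, best_h = best
```

    A real survey replaces the toy forward model with a full dispersion solver and the grid search with one of the inversion strategies the paper reviews, but the observed-curve-to-velocity-model pipeline is the same.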