
    Genetic algorithm full-waveform inversion: uncertainty estimation and validation of the results

    We cast the genetic algorithm full-waveform inversion (GA-FWI) in a probabilistic framework that, through a multi-step procedure, allows us to estimate the posterior probability distribution (PPD) in model space. Since the GA is not a Markov chain Monte Carlo method, the PPD it estimates (GA PPD) must be refined by resampling the model space with a Gibbs sampler (GS), thus obtaining the GA+GS PPDs. We apply this procedure to two acoustic 2D models, an inclusion model and the Marmousi model, and find good agreement between the derived PPDs and the varying resolution due to changes in seismic illumination. Finally, we randomly extract several models from the derived PPDs to start many local full-waveform inversions (LFWIs), which produce final high-resolution models. This set of models is then used to numerically estimate the final uncertainty (GA+GS+LFWI PPD). The multimodal and wide PPDs derived from the GA optimization become unimodal and narrower after LFWI and, in the well-illuminated parts of the subsurface, the final GA+GS+LFWI PPDs contain the true model parameters. This confirms the ability of the GA optimization to find a velocity model suitable as input to LFWI.
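    As a rough illustration of the model-search stage described above, the sketch below runs a plain genetic algorithm on a toy 1D velocity profile; the smoothing-based forward operator, the bounds, and the crossover/mutation settings are illustrative assumptions, not the GA-FWI implementation of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "forward model": a smoothing operator standing in for wave propagation.
def forward(model):
    kernel = np.ones(5) / 5.0
    return np.convolve(model, kernel, mode="same")

# Synthetic observed data from a hypothetical true velocity profile (m/s).
true_model = 2000.0 + 500.0 * np.sin(np.linspace(0, 3 * np.pi, 60))
d_obs = forward(true_model)

def misfit(model):
    return np.sum((forward(model) - d_obs) ** 2)

# Genetic-algorithm search over velocity profiles (bounds are illustrative).
n_pop, n_gen, lo, hi = 80, 200, 1500.0, 3000.0
pop = rng.uniform(lo, hi, size=(n_pop, true_model.size))

for gen in range(n_gen):
    fit = np.array([misfit(m) for m in pop])
    parents = pop[np.argsort(fit)[: n_pop // 2]]   # truncation selection
    children = []
    while len(children) < n_pop - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, true_model.size)     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0.0, 20.0, child.size) # Gaussian mutation
        children.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, children])

best = pop[np.argmin([misfit(m) for m in pop])]
print("final misfit:", misfit(best))
```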

    Probabilistic inversions of electrical resistivity tomography data with a machine learning-based forward operator

    Casting a geophysical inverse problem into a Bayesian setting is often discouraged by the computational workload needed to run many forward modeling evaluations. Here we present probabilistic inversions of electrical resistivity tomography data in which the forward operator is replaced by a trained residual neural network that learns the non-linear mapping between the resistivity model and the apparent resistivity values. This specific architecture can provide some advantages over standard convolutional networks, as it mitigates the vanishing gradient problem that might affect deep networks. The modeling error introduced by the network approximation is properly taken into account and propagated onto the estimated model uncertainties. One crucial aspect of any machine learning application is the definition of an appropriate training set. We draw the models forming the training and validation sets from previously defined prior distributions, while a finite element code provides the associated datasets. We apply the approach to two probabilistic inversion frameworks: a Markov chain Monte Carlo algorithm is applied to synthetic data, while an ensemble-based algorithm is employed for the field measurements. For both the synthetic and field tests, the outcomes of the proposed method are benchmarked against the predictions obtained when the finite element code constitutes the forward operator. Our experiments illustrate that the network can effectively approximate the forward mapping even when a relatively small training set is created. The proposed strategy provides a forward operator that is three orders of magnitude faster than the accurate but computationally expensive finite element code. Our approach also yields most likely solutions and uncertainty quantifications comparable to those estimated when the finite element modeling is employed. The presented method allows solving the Bayesian electrical resistivity tomography problem with a reasonable computational cost and limited hardware resources.
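    The sketch below shows one way a residual-network surrogate of the forward operator could be set up in PyTorch. The paper's network is convolutional and its layer sizes are not reported here, so the fully connected residual blocks, the widths, and the model/data dimensions are assumptions for illustration; the training loop is commented out because the data loader (built from the prior-sampled models and their finite-element responses) is not defined in this snippet.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Fully connected block with a skip connection to ease gradient flow."""
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(width, width), nn.ReLU(),
                                 nn.Linear(width, width))
    def forward(self, x):
        return torch.relu(x + self.net(x))

class SurrogateForward(nn.Module):
    """Maps a (log-)resistivity model to apparent-resistivity data."""
    def __init__(self, n_model, n_data, width=256, depth=6):
        super().__init__()
        self.inp = nn.Linear(n_model, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        self.out = nn.Linear(width, n_data)
    def forward(self, m):
        return self.out(self.blocks(torch.relu(self.inp(m))))

# Dimensions are placeholders; training pairs come from the finite-element code.
net = SurrogateForward(n_model=1024, n_data=300)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
# for m_batch, d_batch in loader:      # loader built from the prior samples
#     opt.zero_grad()
#     loss = loss_fn(net(m_batch), d_batch)
#     loss.backward(); opt.step()
```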

    Succinct Data Structures for Families of Interval Graphs

    We consider the problem of designing succinct data structures for interval graphs with $n$ vertices while supporting degree, adjacency, neighborhood and shortest path queries in optimal time in the $\Theta(\log n)$-bit word RAM model. The degree query reports the number of incident edges to a given vertex in constant time, the adjacency query returns true if there is an edge between two vertices in constant time, the neighborhood query reports the set of all adjacent vertices in time proportional to the degree of the queried vertex, and the shortest path query returns a shortest path in time proportional to its length; thus the running times of these queries are optimal. Towards showing succinctness, we first show that at least $n\log n - 2n\log\log n - O(n)$ bits are necessary to represent any unlabeled interval graph $G$ with $n$ vertices, answering an open problem of Yang and Pippenger [Proc. Amer. Math. Soc. 2017]. This is augmented by a data structure of size $n\log n + O(n)$ bits that not only supports the aforementioned queries optimally but is also capable of executing various combinatorial algorithms (like proper coloring, maximum independent set, etc.) on the input interval graph efficiently. Finally, we extend our ideas to other variants of interval graphs, for example, proper/unit interval graphs, $k$-proper and $k$-improper interval graphs, and circular-arc graphs, and design succinct/compact data structures for these graph classes as well, along with supporting queries on them efficiently.
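    For intuition about the queries being supported, the following non-succinct baseline stores the intervals explicitly (pointer-based, far from the $n\log n + O(n)$-bit structure of the paper) and answers adjacency and degree queries by interval intersection; it is only a sketch of the query semantics, not the paper's data structure.

```python
from bisect import bisect_right

class IntervalGraph:
    """Naive baseline: vertices are intervals [l, r]; two vertices are
    adjacent iff their intervals intersect."""
    def __init__(self, intervals):
        self.iv = list(intervals)
        self.lefts = sorted(l for l, _ in self.iv)

    def adjacent(self, u, v):
        (lu, ru), (lv, rv) = self.iv[u], self.iv[v]
        return u != v and lu <= rv and lv <= ru

    def degree(self, u):
        lu, ru = self.iv[u]
        # intervals whose left endpoint is <= r_u ...
        candidates = bisect_right(self.lefts, ru)
        # ... minus those ending strictly before l_u, minus the vertex itself
        ending_before = sum(1 for l, r in self.iv if r < lu)
        return candidates - ending_before - 1

g = IntervalGraph([(1, 4), (2, 6), (5, 7), (8, 9)])
print(g.adjacent(0, 1), g.degree(1))   # True 2
```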

    Discrete cosine transform for parameter space reduction in linear and non-linear AVA inversions

    Geophysical inversions estimate subsurface physical parameters from the acquired data and, because of the large number of model unknowns, it is common practice to reparametrize the parameter space to reduce the dimension of the problem. This strategy can be particularly useful to decrease the computational complexity of non-linear inverse problems solved through an iterative sampling procedure. However, part of the information in the original parameter space is lost in the reduced space, and for this reason the model parameterization must always constitute a compromise between model resolution and model uncertainty. In this work, we use the Discrete Cosine Transform (DCT) to reparametrize linear and non-linear elastic amplitude versus angle (AVA) inversions cast into a Bayesian setting. In this framework the unknown parameters become the series of coefficients associated with the DCT basis functions. We first run linear AVA inversions to exactly quantify the trade-off between model resolution and posterior uncertainties with and without the model reduction. Then, we employ the DCT to reparametrize non-linear AVA inversions numerically solved through the Differential Evolution Markov Chain and the Hamiltonian Monte Carlo algorithms. To draw general conclusions about the benefits provided by the DCT reparameterization of AVA inversion, we focus on synthetic data examples in which the true models have been derived from actual well log data. The linear inversions demonstrate that the same level of model accuracy, model resolution, and data fitting can be achieved by employing a number of DCT coefficients much lower than the number of model parameters spanning the unreduced space. The non-linear inversions illustrate that an optimal model compression (one that guarantees optimal resolution and accurate uncertainty estimations) ensures faster convergence toward a stable posterior distribution and reduces the burn-in period and the computational cost of the sampling procedure.
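    A minimal sketch of the DCT reparameterization idea: a 1D property profile is mapped to DCT coefficients, only the first k coefficients are retained as inversion unknowns, and the inverse transform brings any sampled coefficient vector back to the full model space. The test profile and the value of k below are arbitrary illustrations, not the paper's settings.

```python
import numpy as np
from scipy.fft import dct, idct

# A hypothetical 1-D elastic property profile (e.g. Vp along a well).
z = np.linspace(0.0, 1.0, 200)
model = 2500 + 400 * np.tanh(10 * (z - 0.5)) + 50 * np.sin(40 * z)

# Forward DCT: the model is now described by its DCT coefficients.
coeff = dct(model, norm="ortho")

# Keep only the first k coefficients: these become the inversion unknowns.
k = 20
coeff_reduced = np.zeros_like(coeff)
coeff_reduced[:k] = coeff[:k]

# Back-transform to evaluate the forward model in the original space.
model_approx = idct(coeff_reduced, norm="ortho")
rel_err = np.linalg.norm(model - model_approx) / np.linalg.norm(model)
print(f"relative reconstruction error with {k}/{model.size} coefficients: {rel_err:.3f}")
```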

    Two-step (Analytical + geostatistical) pre-stack seismic inversion for elastic properties estimation and litho-fluid facies classification

    We infer the P-wave velocity, S-wave velocity, density, and the litho-fluid classes through two cascaded estimation steps. First, we analytically invert each seismic gather independently using a linear 1D convolutional forward operator and assuming a Gaussian-mixture prior. This step is computationally fast because no hard or lateral constraints are imposed on the recovered solution. The outcomes provided by the analytical inversion are used as auxiliary variables for a geostatistical simulation that generates the initial ensemble of models for the subsequent stage of geostatistical inversion, in which the estimated models are generated and iteratively updated according to a more realistic non-parametric prior, while spatial and hard constraints are now imposed on the solution. This second step determines the model update from the match between the observed and predicted seismic gathers, the latter computed through a 1D convolutional operator based on the full Zoeppritz equations. Synthetic inversions are used to validate the method and demonstrate that starting the second inversion step from an ensemble of models that already reproduces the observed data quite accurately allows for a fast retrieval of a subsurface model that honours the non-parametric prior, the hard constraints, and the spatial continuity patterns as coded by the variogram model.
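    The first, analytical step relies on a linear 1D convolutional forward operator; a common choice for such an operator is a linearized (Aki-Richards) reflectivity convolved with a wavelet, which is what the sketch below implements. The specific linearization, the Ricker wavelet, the blocky logs, and the angle set are assumptions for illustration, not necessarily those used in the paper.

```python
import numpy as np

def ricker(f0, dt, n):
    """Zero-phase Ricker wavelet used as an illustrative source signature."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def aki_richards(vp, vs, rho, theta_deg):
    """Linearized P-P reflectivity (three-term Aki-Richards) for one angle."""
    th = np.radians(theta_deg)
    dvp, dvs, drho = np.diff(vp), np.diff(vs), np.diff(rho)
    vpm = 0.5 * (vp[1:] + vp[:-1])
    vsm = 0.5 * (vs[1:] + vs[:-1])
    rhom = 0.5 * (rho[1:] + rho[:-1])
    k = (vsm / vpm) ** 2
    return (0.5 * (1 + np.tan(th) ** 2) * dvp / vpm
            - 4 * k * np.sin(th) ** 2 * dvs / vsm
            + 0.5 * (1 - 4 * k * np.sin(th) ** 2) * drho / rhom)

# Hypothetical blocky 1-D elastic logs (purely illustrative values).
vp  = np.repeat([2400.0, 2800.0, 3100.0], 40)   # P-wave velocity (m/s)
vs  = np.repeat([1200.0, 1500.0, 1700.0], 40)   # S-wave velocity (m/s)
rho = np.repeat([2.20, 2.35, 2.45], 40)         # density (g/cc)

# Linear convolutional forward model: reflectivity * wavelet, one trace per angle.
w = ricker(f0=30.0, dt=0.002, n=41)
gather = np.column_stack([np.convolve(aki_richards(vp, vs, rho, th), w, mode="same")
                          for th in (0.0, 10.0, 20.0, 30.0)])
print(gather.shape)   # (n_samples, n_angles)
```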

    Including edge preserving smoothing filter within blocky-constrained, target-oriented AVA inversions

    Resolving thin layers and achieving focused layer boundaries is one of the major challenges in seismic inversion. This translates into recovering a blocky solution with sparse spatial derivatives of the model parameters. Here, we present two iterative focusing regularization techniques for target-oriented Amplitude Versus Angle (AVA) inversion. Target-oriented means that only the AVA responses of the reflections of interest are inverted for the simultaneous estimation of P-wave, S-wave and density reflectivities. The first approach imposes Cauchy constraints on the spatial model derivatives, whereas the second is inspired by the minimum gradient support regularization. Both implemented algorithms enhance their focusing and edge-preserving abilities by exploiting an Edge Preserving Smoothing (EPS) filter that is used to compute both the model constraints and the model update. We include a-priori model information in the inversion kernel to guide the convergence of the algorithms toward physically plausible solutions. The two approaches are compared against the standard Bayesian inversion, which simply considers Gaussian-distributed model parameters, and against the well-known edge-preserving method that assumes Cauchy-distributed derivatives of the model parameters. Due to the lack of available field seismic data, we limit the attention to synthetic inversion experiments in which we simulate different signal-to-noise (S/N) ratios. The inversion tests prove the suitability of the two proposed approaches for target-oriented AVA inversion and demonstrate their focusing and anti-noise abilities. In particular, the two implemented algorithms outperform the standard Bayesian inversion and the Cauchy approach in cases of low S/N ratios. The two implemented methods are also extremely flexible and can be applied to other linear or non-linear geophysical inverse problems.
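    To give a concrete feel for why an EPS filter preserves blocky boundaries, the sketch below implements one classical edge-preserving smoother: each sample is replaced by the mean of the least-variance window containing it, so averaging never straddles a sharp jump. The window length and the test profile are arbitrary; this is not the authors' specific EPS implementation.

```python
import numpy as np

def eps_filter(x, win=5):
    """Edge-preserving smoothing: assign to each sample the mean of the
    length-`win` window (among those containing it) with the smallest variance."""
    n = len(x)
    y = np.empty(n)
    for i in range(n):
        best_var, best_mean = np.inf, x[i]
        # windows [s, s + win) that contain sample i and fit inside the signal
        for s in range(max(0, i - win + 1), min(i, n - win) + 1):
            w = x[s:s + win]
            v = w.var()
            if v < best_var:
                best_var, best_mean = v, w.mean()
        y[i] = best_mean
    return y

# Blocky reflectivity-like profile corrupted by noise (illustrative).
rng = np.random.default_rng(1)
blocky = np.repeat([0.0, 0.2, -0.1, 0.3], 30)
noisy = blocky + 0.02 * rng.standard_normal(blocky.size)
print(np.round(eps_filter(noisy)[55:65], 2))   # stays sharp across the jump
```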

    Combining discrete cosine transform and convolutional neural networks to speed up the Hamiltonian Monte Carlo inversion of pre-stack seismic data

    Markov chain Monte Carlo algorithms are commonly employed for accurate uncertainty appraisals in non-linear inverse problems. The downside of these algorithms is the considerable number of samples needed to achieve reliable posterior estimations, especially in high-dimensional model spaces. To overcome this issue, the Hamiltonian Monte Carlo algorithm has recently been introduced to solve geophysical inversions. Differently from classical Markov chain Monte Carlo algorithms, this approach exploits the derivative information of the target posterior probability density to guide the sampling of the model space. However, its main downside is the computational cost of the derivative computation (i.e. the computation of the Jacobian matrix around each sampled model). Possible strategies to mitigate this issue are the reduction of the dimensionality of the model space and/or the use of efficient methods to compute the gradient of the target density. Here we focus on the estimation of elastic properties (P-wave and S-wave velocities and density) from pre-stack data through a non-linear amplitude versus angle inversion in which the Hamiltonian Monte Carlo algorithm is used to sample the posterior probability. To decrease the computational cost of the inversion procedure, we employ the discrete cosine transform to reparametrize the model space, and we train a convolutional neural network to predict the Jacobian matrix around each sampled model. The training data set for the network is also parametrized in the discrete cosine transform space, thus allowing for a reduction of the number of parameters to be optimized during the learning phase. Once trained, the network can be used to compute the Jacobian matrix associated with each sampled model in real time. The outcomes of the proposed approach are compared and validated against the predictions of Hamiltonian Monte Carlo inversions in which a computationally expensive, but accurate, finite-difference scheme is used to compute the Jacobian matrix, and against those obtained by replacing the Jacobian with a matrix operator derived from a linear approximation of the Zoeppritz equations. Synthetic and field inversion experiments demonstrate that the proposed approach dramatically reduces the cost of the Hamiltonian Monte Carlo inversion while preserving an accurate and efficient sampling of the posterior probability.
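    To make the sampling scheme concrete, the sketch below implements a bare-bones Hamiltonian Monte Carlo step (leapfrog integration plus a Metropolis test) on a toy Gaussian target whose gradient is analytic; in the paper that gradient would instead be assembled from the Jacobian predicted by the trained network in the DCT-reduced space. Step size, trajectory length, and the target itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: a correlated Gaussian standing in for the reduced model space.
cov = np.array([[1.0, 0.6], [0.6, 0.5]])
prec = np.linalg.inv(cov)
log_post = lambda q: -0.5 * q @ prec @ q
grad_log_post = lambda q: -prec @ q   # in the paper, a trained network would
                                      # supply the Jacobian entering this gradient

def hmc_step(q, eps=0.1, n_leap=20):
    p = rng.standard_normal(q.size)              # momentum refresh
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_post(q_new)    # leapfrog integration
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new += eps * grad_log_post(q_new)
    q_new += eps * p_new
    p_new += 0.5 * eps * grad_log_post(q_new)
    # Metropolis test on the change in total energy (potential + kinetic)
    h_old = -log_post(q) + 0.5 * p @ p
    h_new = -log_post(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h_old - h_new else q

q = np.zeros(2)
samples = []
for _ in range(2000):
    q = hmc_step(q)
    samples.append(q)
print(np.cov(np.array(samples).T))   # should approach `cov`
```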

    Ensemble-Based Electrical Resistivity Tomography with Data and Model Space Compression

    Inversion of electrical resistivity tomography (ERT) data is an ill-posed problem that is usually solved through deterministic gradient-based methods. These methods guarantee fast convergence but hinder accurate assessments of model uncertainties. By contrast, Markov chain Monte Carlo (MCMC) algorithms can be employed for accurate uncertainty appraisals, but they entail a formidable computational cost due to the many forward model evaluations needed to converge. We present an alternative approach to ERT that not only provides a best-fitting resistivity model but also gives an estimate of the uncertainties affecting the inverse solution. More specifically, the implemented method aims to provide multiple realizations of the resistivity values in the subsurface by iteratively updating an initial ensemble of models based on the difference between the predicted and measured apparent resistivity pseudosections. The initial ensemble is generated using a geostatistical method under the assumption of log-Gaussian distributed resistivity values and a Gaussian variogram model. A finite-element code constitutes the forward operator that maps the resistivity values onto the associated apparent resistivity pseudosection. The optimization procedure is driven by the ensemble smoother with multiple data assimilation, an iterative ensemble-based algorithm that performs a Bayesian updating step at each iteration. The main advantages of the proposed approach are that it can be applied to non-linear inverse problems, while also providing an ensemble of models from which the uncertainty on the recovered solution can be inferred. The ill-conditioning of the inversion procedure is decreased through a discrete cosine transform reparameterization of both data and model spaces. The implemented method is first validated on synthetic data and then applied to field data. We also compare the proposed method with a deterministic least-squares inversion and with an MCMC algorithm. We show that the ensemble-based inversion estimates resistivity models and associated uncertainties comparable to those yielded by a much more computationally intensive MCMC sampling.
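    The ensemble smoother with multiple data assimilation performs, at each of a fixed number of iterations, a Kalman-like update of every ensemble member using perturbed observations with an inflated data-error covariance. The sketch below follows that standard update on a toy linear forward model standing in for the finite-element ERT solver; ensemble size, inflation factors, and the linear operator are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def es_mda(ensemble, forward, d_obs, c_d, alphas):
    """Minimal ES-MDA loop: the reciprocals of `alphas` should sum to one."""
    n_ens = ensemble.shape[0]
    for alpha in alphas:
        d_pred = np.array([forward(m) for m in ensemble])   # (n_ens, n_data)
        dm = ensemble - ensemble.mean(axis=0)                # model anomalies
        dd = d_pred - d_pred.mean(axis=0)                    # data anomalies
        c_md = dm.T @ dd / (n_ens - 1)                       # model-data cross-cov
        c_dd = dd.T @ dd / (n_ens - 1)                       # data auto-cov
        gain = c_md @ np.linalg.inv(c_dd + alpha * c_d)      # Kalman-like gain
        noise = rng.multivariate_normal(np.zeros(d_obs.size), alpha * c_d, size=n_ens)
        ensemble = ensemble + (d_obs + noise - d_pred) @ gain.T
    return ensemble

# Toy linear forward model standing in for the finite-element ERT solver.
G = rng.standard_normal((30, 50))
m_true = np.sin(np.linspace(0, 3 * np.pi, 50))
c_d = 0.01 * np.eye(30)
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(30), c_d)

prior_ensemble = rng.standard_normal((100, 50))              # initial ensemble
posterior_ensemble = es_mda(prior_ensemble, lambda m: G @ m, d_obs, c_d,
                            alphas=[4.0, 4.0, 4.0, 4.0])
print(np.abs(posterior_ensemble.mean(axis=0) - m_true).mean())
```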

    A gradient-based Markov chain Monte Carlo algorithm for elastic pre-stack inversion with data and model space reduction

    The main challenge of Markov chain Monte Carlo sampling is to define a proposal distribution that is simultaneously a good approximation of the posterior probability and inexpensive to manipulate. We present a gradient-based Markov chain Monte Carlo inversion of pre-stack seismic data in which the posterior sampling is accelerated by defining a proposal that is a local Gaussian approximation of the posterior, while a non-parametric prior is assumed for the distribution of the elastic properties. The proposal is constructed from the local Hessian and gradient information of the log posterior, whereas the non-linear, exact Zoeppritz equations constitute the forward modelling engine for the inversion procedure. The Hessian and gradient information is made computationally tractable by a reduction of the data and model spaces through a discrete cosine transform reparameterization. This reparameterization acts as a regularization operator in the model space, while also preserving the spatial and temporal continuity of the elastic properties in the sampled models. We test the implemented algorithm on synthetic pre-stack inversions under different signal-to-noise ratios in the observed data. We also compare the results provided by the presented method when a computationally expensive (but accurate) finite-difference scheme is used for the Jacobian computation with those obtained when the Jacobian is derived from the linearization of the exact Zoeppritz equations. The outcomes of the proposed approach are also compared against those yielded by a gradient-free Monte Carlo sampling and by a deterministic least-squares inversion. Our tests demonstrate that the gradient-based sampling reaches accurate uncertainty estimations with a much lower computational effort than the gradient-free approach.
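    The sketch below illustrates the kind of proposal mechanism described above: at the current model, the gradient and Hessian of the log posterior define a local Gaussian (Newton-type) proposal, and a Metropolis-Hastings test with the asymmetric-proposal correction decides acceptance. A two-dimensional Gaussian target with analytic derivatives stands in for the reduced pre-stack problem; every model-specific choice here is an assumption.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Toy posterior: a correlated Gaussian standing in for the reduced elastic model.
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(cov)
log_post = lambda m: -0.5 * m @ prec @ m
grad = lambda m: -prec @ m        # in the paper these would come from the
hess = lambda m: prec             # Jacobian of the reduced Zoeppritz forward

def local_gaussian(m):
    """Local Gaussian approximation of the posterior around m (Newton step)."""
    h_inv = np.linalg.inv(hess(m))
    return m + h_inv @ grad(m), h_inv     # proposal mean and covariance

m = np.array([3.0, -3.0])
chain = [m]
for _ in range(5000):
    mean_f, cov_f = local_gaussian(m)
    prop = rng.multivariate_normal(mean_f, cov_f)
    mean_b, cov_b = local_gaussian(prop)
    # Metropolis-Hastings ratio with the asymmetric-proposal correction
    log_a = (log_post(prop) - log_post(m)
             + multivariate_normal.logpdf(m, mean_b, cov_b)
             - multivariate_normal.logpdf(prop, mean_f, cov_f))
    if np.log(rng.uniform()) < log_a:
        m = prop
    chain.append(m)
print(np.cov(np.array(chain[1000:]).T))   # should approach `cov`
```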