
    A Review of Geophysical Modeling Based on Particle Swarm Optimization

    This paper reviews the application of the particle swarm optimization (PSO) algorithm to stochastic inverse modeling of geophysical data. The main features of PSO are summarized, and the most important contributions in several geophysical fields are analyzed. The aim is to indicate the fundamental steps in the evolution of the PSO methodologies that have been adopted to model the Earth’s subsurface, and then to undertake a critical evaluation of their benefits and limitations. Original works have been selected from the existing geophysical literature to illustrate successful applications of PSO to the interpretation of electromagnetic (magnetotelluric and time-domain) data, gravimetric and magnetic data, self-potential, direct current and seismic data. These case studies are critically described and compared. In addition, joint optimization of multiple geophysical data sets by means of multi-objective PSO is presented to highlight the advantage of using a single solver that deploys Pareto optimality to handle different data sets without conflicting solutions. Finally, we propose best practices for the implementation of a customized algorithm from scratch to perform stochastic inverse modeling of any kind of geophysical data set, for the benefit of PSO practitioners and inexperienced researchers
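
    For readers who want a concrete starting point, the sketch below shows the canonical PSO update loop applied to a generic least-squares misfit between observed and predicted geophysical data. The forward operator, the parameter bounds, and the control constants (inertia w, cognitive and social factors c1, c2) are illustrative assumptions, not values taken from the reviewed papers; a production implementation would follow the best practices discussed in the review.

```python
import numpy as np

def pso_invert(forward, d_obs, bounds, n_particles=40, n_iter=200,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a least-squares data misfit.

    forward : callable mapping a model vector to predicted data
    d_obs   : observed data vector
    bounds  : (n_params, 2) array of lower/upper parameter bounds
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    n_params = len(lo)

    # Initialize particle positions and velocities inside the bounds.
    x = rng.uniform(lo, hi, size=(n_particles, n_params))
    v = np.zeros_like(x)

    def misfit(m):
        r = forward(m) - d_obs
        return float(r @ r)

    p_best = x.copy()                          # personal best positions
    p_val = np.array([misfit(m) for m in x])
    g_best = p_best[np.argmin(p_val)].copy()   # global best position
    g_val = p_val.min()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_params))
        # Velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([misfit(m) for m in x])
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        if p_val.min() < g_val:
            g_val = p_val.min()
            g_best = p_best[np.argmin(p_val)].copy()
    return g_best, g_val

# Toy usage: recover three "layer parameters" from a linear forward model.
A = np.array([[1.0, 0.5, 0.2], [0.3, 1.0, 0.4], [0.1, 0.6, 1.0]])
m_true = np.array([10.0, 50.0, 5.0])
d_obs = A @ m_true
m_est, err = pso_invert(lambda m: A @ m, d_obs,
                        bounds=np.array([[1.0, 100.0]] * 3))
```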

    Population-based algorithms for improved history matching and uncertainty quantification of Petroleum reservoirs

    In modern field management practices, there are two important steps that shed light on a multimillion dollar investment. The first step is history matching, where the simulation model is calibrated to reproduce the historical observations from the field. In this inverse problem, different geological and petrophysical properties may provide equally good history matches. Such diverse models are likely to show different production behaviors in the future. This ties history matching to the second step: uncertainty quantification of predictions. Multiple history matched models are essential for a realistic uncertainty estimate of the future field behavior. These two steps facilitate decision making and have a direct impact on the technical and financial performance of oil and gas companies. Population-based optimization algorithms have recently enjoyed growing popularity for solving engineering problems. Population-based systems work with a group of individuals that cooperate and communicate to accomplish a task that is normally beyond the capabilities of each individual. These individuals are deployed with the aim of solving the problem with maximum efficiency. This thesis introduces the application of two novel population-based algorithms for history matching and uncertainty quantification of petroleum reservoir models. Ant colony optimization and differential evolution algorithms are used to search the space of parameters to find multiple history matched models and, using a Bayesian framework, the posterior probabilities of the models are evaluated for prediction of reservoir performance. It is demonstrated that by bringing in the latest developments in computer science, such as ant colony optimization, differential evolution and multiobjective optimization, we can improve the history matching and uncertainty quantification frameworks. This thesis provides insights into the performance of these algorithms in history matching and prediction and develops an understanding of their tuning parameters. The research also includes a comparative study of these methods against a benchmark technique, the Neighbourhood Algorithm. This comparison reveals the superiority of the proposed methodologies in various areas such as computational efficiency and match quality
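
    As a minimal illustration of the second of these algorithms, the sketch below implements the classic DE/rand/1/bin loop driving a generic history-matching misfit and returning an ensemble of low-misfit models. The misfit definition, the control parameters (F, CR, population size) and the Bayesian reweighting hinted at in the closing comment are illustrative assumptions rather than the thesis's actual settings.

```python
import numpy as np

def de_history_match(misfit, bounds, pop_size=30, n_gen=150,
                     F=0.8, CR=0.9, seed=0):
    """Classic DE/rand/1/bin loop minimizing a history-matching misfit.

    misfit : callable returning the mismatch between simulated and
             observed production data for a parameter vector
    bounds : (n_params, 2) lower/upper bounds on reservoir parameters
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    n = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, n))
    cost = np.array([misfit(p) for p in pop])

    for _ in range(n_gen):
        for i in range(pop_size):
            # Pick three distinct members other than i for the mutant vector.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            # Binomial crossover, keeping at least one mutant component.
            cross = rng.random(n) < CR
            cross[rng.integers(n)] = True
            trial = np.where(cross, mutant, pop[i])
            trial_cost = misfit(trial)
            if trial_cost <= cost[i]:          # greedy selection
                pop[i], cost[i] = trial, trial_cost
    order = np.argsort(cost)
    return pop[order], cost[order]   # ensemble of history-matched models

# The returned low-misfit ensemble can then be reweighted in a Bayesian
# framework (e.g. weights proportional to exp(-misfit / 2)) to quantify
# uncertainty in the predicted reservoir performance.
```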

    Evolutionary Computation 2020

    Intelligent optimization uses the mechanisms of computational intelligence to refine a suitable feature model, design an effective optimization algorithm, and then obtain an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, high optimization efficiency and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, whale optimization algorithm, differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvement and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms
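
    To make one of the cited benchmark problems concrete, here is a toy genetic-algorithm solver for the 0-1 knapsack problem. The operator choices (tournament selection, one-point crossover, bit-flip mutation), the zero-fitness penalty for infeasible individuals, and all parameter values are illustrative assumptions, not methods described in the book.

```python
import numpy as np

def ga_knapsack(values, weights, capacity, pop_size=60, n_gen=200,
                p_mut=0.02, seed=0):
    """Toy genetic algorithm for the 0-1 knapsack problem.

    Individuals are bit strings; infeasible ones (over capacity) get zero fitness.
    """
    rng = np.random.default_rng(seed)
    n = len(values)
    pop = rng.integers(0, 2, size=(pop_size, n))

    def fitness(ind):
        return ind @ values if ind @ weights <= capacity else 0.0

    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])
        # Tournament selection: the better of two random individuals wins.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(fit[i] >= fit[j], i, j)]
        # One-point crossover between consecutive parents.
        cut = rng.integers(1, n, size=pop_size // 2)
        children = parents.copy()
        for k, c in enumerate(cut):
            children[2 * k, c:] = parents[2 * k + 1, c:]
            children[2 * k + 1, c:] = parents[2 * k, c:]
        # Bit-flip mutation.
        flip = rng.random(children.shape) < p_mut
        pop = np.where(flip, 1 - children, children)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(fit)], fit.max()

values = np.array([60, 100, 120, 30, 70])
weights = np.array([10, 20, 30, 5, 15])
best, total = ga_knapsack(values, weights, capacity=50)
```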

    An Equivalent Point-Source Stochastic Model of the NGA-East Ground-Motion Models and a Seismological Method for Estimating the Long-Period Transition Period TL

    This dissertation deals with the stochastic simulation of the Next Generation Attenuation-East (NGA-East) ground-motion models and proposes a new method for calculating the long-period transition period parameter, TL, in seismic building codes. The work is carried out in two related studies. In the first study, a set of correlated and consistent seismological parameters is estimated in the Central and Eastern United States (CEUS) by inverting the median 5%-damped spectral acceleration (PSA) predicted by the NGA-East ground-motion models (GMMs). These seismological parameters together form a point-source stochastic GMM. Magnitude-specific inversions are performed for moment magnitudes Mw 4.0-8.0, rupture distances Rrup = 1-1000 km, periods T = 0.01-10 s, and National Earthquake Hazards Reduction Program site class A conditions. In the second study, the long-period transition period parameter TL is investigated, and an alternative seismological approach is used to calculate it. The long-period transition period parameter is used in determining the design spectral acceleration of long-period structures. The estimation of TL has remained unchanged since its original introduction in FEMA 450-1/2003; the calculation is loosely based on a correlation between modal magnitude Mw and TL that does not account for differences in seismological parameters across regions of the country. This study calculates TL based on the definition of the corner period and includes two seismological parameters, the stress parameter Δσ and the crustal velocity in the source region β, in its estimation. The results yield a generally more conservative (i.e., longer) estimate of TL than the one currently used in engineering design standards
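
    The corner-period definition invoked in the second study can be made concrete with the standard single-corner Brune point-source model. The sketch below treats TL as the inverse of the Brune corner frequency computed from the Hanks-Kanamori moment-magnitude relation; equating TL with 1/fc and the example values of Δσ and β are illustrative assumptions, not the dissertation's calibrated results.

```python
def corner_period_TL(Mw, stress_drop_bars=100.0, beta_km_s=3.5):
    """Corner-period estimate of the long-period transition period TL.

    Uses the single-corner Brune point-source model
        fc = 4.9e6 * beta * (delta_sigma / M0)**(1/3)
    with beta in km/s, delta_sigma in bars, and seismic moment M0 in dyne-cm
    from the Hanks-Kanamori relation log10(M0) = 1.5*Mw + 16.05.
    TL is taken here as the corner period 1/fc.
    """
    M0 = 10.0 ** (1.5 * Mw + 16.05)                                   # dyne-cm
    fc = 4.9e6 * beta_km_s * (stress_drop_bars / M0) ** (1.0 / 3.0)   # Hz
    return 1.0 / fc                                                   # seconds

# Example: sweep magnitude for a 100-bar stress parameter and a
# 3.5 km/s source-region velocity.
for Mw in (6.0, 7.0, 8.0):
    print(f"Mw {Mw:.1f}: TL ~ {corner_period_TL(Mw):.1f} s")
```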

    Contour Dynamics Methods

    In an early paper on the stability of fluid layers with uniform vorticity, Rayleigh (1880) remarks: "... In such cases, the velocity curve is composed of portions of straight lines which meet each other at finite angles. This state of things may be supposed to be slightly disturbed by bending the surfaces of transition, and the determination of the subsequent motion depends upon that of the form of these surfaces. For ω retains its constant value throughout each layer unchanged in the absence of friction, and by a well-known theorem the whole motion depends upon ω." We can now recognize this as essentially the principle of contour dynamics (CD), where ω is the uniform vorticity. The theorem referred to is the Biot-Savart law. Nearly a century later, Zabusky et al (1979) presented numerical CD calculations of nonlinear vortex patch evolution. Subsequently, owing to its compact form conferring a deceptive simplicity, CD has become a widely used method for the investigation of two-dimensional rotational flow of an incompressible inviscid fluid. The aim of this article is to survey the development, technical details, and vortex-dynamic applications of the CD method in an effort to assess its impact on our understanding of the mechanics of rotational flow in two dimensions at ultrahigh Reynolds numbers. The study of the dynamics of two- and three-dimensional vortex mechanics by computational methods has been an active research area for more than two decades. Quite apart from many practical applications in the aerodynamics of separated flows, the theoretical and numerical study of vortices in incompressible fluids has been stimulated by the idea that turbulent fluid motion may be viewed as comprising ensembles of more or less coherent laminar vortex structures that interact via relatively simple dynamics and by the appeal of the vorticity equation, which does not contain the fluid pressure. Two-dimensional vortex interactions have been perceived as relevant to the origins of coherent structures observed experimentally in mixing layers, jets, and wakes, and for models of large-scale atmospheric and oceanic turbulence. Interest has often focused on the limit of infinite Reynolds number, where, in the absence of boundaries, the inviscid Euler equations are assumed to properly describe the flow dynamics. The numerous surveys of progress in the study of vorticity and the use of numerical methods applied to vortex mechanics include articles by Saffman & Baker (1979) and Saffman (1981) on inviscid vortex interactions and Aref (1983) on two-dimensional flows. Numerical methods have been surveyed by Chorin (1980) and Leonard (1980, 1985). Caflisch (1988) describes work on the mathematical aspects of the subject. Zabusky (1981), Aref (1983), and Melander et al (1987b) discuss various aspects of CD. The review of Dritschel (1989) gives emphasis to numerical issues in CD and to recent computations with contour surgery. This article is confined to a discussion of vortices on a two-dimensional surface. We generally follow Saffman & Baker (1979) in matters of definition. In two dimensions a vortex sheet is a line of discontinuity in velocity, while a vortex jump is a line of discontinuity in vorticity. We shall, however, use filament to denote a two-dimensional ribbon of vorticity surrounded by fluid with vorticity of different magnitude (which may be zero), rather than the more usual three-dimensional idea of a vortex tube.
The ambiguity is unfortunate but is already in the literature. Additionally, a vortex patch is a finite, singly connected area of uniform vorticity, while a vortex strip is an infinite strip of uniform vorticity with finite thickness, or equivalently, an infinite filament. Contour Dynamics will refer to the numerical solution of initial value problems for piecewise constant vorticity distributions by the Lagrangian method of calculating the evolution of the vorticity jumps. Such flows are often related to corresponding solutions of the Euler equations that are steady in some translating or rotating frame of reference. These solutions will be called vortex equilibria, and the numerical technique for computing their shapes based on CD is often referred to as contour statics. The mathematical foundation for the study of vorticity was laid primarily by the well-known investigations of Helmholtz, Kelvin, J. J. Thomson, Love, and others. In our century, efforts to produce numerical simulations of flows governed by the Euler equations have utilized a variety of Eulerian, Lagrangian, and hybrid methods. Among the former is the class of spectral methods that now comprises the prevailing tool for large-scale two- and three-dimensional calculations (see Hussaini & Zang 1987). The Lagrangian methods for two-dimensional flows have been predominantly vortex tracking techniques based on the Helmholtz vorticity laws. The first initial value calculations were those of Rosenhead (1931) and Westwater (1935), who attempted to calculate vortex sheet evolution by the motion of O(10) point vortices. Subsequent efforts by Moore (1974) (see also Moore 1983, 1985) and others to produce more refined computations for vortex sheets have failed for reasons related to the tendency for initially smooth vortex sheet data to produce singularities (Moore 1979). Discrete vortex methods used to study the nonlinear dynamics of vortex patches and layers have included the evolution of assemblies of point vortices by direct summation (e.g. Acton 1976) and the cloud-in-cell method (Roberts & Christiansen 1972, Christiansen & Zabusky 1973, Aref & Siggia 1980, 1981). For reviews see Leonard (1980) and Aref (1983). These techniques have often been criticized for their lack of accuracy and numerical convergence and because they may be subject to grid scale dispersion. However, many qualitative vortex phenomena observed in nature and in experiments, such as amalgamation events and others still under active investigation (e.g. filamentation), were first simulated numerically with discrete vortices. The contour dynamics approach is attractive because it appears to allow direct access, at least for small times, to the inviscid dynamics for vorticity distributions smoother than those of either point vortices or vortex sheets, while at the same time enabling the mapping of the two-dimensional Euler equations to a one-dimensional Lagrangian form. In Section 2 we discuss the formulation and numerical implementation of contour dynamics for the Euler equations in two dimensions. Section 3 is concerned with applications to isolated and multiple vortex systems and to vortex layers. An attempt is made to relate this work to calculations of the relevant vortex equilibria and to results obtained with other methods.
Axisymmetric contour dynamics and the treatment of the multi-layer model of quasigeostrophic flows are described in Section 4, while Section 5 is devoted to a discussion of the tendency shown by vorticity jumps to undergo the strange and subtle phenomenon of filamentation
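
    As a pointer to the compact form mentioned above, the sketch below evaluates the velocity induced by a single uniform vortex patch using the standard contour-integral reduction of the Biot-Savart law, (u, v)(x) = -(ω/2π) ∮ ln|x - x'| (dx', dy'). The sign convention (counterclockwise contour, positive ω), the midpoint quadrature, and the far-field check against an equivalent point vortex are illustrative choices rather than the formulation of any particular paper surveyed here.

```python
import numpy as np

def contour_velocity(xp, yp, nodes, omega=1.0):
    """Velocity at points (xp, yp) induced by a uniform vortex patch.

    nodes : (N, 2) array of contour nodes, ordered counterclockwise.
    Uses the contour-dynamics reduction of the Biot-Savart law,
        (u, v)(x) = -(omega / 2*pi) * closed integral of ln|x - x'| (dx', dy'),
    with the log kernel evaluated at segment midpoints so the (integrable)
    singularity is never hit when x itself is a contour node.
    """
    xp = np.atleast_1d(np.asarray(xp, float))
    yp = np.atleast_1d(np.asarray(yp, float))
    nodes = np.asarray(nodes, float)
    nxt = np.roll(nodes, -1, axis=0)
    mid = 0.5 * (nodes + nxt)            # segment midpoints
    dxy = nxt - nodes                    # segment vectors (dx', dy')
    # Pairwise log distances between evaluation points and midpoints.
    dx = xp[:, None] - mid[None, :, 0]
    dy = yp[:, None] - mid[None, :, 1]
    logr = 0.5 * np.log(dx * dx + dy * dy)
    coef = -omega / (2.0 * np.pi)
    u = coef * logr @ dxy[:, 0]
    v = coef * logr @ dxy[:, 1]
    return u, v

# Far from a circular patch the induced velocity should approach that of
# a point vortex with circulation Gamma = omega * pi * R**2.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
R = 1.0
circle = np.column_stack([R * np.cos(theta), R * np.sin(theta)])
u, v = contour_velocity(10.0, 0.0, circle)              # point at (10, 0)
print(v[0], np.pi * R**2 / (2.0 * np.pi * 10.0))        # both ~0.05
```

    In a CD time step the same routine is evaluated at the contour nodes themselves, which are then advected with the computed velocity.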

    Inversion strategies for seismic surface waves and time-domain electromagnetic data with application to geotechnical characterization examples.

    Geophysical methods are broadly used to map the subsurface. Their ability to investigate large areas in a short time and to reach significant depths with good resolution makes them suitable for a wide range of applications: from hydrological studies, mineral exploration, and archaeological investigations to geotechnical characterization. Unfortunately, most geophysical inverse problems are ill-posed. Thus, to effectively invert the geophysical data and obtain meaningful models of the subsurface, a priori information needs to be included in the process. This is the basic idea behind inversion theory. This thesis deals with the inversion of two types of geophysical measurements: Seismic Surface Wave (SSW) data and Time-Domain Electromagnetic (TDEM) observations. The present work consists of two parts: (1) The first concerns possible implementations of the minimum gradient support stabilizer in an SSW inversion routine and its extension to the laterally constrained case. By means of this novel approach, it is possible to tune the level of sparsity of the reconstructed velocity model, providing a solution with the desirable characteristics (smooth or sharp) in both directions (vertically and laterally). The capabilities of the proposed approach have been tested via applications on synthetic and measured data. (2) The second part of the thesis concerns the joint interpretation of SSW and TDEM measurements for an improved geotechnical characterization of an area intended for construction. In this case, the SSW results, together with other ancillary data, are used as prior information for the subsequent inversion of the TDEM measurements. In this respect, the SSW results have been translated, via a petrophysical relationship, into pieces of information to be used in the TDEM inversion. This work is coherent with one of the goals of the United Nations Agenda 2030 for sustainable development, specifically item 11b, as geotechnical characterization is one of the essential components of the design of civil engineering works, ensuring the necessary safety and resilience to natural disasters and climate change. However, the field of application of the proposed approaches is much broader, as they can also be used, e.g., for groundwater mapping and for the evaluation of aquifer contamination. In this respect, the present work is also in line with items 6.1, 6.3 and 6.4 of the 2030 UN Agenda
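
    A minimal sketch of the discrete minimum gradient support stabilizer referred to in part (1) is given below, together with the regularized objective it would enter. The 1D discretization, the focusing parameter beta, and the trade-off weight lam are illustrative assumptions; the thesis's laterally constrained formulation is not reproduced here.

```python
import numpy as np

def mgs_stabilizer(m, beta=1e-3):
    """Discrete minimum gradient support (MGS) stabilizer for a 1D model.

    S(m) = sum_i dm_i**2 / (dm_i**2 + beta**2),  dm_i = m[i+1] - m[i].

    A small beta favors blocky (sparse-gradient) models; a large beta makes
    the term behave more like a smooth, Tikhonov-style penalty.
    """
    dm = np.diff(m)
    return float(np.sum(dm**2 / (dm**2 + beta**2)))

def objective(m, forward, d_obs, lam=1.0, beta=1e-3):
    """Regularized misfit: data term plus weighted MGS stabilizer."""
    r = forward(m) - d_obs
    return float(r @ r) + lam * mgs_stabilizer(m, beta)

# The MGS term roughly counts the number of non-zero model jumps, so a
# blocky velocity profile with one interface is penalized far less than
# a smoothly graded one, which is what lets sharp boundaries survive.
blocky = np.array([200.0, 200.0, 200.0, 600.0, 600.0, 600.0])
smooth = np.linspace(200.0, 600.0, 6)
print(mgs_stabilizer(blocky), mgs_stabilizer(smooth))   # ~1 vs ~5
```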

    Advanced Applications for Underwater Acoustic Modeling


    Integrated High-Resolution Modeling for Operational Hydrologic Forecasting

    Current advances in Earth-sensing technologies, physically-based modeling, and computational processing offer the promise of a major revolution in hydrologic forecasting, with profound implications for the management of water resources and protection from related disasters. However, access to the necessary capabilities for managing information from heterogeneous sources, and for its deployment in robust-enough modeling engines, remains the province of large governmental agencies. Moreover, even within this type of centralized operation, success is still challenged by the sheer computational complexity associated with overcoming uncertainty in the estimation of parameters and initial conditions in large-scale or high-resolution models. In this dissertation we seek to facilitate access to hydrometeorological data products from various U.S. agencies and to advanced watershed modeling tools through the implementation of a lightweight GIS-based software package. Accessible data products currently include gauge, radar, and satellite precipitation; stream discharge; distributed soil moisture and snow cover; and multi-resolution weather forecasts. Additionally, we introduce a suite of open-source methods aimed at the efficient parameterization and initialization of complex geophysical models in contexts of high uncertainty, scarce information, and limited computational resources. The developed products in this suite include: 1) model calibration based on state-of-the-art ensemble evolutionary Pareto optimization, 2) automatic parameter estimation boosted through the incorporation of expert criteria, 3) data assimilation that hybridizes particle smoothing and variational strategies, 4) model state compression by means of optimized clustering, 5) high-dimensional stochastic approximation of watershed conditions through a novel lightweight Gaussian graphical model, and 6) simultaneous estimation of model parameters and states for hydrologic forecasting applications. Each of these methods was tested using established distributed physically-based hydrologic modeling engines (VIC and the DHSVM) applied to U.S. watersheds of different sizes, from a small, highly instrumented catchment in Pennsylvania to the basin of the Blue River in Oklahoma. A series of experiments demonstrated statistically significant improvements in the predictive accuracy of the proposed methods relative to traditional approaches. Taken together, these accessible and efficient tools can therefore be integrated within various model-based workflows for complex operational applications in water resources and beyond
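
    As a minimal illustration of the particle-based ingredient in item 3 of this suite, the sketch below performs one sequential importance resampling step that weights a soil-moisture ensemble by a streamflow observation. The Gaussian likelihood, the toy state-to-discharge mapping, and the systematic resampling rule are illustrative assumptions, not the hybrid particle-variational scheme actually developed in the dissertation.

```python
import numpy as np

def particle_update(states, predicted_obs, observation, obs_std, rng):
    """One sequential importance resampling (SIR) assimilation step.

    states        : (N, d) ensemble of model states (e.g. soil moisture)
    predicted_obs : (N,) simulated observation (e.g. discharge) per particle
    observation   : measured value being assimilated
    obs_std       : assumed observation-error standard deviation
    """
    # Gaussian likelihood of the observation under each particle.
    w = np.exp(-0.5 * ((predicted_obs - observation) / obs_std) ** 2)
    w /= w.sum()
    # Systematic resampling: duplicate high-weight particles, drop low ones.
    positions = (rng.random() + np.arange(len(w))) / len(w)
    idx = np.searchsorted(np.cumsum(w), positions)
    return states[np.minimum(idx, len(w) - 1)]

rng = np.random.default_rng(0)
states = rng.normal(0.30, 0.05, size=(500, 1))    # prior soil-moisture ensemble
predicted_q = 40.0 * states[:, 0]                 # toy mapping of state to discharge
posterior = particle_update(states, predicted_q, observation=14.0,
                            obs_std=1.0, rng=rng)
print(states.mean(), posterior.mean())   # mean shifts from ~0.30 toward ~0.35
```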

    Adequate model complexity and data resolution for effective constraint of simulation models by 4D seismic data

    4D seismic data carries valuable spatial information about production-related changes in the reservoir. It is a challenging task, though, to make simulation models honour it. A strict spatial tie to seismic data requires adequate model complexity in order to assimilate the details of the seismic signature. On the other hand, not all the details in the seismic signal are critical, or even relevant, to the flow characteristics of the simulation model, so fitting them may compromise the predictive capability of the models. So, how complex should a model be to take advantage of the information in seismic data, and which details should be matched? This work aims to show how choices of parameterisation affect the efficiency of assimilating spatial information from the seismic data. The level of detail at which the seismic signal carries useful information for the simulation model is also demonstrated, in light of the limited detectability of events on the seismic map and of modelling errors. The problem of optimal model complexity is investigated in the context of choosing a model parameterisation that allows effective assimilation of spatial information from the seismic map. In this study, a parameterisation scheme based on deterministic objects derived from seismic interpretation creates a bias in model predictions which results in a poor fit to the historic data. The key to rectifying the bias was found to be increasing the flexibility of the parameterisation, either by increasing the number of parameters or by using a scheme that does not impose prior information incompatible with the data, such as pilot points in this case. Using history matching experiments with a combined dataset of production and seismic data, a level of match of the seismic maps is identified which results in an optimal constraint on the simulation models. Better constrained models were identified by the quality of their forecasts and the closeness of their pressure and saturation states to the truth case. The results indicate that a significant amount of the detail in the seismic maps does not contribute to a constructive constraint, for two reasons: first, smaller details are a specific response of the system that generated the observed data and as such are not relevant to the flow characteristics of the model; second, the resolution of the seismic map itself is limited by the seismic bandwidth and noise. The results suggest that the notion of a good match for 4D seismic maps, commonly equated with a visually close match, is not universally applicable
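
    The kind of combined objective implied by these experiments can be sketched as a weighted sum of a production-data term and a seismic-map term, with the map coarsened so that only features above an assumed detectability limit are compared. The normalizations, the coarsening rule, and the weight w_seis are illustrative assumptions, not the parameterisation or match criterion used in this study.

```python
import numpy as np

def coarsen(grid, block):
    """Block-average a 2D map, keeping only features larger than the
    assumed detectability limit of the seismic data."""
    ny = (grid.shape[0] // block) * block
    nx = (grid.shape[1] // block) * block
    trimmed = grid[:ny, :nx]
    return trimmed.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

def combined_misfit(sim_rates, obs_rates, rate_std,
                    sim_map, obs_map, map_std, w_seis=1.0, block=5):
    """Weighted least-squares misfit over production data and a 4D seismic map.

    rate_std, map_std : assumed noise levels used to normalize each term
    w_seis            : relative weight given to the seismic term
    block             : coarsening factor applied to the maps before comparison
    """
    prod_term = np.sum(((sim_rates - obs_rates) / rate_std) ** 2)
    seis_term = np.sum(((coarsen(sim_map, block) - coarsen(obs_map, block))
                        / map_std) ** 2)
    return prod_term + w_seis * seis_term

# Toy usage with synthetic maps and a 24-month production series.
rng = np.random.default_rng(1)
obs_map = rng.normal(size=(50, 50))
sim_map = obs_map + 0.1 * rng.normal(size=(50, 50))
value = combined_misfit(np.ones(24), 1.05 * np.ones(24), rate_std=0.05,
                        sim_map=sim_map, obs_map=obs_map, map_std=0.2)
```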