Development of Grid e-Infrastructure in South-Eastern Europe
Over a period of six years and three phases, the SEE-GRID programme has established a strong regional human network in the area of distributed scientific computing and has set up a powerful regional Grid infrastructure. It has attracted a number of user communities and applications from diverse fields in countries throughout South-Eastern Europe. From the infrastructure point of view, the first project phase established a pilot Grid infrastructure with more than 20 resource centers in 11 countries. During the two subsequent phases of the project, the infrastructure has grown to currently 55 resource centers with more than 6600 CPUs and 750 TB of disk storage, distributed across 16 participating countries. Including new resource centers in the existing infrastructure, as well as supporting new user communities, has demanded the setup of regionally distributed core services, the development of new monitoring and operational tools, and close collaboration among all partner institutions in managing such a complex infrastructure. In this paper we give an overview of the development and current status of the SEE-GRID regional infrastructure and describe its transition to the NGI-based Grid model in EGI, with strong SEE regional collaboration.
Comment: 22 pages, 12 figures, 4 tables
Parallel computing 2011, ParCo 2011: book of abstracts
This book contains the abstracts of the presentations at the conference Parallel Computing 2011, 30 August - 2 September 2011, Ghent, Belgium.
Project Final Report: Ubiquitous Computing and Monitoring System (UCoMS) for Discovery and Management of Energy Resources
The UCoMS research cluster has spearheaded three research areas since August 2004: wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefronts of pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made by the research investigators, working cooperatively in their respective areas of expertise, on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.
Optimisation of the enactment of fine-grained distributed data-intensive work-flows
The emergence of data-intensive science as the fourth science paradigm has posed a
data deluge challenge for enacting scientific work-flows. The scientific community is
facing an imminent flood of data from the next generation of experiments and simulations,
besides dealing with the heterogeneity and complexity of data, applications and
execution environments. New scientific work-flows involve execution on distributed and
heterogeneous computing resources across organisational and geographical boundaries,
processing gigabytes of live data streams and petabytes of archived and simulation data,
in various formats and from multiple sources. Managing the enactment of such work-flows requires not only larger storage space and faster machines, but also the capability to
support the scalability and diversity of the users, applications, data, computing resources
and enactment technologies.
We argue that the enactment process can be made efficient using optimisation techniques
in an appropriate architecture. This architecture should support the creation
of diversified applications and their enactment on diversified execution environments,
with a standard interface, i.e. a work-flow language. The work-flow language should
be both human readable and suitable for communication between the enactment environments.
The data-streaming model central to this architecture provides a scalable
approach to large-scale data exploitation. Data-flow between computational elements
in the scientific work-flow is implemented as streams. To cope with the exploratory
nature of scientific work-flows, the architecture should support fast work-flow prototyping,
and the re-use of work-flows and work-flow components. Above all, the enactment
process should be easily repeated and automated.
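The data-streaming model described above can be sketched generically, with Python generators standing in for processing elements connected by streams. This is not DISPEL itself, and all function names are illustrative; the point is only that each element consumes and emits records one at a time rather than materialising whole datasets.

```python
# Generic sketch of stream-connected processing elements (illustrative only).
def read_source(records):
    """Source element: emit raw records as a lazy stream."""
    for r in records:
        yield r

def clean(stream):
    """Transform element: drop malformed (None) records."""
    for r in stream:
        if r is not None:
            yield r

def scale(stream, factor):
    """Transform element: apply a unit conversion to each record."""
    for r in stream:
        yield r * factor

def enact(records, factor=2.0):
    """Compose the pipeline; data flows through it record by record."""
    return list(scale(clean(read_source(records)), factor))
```

Because each stage is a generator, the composed pipeline processes one record at a time, which is what makes the streaming model scale to data volumes larger than memory.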
In this thesis, we present a candidate data-intensive architecture that includes an intermediate
work-flow language named DISPEL. We create a new fine-grained measurement
framework to capture performance-related data during enactments, and design
a performance database to organise these data systematically. We propose a new enactment
strategy to demonstrate that optimisation of data-streaming work-flows can be
automated by exploiting performance data gathered during previous enactments.
Improved carbonate reservoir characterisation : a case study from the mid-Cretaceous Mishrif reservoir in the giant West Qurna/1 oilfield, Southern Iraq
The mid-Cretaceous Mishrif carbonate reservoir in the West Qurna/1 oilfield is characterized
by strong heterogeneity, tidal channels, and a complicated fault system, which strongly
affect fluid flow and can result in unrealistic forecasts of reservoir behaviour. The
central hypothesis of this thesis is that two-dimensional seismic data and well data do
not sufficiently delineate the reservoir channels and their variable fairway patterns.
Hence, a high-resolution 3D seismic dataset is needed to characterise the reservoir,
including channel geometries, more accurately. This thesis focuses mainly on porosity
characterisation of the Mishrif channelized reservoir. It aims to delineate the Mishrif
channel fairways with their intrinsic complexity, and then to characterize the channel
fairways' reservoir properties, such as porosity and lithology, especially in new areas
that have no well control.
The thesis project was divided into three stages. The first stage focuses on the seismic
reservoir characterisation of one of the Middle East's largest complex carbonate reservoirs,
in the West Qurna/1 oilfield, which hosts a complex internal architecture characterized
by several tidal channels whose deposits may provide good reservoir properties. In the
second stage, multisource data were used to establish the essential workflow elements for
characterizing the Mishrif tidal channel fairways. The final stage incorporates 3D seismic
data as a secondary variable into the property modelling to delineate the channel
distribution more fully using a combined-dataset workflow.
It was concluded that the seismic inversion interpretation demonstrates promising results,
with the model-based inversion performing better than the linear programming sparse
spike (LPSS) inversion. We interpreted the lithological variation in the Mishrif mA zone
based on the model-based inversion, including high-energy coral, mound, and rudist shoal facies
that were not observed previously. Also, the seismically derived porosity
improved our understanding by providing a realistic distribution of the Mishrif
channels' porosity. A variety of approaches has been suggested for characterising the
Mishrif carbonate tidal channels. Well data analysis and thin-section micrographs
provided a good understanding of the Mishrif channelized facies, and modern channels
and outcrops proved highly valuable sources of information for comparison with the
channel fairways detected in the Mishrif reservoir. Our study found that spectral
decomposition with colour blending of three frequency intervals provides better geo-body
extraction of the Mishrif mB1 channelized zone than the other seismic attribute surfaces.
We analysed the results of the probabilistic neural network (PNN) algorithm and found
that the Mishrif mB1 zone clusters into two lithofacies of different heterogeneity and
quality (channel and restricted-lagoon facies). We incorporated seismic inversion into
the 3D property model with different weightings of the correlation coefficient in the
mB1 channelized zone. We observed that the constrained model combining well log data
with seismic data as a secondary variable yields better channel fairway delineation at a
moderate correlation coefficient weighting, while a high weighting adversely impacted the
channel distribution. The findings of this thesis can be applied in other scenarios, such
as contaminant transport in groundwater resources or CO2 storage.
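The effect of the correlation-coefficient weighting can be illustrated with a deliberately simple sketch: a linear blend standing in for collocated-cokriging-style weighting of a seismic secondary variable against a well-based estimate. All names and values here are hypothetical, not the thesis model; the sketch only shows why a high weight lets the seismic attribute dominate the property distribution.

```python
import numpy as np

def blend_porosity(phi_well, phi_seismic, rho):
    """Illustrative blend of a well-based porosity estimate with a
    seismically derived porosity, weighted by a correlation-coefficient-
    style factor rho in [0, 1]. rho = 0 ignores the seismic secondary
    variable entirely; rho = 1 lets it dominate completely."""
    phi_well = np.asarray(phi_well, dtype=float)
    phi_seismic = np.asarray(phi_seismic, dtype=float)
    return (1.0 - rho) * phi_well + rho * phi_seismic
```

With a moderate rho the result honours the wells while borrowing the channel geometry carried by the seismic volume; pushing rho toward 1 imprints the seismic attribute (and its noise) directly onto the model, mirroring the over-weighting effect noted above.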
Efficient Tunnel Detection with Waveform Inversion of Back-scattered Surface Waves
An efficient subsurface imaging method employing back-scattered surface waves is developed to detect near-surface underground elastic-wave velocity anomalies, such as tunnels, sinkholes, fractures, faults, and abandoned manmade infrastructure. The back-scattered surface waves are generated when seismic waves impinge on the velocity anomalies and diffract back toward the source. These wave events contain plentiful information about the subsurface velocity anomalies, including the spatial location, shape, size, and velocity of the interior medium. Studies have demonstrated that back-scattered surface waves can be easily distinguished in the frequency-wavenumber (F-k) domain and suffer less interference from other wave modes. Based on these features, a near-surface velocity anomaly detection method using waveform inversion of the back-scattered surface waves (BSWI) is proposed. The main objective of this thesis is to review the theoretical background and study the feasibility of the proposed BSWI method, which is tested with numerical and real-world examples. First, the numerical example uses the conventional full-waveform inversion (FWI) method as a benchmark to demonstrate the efficiency of the BSWI method in detecting shallow velocity anomalies. Then, the BSWI method is tested on field data: 2D seismic data acquired over a manmade concrete tunnel on the main campus of the University of Kansas (KU). Different workflows, including the FWI and BSWI methods, are applied to the acquired data and tested for imaging the known tunnel. The field example demonstrates that BSWI can accurately image the tunnel and, compared with FWI, is less demanding in data processing. Finally, this thesis concludes that the proposed BSWI method is capable of efficiently detecting a near-surface tunnel with a minimum amount of data processing, which makes it suitable for application in the field.
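The F-k separation idea underlying BSWI can be sketched in a toy NumPy example: for a real-valued shot record, forward- and back-propagating events map to opposite sign combinations of frequency and wavenumber, so back-scattered energy can be isolated by masking one pair of quadrants after a 2-D FFT. This is only an illustration under an assumed sign convention, not the processing flow used in the thesis.

```python
import numpy as np

def separate_backscatter(data):
    """Isolate reverse-propagating (back-scattered) energy in the F-k
    domain (toy sketch). `data` is a 2-D array of shape (n_time, n_offset).
    The quadrant pair kept here assumes one sign convention for
    back-scatter; the opposite convention would use (f * k) < 0."""
    fk = np.fft.fft2(data)                     # transform to F-k domain
    f = np.fft.fftfreq(data.shape[0])[:, None]  # temporal frequencies
    k = np.fft.fftfreq(data.shape[1])[None, :]  # spatial wavenumbers
    mask = (f * k) > 0                          # quadrants of reverse propagation
    # The mask is symmetric under (f, k) -> (-f, -k), so Hermitian
    # symmetry is preserved and the inverse transform is real-valued.
    return np.real(np.fft.ifft2(fk * mask))
```

Because the mask keeps conjugate-symmetric quadrant pairs, the filtered record stays real, and the discarded quadrants contain the forward-propagating modes that would otherwise interfere with the back-scattered events.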
Parameter selection in seismic data analysis problems
Seismic imaging is an essential tool for non-invasive subsurface evaluation. It enables Earth scientists to create a picture of the planet's interior, predicting the rocks and structures that lie below. This can enable characterization of tectonic margins to better understand the deep history of the planet, delineation of aquifers to provide water, and the safe and economic exploration for commercial oil and gas accumulations for energy production.
To generate these images numerous observations of the subsurface are taken and they are transformed to a common domain where observations of the same point in the subsurface overlay. These transformations typically are linear on the observed data and usually depend on a parameter related to seismic wave propagation, like the speed at which a seismic wave travels through the subsurface, in a non-linear manner. Selecting and determining these parameters is a crucial step in the generation of seismic images. Using inaccurate parameters in the transformations involved in seismic data processing results in seismic images that are distorted, inaccurate representations of the subsurface. Because these parameters are related to seismic wave propagation, their values can provide insight into the composition of the Earth's interior, including the rocks or fluids present.
In this dissertation, I present methods for accurately determining those parameters and show how they may be used to efficiently generate accurate, well-resolved images of the Earth's interior. I show how dynamic time warping may be used to create an operator which efficiently corrects for the blurring and distortion present in seismic images caused by seismic anisotropy, or wave propagation speed changing with the direction of travel, while simultaneously characterizing and quantifying that anisotropy. I demonstrate how slope-decomposed seismic images may be transported along their characteristics in a process called oriented velocity continuation to efficiently generate a suite of images over a range of plausible migration velocities, and how oriented velocity continuation may be used with seismic diffraction imaging to determine migration velocity. The use of oriented velocity continuation is further expanded into a framework for probabilistic diffraction imaging, using a collection of weights computed from slope-decomposed images that represent the probability of a correctly imaged diffraction existing at a point in space for a given migration velocity, while simultaneously outputting the most likely migration velocity at each point in space. This method generates seismic images with significantly improved signal-to-noise ratios compared to conventional approaches. Finally, I formulate a variational method for picking an optimal surface, representing how a parameter evolves in space, from a volume that quantifies the quality of fit for different parameter values, based on iteratively minimizing a functional. I prove that minimizers for that functional exist, and that an iterative method will converge to a minimizer in an infinite-dimensional setting.
The method is applied using continuation, or graduated optimization, to avoid local minima, and is used to determine seismic velocities as a component of seismic processing workflows and to perform automatic interpretation of a seismic horizon.
Computational Science, Engineering, and Mathematics
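The dynamic time warping mentioned above can be sketched minimally with the textbook cumulative-cost formulation: fill a table of minimal alignment costs and read the total cost from its corner. This is only an illustration of the alignment idea, not the operator construction used in the dissertation.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic time warping distance between two 1-D sequences.
    D[i, j] holds the minimal cost of aligning a[:i] with b[:j]; each
    cell extends the cheapest of the three predecessor alignments
    (match, insertion, deletion). Warping absorbs local stretches, so
    repeated samples align at zero extra cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

In the seismic setting, the warping path (recoverable by backtracking through D) plays the role of the shift field that maps a distorted image onto a reference, which is what makes DTW usable for characterizing anisotropy-induced misalignment.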
Shallow P-wave seismic reflection event for estimating the static time corrections: implications for 3D seismic structural interpretation, Ellis County, Kansas
Master of Science, Department of Geology; Abdelmoneam Raef; Matthew W. Totten.
In a processing flow for 2D or 3D seismic data, many steps must be completed to produce a dataset suitable for seismic interpretation. For land seismic data, it is essential that the data-processing workflow create and utilize a static time correction to remove variations in arrival time associated with changes in topography and low-velocity near-surface geology (Krey 1954). This project utilizes velocity analysis, based on a near-surface reflection, to estimate near-surface static corrections to a datum at an elevation of 1300 ft (Sheriff and Geldart 1995; Rogers 1981). Reviewing and rectifying errors in the geometrical aspects of the field seismic data is essential to the validity of the velocity analysis and estimation. To this end, the geometry of the data was validated against the spatial aspects of the survey acquisition design and the acquired data attributes. The seismic workflow is a conglomeration of many steps, none of which should be overlooked or given insufficient attention. The processing workflow spans from loading the data into processing software with the correct geometry to stacking and binning the traces for export to interpretation software as a seismic volume. Important steps within this workflow, covered in this thesis, include: a framework to reverse-engineer a survey geometry, dynamic corrections, velocity analysis, and the building of a static model to account for the near-surface low-velocity layer. This processing workflow seeks to quality-control most, if not all, seismic datasets in order to produce higher-quality, more accurate three-dimensional seismic volumes for interpretation.
The developed workflow represents a cost-effective, rapid approach to improving the structural fidelity of land seismic data in areas with rugged topography and complex near-surface velocity variation (Selem 1955; Thralls and Mossman 1952).
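The elevation-static correction described above can be illustrated with a minimal sketch, assuming vertical raypaths through a replacement velocity. The 1300 ft datum comes from the text; the replacement velocity and elevations below are hypothetical, and real statics also include a weathering (low-velocity-layer) term not modelled here.

```python
def elevation_static_ms(surface_elev_ft, datum_elev_ft, v_repl_ftps):
    """Elevation static in milliseconds: the one-way time shift that
    moves a source or receiver from its surface elevation to the datum,
    assuming a vertical raypath through the replacement velocity.
    Positive when the station sits above the datum."""
    return 1000.0 * (surface_elev_ft - datum_elev_ft) / v_repl_ftps
```

For a station at 1400 ft with a hypothetical 10000 ft/s replacement velocity, the shift to the 1300 ft datum is 10 ms; a station below the datum gets a negative shift, which is what removes the topography-driven arrival-time variation from the stacked section.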
Efficient computation of seismic traveltimes in anisotropic media and the application in pre-stack depth migration
This study is concerned with the computation of seismic first-arrival traveltimes in anisotropic media using finite difference eikonal methods. For this purpose, different numerical schemes that directly solve the eikonal equation are implemented and assessed numerically. Subsequently, they are used for pre-stack depth migration on synthetic and field data.
The thesis starts with a detailed examination of different finite difference methods that have gained popularity in scientific literature for computing seismic traveltimes in isotropic media. The most appropriate for an extension towards anisotropic media are found to be the so-called Fast Marching/Sweeping methods. Both schemes rely on different iteration strategies, but incorporate the same upwind finite difference Godunov schemes that are implemented up to the second order. As a result, the derived methods exhibit high numerical accuracy and perform robustly even in highly contrasted velocity models.
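The upwind Godunov update at the heart of these schemes can be sketched for the simplest case: a first-order, isotropic, 2-D fast-sweeping solver for |grad T| = 1/v. This is a generic textbook illustration, not the thesis code, which extends such schemes to second order and to TI media.

```python
import numpy as np

def fast_sweep_eikonal(v, src, h=1.0, n_sweeps=8):
    """First-order Godunov fast-sweeping solver for the isotropic 2-D
    eikonal equation |grad T| = 1/v on a regular grid.
    v: 2-D velocity array; src: (i, j) source index; h: grid spacing.
    Four alternating sweep orderings propagate causal updates along
    all characteristic directions."""
    ni, nj = v.shape
    T = np.full((ni, nj), np.inf)
    T[src] = 0.0
    orders = [(1, 1), (-1, 1), (1, -1), (-1, -1)]
    for sweep in range(n_sweeps):
        si, sj = orders[sweep % 4]
        for i in range(ni)[::si]:
            for j in range(nj)[::sj]:
                if (i, j) == src:
                    continue
                # Upwind neighbour values in each grid direction.
                a = min(T[i - 1, j] if i > 0 else np.inf,
                        T[i + 1, j] if i < ni - 1 else np.inf)
                b = min(T[i, j - 1] if j > 0 else np.inf,
                        T[i, j + 1] if j < nj - 1 else np.inf)
                am, bm = min(a, b), max(a, b)
                if am == np.inf:          # no finite neighbour yet
                    continue
                f = h / v[i, j]           # local slowness times grid step
                if bm - am >= f:          # one-sided (causal) update
                    t_new = am + f
                else:                     # both neighbours contribute
                    t_new = 0.5 * (am + bm + np.sqrt(2.0 * f * f - (am - bm) ** 2))
                if t_new < T[i, j]:       # monotone: never increase T
                    T[i, j] = t_new
    return T
```

The same Godunov update drives the Fast Marching Method; only the visiting order differs (a heap-ordered wavefront instead of alternating sweeps), which is why the two schemes trade efficiency depending on model size as discussed below.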
Subsequently, the methods are adapted for transversely isotropic media with vertical (VTI) and tilted (TTI) symmetry axes, respectively. For this purpose, two different formulations for approximating the anisotropic phase velocities are tested: the weakly-anisotropic and the pseudo-acoustic approximation. As expected, the pseudo-acoustic formulation shows superior accuracy, especially for strongly anisotropic media. Moreover, it turns out that the tested eikonal schemes are generally more accurate than anisotropic ray tracing approaches, since they do not require an approximation of the group velocity.
Numerical experiments are carried out on homogeneous models with varying strengths of anisotropy and on the industrial BP 2007 benchmark model. They show that the computed eikonal traveltimes are in good agreement with independent results from finite difference modelling of the isotropic and anisotropic elastic wave equations, and with traveltimes estimated by ray-based wavefront construction, respectively. The computational cost of the TI eikonal schemes is considerably higher than that of their original isotropic implementations, owing to the algebraic complexity of the anisotropic phase velocity formulations. At this point, the Fast Marching Method is found to be more efficient on models containing up to 50 million grid points; for larger models, the anisotropic Fast Sweeping implementation gradually becomes advantageous. Both techniques perform well independently of the structural complexity of the underlying velocity model.
The final step of this thesis is the application of the developed eikonal schemes in pre-stack depth migration. A synthetic experiment over a VTI/TTI layer-cake model demonstrates that the traveltime computation leads to accurate imaging results, including a tilted, strongly anisotropic shale layer. The experiment further shows that the estimation of anisotropic velocity models solely from surface reflection data is highly ambiguous. In a second example, the eikonal solvers are applied for depth imaging of two-dimensional field data that were acquired for geothermal exploration in southern Tuscany, Italy. The developed methods also produce clear imaging results in this setting, which illustrates their general applicability for pre-stack depth imaging, particularly in challenging environments.
A Fully Parallelized and Budgeted Multi-level Monte Carlo Framework for Partial Differential Equations: From Mathematical Theory to Automated Large-Scale Computations
All collected data on any physical, technical or economic process are subject to uncertainty. By incorporating this uncertainty into the model and propagating it through the system, the resulting data error can be controlled. This makes the predictions of the system more trustworthy and reliable. The multi-level Monte Carlo (MLMC) method has proven to be an effective uncertainty quantification tool, requiring little knowledge about the problem while being highly performant.
In this doctoral thesis we analyse, implement, develop and apply the MLMC method to partial differential equations (PDEs) subject to high-dimensional random input data. We set up a unified framework based on the software M++ to approximate solutions to elliptic and hyperbolic PDEs with a large selection of finite element methods. We combine this setup with a new variant of the MLMC method: in particular, we propose a budgeted MLMC (BMLMC) method which is capable of optimally investing reserved computing resources in order to minimize the model error while exhausting a given computational budget. This is achieved by developing a new parallelism based on a single distributed data structure, employing ideas of the continuation MLMC method, and utilizing dynamic programming techniques. The final method is theoretically motivated, analyzed, and numerically well tested in an automated benchmarking workflow on highly challenging problems such as the approximation of wave equations in randomized media.
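The telescoping idea behind MLMC can be sketched with a deliberately small toy, not the BMLMC method or M++: the quantity of interest is E[X^2] for X ~ N(0,1), the level-l "discretisation" rounds X down to a grid of spacing 2^-l, and coupled fine/coarse samples (the same X at both levels) make the level corrections cheap to estimate.

```python
import numpy as np

def mlmc_estimate(levels, n_samples, rng=None):
    """Minimal multi-level Monte Carlo estimator (toy illustration).
    E[P_L] is written as E[P_0] + sum_l E[P_l - P_{l-1}]; each level
    correction uses the SAME random draws for fine and coarse, so its
    variance shrinks with level and needs fewer samples."""
    if rng is None:
        rng = np.random.default_rng(0)

    def approx(x, l):
        # Level-l 'discretisation': round down to a grid of spacing 2**-l.
        return np.floor(x * 2.0 ** l) / 2.0 ** l

    est = 0.0
    for l in range(levels + 1):
        x = rng.standard_normal(n_samples[l])
        p_fine = approx(x, l) ** 2
        if l == 0:
            y = p_fine                          # base level: plain MC
        else:
            y = p_fine - approx(x, l - 1) ** 2  # coupled level correction
        est += y.mean()
    return est
```

The decreasing sample counts per level mimic the usual MLMC allocation, where most of the work is spent on the cheap coarse level; a budgeted variant would instead choose these counts to exhaust a fixed computational budget.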