
    Ontology based data warehousing for mining of heterogeneous and multidimensional data sources

    Heterogeneous and multidimensional big-data sources are prevalent in virtually all business environments, yet system and data analysts are often unable to access and fast-track insights from them. A robust and versatile data warehousing system is developed that integrates domain ontologies from multidimensional data sources. For example, petroleum digital ecosystems and digital oilfield solutions, derived from big-data petroleum information systems, are in increasing demand in multibillion-dollar resource businesses worldwide. This work has been recognized by the IEEE Industrial Electronics Society and has appeared in more than 50 international conference proceedings and journals.

    Shear-wave and spatial attributes in time-lapse 3-D/3-C seismic and potential-field datasets

    In this study, I utilize multicomponent time-lapse seismic datasets to investigate subtle seismic properties of the Weyburn reservoir undergoing enhanced oil recovery and geologic sequestration of CO2. The primary focus is on extracting shear-wave information from surface three-dimensional and three-component (3-D/3-C) reflection datasets. Four groups of interrelated objectives are addressed: 1) calibrated and true-amplitude processing of multicomponent time-lapse seismic data, 2) extraction of amplitude variation with angle (AVA) and offset (AVO) attributes for separating pressure and fluid-saturation effects within the reservoir, 3) development of receiver-function methods for investigating the shallow subsurface, and 4) 2-D spatial pattern analysis of attribute maps, intended for automated interpretation of the results and a new type of AVO analysis. To achieve the first of these objectives, I reprocess the field surface 3-C/3-D reflection datasets by using pre-stack waveform calibration followed by complete reflection processing using commercial ProMAX software. For the second, principal objective of this study, several AVA attributes of the reservoir are examined, including those related to P- and P/S-converted waves and P- and S-wave impedances. The amplitudes and AVA attributes derived from the seismic data indicate temporal variations potentially caused by pore-pressure and CO2-saturation variations within the reservoir. Comparison with AVA forward models suggests that increasing pore pressure correlates with decreasing AVA intercepts and increasing AVA gradients, while increasing CO2 saturations appear to correlate with simultaneously decreasing AVA intercepts and gradients; CO2-saturated zones are thus interpreted as Class III AVA anomalies. In order to take further advantage of the 3-C recordings and investigate advanced methods for S-wave seismic data analysis, receiver functions are used to study the shallow near-surface structure.
    This is apparently the first application of this method to reflection seismic datasets on land and in a time-lapse 3-D dataset. I show that it is feasible and useful to measure the near-surface S-wave velocity structure by using multicomponent seismic data. From the Weyburn reflection data, the average mapped receiver-function time lags are about 35 ms, which corresponds to near-surface S-wave velocities of about 550 m/s. Time-lapse variations of the near-surface structure are measured, and S-wave statics models are derived; such models can be useful for converted-wave seismic imaging. The last objective of this Dissertation is to develop tools for the interpretation of gridded 2-D spatial images, such as mapping AVO attributes quantitatively and automatically. For this purpose, a new pattern-recognition approach called skeletonization is developed and applied to several regional aeromagnetic and gravity images from southern Saskatchewan and Manitoba. The approach is combined with 2-D empirical mode decomposition, allowing pattern analysis at variable spatial scales. The results show that skeletonization helps identify complex geologic structures and measure their quantitative attributes, which are not available from conventional interpretation. Applications of this approach to the interpretation of AVO attributes are discussed.
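    The intercept/gradient analysis described above can be sketched numerically. The snippet below is a minimal illustration, not the dissertation's actual workflow: it fits the two-term Shuey approximation R(θ) ≈ A + B·sin²θ to angle-dependent reflectivities by least squares, recovering the AVA intercept A and gradient B. The angles, reflectivity values, and noise-free setup are synthetic assumptions.

```python
import numpy as np

def ava_intercept_gradient(angles_deg, reflectivity):
    """Fit R(theta) = A + B*sin^2(theta) by least squares; return (A, B)."""
    s2 = np.sin(np.radians(angles_deg)) ** 2
    G = np.column_stack([np.ones_like(s2), s2])   # design matrix [1, sin^2(theta)]
    (A, B), *_ = np.linalg.lstsq(G, reflectivity, rcond=None)
    return A, B

# Synthetic angle gather generated from assumed intercept/gradient values.
angles = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
true_A, true_B = -0.08, -0.15                      # both negative, Class III style
refl = true_A + true_B * np.sin(np.radians(angles)) ** 2

A, B = ava_intercept_gradient(angles, refl)
print(A, B)  # recovers -0.08 and -0.15 on this noise-free example
```

A Class III response, as interpreted for the CO2-saturated zones, corresponds to intercept and gradient that are both negative, as in this synthetic case.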

    On the learning of vague languages for syntactic pattern recognition

    A method for learning vague languages, which represent distorted or ambiguous patterns, is proposed in the paper. The goal of the method is to infer the quasi-context-sensitive string grammar that is used in our model as the generator of patterns. The method is an important component of the multi-derivational model of parsing vague languages used for syntactic pattern recognition.

    Cognitive Information Processing

    Contains research objectives, summary of research, and reports on four research projects. National Institutes of Health (Grant 5 PO1 GM14940-05); National Institutes of Health (Grant 3 PO1 GM15006-03S2); Joint Services Electronics Programs (U.S. Army, U.S. Navy, and U.S. Air Force) under Contract DAAB07-71-C-0300; World Health Organization (Grant R/00348); Grant from the Associated Press.

    Earth Resources: A continuing bibliography with indexes

    This bibliography lists 475 reports, articles, and other documents introduced into the NASA scientific and technical information system between January 1 and March 31, 1984. Emphasis is placed on the use of remote sensing and geophysical instrumentation in spacecraft and aircraft to survey and inventory natural resources and urban areas. Subject matter is grouped according to agriculture and forestry, environmental changes and cultural resources, geodesy and cartography, geology and mineral resources, hydrology and water management, data processing and distribution systems, instrumentation and sensors, and economic analysis.

    Statistical and deep learning methods for geoscience problems

    Machine learning is the new frontier for technology development in the geosciences and has developed extremely fast in the past decade. With the increased compute power provided by distributed computing and Graphics Processing Units (GPUs), and their exploitation by machine learning (ML) frameworks such as Keras, PyTorch, and TensorFlow, ML algorithms can now solve complex scientific problems. Although powerful, ML algorithms need to be applied to suitable problems and conditioned for optimal results. For this reason, applying ML requires a deep understanding not only of the problem but also of the algorithm's capabilities. In this dissertation, I show that simple statistical techniques can often outperform ML-based models if applied correctly, and I also show the success of deep learning in addressing two difficult problems. In the first application, I use deep learning to automatically detect leaks in a carbon-capture project using pressure-field data acquired from the DOE Cranfield site in Mississippi; the history of pressure, rates, and cumulative injection volumes is used to detect leaks as pressure anomalies. I use a different deep learning workflow to forecast high-energy electrons in Earth's outer radiation belt using in situ measurements of space-weather parameters such as solar-wind density and pressure. I focus on predicting electron fluxes of 2 MeV and higher energy and introduce an ensemble of deep learning models to further improve the results compared with a single deep learning architecture. I also show an example where a carefully constructed statistical approach, guided by the human interpreter, outperforms deep learning algorithms implemented by others. Here, the goal is to correlate multiple well logs across a survey area in order to map not only the thickness but also the behavior of stacked gamma-ray parasequence sets.
    Tools including maximum likelihood estimation (MLE) and dynamic time warping (DTW) provide a means of generating quantitative maps of upward fining and upward coarsening across the oil field. The ultimate goal is to link such extensive well control with the spectral-attribute signature of 3D seismic data volumes to provide detailed maps not only of the depositional history but also of the lateral and vertical variation of mineralogy important to the effective completion of shale resource plays.
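    The DTW step mentioned above can be illustrated with a minimal implementation. This is a generic textbook DTW, not the dissertation's code; the two short gamma-ray logs are invented to show that similar log patterns still correlate when one is stretched relative to the other.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic time warping with unit steps and
    absolute-difference local cost; returns the cumulative alignment cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, or match from the predecessors.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two hypothetical gamma-ray logs with the same coarsening/fining pattern,
# the second stretched by an extra sample.
log_a = np.array([60, 62, 90, 120, 118, 70, 65], dtype=float)
log_b = np.array([61, 63, 64, 92, 119, 117, 72, 66], dtype=float)
print(dtw_distance(log_a, log_b))  # small cost despite the stretch
```

In a well-correlation workflow, the warping path (rather than just the cost) is what maps depths in one well onto equivalent depths in another.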

    Stratigraphic interpretation of Well-Log data of the Athabasca Oil Sands of Alberta Canada through Pattern recognition and Artificial Intelligence

    Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies. Automatic stratigraphic interpretation of oil-sand wells from well-log datasets typically involves recognizing the patterns of the well logs. This is done through classification of the well-log response into relatively homogeneous subgroups based on electrofacies and lithofacies. The electrofacies-based classification involves identifying clusters in the well-log response that reflect 'similar' minerals and lithofacies within the logged interval. The identification of lithofacies relies on core-data analysis, which can be expensive and time consuming, whereas electrofacies are straightforward and inexpensive to obtain. To date, the challenges of interpreting and correlating well-log data have kept growing, especially when so many wellbores are involved that manual analysis is almost impossible. This thesis investigates the possibilities for automatic stratigraphic interpretation of an oil sand through statistical pattern recognition and a rule-based (artificial intelligence) method. The idea involves seeking high-density clusters in the multivariate log-data space in order to define classes of similar log responses. A hierarchical clustering algorithm was implemented for each of the wellbores; it clusters and classifies the wells into four classes that represent the lithologic information of the wells. These classes, known as electrofacies, are calibrated using developed decision rules that identify four lithologies (Sand, Sand-shale, Shale-sand, and Shale) in the gamma-ray log data. These form the basis of correlation to generate a subsurface model.
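    As a rough sketch of the electrofacies step, hierarchical clustering of multivariate log responses can be cut into four classes. The synthetic (gamma-ray, bulk-density) samples, the Ward-linkage choice, and the four-cluster cut below are assumptions for illustration, not the thesis' exact configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Four synthetic lithology clusters in (gamma_ray [API], bulk_density [g/cc]) space.
centers = np.array([[30.0, 2.10], [60.0, 2.30], [90.0, 2.45], [130.0, 2.60]])
samples = np.vstack([
    c + rng.normal(scale=[3.0, 0.02], size=(25, 2)) for c in centers
])

# Agglomerative cluster tree, then cut it into four "electrofacies" classes.
Z = linkage(samples, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")

print(sorted(set(labels)))  # four class labels
```

In practice each class would then be mapped to a lithology (e.g. low gamma-ray → Sand, high gamma-ray → Shale) by decision rules of the kind the thesis describes.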

    Estimation and Detection


    Course Description


    Seismic Applications of Interactive Computational Methods

    Effective interactive computing methods are needed in a number of specific areas of geophysical interpretation, even though the basic algorithms have been established. One approach to raising the quality of interpretation is to promote better interaction between the human and the computer. The thesis is concerned with improving this dialog in three areas: automatic event picking, data visualization, and sparse-data imaging. Fully automatic seismic event-picking methods work well in relatively good conditions, but they collapse when the signal-to-noise ratio is low and the structure of the subsurface is complex. The interactive seismic event-picking system described here blends the interpreter's guidance and judgment into the computer program, bringing the user into the loop to make subjective decisions when the picking problem is complicated. Several interactive approaches for 2-D event picking and 3-D horizon tracking have been developed. Envelope (or amplitude) threshold detection for first-break picking is based on the assumption that the power of the signal is larger than that of the noise. Correlation and instantaneous-phase pickers are designed for, and better suited to, picking other arrivals. The former is based on the cross-correlation function and requires a model trace (or traces) selected by the interpreter. The instantaneous-phase picker is designed to track spatial variations in the instantaneous phase of the analytic form of the arrival. The picking options implemented in the software package SeisWin were tested on real data drawn from many sources, such as full-waveform sonic borehole logs, seismic reflection surveys, and borehole radar profiles, as well as seven of the most recent 3-D seismic surveys conducted over Australian coal mines. The results show that the interactive picking system in SeisWin is efficient and tolerant. The 3-D horizon-tracking method developed here has proven especially attractive to industrial users.
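    An envelope-threshold first-break picker of the kind described above can be sketched as follows. This is a hedged illustration, not SeisWin code: the envelope is taken as the magnitude of the analytic signal, and the pick is the first sample exceeding a noise-based threshold. The threshold factor, noise window, and synthetic trace are all assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def pick_first_break(trace, noise_samples=50, factor=5.0):
    """Return the index of the first sample whose envelope exceeds
    factor * (RMS of the envelope over an assumed noise-only window)."""
    env = np.abs(hilbert(trace))                      # instantaneous amplitude
    noise_rms = np.sqrt(np.mean(env[:noise_samples] ** 2))
    above = np.flatnonzero(env > factor * noise_rms)
    return int(above[0]) if above.size else None      # None: no break detected

# Synthetic trace: weak noise, then a wavelet arriving at sample 200.
rng = np.random.default_rng(1)
t = np.arange(500)
trace = 0.01 * rng.standard_normal(500)
trace[200:240] += np.sin(2 * np.pi * (t[200:240] - 200) / 20.0)

pick = pick_first_break(trace)
print(pick)  # near the true onset at sample 200
```

This is exactly the assumption the abstract states: the picker works only while signal power exceeds noise power, which is why interactive guidance is needed on poor data.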
    The visualization of data is also part of the study, as picking accuracy, and indeed the whole of seismic interpretation, depends largely on the quality of the final display. The display is often the only window through which an interpreter can see the earth's substructures. Display is a non-linear operation. Adjustments made to compensate for display deficiencies, such as automatic gain control (AGC), have an important and yet ill-documented effect on the performance of pattern-recognition operators, both human and computational. AGC is usually implemented in one dimension. Some of the tools in widespread use for two-dimensional image processing that are of great value in the local gain control of conventional seismic sections, such as edge detectors, histogram equalisers, high-pass filters, and shaded relief, are discussed. Examples are presented to show the relative effectiveness of various display options. Conventional migration requires dense arrays with uniform coverage and uniform illumination of targets. There are, however, many instances in which these ideals cannot be approached. Event migration and common-tangent-plane stacking procedures were developed especially for sparse datasets as part of the research effort underlying this thesis. Picked-event migration migrates the line between any two points on different traces of the time section to the base map. The interplay between the space and time domains gives the interpreter an immediate view of the mapping. Tangent-plane migration maps the reflector by accumulating the energy from any two possible reflecting points along the common tangent lines on the space plane. These methods have been applied to both seismic and borehole-radar data, and satisfactory results have been achieved.
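    One-dimensional AGC of the kind discussed above can be sketched as a sliding-window gain: each sample is divided by the RMS amplitude of a window centered on it, so weak late arrivals are boosted to comparable display amplitude. The window length and the synthetic decaying trace below are illustrative assumptions.

```python
import numpy as np

def agc(trace, window=51, eps=1e-12):
    """Scale each sample by the inverse RMS of a centered sliding window.
    Edges are handled by replicating the end samples."""
    half = window // 2
    padded = np.pad(trace, half, mode="edge")
    rms = np.array([
        np.sqrt(np.mean(padded[i:i + window] ** 2))
        for i in range(len(trace))
    ])
    return trace / (rms + eps)   # eps guards against division by zero

# A decaying sinusoid: late samples are weak before AGC, balanced after.
t = np.linspace(0.0, 1.0, 500)
trace = np.exp(-4.0 * t) * np.sin(2 * np.pi * 30.0 * t)
balanced = agc(trace)
```

The non-linearity the abstract warns about is visible here: AGC destroys true relative amplitudes (and hence AVO information), which is why its effect on pattern-recognition operators matters.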