
    High-precision calculation of gas saturation in organic shale pores using an intelligent fusion algorithm and a multi-mineral model

    Shale gas reservoirs have been the subject of intensifying research in recent years. In particular, gas saturation has received considerable attention as a key parameter reflecting the gas-bearing properties of reservoirs. However, because gas saturation in shale is so difficult to calculate, no mature saturation model yet exists for shale gas reservoirs. This paper proposes a new gas saturation prediction method that combines model-driven and data-driven approaches. A multi-mineral petrophysical model is applied to derive an apparent saturation model. Using the calculated apparent saturation, matrix parameters, and porosity curve as inputs, an intelligent fusion algorithm composed of five regression algorithms is employed to predict the gas saturation. Prediction results for the Yongchuan block, Sichuan Basin, show that the proposed model is reliable and markedly more accurate, and that it can greatly assist in calculating the gas saturation of shale gas reservoirs.
    Cited as: Zhu, L., Zhang, C., Zhang, Z., Zhou, X. High-precision calculation of gas saturation in organic shale pores using an intelligent fusion algorithm and a multi-mineral model. Advances in Geo-Energy Research, 2020, 4(2): 135-151, doi: 10.26804/ager.2020.02.0
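The fusion step can be illustrated with a minimal sketch. The abstract does not state which five regression algorithms are used or how their outputs are fused, so the model choices, the simple prediction averaging, and the synthetic inputs below are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

# Synthetic stand-ins for the paper's inputs: apparent saturation,
# matrix parameters, and porosity (the real curves come from well logs).
rng = np.random.default_rng(0)
X = rng.random((200, 3))            # [apparent_saturation, matrix_param, porosity]
y = 0.6 * X[:, 0] + 0.3 * X[:, 2] + 0.05 * rng.standard_normal(200)

# Five base regressors fused by simple prediction averaging (assumed fusion rule).
models = [LinearRegression(), Ridge(alpha=1.0), DecisionTreeRegressor(max_depth=4),
          RandomForestRegressor(n_estimators=50, random_state=0),
          GradientBoostingRegressor(random_state=0)]
for m in models:
    m.fit(X, y)

def fused_predict(X_new):
    """Average the five base predictions into one gas-saturation estimate."""
    return np.mean([m.predict(X_new) for m in models], axis=0)

pred = fused_predict(X)
```

A weighted or stacked fusion (learning a meta-model over the five predictions) would be a natural refinement of the plain average shown here.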

    Woodford Shale enclosed mini-basin fill on the Hunton Paleo Shelf. A depositional model for unconventional resource shales

    The exploration of unconventional hydrocarbon resources of the Woodford Shale in Oklahoma (USA) has focused on characterizing this formation as an entirely open marine deposit. The impact of recognizing enclosed mini-basin fill settings remains under-explored. To better understand these effects, I propose a detailed integrated study to highlight how these depositional variations occur. The proposed workflow involves multidisciplinary integration of geological, geochemical (both organic and inorganic), and geophysical characterizations to identify the characteristics of these deposits, how they vary vertically in the stratigraphic section of the Woodford Shale (internal variations in organic matter content and type, variability of the major heavy elements, and differences in mineralogy), and how they differ laterally, by analyzing and comparing different Woodford locations in the Oklahoman petroleum provinces. The enclosed mini-basin fill settings occur locally in areas of thicker (gross thickness greater than 200 ft) and more organic-rich Woodford Shale (greater than 5.5 % total organic carbon, TOC, on average). In the context of regional sea-level fluctuations in Late Devonian time, the Woodford Shale was deposited upon a pre-existing carbonate platform that had been eroded by karstification or incised-valley development during regional sea-level drops in pre-Woodford time. These karst- and incised-valley-forming processes created a regional erosional unconformity, which allowed the development of sinkholes, pockets, and pods with more accommodation space for Woodford Shale sediment deposition in enclosed mini-basin fill settings. These erosional unconformities can be identified in outcrops, cores, well logs, and 3D seismic data sets.
I propose that the localized and discontinuous enclosed mini-basin fill settings represented silled, constricted oceanic circulation with higher bottom-water euxinia (high free sulfur), which provided better conditions for the accumulation and preservation of clay and organic matter particles than did the well-circulated, open marine settings. I interpret that these depositional differences produce recognizable patterns in bed thickness and organic matter variations inside the Woodford Shale. I propose that areas in Oklahoma with thicker Woodford enclosed mini-basin fill settings are stratigraphic variations that could economically produce more oil and gas than areas deposited under more open marine conditions or with thinner enclosed mini-basin fill intervals. I identify these intervals by determining which ones contain more organic matter, more hydrogen, less oxygen, and more amorphous organic matter (more oil-prone than gas-prone), and by characterizing differences in paleo-water chemistry (water column stratification, higher water salinity, and higher levels of anoxia and euxinia). These enclosed mini-basin fill geochemical characteristics are accompanied by enrichment in detrital quartz and relative depletion in the clay content of the lithofacies. The enclosed mini-basin fill deposits not only accumulate more organic matter but also present different petrophysical and mechanical characteristics that, when modeled, simulated, and compared with reported production, recover higher volumes of hydrocarbons under standard unconventional petroleum industry operational practices.

    Sustainable Agriculture and Advances of Remote Sensing (Volume 1)

    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase global food production, reduce biodiversity loss, and preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agriculture practices. Earth observation data and in situ and proxy-remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to Earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among others.

    5th International Conference on Advanced Research Methods and Analytics (CARMA 2023)

    Research methods in economics and social sciences are evolving with the increasing availability of Internet and Big Data sources of information. As these sources, methods, and applications become more interdisciplinary, the 5th International Conference on Advanced Research Methods and Analytics (CARMA) is a forum for researchers and practitioners to exchange ideas and advances on how emerging research methods and sources are applied to different fields of social sciences, as well as to discuss current and future challenges.
    Cited as: Martínez Torres, MDR.; Toral Marín, S. (2023). 5th International Conference on Advanced Research Methods and Analytics (CARMA 2023). Editorial Universitat Politècnica de València. https://doi.org/10.4995/CARMA2023.2023.1700

    Spatial Analysis for Landscape Changes

    Recent increasing trends in the occurrence of natural and anthropic processes have a strong impact on landscape modification, and there is a growing need for effective instruments, tools, and approaches to understand and manage landscape changes. Great improvements in the availability of high-resolution DEMs, GIS tools, and algorithms for automatic extraction of landform features and change detection have favored the analysis of landscape changes, which has become an essential instrument for quantitative evaluation in many research fields. One of the most effective ways of investigating natural landscape changes is the geomorphological approach, which benefits from recent advances in the development of digital elevation model (DEM) comparison software and algorithms, image change detection, and landscape evolution models. This Special Issue collects six papers concerning the application of traditional and innovative multidisciplinary methods in several fields, such as geomorphology, urban and territorial systems, vegetation restoration, and soil science. The papers include multidisciplinary studies that highlight the usefulness of quantitative analyses of satellite images and UAV-based DEMs, and the application of Landscape Evolution Models (LEMs) and automatic landform classification algorithms to multidisciplinary issues of landscape change. A review article is also presented, providing a bibliometric analysis of the research topic.
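A core operation behind DEM-based change detection is differencing two elevation models and masking out change smaller than the survey uncertainty. The sketch below is a minimal illustration on synthetic grids; the DEM values, the 0.5 m minimum level of detection, and the 1 m² cell area are all assumed for the example.

```python
import numpy as np

# Two synthetic DEMs (metres): "before" and "after" a hypothetical erosion event.
rng = np.random.default_rng(1)
dem_before = 100.0 + rng.random((50, 50))
dem_after = dem_before.copy()
dem_after[10:20, 10:20] -= 2.0           # simulated 2 m of localized erosion

# DEM of Difference (DoD): positive = deposition, negative = erosion.
dod = dem_after - dem_before

# Mask changes smaller than a minimum level of detection (combined DEM error).
min_lod = 0.5                            # assumed uncertainty, in metres
significant = np.where(np.abs(dod) >= min_lod, dod, 0.0)

# Eroded volume over significant cells (assumed cell area = 1 m^2).
eroded_volume = -significant[significant < 0].sum() * 1.0
```

In practice the level of detection is usually derived from the propagated errors of both surveys rather than fixed a priori.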

    Ultrasonic measurements and machine learning methods to monitor industrial processes

    The process manufacturing sector is increasingly using the collection and analysis of data to improve productivity, sustainability, and product quality. The endpoint of this transformation is processes that automatically adapt to demands in real-time. In-line and on-line sensors underpin this transition by automatically collecting the real-time data required to inform decision-making. Each sensing technique possesses its own advantages and disadvantages, making it suitable for specific applications. Therefore, a wide range of sensing solutions must be developed to monitor the diverse and often highly variable operations in process manufacturing. Ultrasonic (US) sensors measure the interaction of mechanical waves with materials. They have the benefits of being in-line, real-time, non-destructive, low in cost, small in size, and able to monitor opaque materials, and they can be applied non-invasively. Machine Learning (ML) is the use of computer algorithms to learn patterns in data to perform a task such as making predictions or decisions. The correlations in the data that ML models learn during training have not been explicitly programmed by human operators; therefore, ML is used to automatically learn from and analyse data. There are four main types of ML: supervised, unsupervised, semi-supervised, and reinforcement learning. Supervised and unsupervised ML are both used in this thesis. Supervised ML maps inputs to outputs during training, the aim being to create a model that accurately predicts the outputs of data not used during training. In contrast, unsupervised learning uses only input data, in which patterns are discovered.
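The supervised/unsupervised distinction drawn above can be shown in a few lines. The two-state synthetic data (a stand-in for, say, mixed vs. non-mixed measurements) and the choice of logistic regression and k-means are illustrative assumptions, not methods taken from the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Two synthetic "process states" (e.g. non-mixed vs. mixed), 2 features each.
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised: inputs are mapped to known outputs (labels) during training.
clf = LogisticRegression().fit(X, y)
supervised_acc = clf.score(X, y)

# Unsupervised: only the inputs are used; structure is discovered as clusters.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

With well-separated states both approaches recover the grouping, but only the supervised model attaches a meaning (a label) to each group.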
Supervised ML is being increasingly combined with sensor measurements as it offers several distinct advantages over conventional calibration methods. These include: reduced development time, potential for more accurate fitting, methods to encourage generalisation across parameter ranges, direct correlation to important process information rather than material properties, and the ability for continuous retraining as more data become available. The aim of this thesis was to develop ML methods to facilitate the optimal deployment of US sensors for process monitoring applications in industrial environments. To achieve this, the thesis evaluates US sensing techniques and ML methods across three types of process manufacturing operations: material mixing, cleaning of pipe fouling, and alcoholic fermentation of beer. Two US sensing techniques were investigated: a non-invasive, reflection-mode technique, and a transmission-based method using an invasive US probe with a reflector plate. The non-invasive, reflection-mode technique is more amenable to industrial implementation than the invasive probe given that it can be externally retrofitted to existing vessels. Different feature extraction and feature selection methods, algorithms, and hyperparameter ranges were explored to determine the optimal ML pipeline for process monitoring using US sensors. This reduces the development time of US sensor and ML combinations deployed in industrial settings by recommending a pipeline that has been trialled over a range of process monitoring applications. Furthermore, methods to leverage previously collected datasets were developed to negate or reduce the burden of collecting labelled data (the outputs required during ML model training, often acquired using reference measurements) for every new process monitoring application. These included unlabelled and labelled domain adaptation approaches.
Both US sensing techniques investigated were found to be similarly accurate for process monitoring. To monitor the development of homogeneity during the blending of honey and water, the non-invasive, reflection-mode technique achieved up to 100 % accuracy in classifying whether the materials were mixed or non-mixed and an R2 of 0.977 in predicting the time remaining until (or time since) complete mixing. To monitor the structural changes during the mixing of flour and water, the aforementioned sensing method achieved an accuracy of 92.5 % and an R2 of 0.968 for the same classification and regression tasks. Similarly, the sensing method achieved an accuracy of up to 98.2 % when classifying whether fouling had been removed from pipe sections, and R2 values of up to 0.947 were achieved when predicting the time remaining until cleaning was complete. The non-invasive, reflection-mode method also achieved R2 values of 0.948, Mean Squared Error (MSE) values of 0.283, and Mean Absolute Error (MAE) values of 0.146 when predicting alcohol by volume percentage during beer fermentation. In comparison, the transmission-based sensing method achieved R2 values of 0.952, MSE values of 0.265, and MAE values of 0.136 for the same task. Furthermore, the transmission-based method achieved accuracies of up to 99.8 % and 99.9 % in classifying whether ethanol production had started and whether it had finished during an industrial beer fermentation process. The material properties that affect US wave propagation are strongly temperature dependent. However, ML models that omitted the process temperature were comparable in accuracy to those which included it as an input.
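The R2, MSE, and MAE figures quoted throughout are standard regression metrics computable from predictions and reference measurements. The numbers below are invented stand-ins for predicted vs. measured alcohol concentration, not values from the thesis.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

# Hypothetical ABV reference measurements and model predictions (illustrative).
y_true = np.array([0.0, 1.2, 2.5, 3.8, 4.6, 5.1])
y_pred = np.array([0.1, 1.0, 2.7, 3.6, 4.8, 5.0])

r2 = r2_score(y_true, y_pred)                 # 1.0 = perfect fit
mse = mean_squared_error(y_true, y_pred)      # penalises large errors more
mae = mean_absolute_error(y_true, y_pred)     # average absolute deviation
```

Reporting all three together, as the thesis does, is useful because MSE and MAE respond differently to occasional large errors while R2 normalises by the variance of the reference data.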
For example, when monitoring laboratory-scale fermentation processes, the highest performing models using the process temperature as a feature achieved R2 values of 0.952, MSE values of 0.265, and MAE values of 0.136 when predicting the current alcohol concentration, compared with R2 values of 0.948, MSE values of 0.283, and MAE values of 0.146 when omitting the temperature. Similarly, when transferring models between mixing processes, accuracies of 92.2 % and R2 values of 0.947 were achieved when utilising the process temperature, compared with 92.1 % and 0.942 when omitting it. When transferring models between cleaning processes, inclusion of the process temperature as a feature degraded model accuracy during classification tasks, as omitting the temperature produced the highest accuracies for 6 out of 8 tasks. Mixed results were obtained for regression tasks, where including the process temperature increased model accuracy for 3 out of 8 tasks. Overall, these results indicate that US sensing, for some applications, can achieve comparable accuracy when the process temperature is not available. The choice of whether to include the temperature as a feature should be made during the model validation stage, to determine whether it improves prediction accuracy. The optimal feature extraction, feature selection, and ML algorithm permutation was determined as follows: features were extracted by Convolutional Neural Networks (CNNs) followed by Principal Component Analysis (PCA) and input into deep neural networks with Long Short-Term Memory (LSTM) layers. The CNN was pre-trained on an auxiliary task using previously collected US datasets to learn features of the waveforms; the auxiliary task was to classify the dataset from which each US waveform originated. PCA was applied to reduce the dimensionality of the input data and enable the use of additional features, such as the US time of flight or measures of variation between consecutively acquired waveforms.
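As a rough sketch of the CNN-plus-PCA feature extraction stage: here a random, untrained bank of 1-D convolution kernels stands in for the pre-trained CNN, and synthetic traces stand in for US waveforms, so nothing below reproduces the thesis architecture itself.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
waveforms = rng.standard_normal((100, 256))   # 100 synthetic US waveforms

# Stand-in for a trained CNN: a small bank of 1-D convolution kernels,
# subsampled to mimic strided/pooled feature maps.
kernels = rng.standard_normal((8, 16))
features = np.stack([
    np.concatenate([np.convolve(w, k, mode="valid")[::16] for k in kernels])
    for w in waveforms
])                                            # -> (100, 128) feature matrix

# PCA reduces the feature dimensionality before the LSTM stage, freeing room
# for extra inputs such as time of flight or frame-to-frame variation measures.
pca = PCA(n_components=10).fit(features)
reduced = pca.transform(features)
```

In the thesis pipeline the kernels are learned on the dataset-origin auxiliary task rather than drawn at random; the PCA-reduced features then form the per-timestep inputs to the LSTM network.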
This CNN and PCA feature extraction method was shown to produce more informative features from the US waveform than a traditional, coarse feature extraction approach, achieving higher accuracy on 65 % of the tasks evaluated. The coarse feature method used commonly extracted parameters of US waveforms such as the energy, standard deviation, and skewness. LSTM units were used to learn the trajectory of the process features and so enable the use of information from previous timesteps to inform model prediction. Using LSTM units was shown to outperform neural networks with feature gradients used as inputs to incorporate information from previous timesteps, for all process monitoring applications. Multi-task learning also showed improvements in learning feature trajectories and model accuracy (improving regression accuracy for 8 out of 18 tasks), albeit at the expense of a greater number of hyperparameters to optimise. The choice to use multi-task learning should be evaluated during the validation stage of model development. Unlabelled and labelled domain adaptation were investigated to transfer ML knowledge between similar processes. Unlabelled domain adaptation was used to transfer trained ML models between similar mixing and similar cleaning processes to negate the need to collect labelled data for a new task. Transfer Component Analysis was compared to a Single Feature transfer method. Transferring a single feature was found to be optimal, achieving classification accuracies of up to 96.0 % and 98.4 % in predicting whether the mixing or cleaning processes were complete, and R2 values of up to 0.947 and 0.999 in predicting the time remaining for each process, respectively. The Single Feature method was most accurate as it was most representative of the changing material properties at the sensor measurement area.
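The Single Feature transfer idea (train a model on one representative source-domain feature, standardise that feature, and apply the model to a target process without target labels) can be sketched as follows. The linear feature-time relationship and the noise levels are assumptions for illustration, not data from the thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
# Source process: one US feature (e.g. acoustic energy) tracks time remaining.
t_src = np.linspace(0, 1, 80)
feat_src = 1.0 - t_src + 0.02 * rng.standard_normal(80)

# Target process: same underlying physics, but the feature is offset and rescaled
# (a simple stand-in for the domain shift between two similar processes).
t_tgt = np.linspace(0, 1, 80)
feat_tgt = 3.0 + 2.0 * (1.0 - t_tgt) + 0.02 * rng.standard_normal(80)

def standardise(x):
    """Align the feature distributions so the source model transfers unchanged."""
    return (x - x.mean()) / x.std()

model = LinearRegression().fit(standardise(feat_src)[:, None], t_src)
pred_tgt = model.predict(standardise(feat_tgt)[:, None])   # no target labels used
```

Standardising each domain's feature to zero mean and unit variance is what lets a model trained entirely on the source process track the target process.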
Training ML models across a greater process parameter range (a greater range of temperatures, 19.3 to 22.1 °C compared with 19.8 to 21.2 °C) or across multiple datasets improved transfer learning to further datasets by enabling the models to adapt to a wider range of feature distributions. Labelled domain adaptation increased model accuracy on an industrial fermentation dataset by transferring ML knowledge from a laboratory fermentation dataset. Federated learning was investigated to maintain dataset privacy when applying transfer learning between datasets. The federated learning methodology performed better than the other methods tested, achieving higher accuracy for 14 out of 16 machine learning tasks compared with the base case model, which was trained using data solely from the industrial fermentation. This was attributed to federated learning improving the gradient descent operation during network optimisation. During the federated learning training strategy, the local models were trained for a full epoch on each dataset before network weights were sent to the global model; in contrast, during the non-federated learning strategy, batches from each dataset were interspersed. Therefore, it is recommended that the order in which data are passed to the model during training be evaluated during the validation stage. Overall, there are two main contributions from this thesis: the development of the ML pipeline for process monitoring using US sensors, and the development of unlabelled and labelled domain adaptation methods for process monitoring using US sensors. The development of an ML pipeline reduces the time needed to deploy US sensor and ML combinations in industrial settings by recommending a method that has been trialled over a range of process monitoring applications. The unlabelled and labelled domain adaptation methods were developed to leverage previously collected datasets.
This negates or reduces the burden of collecting labelled data in industrial environments. Furthermore, the pipeline and domain adaptation methodologies are evaluated using a non-invasive, reflection-mode US sensing technique. This technique is industrially relevant as it can be externally retrofitted onto existing process equipment. The novelty contained within this thesis can be summarised as follows:
• The use of CNNs and LSTM layers for process monitoring using US sensor data: CNNs were used to extract spatially invariant features from US sensor data to overcome the problem of features shifting in the time domain due to changes in temperature or sound velocity. LSTM units were used for their ability to analyse sequences and understand temporal dependencies, critical for monitoring processes that develop over time. Feature extraction using CNNs was shown to produce more informative features from the US waveform than a traditional, coarse feature extraction approach, achieving higher accuracy on 65 % of the tasks evaluated. LSTM units were shown to outperform neural networks with feature gradients used as inputs to incorporate information from previous timesteps, for all process monitoring applications.
• Evaluating the omission of the process temperature as a feature for process monitoring using US sensor data: This indicates whether US sensor and ML combinations could be used in industrial applications where measurement of the process temperature is not available. Overall, ML models which omitted the process temperature were comparable in accuracy to those which included it as an input (for example, R2 values of 0.952, MSE values of 0.265, and MAE values of 0.136 when including temperature, compared with R2 values of 0.948, MSE values of 0.283, and MAE values of 0.146 when omitting it, for predicting the current alcohol concentration during laboratory-scale fermentation processes).
• The use of labelled and unlabelled domain adaptation of US data for process monitoring: Unlabelled domain adaptation was used to transfer trained ML models between similar mixing and similar cleaning processes to negate the need to collect labelled data for a new task. Labelled domain adaptation increased model accuracy on an industrial fermentation dataset by transferring ML knowledge from a laboratory fermentation dataset.
• The use of labelled and unlabelled domain adaptation on features extracted from US waveforms: This allows the domain adaptation methods to be used for diverse US waveforms because, instead of aligning the raw US sensor data, the extracted waveform features are used, which provide information about the monitored process as it develops over time.
• The use of federated learning and multi-task learning with US data: Federated learning was investigated to maintain dataset privacy when applying transfer learning between datasets. Multi-task learning was investigated to aid LSTM unit learning of the process trajectory. The federated learning methodology performed better than the other methods tested, achieving higher accuracy for 14 out of 16 ML tasks compared with the base case model. Multi-task learning also showed improvements in learning feature trajectories and model accuracy (improving regression accuracy for 8 out of 18 tasks evaluated), albeit at the expense of a greater number of hyperparameters to optimise.
• The use of data augmentation for US data in process monitoring applications: Data augmentation was a component of the convolutional feature extraction method developed in this thesis. It artificially increased the dataset size used to train the convolutional feature extractor while ensuring that features specific to each waveform, rather than the position or magnitude of features, were learned. This improved the feature-learning auxiliary task the CNN was trained to perform, which classified the dataset from which each previously collected US waveform originated.
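The federated training strategy described above (a full local epoch per private dataset, then averaging the weights into a global model) can be sketched in a FedAvg style. The linear model, learning rate, and synthetic "laboratory" and "industrial" datasets below are illustrative assumptions, not the thesis networks.

```python
import numpy as np

rng = np.random.default_rng(5)

def local_epoch(w, X, y, lr=0.1):
    """One full epoch of per-sample gradient descent on a local (private) dataset."""
    for xi, yi in zip(X, y):
        grad = 2 * (xi @ w - yi) * xi
        w = w - lr * grad
    return w

# Two private datasets (stand-ins for laboratory and industrial fermentations)
# drawn from the same underlying relationship y = X @ [1, 2].
true_w = np.array([1.0, 2.0])
datasets = []
for _ in range(2):
    X = rng.random((50, 2))
    datasets.append((X, X @ true_w + 0.01 * rng.standard_normal(50)))

# FedAvg-style loop: each site trains locally for a full epoch, then only the
# weights (never the raw data) are averaged into the global model.
w_global = np.zeros(2)
for _round in range(20):
    local_ws = [local_epoch(w_global.copy(), X, y) for X, y in datasets]
    w_global = np.mean(local_ws, axis=0)
```

Only model weights cross site boundaries, which is what preserves dataset privacy while still letting each round of averaging pool what both sites have learned.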

    Sense and Respond

    Over the past century, the manufacturing industry has undergone a number of paradigm shifts: from the Ford assembly line (1900s) and its focus on efficiency to the Toyota production system (1960s) and its focus on effectiveness and JIDOKA; from flexible manufacturing (1980s) to reconfigurable manufacturing (1990s), both following the trend of mass customization; and from agent-based manufacturing (2000s) to cloud manufacturing (2010s), both deploying the value stream complexity into the material and information flow, respectively. The next natural evolutionary step is to provide value by creating industrial cyber-physical assets with human-like intelligence. This will only be possible by further integrating strategic smart sensor technology into manufacturing cyber-physical value-creating processes, in which industrial equipment is monitored and controlled to analyze compression, temperature, moisture, vibration, and performance. For instance, in the new wave of the ‘Industrial Internet of Things’ (IIoT), smart sensors will enable the development of new applications by interconnecting software, machines, and humans throughout the manufacturing process, enabling suppliers and manufacturers to respond rapidly to changing standards. This reprint of “Sense and Respond” aims to cover recent developments in the field of industrial applications, especially smart sensor technologies that increase the productivity, quality, reliability, and safety of industrial cyber-physical value-creating processes.

    Machine learning applications for seismic processing and interpretation

    During the past few years, exploration seismology has increasingly made use of machine learning algorithms in several areas, including seismic data processing, attribute analysis, and computer-aided interpretation. Since machine learning is a data-driven method for problem solving, it is important to use data of good quality with minimal bias; hidden variables and an appropriate objective function also need to be considered. In this dissertation, I focus my research on adapting machine learning algorithms that have been successfully applied to other scientific analysis problems to seismic interpretation and seismic data processing. Seismic data volumes can be extremely large, containing Gigabytes to Terabytes of information. Add to these volumes the rich choice of seismic attributes, each of which has its own strengths in expressing geologic patterns, and the problem grows larger still. Seismic interpretation involves picking faults and horizons and identifying geologic features by their geometry, morphology, and amplitude patterns seen in seismic data. For the seismic facies classification task, I tested multiple attributes as input and built an attribute subset that can best differentiate the salt, mass transport deposit (MTD), and conformal reflector seismic patterns using a suite of attribute selection algorithms. The resulting attribute subset differentiates the three classes with high accuracy and has the benefit of reducing the dimensionality of the data. To maximize the use of unlabeled as well as labeled data, I provide a workflow for facies classification based on a semi-supervised learning approach. Compared to using only labeled data, I find that the addition of unlabeled data during learning results in higher classification performance. In seismic processing, I propose a deep learning approach for random and coherent noise attenuation in the frequency-space domain.
I find that the deep ResNet architecture speeds up denoising and improves its accuracy, efficiently separating noise from signal. Finally, I review geophysical inversion and machine learning approaches from the perspective of solving inverse problems, and show the similarities and differences of these approaches in both mathematical formulation and numerical tests.
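The residual-learning idea behind ResNet-style denoisers (estimate the noise and subtract it from the input, rather than mapping noisy data directly to clean data) can be sketched on a synthetic trace. Here a simple frequency-domain amplitude threshold stands in for the trained network, and the signal and noise parameters are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 512)
signal = np.sin(2 * np.pi * 30 * t)                  # synthetic seismic trace
noisy = signal + 0.5 * rng.standard_normal(512)      # added random noise

# Residual framing: estimate the NOISE component, then subtract it.
# A crude stand-in for the network: treat small-amplitude spectral
# coefficients as noise and large ones as signal.
spectrum = np.fft.rfft(noisy)
noise_est_spectrum = np.where(np.abs(spectrum) < 0.3 * np.abs(spectrum).max(),
                              spectrum, 0.0)
noise_est = np.fft.irfft(noise_est_spectrum, n=512)
denoised = noisy - noise_est                          # residual subtraction

err_before = np.mean((noisy - signal) ** 2)
err_after = np.mean((denoised - signal) ** 2)
```

Learning the residual rather than the clean output is often easier for deep networks because the noise component has simpler statistics than the full signal.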

    Computer Vision Approaches to Liquid-Phase Transmission Electron Microscopy

    Electron microscopy (EM) is a technique that exploits the interaction between electrons and matter to produce high-resolution images down to the atomic level. In order to avoid undesired scattering along the electron path, EM samples are conventionally imaged in the solid state under vacuum conditions. Recently, this limit has been overcome by the realization of liquid-phase electron microscopy (LP EM), a technique that enables the analysis of samples in their native liquid state. LP EM paired with a high-frame-rate direct detection camera allows tracking the motion of particles in liquids, as well as their temporal dynamic processes. In this research work, LP EM is adopted to image the dynamics of particles undergoing Brownian motion, exploiting their natural rotation to access all the particle views in order to reconstruct their 3D structure via tomographic techniques. Specific computer vision-based tools were designed around the limitations of LP EM in order to elaborate the results of the imaging process: different deblurring and denoising approaches were adopted to improve the quality of the images. The processed LP EM images were then used to reconstruct 3D models of the imaged samples. This task was performed by developing two different methods: Brownian tomography (BT) and Brownian particle analysis (BPA). The former tracks a single particle in time, capturing the evolution of its dynamics. The latter is an extension in time of the single particle analysis (SPA) technique, which is conventionally paired with cryo-EM to reconstruct 3D density maps from thousands of EM images capturing hundreds of particles of the same species frozen on a grid. In contrast, BPA can process image sequences that do not contain thousands of particles, instead monitoring individual particle views across consecutive frames rather than within a single frame.
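One of the simplest denoising ideas applicable to high-frame-rate image sequences (averaging consecutive frames so zero-mean noise cancels while persistent structure is reinforced) can be sketched as below. This deliberately ignores particle motion, uses a static synthetic "particle", and is not the specific deblurring/denoising method developed in this work.

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic frame sequence: a static 2-D "particle" plus heavy additive noise.
particle = np.zeros((32, 32))
particle[14:18, 14:18] = 1.0
frames = np.stack([particle + 0.8 * rng.standard_normal((32, 32))
                   for _ in range(64)])

# Temporal averaging over consecutive frames: zero-mean noise cancels,
# while structure persisting across frames is reinforced.
window = 16
denoised = frames[:window].mean(axis=0)

# Crude SNR proxies before and after averaging.
snr_single = particle.std() / (frames[0] - particle).std()
snr_avg = particle.std() / (denoised - particle).std()
```

For genuinely moving Brownian particles, frames must first be registered (or the particle tracked) before any such temporal pooling, which is part of what makes LP EM processing harder than this static sketch.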