1,789 research outputs found

    Application of Artificial Intelligence for Determining the Volume Percentages of a Stratified Regime’s Three-Phase Flow, Independent of the Oil Pipeline’s Scale Thickness

    As time passes, scale builds up inside the pipelines that carry oil or gas from the source to processing plants or storage tanks, reducing the inside diameter, wasting energy, and lowering efficiency. A non-invasive system based on gamma-ray attenuation is one of the most accurate diagnostic methods for determining volumetric percentages under such conditions. A system comprising two NaI detectors and a dual-energy gamma source (241Am and 133Ba radioisotopes) is modeled with the Monte Carlo N-Particle (MCNP) code to simulate the volume-percentage detection setup. Oil, water, and gas form a three-phase stratified flow in different volume percentages inside a pipe whose scale layer varies in thickness. Gamma rays are emitted from one side of the pipe and the transmitted photons are recorded on the other side by the two scintillation detectors. Three features are extracted: the counts under the 241Am and 133Ba photopeaks of the first detector and the total count of the second detector. Using these inputs, two MLP neural networks predict the volumetric percentages with an RMSE below 1.48, independent of scale thickness. This low error demonstrates the effectiveness of the proposed method and its usefulness in the petroleum and petrochemical industries.
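    The following sketch (not the authors' code) shows how the described mapping from the three gamma-spectrum features to volume percentages could look as an MLP regression; the synthetic data, feature scaling, and network size are illustrative assumptions, not the published MCNP-derived dataset or architecture.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in feature matrix: Am-241 photopeak count and Ba-133 photopeak count of
# detector 1, plus the total count of detector 2 (normalized, synthetic values).
X = rng.uniform(0.0, 1.0, size=(500, 3))
# Stand-in targets: water and oil volume percentages (gas fills the remainder).
y = np.column_stack([60.0 * X[:, 0], 60.0 * X[:, 1]])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small fully connected network; the published work uses two MLPs whose exact
# architectures are not reproduced here.
mlp = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
mlp.fit(X_train, y_train)

rmse = np.sqrt(mean_squared_error(y_test, mlp.predict(X_test)))
print(f"RMSE on held-out samples: {rmse:.2f} vol%")
```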

    Data-Driven Modeling and Prediction for Reservoir Characterization and Simulation Using Seismic and Petrophysical Data Analyses

    This study explores data-driven modeling and prediction for reservoir characterization and simulation using seismic and petrophysical data analyses. Several applications of data-driven methods are studied: rock facies classification, seismic attribute analysis, petrophysical property prediction, seismic facies segmentation, and reservoir dimension reduction. The use of petrophysical well logs to predict rock facies is explored with several data analytics methods, including decision trees, random forests, support vector machines, and neural networks. The models are trained on a set of well logs with pre-interpreted rock facies; among the compared methods, the random forest performs best in classifying rock facies in the dataset. Seismic attribute values from a 3D seismic survey and petrophysical properties from well logs are collected to explore the relationships between seismic data and well logs, and deep learning neural network models are created to establish these relationships. The results show that a deep neural network with multiple hidden layers can predict porosity values from extracted seismic attributes, and that using a set of seismic attributes improves porosity prediction from seismic data. The study also presents a novel deep learning approach to automatically identify salt bodies directly from seismic images: a wavelet convolutional neural network (Wavelet CNN), which combines wavelet transform analyses with a traditional convolutional neural network (CNN), is developed and shown to increase the accuracy of predicting salt boundaries from seismic images, outperforming conventional image recognition techniques. In addition, the study evaluates singular value decomposition (SVD) for dimension reduction of permeability fields during reservoir modeling. Reservoir simulation results show that SVD is valid for parameterizing the permeability field, and the permeability fields reconstructed after SVD processing are good approximations of the original values. Finally, the application of SVD to upscaling for reservoir modeling is evaluated: different upscaling schemes are applied to the permeability field, and their performance is assessed using reservoir simulation.
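    As a rough illustration of the facies-classification step (my assumptions, not the study's implementation), the sketch below trains a random forest, the method reported to perform best here, on synthetic stand-ins for well-log curves and pre-interpreted facies labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Stand-in well-log features, e.g. gamma ray, bulk density, neutron porosity,
# resistivity (standardized synthetic values rather than the study's dataset).
X = rng.normal(size=(1000, 4))
# Stand-in pre-interpreted facies labels (0, 1, 2), loosely tied to the first two logs.
y = np.digitize(X[:, 0] + 0.5 * X[:, 1], bins=[-0.7, 0.7])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)

clf = RandomForestClassifier(n_estimators=200, random_state=1)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```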

    Calibration Methods of Laser-Induced Breakdown Spectroscopy

    Laser-induced breakdown spectroscopy (LIBS) has attracted great attention over the past two decades owing to its many advantages, such as little or no sample preparation, the capability for remote measurement, and fast, simultaneous multi-element analysis. However, because of the inherent fluctuations of the laser-induced plasma, achieving highly sensitive and accurate quantitative analysis remains a major challenge for the LIBS community worldwide. Many chemometric methods have been applied to LIBS calibration analysis, including univariate regression, multivariate regression, principal component regression (PCR), and partial least squares regression (PLSR). In addition, appropriate sample and spectral pretreatment can effectively improve the analytical performance of LIBS, i.e., its limit of detection (LOD), accuracy, and repeatability. In this chapter, we briefly summarize the progress of these calibration methods and their applications in LIBS and provide our recommendations.
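    Below is a minimal sketch of one of the calibration methods named above, partial least squares regression, fitted to synthetic single-line spectra; the spectra, noise level, and number of latent components are placeholder assumptions rather than anything from the chapter.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic "spectra": a single Gaussian emission line whose intensity scales
# with analyte concentration, plus noise; real LIBS spectra are far richer.
n_samples, n_channels = 60, 300
concentration = rng.uniform(0.0, 5.0, n_samples)          # analyte content, wt.%
line = np.exp(-0.5 * ((np.arange(n_channels) - 150) / 5.0) ** 2)
spectra = np.outer(concentration, line) + rng.normal(0.0, 0.05, (n_samples, n_channels))

# Multivariate calibration: PLSR maps the full spectrum to concentration.
pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, spectra, concentration, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.3f}")
```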

    Survey analysis for optimization algorithms applied to electroencephalogram

    This paper presents a survey of optimization approaches used to analyze and classify electroencephalogram (EEG) signals. Automatic EEG analysis is a significant challenge because of the high-dimensional data volume. Optimization algorithms seek to achieve better accuracy by selecting informative features and discarding unwanted ones. Forty-seven reputable research papers are reviewed in this work, with emphasis on the developed and executed techniques, divided into seven groups according to the applied optimization algorithm (particle swarm optimization (PSO), ant colony optimization (ACO), artificial bee colony (ABC), grey wolf optimizer (GWO), Bat, Firefly, and other optimizer approaches). The main measures used to assess these papers are accuracy, precision, recall, and F1-score. Several datasets are used in the included papers, such as the Bonn University EEG dataset, CHB-MIT, an electrocardiography (ECG) dataset, and others. The results show that the PSO and GWO algorithms achieved the highest accuracy, around 99%, compared with the other techniques.
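    The sketch below illustrates the general PSO-based feature-selection idea surveyed here, using a simple binary-threshold particle swarm wrapped around a k-nearest-neighbour classifier; the EEG feature matrix, swarm parameters, and fitness function are illustrative assumptions, not taken from any of the reviewed papers.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))               # placeholder EEG feature matrix
y = (X[:, :3].sum(axis=1) > 0).astype(int)   # placeholder class labels

def fitness(mask):
    """Cross-validated accuracy of a KNN classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

n_particles, n_features, n_iter = 10, X.shape[1], 20
pos = rng.random((n_particles, n_features))  # per-feature "keep" probabilities
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Standard PSO velocity update with inertia, cognitive, and social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([fitness(p > 0.5) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", np.flatnonzero(gbest > 0.5))
print("best CV accuracy:", round(pbest_fit.max(), 3))
```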

    Measurement uncertainty in machine learning - uncertainty propagation and influence on performance

    Industry 4.0 is based on the intelligent networking of machines and processes in industry and makes a decisive contribution to increasing competitiveness. For this, reliable measurements from the sensors and sensor systems in use are essential. Metrology deals with the definition of internationally accepted measurement units and standards. To make measurement results internationally comparable, the Guide to the Expression of Uncertainty in Measurement (GUM) provides the basis for evaluating and interpreting measurement uncertainty. At the same time, measurement uncertainty also carries information about data quality, which matters when machine learning is applied in the digitalized factory. However, measurement uncertainty in line with the GUM has mostly been neglected in machine learning or only estimated by cross-validation. This dissertation therefore aims to combine measurement uncertainty based on the principles of the GUM with machine learning. For performing machine learning, a data pipeline is presented that fuses raw data from different measurement systems and determines measurement uncertainties from dynamic calibration information. Furthermore, a previously published automated toolbox for machine learning is extended to include uncertainty propagation based on the GUM and its supplements. Using this uncertainty-aware toolbox, the influence of measurement uncertainty on machine learning results is investigated, and approaches to improve these results are discussed.
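    As a hedged illustration of the idea (not the dissertation's toolbox), the sketch below propagates GUM-style standard uncertainties of a measurement through a trained model by Monte Carlo sampling, in the spirit of GUM Supplement 1; the model, sensor values, and uncertainties are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Train a stand-in model on noiseless synthetic two-channel sensor data.
X_train = rng.uniform(0.0, 10.0, size=(200, 2))
y_train = 2.0 * X_train[:, 0] - 0.5 * X_train[:, 1]
model = LinearRegression().fit(X_train, y_train)

# One new measurement with assumed standard uncertainties per sensor channel.
x_meas = np.array([4.2, 7.1])
u_meas = np.array([0.05, 0.20])   # standard uncertainties, same units as x_meas

# Draw Monte Carlo samples of the measurement and propagate them through the model.
samples = rng.normal(loc=x_meas, scale=u_meas, size=(10000, 2))
pred = model.predict(samples)

print(f"prediction: {pred.mean():.3f}  propagated standard uncertainty: {pred.std(ddof=1):.3f}")
```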
