
    Challenges and Issues on Artificial Hydrocarbon Networks: The Chemical Nature of Data-Driven Approaches

    Inspiration from nature has been widely explored, from the macro to the micro scale. Looking into chemical phenomena, stability and organization are two properties that emerge. Recently, artificial hydrocarbon networks (AHN), a supervised learning method inspired by the inner structures and mechanisms of chemical compounds, have been proposed as a data-driven approach in artificial intelligence. AHN have been successfully applied in data-driven tasks such as regression and classification models, control systems, signal processing, and robotics. In these applications, molecules, the basic units of information in AHN, play an important role in the stability, organization, and interpretability of the method. Like any other machine learning model, AHN must address interpretability, computational cost, and predictability. This short paper aims to highlight the challenges, issues, and trends of artificial hydrocarbon networks as a data-driven method. It presents a description of the main insights of AHN and the efforts to tackle interpretability and training acceleration. Potential applications and future trends of AHN are also discussed.
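
    As a loose illustration of the molecule idea only, not the authors' algorithm, the sketch below partitions a one-dimensional regression problem into a fixed number of "molecules" and fits a low-order polynomial inside each one; the segmentation scheme and all names are simplifying assumptions.

```python
import numpy as np

def fit_molecules(x, y, n_molecules=4, degree=2):
    """Toy 'molecule' model: split the input range into segments and
    fit a low-order polynomial inside each one (a loose analogy only)."""
    edges = np.linspace(x.min(), x.max(), n_molecules + 1)
    molecules = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        molecules.append((lo, hi, np.polyfit(x[mask], y[mask], degree)))
    return molecules

def predict(molecules, x):
    y_hat = np.empty_like(x, dtype=float)
    for lo, hi, coeffs in molecules:
        mask = (x >= lo) & (x <= hi)
        y_hat[mask] = np.polyval(coeffs, x[mask])
    return y_hat

x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.1 * np.random.default_rng(0).standard_normal(200)
print(np.mean((predict(fit_molecules(x, y), x) - y) ** 2))  # training MSE
```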

    Hydrocarbon Pay zone Prediction using AI Neural Network Modeling.

    This paper captures the ability of AI neural network technology to analyze petrophysical datasets for pattern recognition and accurate prediction of the pay zone of a vertical well from the Santa Fe field in Kansas. During this project, data from 10 completed wells in the Santa Fe field were gathered, resulting in a dataset with 25,580 records, ten predictors (log data), and a single binary output (Yes or No) identifying the presence of hydrocarbon over each half-foot depth segment in the well. Several models composed of different predictor combinations were also tested to determine how impactful some logs were compared to others for the prediction process. Thirty-two models were tested using a base set of five logs (X, Y, GR, DEPT, and CALI) and different combinations of five other logs (RT90, RHOB, NPHI, PE, DT). All models containing RT90, NPHI, or DT led to a better prediction matching the pay zone established from a petrophysical analysis and completion data from the well. Results from this project could serve as additional support to help justify decision-making in the field for a less experienced petrophysicist.
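
    A minimal sketch of the classification setup described above, assuming a scikit-learn workflow; the file name, column mnemonics, and network architecture are hypothetical stand-ins, not the paper's actual configuration.

```python
# Hypothetical file and column names; the paper's data are not reproduced.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

logs = pd.read_csv("santa_fe_wells.csv")      # assumed CSV export of the logs
base = ["X", "Y", "GR", "DEPT", "CALI"]       # base set of five logs
extra = ["RT90", "NPHI", "DT"]                # one tested combination
X, y = logs[base + extra], logs["PAY"]        # binary Yes/No pay flag

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```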

    A Framework for the Verification and Validation of Artificial Intelligence Machine Learning Systems

    An effective verification and validation (V&V) process framework for the white-box and black-box testing of artificial intelligence (AI) machine learning (ML) systems is not readily available. This research uses grounded theory to develop a framework that leads to the most effective and informative white-box and black-box methods for the V&V of AI ML systems. Verification ensures that the system adheres to the requirements and specifications developed and given by the major stakeholders, while validation confirms that the system performs properly with representative users in the intended environment and does not behave in an unexpected manner. Beginning with definitions, descriptions, and examples of ML processes and systems, the research identifies a clear and general process to effectively test these systems. The developed framework ensures the most productive and accurate testing results. Formerly, and occasionally still, the system definition and requirements exist in scattered documents, which makes it difficult to integrate, trace, and test them through V&V. Modern systems engineers, together with system developers and stakeholders, collaborate to produce a full system model using model-based systems engineering (MBSE). MBSE employs a Unified Modeling Language (UML) or Systems Modeling Language (SysML) representation of the system and its requirements that passes readily among stakeholders for system information and additional input. The comprehensive and detailed MBSE model allows for direct traceability to the system requirements. To thoroughly test an ML system, one performs white-box testing, black-box testing, or both. Black-box testing is a method in which the internal model structure, design, and implementation of the system under test are unknown to the test engineer; testers and analysts simply look at the performance of the system given input and output. White-box testing is a method in which the internal model structure, design, and implementation of the system under test are known to the test engineer. When possible, test engineers and analysts perform both black-box and white-box testing; however, testers sometimes lack authorization to access the internal structure of the system. The framework captures this decision. No two ML systems are exactly alike, and therefore the testing of each system must be customized to some degree. Even though there is customization, an effective process exists. This research includes some specialized methods, based on grounded theory, for testing the internal structure and performance. Through the study and organization of proven methods, this research develops an effective ML V&V framework that systems engineers and analysts can readily apply to various white-box and black-box V&V testing circumstances.
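
    To make the black-box notion concrete, here is a minimal sketch of requirement-style tests that exercise a model only through its prediction interface; the model, data, thresholds, and checks are illustrative assumptions, not the framework from the dissertation.

```python
# Illustrative black-box checks; model, data, and thresholds are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X[:800], y[:800])

def test_accuracy_requirement(model, X_val, y_val, threshold=0.85):
    """Verification: the system meets a stakeholder accuracy requirement."""
    acc = (model.predict(X_val) == y_val).mean()
    assert acc >= threshold, f"accuracy {acc:.3f} below requirement {threshold}"

def test_perturbation_stability(model, X_val, eps=1e-6):
    """Validation: tiny input perturbations should not flip predictions."""
    rng = np.random.default_rng(0)
    flipped = model.predict(X_val) != model.predict(
        X_val + eps * rng.standard_normal(X_val.shape))
    assert flipped.mean() < 0.01, "model unstable under tiny perturbations"

test_accuracy_requirement(model, X[800:], y[800:])
test_perturbation_stability(model, X[800:])
```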

    Research Naval Postgraduate School, v.4, no. 14, October 2012

    NPS Research is published by the Research and Sponsored Programs, Office of the Vice President and Dean of Research, in accordance with NAVSOP-35. Views and opinions expressed are not necessarily those of the Department of the Navy. Approved for public release; distribution is unlimited.

    Toward Interactive Music Generation: A Position Paper

    Music generation using deep learning has received considerable attention in recent years. Researchers have developed various generative models capable of imitating musical conventions, comprehending musical corpora, and generating new samples based on the learning outcome. Although the samples generated by these models are persuasive, they often lack musical structure and creativity. For instance, a vanilla end-to-end approach, which deals with all levels of music representation at once, does not offer human-level control and interaction during the learning process, leading to constrained results. Indeed, music creation is an iterative process in which a musician follows certain principles, reusing or adapting various musical features. Moreover, a musical piece adheres to a musical style, which breaks down into the distinct concepts of timbre style, performance style, and composition style, together with the coherence among these aspects. Here, we study and analyze the current advances in music generation using deep learning models through different criteria. We discuss the shortcomings and limitations of these models regarding interactivity and adaptability. Finally, we outline potential future research directions, addressing multi-agent systems and reinforcement learning algorithms to alleviate these shortcomings and limitations.

    Multi-Attribute Seismic Analysis Using Unsupervised Machine Learning Method: Self-Organizing Maps

    Seismic attributes are a fundamental part of seismic interpretation and are routinely used by geoscientists to extract key information and visualize geological features. By combining findings from each attribute, they can provide good insight into the area and help overcome many geological challenges. However, individually analyzing multiple attributes to find relevant information can be time-consuming and inefficient, especially when working with large datasets, and can lead to miscalculations, errors in judgement, and human bias. This is where machine learning (ML) methods can be implemented to improve existing interpretations or find additional information. ML can help by handling large volumes of multi-dimensional data and interrelating them. Methods such as Self-Organizing Maps (SOM) allow multi-attribute analysis and help extract more information than quantitative interpretation alone. SOM is an unsupervised neural network that can find meaningful and reliable patterns corresponding to a specific geological feature (Roden and Chen, 2017). The purpose of this thesis was to understand how SOM can ease the interpretation of direct hydrocarbon indicators (DHI) in the Statfjord Field area. Several AVO attributes were generated to detect DHIs and were then used as input for multi-attribute SOM analysis. The SOMPY package in Python was used to train the model and generate SOM classification results. Data samples were classified based on best-matching-unit (BMU) hits and clusters in the data. The classification was then applied to the whole dataset and converted to seismic sections for comparison and interpretation. SOM-classified seismic lines were compared with the results of the AVO attributes. Since DHIs are anomalous data, they were expected to be represented by small data clusters and BMUs with low hit counts. While SOM reproduced the seismic reflectors well, it did not define the DHI features clearly enough for them to be easily interpreted. The use of fewer seismic attributes and the computational limitations of the machine could be among the reasons the desired results were not achieved. However, the study has room for improvement and the potential to produce meaningful results. Improvements in model design and training, as well as the selection of input attributes, are areas that need to be addressed. Furthermore, testing other Python libraries and better handling of large datasets could allow better performance and more accurate results.
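
    A compact sketch of the multi-attribute SOM workflow, assuming a matrix of normalized AVO attribute values per seismic sample; the thesis used the SOMPY package, but MiniSom is swapped in here for its smaller API, and the data dimensions are hypothetical.

```python
# MiniSom stands in for SOMPY here; attribute matrix dimensions are assumed.
import numpy as np
from minisom import MiniSom

attrs = np.random.default_rng(0).random((5000, 6))  # 6 AVO attributes (assumed)
attrs = (attrs - attrs.mean(axis=0)) / attrs.std(axis=0)  # z-score normalize

som = MiniSom(10, 10, attrs.shape[1], sigma=1.0, learning_rate=0.5,
              random_seed=0)
som.train_random(attrs, num_iteration=10_000)

# Assign each sample to its best-matching unit (BMU); anomalies such as
# DHIs are expected to land on BMUs with low hit counts.
hits = som.activation_response(attrs)               # hit count per BMU
bmus = np.array([som.winner(v) for v in attrs])
is_rare = hits[bmus[:, 0], bmus[:, 1]] <= np.percentile(hits, 10)
```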

    Advanced photonic and electronic systems WILGA 2018

    The WILGA annual symposium on advanced photonic and electronic systems has been organized by young scientists for young scientists for two decades. It traditionally gathers around 400 young researchers and their tutors. Ph.D. students and graduates present their recent achievements during well-attended oral sessions. Wilga is a very good digest of Ph.D. work carried out at technical universities in electronics and photonics, as well as information sciences, throughout Poland and some neighboring countries. Publishing patronage over Wilga is held by the Elektronika technical journal (SEP), IJET, and Proceedings of SPIE. The latter worldwide editorial series publishes more than 200 papers from Wilga annually. Wilga 2018 was the XLII edition of this meeting. The following topical tracks were distinguished: photonics, electronics, information technologies, and system research. This article is a digest of some chosen works presented during the Wilga 2018 symposium. WILGA 2017 works were published in Proc. SPIE vol. 10445; WILGA 2018 works were published in Proc. SPIE vol. 10808.

    Machine learning for the subsurface characterization at core, well, and reservoir scales

    The development of machine learning techniques and the digitization of subsurface geophysical/petrophysical measurements provide a new opportunity for industries focused on the exploration and extraction of subsurface earth resources, such as oil, gas, coal, geothermal energy, mining, and sequestration. With more data and more computational power, the traditional methods for subsurface characterization and engineering adopted by these industries can be automated and improved. New phenomena can be discovered, and new understanding may be acquired from the analysis of big data. The studies conducted in this dissertation explore the possibility of applying machine learning to improve the characterization of geological materials and geomaterials. Accurate characterization of subsurface hydrocarbon reservoirs is essential for economical oil and gas reservoir development. The characterization of reservoir formations requires the integrated interpretation of data from different sources. Large-scale seismic measurements, intermediate-scale well logging measurements, and small-scale core sample measurements help engineers understand the characteristics of hydrocarbon reservoirs. Seismic data acquisition is expensive, and core samples are sparse and of limited volume. Consequently, well log acquisition provides essential information that improves seismic analysis and core analysis. However, well logging data may be missing due to financial or operational challenges, or may be contaminated by the complex downhole environment. At the near-wellbore scale, I solve the data-constraint problem in reservoir characterization by applying machine learning models to generate synthetic sonic traveltime and NMR logs that are crucial for geomechanical and pore-scale characterization, respectively. At the core scale, I solve problems in fracture characterization by processing multipoint sonic wave propagation measurements with machine learning to characterize the dispersion, orientation, and distribution of cracks embedded in a material. At the reservoir scale, I utilize reinforcement learning models to achieve automatic history matching by using a fast-marching-based reservoir simulator to estimate the reservoir permeability that controls the pressure transient response of the well. The application of machine learning provides new insights into traditional subsurface characterization techniques. First, by applying shallow and deep machine learning models, sonic logs and NMR T2 logs can be acquired from other easy-to-acquire well logs with high accuracy. Second, the development of the sonic wave propagation simulator enables the characterization of crack-bearing materials using simple wavefront arrival times. Third, the combination of reinforcement learning algorithms and encapsulated reservoir simulation provides a possible solution for automatic history matching.
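
    A minimal sketch of the near-wellbore task, predicting a sonic log from easier-to-acquire logs with a scikit-learn regressor; the file name and log mnemonics are hypothetical, and the dissertation's actual model choices are not reproduced.

```python
# Hypothetical file and log mnemonics; a stand-in for the dissertation's models.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

logs = pd.read_csv("well_logs.csv")           # assumed per-depth log table
features = ["GR", "RHOB", "NPHI", "RT"]       # easy-to-acquire logs (assumed)
target = "DT"                                 # sonic traveltime log

X_train, X_test, y_train, y_test = train_test_split(
    logs[features], logs[target], test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", r2_score(y_test, model.predict(X_test)))
```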

    Advancing Carbon Sequestration through Smart Proxy Modeling: Leveraging Domain Expertise and Machine Learning for Efficient Reservoir Simulation

    Geological carbon sequestration (GCS) offers a promising solution for managing excess carbon and mitigating the impact of climate change. This doctoral research introduces a Smart Proxy Modeling-based framework, integrating artificial neural networks (ANNs) and domain expertise, to re-engineer and empower numerical reservoir simulation for efficient modeling of CO2 sequestration and to demonstrate the predictive conformance and replicative capabilities of smart proxy modeling. Creating well-performing proxy models requires extensive human intervention and trial-and-error processes. Additionally, a large training database is essential for an ANN model on complex tasks such as deep saline aquifer CO2 sequestration, since it supplies the neural network's input and output data. One major limitation in CCS programs is the lack of real field data, owing to the scarcity of field applications and issues with confidentiality. Considering these drawbacks, and given the high-dimensional nonlinearity, heterogeneity, and coupling of multiple physical processes associated with numerical reservoir simulation, novel research is needed to handle these complexities, as it allows for the creation of possible CO2 sequestration scenarios that may be used as a training set. This study addresses several types of static and dynamic, realistic and practical, field-based data augmentation techniques spanning spatial complexity, spatio-temporal complexity, and heterogeneity of reservoir characteristics. By incorporating domain-expertise-based feature generation, this framework honors a precise representation of the reservoir while overcoming the computational challenges associated with numerical reservoir tools. The developed ANN accurately replicated fluid flow behavior, resulting in significant computational savings compared to traditional numerical simulation models. The results showed that all the ML models achieved very good accuracy and high efficiency. The findings revealed that the quality of the path between the focal cell and the injection wells emerged as the most crucial factor in both the CO2 saturation and the pressure estimation models. These insights significantly contribute to our understanding of CO2 plume monitoring, paving the way for breakthroughs in investigating reservoir behavior at minimal computational cost. The study's commitment to replicating numerical reservoir simulation results underscores the model's potential to contribute valuable insights into the behavior and performance of CO2 sequestration systems, as a complementary tool to numerical reservoir simulation when no measured field data are available. The transformative nature of this research has vast implications for advancing carbon storage modeling technologies. By addressing the computational limitations of traditional numerical reservoir models and harnessing the synergy between machine learning and domain expertise, this work provides a practical workflow for efficient decision-making in sequestration projects.
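
    A bare-bones sketch of the smart-proxy idea, assuming an MLP that maps per-cell features (including a path-quality-style feature and a time step) to simulator outputs; all data here are random placeholders for simulator-generated training sets, and the feature list is an assumption.

```python
# Random placeholders stand in for simulator-generated training data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((20000, 6))   # e.g. porosity, permeability, depth, distance
                             # and path quality to injector, time step
Y = rng.random((20000, 2))   # targets: CO2 saturation and pressure

scaler = StandardScaler().fit(X)
proxy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
proxy.fit(scaler.transform(X), Y)

# A trained proxy evaluates new scenarios in a fraction of the time a
# full numerical simulation run can take.
print(proxy.predict(scaler.transform(X[:5])))
```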

    Evaluation of CO2 Injection in Shale Gas Reservoirs through Numerical Reservoir Simulation and Supervised Machine Learning

    CO2 geological storage is an important means to decarbonize the economy, but it is also expensive in field applications. CO2 injection into shale gas reservoirs can significantly increase the economic incentive by enhancing shale gas production through CO2 preferential adsorption in shales. This work presents a practical framework to evaluate and predict the CO2 adsorption process and CH4 production in shales through a combination of numerical reservoir simulation and supervised machine learning.
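
    As a hedged sketch of the supervised-learning stage of such a framework, the snippet below regresses CH4 production on simulation inputs so that new injection scenarios can be screened without rerunning the simulator; the variable names and data are illustrative assumptions, not the paper's setup.

```python
# Illustrative feature set; real inputs would come from the simulator runs.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Assumed design matrix: CO2 injection rate, injection duration, fracture
# spacing, Langmuir adsorption parameters, initial reservoir pressure.
X = rng.random((500, 5))
y = rng.random(500)          # cumulative CH4 production from the simulator

model = GradientBoostingRegressor(random_state=1).fit(X[:400], y[:400])
print("validation R^2:", model.score(X[400:], y[400:]))
```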