83,053 research outputs found

    AI-driven Maintenance Support for Downhole Tools and Electronics Operated in Dynamic Drilling Environments

    Downhole tools are complex electro-mechanical systems that perform critical functions in drilling operations. The electronics within these systems provide vital support, such as control, navigation and front-end analysis of sensor data. Due to the extremely challenging operating conditions, namely high pressure, high temperature and vibrational forces, the electronics can be subject to complex failure modes and incur operational downtime. A novel Artificial Intelligence (AI)-driven Condition Based Maintenance (CBM) support system is presented, combining Bottom Hole Assembly (BHA) data with Big Data Analytics (BDA). The key objective of this system is to reduce maintenance costs while improving overall fleet reliability. As the literature review shows, the application of AI methods to downhole tool maintenance is underrepresented in the oil and gas domain. We review BHA electronics failure modes and propose a methodology for CBM of BHA Printed Circuit Board Assemblies (PCBA). We compare the results of a Random Forest Classifier (RFC) and an XGBoost Classifier trained on BHA electronics memory data accumulated over 208 missions during a six-month period, achieving an accuracy of 90% for predicting PCBA failure. These results are extended into a commercial analysis examining various scenarios of in-field failure costs and fleet reliability levels. The findings of this paper demonstrate the value of the BHA-PCBA CBM framework: accurate prognosis of operational equipment health leads to reduced costs, minimised Non-Productive Time (NPT) and increased operational reliability.
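    The RFC-versus-XGBoost comparison the abstract describes can be sketched as below. This is a minimal illustration on synthetic data: scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the generated features are a placeholder for the paper's actual BHA memory data, not a reproduction of it.

```python
# Sketch: two tree-ensemble classifiers on synthetic per-mission features,
# analogous to the abstract's RFC vs. XGBoost failure-prediction comparison.
# All data here is synthetic; GradientBoostingClassifier is a stand-in
# for XGBoost to keep the example dependency-free.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 208 missions of PCBA health features
X, y = make_classification(n_samples=208, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (RandomForestClassifier(n_estimators=200, random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(type(model).__name__, round(acc, 3))
```

    With real BHA memory data, the same fit/score loop would be preceded by feature extraction from the tool's recorded sensor channels.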

    Classification systems offer a microcosm of issues in conceptual processing: A commentary on Kemmerer (2016)

    This is a commentary on Kemmerer (2016), Categories of Object Concepts Across Languages and Brains: The Relevance of Nominal Classification Systems to Cognitive Neuroscience, DOI: 10.1080/23273798.2016.1198819

    Deep learning for time series classification: a review

    Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen highly successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open-source deep learning framework to the TSC community in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we present the most exhaustive study of DNNs for TSC to date. (Comment: Accepted at Data Mining and Knowledge Discovery.)
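    A minimal, runnable baseline in the spirit of the neural approaches this review surveys: a small fully-connected network classifying raw univariate series. The sine-versus-noise dataset is purely illustrative and is not drawn from the UCR/UEA archive; a real benchmark run would load those datasets instead.

```python
# Sketch: neural-network TSC on synthetic univariate series.
# Class 0 = noisy sine waves, class 1 = pure noise (illustrative data only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, length = 300, 64
t = np.linspace(0, 2 * np.pi, length)
X0 = np.sin(t) + 0.3 * rng.standard_normal((n // 2, length))  # class 0
X1 = rng.standard_normal((n // 2, length))                    # class 1
X = np.vstack([X0, X1])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
score = clf.score(X_te, y_te)
print(round(score, 3))
```

    The deeper architectures the review benchmarks (Residual networks, Fully Convolutional Networks) follow the same fit/score pattern but operate on the series with 1-D convolutions rather than a flat dense layer.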

    PASS: a simple classifier system for data analysis

    Let x be a vector of predictors and y a scalar response associated with it. Consider the regression problem of inferring the relationship between predictors and response on the basis of a sample of observed pairs (x, y). This is a familiar problem for which a variety of methods are available. This paper describes a new method based on the classifier system approach to problem solving. Classifier systems provide a rich framework for learning and induction, and they have been successfully applied in the artificial intelligence literature for some time. The present method enriches the simplest classifier system architecture with some new heuristics and explores its potential in a purely inferential context. A prototype called PASS (Predictive Adaptative Sequential System) has been built to test these ideas empirically. Preliminary Monte Carlo experiments indicate that PASS is able to discover the structure imposed on the data in a wide array of cases.
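    The general classifier-system idea the abstract builds on can be sketched as a population of condition-action rules whose predictions and strengths are updated from observed error. This is a generic illustration of that framework, not the PASS implementation; the interval rules, update constants and target function are all assumptions made for the example.

```python
# Sketch of a classifier-system-style regression: interval rules over a
# scalar predictor x, each with a prediction and a strength adapted from
# experience (illustrative only, not the PASS prototype).
import random

random.seed(0)

class Rule:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi   # condition: lo <= x < hi
        self.prediction = 0.0       # action: predicted y
        self.strength = 1.0

    def matches(self, x):
        return self.lo <= x < self.hi

def train(rules, data, beta=0.2):
    # Widrow-Hoff style update of each matching rule's prediction/strength
    for x, y in data:
        for r in rules:
            if r.matches(x):
                err = y - r.prediction
                r.prediction += beta * err
                r.strength += beta * (1.0 / (1.0 + err * err) - r.strength)

def predict(rules, x):
    matched = [r for r in rules if r.matches(x)]
    if not matched:
        return 0.0
    total = sum(r.strength for r in matched)
    return sum(r.strength * r.prediction for r in matched) / total

# Target structure to discover: y = 2x on [0, 1); rules tile the input space
rules = [Rule(i / 10, (i + 1) / 10) for i in range(10)]
xs = [random.random() for _ in range(2000)]
data = [(x, 2 * x) for x in xs]
for _ in range(5):
    train(rules, data)
print(round(predict(rules, 0.55), 2))
```

    Each rule converges to the local mean of y over its interval, so the population as a whole recovers the imposed linear structure piecewise.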

    Neural network setups for a precise detection of the many-body localization transition: finite-size scaling and limitations

    Determining phase diagrams and phase transitions semi-automatically using machine learning has received a lot of attention recently, with results in good agreement with more conventional approaches in most cases. When it comes to more quantitative predictions, such as the identification of the universality class or the precise determination of critical points, the task is more challenging. As an exacting test-bed, we study the Heisenberg spin-1/2 chain in a random external field, which is known to display a transition from a many-body localized to a thermalizing regime, whose nature is not entirely characterized. We introduce different neural network structures and dataset setups to achieve a finite-size scaling analysis with the least possible physical bias (no assumed knowledge of the phase transition and directly inputting wave-function coefficients), using state-of-the-art input data simulating chains of sizes up to L=24. In particular, we use domain adversarial techniques to ensure that the network learns scale-invariant features. We find that the output results vary with network and training parameters, resulting in relatively large uncertainties on the final estimates of the critical point and correlation-length exponent, which tend to be larger than the values obtained from conventional approaches. We put the emphasis on interpretability throughout the paper and discuss what the network appears to learn for the various architectures used. Our findings show that a quantitative analysis of phase transitions of unknown nature remains a difficult task with neural networks when using minimally engineered physical input. (Comment: v2, published version.)
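    The finite-size scaling analysis underlying the abstract's critical-point estimation can be illustrated with a data-collapse sketch: curves for several chain lengths L are assumed to follow a single scaling function of (h - h_c) * L**(1/nu), and a quality-of-collapse cost is minimized over (h_c, nu). The tanh scaling function, the noise level and the "true" values below are illustrative assumptions, not the paper's data or estimates.

```python
# Sketch: finite-size scaling collapse on synthetic curves for chain
# lengths up to L = 24 (all numbers illustrative).
import numpy as np

rng = np.random.default_rng(1)
h_c_true, nu_true = 3.7, 1.0          # assumed values for the illustration
sizes = [12, 16, 20, 24]
h = np.linspace(3.2, 4.2, 40)

def order_param(h, L, h_c, nu):
    # tanh stands in for the unknown scaling function f((h - h_c) * L**(1/nu))
    return np.tanh((h - h_c) * L ** (1.0 / nu))

curves = {L: order_param(h, L, h_c_true, nu_true)
             + 0.01 * rng.standard_normal(h.size)
          for L in sizes}

def collapse_cost(h_c, nu):
    # When (h_c, nu) are right, all rescaled curves fall on one line, so
    # neighbouring points (sorted by the scaling variable) have small gaps.
    xs = np.concatenate([(h - h_c) * L ** (1.0 / nu) for L in sizes])
    ys = np.concatenate([curves[L] for L in sizes])
    return np.mean(np.diff(ys[np.argsort(xs)]) ** 2)

grid_hc = np.linspace(3.2, 4.2, 21)
grid_nu = np.linspace(0.5, 2.0, 31)
costs = [[collapse_cost(hc, nu) for nu in grid_nu] for hc in grid_hc]
i, j = np.unravel_index(np.argmin(costs), (len(grid_hc), len(grid_nu)))
print("h_c ~", round(grid_hc[i], 2), " nu ~", round(grid_nu[j], 2))
```

    In the paper's setting the "curves" come from neural-network outputs rather than an order parameter, which is exactly where the reported variability across architectures and training runs enters the final (h_c, nu) estimates.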