429 research outputs found

    Bayesian network-based over-sampling method (BOSME) with application to indirect cost-sensitive learning

    Traditional supervised learning algorithms do not satisfactorily solve the classification problem on imbalanced data sets, since they tend to assign the majority class, to the detriment of minority-class classification. In this paper, we introduce the Bayesian network-based over-sampling method (BOSME), a new over-sampling methodology based on Bayesian networks. Over-sampling methods handle imbalanced data by generating synthetic minority instances, with the benefit that classifiers learned from a more balanced data set are better able to predict the minority class. What makes BOSME different is its approach: it generates artificial instances of the minority class following the probability distribution of a Bayesian network that is learned from the original minority-class instances by likelihood maximization. We compare BOSME with the benchmark synthetic minority over-sampling technique (SMOTE) through a series of experiments in the context of indirect cost-sensitive learning, with several state-of-the-art classifiers and various data sets, showing statistical evidence in favor of BOSME with respect to the expected (misclassification) cost. The authors are supported by Ministerio de Ciencia, Innovación y Universidades, Gobierno de España, project ref. PGC2018-097848-B-I0
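    The core idea described above — learn a Bayesian network from the minority-class instances by maximum likelihood, then sample synthetic minority rows from it — can be sketched roughly as follows. This is a hedged illustration using pgmpy, not the authors' implementation; the DataFrame df, its label column "y", and the assumption of discrete (or discretized) features are placeholders.

        # Rough sketch of Bayesian-network-based over-sampling (the general BOSME idea),
        # using pgmpy; NOT the authors' code. Assumes a discrete/discretized DataFrame
        # `df` with label column "y" and minority class 1.
        import pandas as pd
        from pgmpy.estimators import HillClimbSearch, MaximumLikelihoodEstimator
        from pgmpy.models import BayesianNetwork
        from pgmpy.sampling import BayesianModelSampling

        def oversample_minority(df, label="y", minority=1):
            minority_df = df[df[label] == minority].drop(columns=[label])

            # Learn a network structure from the minority instances only.
            dag = HillClimbSearch(minority_df).estimate()
            bn = BayesianNetwork(dag.edges())
            bn.add_nodes_from(minority_df.columns)  # keep variables with no edges

            # Fit the conditional probability tables by maximum likelihood.
            bn.fit(minority_df, estimator=MaximumLikelihoodEstimator)

            # Sample enough synthetic minority rows to balance the classes.
            n_needed = (df[label] != minority).sum() - (df[label] == minority).sum()
            synthetic = BayesianModelSampling(bn).forward_sample(size=int(n_needed))
            synthetic[label] = minority
            return pd.concat([df, synthetic], ignore_index=True)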

    Development of new cost-sensitive Bayesian network learning algorithms

    Bayesian networks are becoming an increasingly important area for research and have been proposed for real-world applications such as medical diagnosis, image recognition, and fraud detection. In all of these applications, accuracy alone is not sufficient, as there are costs involved when errors occur. Hence, this thesis develops new algorithms, referred to as cost-sensitive Bayesian network algorithms, that aim to minimise the expected costs due to misclassifications. The study presents a review of existing research on cost-sensitive learning and identifies three common methods for developing cost-sensitive algorithms for decision tree learning. These methods are then utilised to develop three different algorithms for learning cost-sensitive Bayesian networks: (i) an indirect method, where costs are included by changing the data distribution without changing a cost-insensitive algorithm; (ii) a direct method, in which an existing cost-insensitive algorithm is altered to take account of cost; and (iii) the use of genetic algorithms to evolve cost-sensitive Bayesian networks. The new algorithms are evaluated on 36 benchmark datasets and compared to existing cost-sensitive algorithms such as MetaCost+J48 and MetaCost+BN, as well as an existing cost-insensitive Bayesian network algorithm. The obtained results exhibit improvements over the other algorithms in terms of cost, whilst still maintaining accuracy. In the experimental methodology, all experiments are repeated over 10 random trials, and in each trial the data are divided into 75% for training and 25% for testing. The results show that: (i) all three new algorithms perform better than the cost-insensitive Bayesian learning algorithm on all 36 datasets in terms of cost; (ii) the new algorithms based on the indirect method, the direct method, and genetic algorithms work better than MetaCost+J48 on 29, 28, and 31 of the 36 datasets respectively in terms of cost; (iii) the algorithm that utilises the indirect method performs well on imbalanced data compared to the other two new algorithms, on 8 of the 36 datasets in terms of cost; (iv) the algorithm based on the direct method outperforms the other new algorithms on 13 of the 36 datasets in terms of cost; (v) the evolutionary version is better than the other algorithms, including those based on the direct and indirect methods, on 24 of the 36 datasets in terms of both cost and accuracy; and (vi) all three new algorithms perform better than MetaCost+BN on all 36 datasets in terms of cost.
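    For the indirect method (i), a common way to fold misclassification costs into the data distribution without modifying the learner is cost-proportionate resampling. The sketch below is a generic illustration of that idea, not necessarily the exact scheme used in the thesis; the cost dictionary and arrays are placeholders.

        # Illustrative cost-proportionate resampling: an *indirect* way to make any
        # cost-insensitive learner cost-sensitive by changing the training distribution.
        # Generic sketch only, not the thesis's exact scheme.
        import numpy as np

        def cost_proportionate_resample(X, y, misclass_cost, rng=None):
            """Resample (X, y) so each example appears with probability proportional
            to the cost of misclassifying it; misclass_cost[c] is the cost of
            getting an example of class c wrong."""
            rng = np.random.default_rng(rng)
            weights = np.array([misclass_cost[c] for c in y], dtype=float)
            probs = weights / weights.sum()
            idx = rng.choice(len(y), size=len(y), replace=True, p=probs)
            return X[idx], y[idx]

        # Example: errors on class 1 cost 5x errors on class 0 (hypothetical costs).
        # X_res, y_res = cost_proportionate_resample(X_train, y_train, {0: 1.0, 1: 5.0})
        # Any standard cost-insensitive classifier is then trained on (X_res, y_res).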

    Practical Applications of Machine Learning to Underground Rock Engineering

    Rock mechanics engineers have increasing access to large quantities of data from underground excavations as sensor technologies are developed, data storage becomes cheaper, and computational speed and power improve. Machine learning has emerged as a viable approach to processing these data for engineering decision making. This research investigates practical applications of machine learning algorithms (MLAs) to underground rock engineering problems using real datasets from a variety of rock mass deformation contexts. It was found that preserving the format of the original input data as much as possible reduces the bias introduced during digitalization and results in more interpretable MLAs. A Convolutional Neural Network (CNN) is developed using a dataset from Cigar Lake Mine, Saskatchewan, Canada, to predict the tunnel liner yield class. Several hyperparameters are optimized: the amount of training data, the convolution filter size, and the error weighting scheme. Two CNN architectures are proposed to characterize the rock mass deformation: (i) a Global Balanced model that has a prediction accuracy >65% for all yield classes, and (ii) a Targeted Class 2/3 model that emphasizes the worst-case yield and has a recall of >99% for Class 2. The interpretability of the CNN is investigated through three Input Variable Selection (IVS) methods: Channel Activation Strength, Input Omission, and Partial Correlation. The latter two are novel methods proposed for CNNs using a spatial and temporal geomechanical dataset. Collectively, the IVS analyses indicate that all the available digitized inputs are needed to produce good CNN performance. A Long Short-Term Memory (LSTM) network is developed using a dataset from Garson Mine, near Sudbury, Ontario, Canada, to predict the stress state in a FLAC3D model; this is a novel method proposed to semi-automate the calibration of finite-difference models of high-stress environments. A workflow for optimizing the hyperparameters of the LSTM network is proposed. The LSTM network performs better when predicting the three principal stresses than when predicting the six-component stress tensor, with corrected Akaike Information Criterion (AICc) values of -59.62 and -45.50, respectively. General recommendations are made for machine learning algorithm development in practical rock engineering problems, in terms of how to format and pre-process inputs, select architectures, tune hyperparameters, and determine engineering verification metrics. Recommendations are also made to demonstrate how algorithms can be rendered interpretable with tools that already exist in the field of machine learning.
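    Of the three IVS methods, Input Omission lends itself to a simple description: each input channel is replaced with a neutral value in turn, and the drop in predictive performance indicates how much the model relies on that channel. The sketch below illustrates that idea generically; the model interface (model.predict), the channel axis, and the accuracy metric are assumptions, not the thesis code.

        # Illustrative Input Omission analysis for a trained CNN: neutralize one
        # input channel at a time and record the drop in accuracy relative to the
        # full-input baseline. Assumes a Keras-style model.predict returning class
        # probabilities and integer labels y; both are placeholder assumptions.
        import numpy as np

        def input_omission_importance(model, X, y, channel_axis=-1, fill_value=0.0):
            def accuracy(inputs):
                preds = np.argmax(model.predict(inputs), axis=-1)
                return float(np.mean(preds == y))

            baseline = accuracy(X)
            importances = {}
            for ch in range(X.shape[channel_axis]):
                X_omit = X.copy()
                # Zero out a single input channel (view-based in-place assignment).
                np.moveaxis(X_omit, channel_axis, 0)[ch] = fill_value
                importances[ch] = baseline - accuracy(X_omit)
            return baseline, importances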

    Machine learning for the subsurface characterization at core, well, and reservoir scales

    The development of machine learning techniques and the digitization of subsurface geophysical/petrophysical measurements provide a new opportunity for industries focused on the exploration and extraction of subsurface earth resources, such as oil, gas, coal, geothermal energy, mining, and sequestration. With more data and more computational power, the traditional methods for subsurface characterization and engineering adopted by these industries can be automated and improved. New phenomena can be discovered, and new understanding can be acquired from the analysis of big data. The studies conducted in this dissertation explore the possibility of applying machine learning to improve the characterization of geological materials and geomaterials. Accurate characterization of subsurface hydrocarbon reservoirs is essential for economical oil and gas reservoir development. The characterization of reservoir formations requires the integrated interpretation of data from different sources. Large-scale seismic measurements, intermediate-scale well logging measurements, and small-scale core sample measurements help engineers understand the characteristics of hydrocarbon reservoirs. Seismic data acquisition is expensive, and core samples are sparse and of limited volume. Consequently, well log acquisition provides essential information that improves seismic analysis and core analysis. However, well logging data may be missing due to financial or operational challenges, or may be contaminated by the complex downhole environment. At the near-wellbore scale, I address this data constraint in reservoir characterization by applying machine learning models to generate synthetic sonic traveltime and NMR logs, which are crucial for geomechanical and pore-scale characterization, respectively. At the core scale, I address fracture characterization by processing multipoint sonic wave propagation measurements with machine learning to characterize the dispersion, orientation, and distribution of cracks embedded in a material. At the reservoir scale, I utilize reinforcement learning models to achieve automatic history matching, using a fast-marching-based reservoir simulator to estimate the reservoir permeability that controls the pressure transient response of the well. The application of machine learning provides new insights into traditional subsurface characterization techniques. First, by applying shallow and deep machine learning models, sonic logs and NMR T2 logs can be derived from other easy-to-acquire well logs with high accuracy. Second, the development of the sonic wave propagation simulator enables the characterization of crack-bearing materials from simple wavefront arrival times. Third, the combination of reinforcement learning algorithms and an encapsulated reservoir simulator provides a possible solution for automatic history matching.
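    The near-wellbore task — synthesizing a hard-to-acquire log such as sonic traveltime from common well logs — is, at its core, a supervised regression problem. Below is a minimal sketch under assumed inputs; the file name, log mnemonics, and model family are illustrative placeholders, not the dissertation's actual pipeline.

        # Minimal sketch of synthetic-log generation: regress a hard-to-acquire log
        # (here, compressional sonic traveltime "DTC") on easy-to-acquire logs.
        # File name, feature names, and model family are illustrative assumptions.
        import pandas as pd
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split

        logs = pd.read_csv("well_logs.csv")        # hypothetical per-depth log table
        inputs = ["GR", "RHOB", "NPHI", "RT"]      # gamma ray, density, neutron, resistivity
        target = "DTC"                             # sonic traveltime log to synthesize

        X_train, X_test, y_train, y_test = train_test_split(
            logs[inputs], logs[target], test_size=0.25, random_state=0)

        model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
        print("R^2 on held-out depths:", r2_score(y_test, model.predict(X_test)))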

    When in doubt ask the crowd : leveraging collective intelligence for improving event detection and machine learning

    [no abstract]

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging at the interface between technologies for effective visual features and the study of the human-brain cognition process. Effective visual features are made possible by rapid developments in sensor equipment, novel filter designs, and viable information processing architectures, while a better understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book collects representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition.

    Speech Recognition

    Chapters in the first part of the book cover all the essential speech processing techniques for building robust, automatic speech recognition systems: the representation of speech signals and the methods for speech-feature extraction, acoustic and language modeling, efficient algorithms for searching the hypothesis space, and multimodal approaches to speech recognition. The last part of the book is devoted to other speech processing applications that can use the information from automatic speech recognition for speaker identification and tracking, for prosody modeling in emotion-detection systems, and in other speech processing applications that are able to operate in real-world environments, such as mobile communication services and smart homes.
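    As a concrete example of the speech-feature extraction step mentioned in the first part, mel-frequency cepstral coefficients (MFCCs) — one standard front-end representation for recognition pipelines — can be computed with librosa. The audio file name and sampling rate below are placeholders.

        # Illustrative speech-feature extraction: MFCCs from a single utterance.
        # The file name is a placeholder for any mono speech recording.
        import librosa

        signal, sr = librosa.load("utterance.wav", sr=16000)     # mono, 16 kHz
        mfccs = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)  # shape: (13, n_frames)
        print(mfccs.shape)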