
    Gaussian Process Regression for Prediction of Sulfate Content in Lakes of China

    In recent years, environmental pollution, and water pollution in particular, has become increasingly serious. In this study, Gaussian process regression was used to build a model that predicts the sulfate content of lakes from several water quality variables. Sulfate content and other water quality data from 100 stations operated at lakes along the middle and lower reaches of the Yangtze River were used to develop four Gaussian process regression models. The selected water quality variables, consisting of water temperature, transparency, pH, dissolved oxygen, conductivity, chlorophyll, total phosphorus, total nitrogen and ammonia nitrogen, served as model inputs. The experimental results showed that the Gaussian process regression model using an exponential kernel had the smallest prediction error: its mean absolute error (MAE) of 5.0464 and root mean squared error (RMSE) of 7.269 were smaller than those of the other three Gaussian process regression models. In the same experiments, this model also had a smaller error than linear regression, decision trees, support vector regression, boosted trees, bagged trees and other models, making it more suitable for predicting the sulfate content in lakes. The method proposed in this paper can effectively predict the sulfate content in water, providing a new kind of auxiliary method for water quality detection.
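    The following is a minimal, hypothetical sketch of the kind of model this abstract describes, assuming a scikit-learn implementation: the exponential kernel corresponds to a Matérn kernel with nu=0.5, and the data are synthetic placeholders rather than the actual lake measurements.

        # Hedged sketch: Gaussian process regression with an exponential kernel
        # (Matern, nu=0.5) on synthetic stand-ins for the nine water-quality inputs.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern, WhiteKernel
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error, mean_squared_error

        rng = np.random.default_rng(0)
        n_stations, n_features = 100, 9   # temperature, transparency, pH, ...
        X = rng.normal(size=(n_stations, n_features))
        y = X @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_stations)

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        kernel = Matern(nu=0.5) + WhiteKernel()   # nu=0.5 gives the exponential kernel
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
        gpr.fit(X_train, y_train)

        pred = gpr.predict(X_test)
        print("MAE :", mean_absolute_error(y_test, pred))
        print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))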

    Neural network ensembles: Evaluation of aggregation algorithms

    Ensembles of artificial neural networks show improved generalization capabilities that outperform those of single networks. However, for aggregation to be effective, the individual networks must be as accurate and diverse as possible. An important problem is, then, how to tune the aggregate members in order to achieve an optimal compromise between these two conflicting conditions. We present here an extensive evaluation of several algorithms for ensemble construction, including new proposals, and compare them with standard methods from the literature. We also discuss a potential problem with sequential aggregation algorithms: the infrequent but damaging selection, through their heuristics, of particularly bad ensemble members. We introduce modified algorithms that cope with this problem by allowing individual weighting of aggregate members. Our algorithms and their weighted modifications compare favorably against other methods in the literature, producing an appreciable improvement in performance on most of the standard statistical databases used as benchmarks. Comment: 35 pages, 2 figures, in press, AI Journal
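    As a rough illustration of the weighted-aggregation idea, the sketch below trains a few networks on bootstrap resamples and weights each member by its inverse validation error before averaging; the weighting heuristic and the member models are illustrative assumptions, not the specific algorithms evaluated in the paper.

        # Hedged sketch: weighted averaging of neural-network ensemble members.
        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import mean_squared_error

        X, y = make_regression(n_samples=400, n_features=10, noise=5.0, random_state=0)
        X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

        rng = np.random.default_rng(0)
        members, weights = [], []
        for seed in range(5):
            idx = rng.integers(0, len(X_train), len(X_train))      # bootstrap resample
            net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed)
            net.fit(X_train[idx], y_train[idx])
            members.append(net)
            weights.append(1.0 / mean_squared_error(y_val, net.predict(X_val)))

        weights = np.array(weights) / np.sum(weights)              # individual member weights
        ensemble = sum(w * m.predict(X_val) for w, m in zip(weights, members))
        print("Weighted-ensemble MSE:", mean_squared_error(y_val, ensemble))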

    A Taxonomy of Big Data for Optimal Predictive Machine Learning and Data Mining

    Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category it falls into within the bigness taxonomy. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication and Sequentialization. It is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets. Comment: 18 pages, 2 figures, 3 tables
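    The sketch below strings together a few of the tools named in the abstract (Imputation, Standardization, Projection/Reduction, Regularization/Penalization) for a synthetic large p, small n problem; the particular estimators and settings are illustrative choices, not recommendations taken from the paper.

        # Hedged sketch: a preprocessing + penalized-regression pipeline for a
        # synthetic "large p, small n" data set.
        import numpy as np
        from sklearn.pipeline import Pipeline
        from sklearn.impute import SimpleImputer
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.linear_model import Lasso
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n, p = 200, 500                              # large p, small n
        X = rng.normal(size=(n, p))
        y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.1, size=n)
        X[rng.random(X.shape) < 0.05] = np.nan       # simulate missing entries

        model = Pipeline([
            ("impute", SimpleImputer(strategy="median")),   # Imputation
            ("scale", StandardScaler()),                    # Standardization
            ("project", PCA(n_components=50)),              # Projection / Reduction
            ("penalize", Lasso(alpha=0.1)),                 # Regularization / Penalization
        ])
        print("5-fold CV R^2:", cross_val_score(model, X, y, cv=5).mean())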

    A scale-space approach with wavelets to singularity estimation

    This paper is concerned with the problem of determining the typical features of a curve when it is observed with noise. It has been shown that one can characterize the Lipschitz singularities of a signal by following the propagation across scales of the modulus maxima of its continuous wavelet transform. A nonparametric approach, based on appropriate thresholding of the empirical wavelet coefficients, is proposed to estimate the wavelet maxima of a signal observed with noise at various scales. In order to identify the singularities of the unknown signal, we introduce a new tool, the "structural intensity", that computes the "density" of the locations of the modulus maxima of a wavelet representation along various scales. This approach is shown to be an effective technique for detecting the significant singularities of a signal corrupted by noise and for removing spurious estimates. The asymptotic properties of the resulting estimators are studied and illustrated by simulations. An application to a real data set is also proposed.
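    A crude, hypothetical sketch of following modulus maxima across scales is given below, using PyWavelets' continuous wavelet transform with a Mexican-hat wavelet. The paper's thresholding of the empirical wavelet coefficients and its structural-intensity estimator are not reproduced; only the general idea of locating maxima that persist across scales is shown.

        # Hedged sketch: modulus maxima of a CWT across scales for a signal
        # with a jump singularity, plus a crude persistence count.
        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 1024)
        signal = np.where(t < 0.5, t, t + 0.3) + 0.02 * rng.normal(size=t.size)

        scales = np.arange(1, 64)
        coeffs, _ = pywt.cwt(signal, scales, "mexh")   # shape: (n_scales, n_samples)
        modulus = np.abs(coeffs)

        # Local maxima of |W(s, t)| along time, at each scale.
        maxima = (modulus[:, 1:-1] > modulus[:, :-2]) & (modulus[:, 1:-1] > modulus[:, 2:])

        # Count how often each location is a modulus maximum across scales;
        # persistent peaks indicate candidate singularities.
        intensity = maxima.sum(axis=0)
        print("Most persistent maximum near index:", np.argmax(intensity) + 1)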

    COMET: A Recipe for Learning and Using Large Ensembles on Massive Data

    COMET is a single-pass MapReduce algorithm for learning on large-scale data. It builds multiple random forest ensembles on distributed blocks of data and merges them into a mega-ensemble. This approach is appropriate when learning from massive-scale data that is too large to fit on a single machine. To get the best accuracy, IVoting should be used instead of bagging to generate the training subset for each decision tree in the random forest. Experiments with two large datasets (5GB and 50GB compressed) show that COMET compares favorably, in both accuracy and training time, to learning on a subsample of the data with a serial algorithm. Finally, we propose a new Gaussian approach for lazy ensemble evaluation that dynamically decides how many ensemble members to evaluate per data point; this can reduce evaluation cost by 100X or more.
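    A minimal sketch of the block-wise ensemble idea follows: independent random forests stand in for the distributed MapReduce workers, one per data block, and their class-probability estimates are averaged into a single mega-ensemble. IVoting, the Gaussian lazy-evaluation rule and the MapReduce machinery are not reproduced here; this is an illustrative approximation, not the COMET implementation.

        # Hedged sketch: forests trained on disjoint data blocks, merged by
        # averaging their probability estimates.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        X, y = make_classification(n_samples=6000, n_features=20, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # "Map" step: one forest per block of the training data.
        forests = []
        for i, idx in enumerate(np.array_split(np.arange(len(X_train)), 4)):
            rf = RandomForestClassifier(n_estimators=50, random_state=i)
            forests.append(rf.fit(X_train[idx], y_train[idx]))

        # "Reduce" step: merge by averaging class-probability estimates.
        avg_proba = np.mean([f.predict_proba(X_test) for f in forests], axis=0)
        print("Mega-ensemble accuracy:",
              accuracy_score(y_test, avg_proba.argmax(axis=1)))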