
    Sequential Design for Ranking Response Surfaces

    We propose and analyze sequential design methods for the problem of ranking several response surfaces. Namely, given $L \ge 2$ response surfaces over a continuous input space $\mathcal{X}$, the aim is to efficiently find the index of the minimal response across the entire $\mathcal{X}$. The response surfaces are not known and have to be noisily sampled one at a time. This setting is motivated by stochastic control applications and requires joint experimental design in both the space and response-index dimensions. To generate sequential design heuristics we investigate stepwise uncertainty reduction approaches, as well as sampling based on posterior classification complexity. We also make connections between our continuous-input formulation and the discrete framework of pure regret in multi-armed bandits. To model the response surfaces we utilize kriging surrogates. Several numerical examples using both synthetic data and an epidemics control problem are provided to illustrate our approach and the efficacy of the respective adaptive designs. Comment: 26 pages, 7 figures (updated several sections and figures)
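
    As a rough illustration of this setting (not the paper's algorithm), the sketch below fits independent Gaussian-process (kriging) surrogates to two synthetic 1-D response surfaces and grows the design sequentially, sampling the surface and input location where the posterior ranking looks most ambiguous; the surfaces, kernel, and acquisition rule are all illustrative assumptions.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        surfaces = [lambda x: np.sin(3 * x), lambda x: 0.3 + 0.5 * np.cos(3 * x)]  # L = 2 unknown responses
        noise_sd = 0.1
        grid = np.linspace(0.0, 1.0, 101).reshape(-1, 1)  # candidate inputs covering X

        def sample(i, x):
            # one noisy observation of surface i at input x
            return float(surfaces[i](x) + rng.normal(0.0, noise_sd))

        designs = [[(0.5, sample(i, 0.5))] for i in range(2)]  # one seed point per surface

        def fit_gp(pairs):
            X = np.array([[x] for x, _ in pairs])
            y = np.array([y for _, y in pairs])
            kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=noise_sd ** 2)
            return GaussianProcessRegressor(kernel=kernel).fit(X, y)

        for _ in range(30):
            gps = [fit_gp(d) for d in designs]
            preds = [gp.predict(grid, return_std=True) for gp in gps]
            mus = np.vstack([m for m, _ in preds])
            sds = np.vstack([s for _, s in preds])
            # sample where the ranking is most ambiguous: smallest gap between the
            # posterior means relative to their combined posterior spread
            gap = np.abs(mus[0] - mus[1]) / np.sqrt((sds ** 2).sum(axis=0) + 1e-12)
            j = int(np.argmin(gap))        # most ambiguous input location
            i = int(np.argmax(sds[:, j]))  # least-explored surface at that location
            designs[i].append((float(grid[j, 0]), sample(i, grid[j, 0])))

        gps = [fit_gp(d) for d in designs]
        mus = np.vstack([gp.predict(grid) for gp in gps])
        print(np.argmin(mus, axis=0))  # estimated index of the minimal response over the grid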

    Uncertain Trees: Dealing with Uncertain Inputs in Regression Trees

    Tree-based ensemble methods, such as Random Forests and Gradient Boosted Trees, have been successfully used for regression in many applications and research studies. Furthermore, these methods have been extended in order to deal with uncertainty in the output variable, using for example a quantile loss in Random Forests (Meinshausen, 2006). To the best of our knowledge, no extension has yet been provided for dealing with uncertainties in the input variables, even though such uncertainties are common in practical situations. We propose such an extension here by showing how standard regression trees optimizing a quadratic loss can be adapted and learned while taking into account the uncertainties in the inputs. By doing so, one no longer assumes that an observation lies in a single region of the regression tree, but rather that it belongs to each region with a certain probability. Experiments conducted on several data sets illustrate the good behavior of the proposed extension. Comment: 9 pages
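
    A minimal sketch of the soft-membership idea at prediction time, assuming independent Gaussian noise on each input and a tree already fitted with scikit-learn; this is an illustrative approximation, not the learning procedure proposed in the paper.

        import numpy as np
        from scipy.stats import norm
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, (200, 2))
        y = np.sin(4 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0.0, 0.1, 200)
        t = DecisionTreeRegressor(max_depth=4).fit(X, y).tree_

        def soft_predict(mean, sd, node=0, weight=1.0):
            # weighted average of leaf values: the observation reaches each leaf
            # with a probability instead of following a single path
            if t.children_left[node] == -1:  # leaf
                return weight * t.value[node][0, 0]
            f, thr = t.feature[node], t.threshold[node]
            p_left = norm.cdf(thr, loc=mean[f], scale=sd[f])  # P(x_f <= threshold)
            return (soft_predict(mean, sd, t.children_left[node], weight * p_left)
                    + soft_predict(mean, sd, t.children_right[node], weight * (1.0 - p_left)))

        print(soft_predict(mean=np.array([0.4, 0.7]), sd=np.array([0.05, 0.05])))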

    Robust Classification for Imprecise Environments

    In real-world environments it usually is difficult to specify target operating conditions precisely, for example, target misclassification costs. This uncertainty makes building robust classification systems problematic. We show that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions. In some cases, the performance of the hybrid actually can surpass that of the best known classifier. This robust performance extends across a wide variety of comparison frameworks, including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and workforce utilization. The hybrid also is efficient to build, to store, and to update. The hybrid is based on a method for the comparison of classifier performance that is robust to imprecise class distributions and misclassification costs. The ROC convex hull (ROCCH) method combines techniques from ROC analysis, decision analysis and computational geometry, and adapts them to the particulars of analyzing learned classifiers. The method is efficient and incremental, minimizes the management of classifier performance data, and allows for clear visual comparisons and sensitivity analyses. Finally, we point to empirical evidence that a robust hybrid classifier indeed is needed for many real-world problems. Comment: 24 pages, 12 figures. To be published in Machine Learning Journal. For related papers, see http://www.hpl.hp.com/personal/Tom_Fawcett/ROCCH
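
    A small sketch of the ROCCH construction on made-up operating points (false-positive rate, true-positive rate): only the classifiers on the upper convex hull between (0, 0) and (1, 1) are potentially optimal for some class distribution and cost ratio.

        points = {  # classifier -> (false-positive rate, true-positive rate), illustrative values
            "A": (0.10, 0.55), "B": (0.25, 0.80), "C": (0.40, 0.75),
            "D": (0.60, 0.92), "E": (0.30, 0.60),
        }

        def rocch(pts):
            # upper convex hull from (0,0) to (1,1), monotone-chain style
            pts = sorted(set(pts) | {(0.0, 0.0), (1.0, 1.0)})
            hull = []
            for p in pts:
                while len(hull) >= 2:
                    (x1, y1), (x2, y2) = hull[-2], hull[-1]
                    # drop the last vertex if it lies on or below the chord hull[-2] -> p
                    if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                        hull.pop()
                    else:
                        break
                hull.append(p)
            return hull

        hull = rocch(points.values())
        print("ROCCH vertices:", hull)
        print("non-dominated classifiers:", [n for n, p in points.items() if p in hull])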

    Gamma-Hadron Separation in Very-High-Energy gamma-ray astronomy using a multivariate analysis method

    In recent years, Imaging Atmospheric Cherenkov Telescopes (IACTs) have discovered a rich diversity of very high energy (VHE, > 100 GeV) gamma-ray emitters in the sky. These instruments image Cherenkov light emitted by gamma-ray induced particle cascades in the atmosphere. Background from the much more numerous cosmic-ray cascades is efficiently reduced by considering the shape of the shower images, and the capability to reduce this background is one of the key aspects that determine the sensitivity of an IACT. In this work we apply a tree classification method to data from the High Energy Stereoscopic System (H.E.S.S.). We show the stability of the method and its capability to yield improved background reduction compared to the H.E.S.S. Standard Analysis. Comment: 10 pages, 9 figures, accepted for publication in Astroparticle Physics
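
    As a generic illustration of tree-based gamma-hadron separation (not the H.E.S.S. analysis itself), the sketch below trains a gradient-boosted tree ensemble on synthetic stand-ins for image shape parameters; the feature distributions are invented for the example.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 5000
        # gamma-like showers: narrower, more regular images; hadron-like: broader
        width = np.concatenate([rng.normal(0.9, 0.15, n), rng.normal(1.3, 0.35, n)])
        length = np.concatenate([rng.normal(1.0, 0.20, n), rng.normal(1.5, 0.45, n)])
        label = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = gamma, 0 = hadron

        X = np.column_stack([width, length])
        X_tr, X_te, y_tr, y_te = train_test_split(X, label, test_size=0.3, random_state=0)
        clf = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
        print("separation AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))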

    Parameter Tuning Using Gaussian Processes

    Most machine learning algorithms require us to set up their parameter values before applying these algorithms to solve problems. Appropriate parameter settings will bring good performance, while inappropriate parameter settings generally result in poor modelling. Hence, it is necessary to acquire the “best” parameter values for a particular algorithm before building the model. The “best” model not only reflects the “real” function and is well fitted to existing points, but also gives good performance when making predictions for new points with previously unseen values. A number of methods have been proposed to optimize parameter values. The basic idea of all such methods is a trial-and-error process, whereas the work presented in this thesis employs Gaussian process (GP) regression to optimize the parameter values of a given machine learning algorithm. In this thesis, we consider the optimization of only two-parameter learning algorithms. All the possible parameter values are specified in a 2-dimensional grid in this work. To avoid brute-force search, Gaussian Process Optimization (GPO) makes use of “expected improvement” to pick useful points rather than validating every point of the grid step by step. The point with the highest expected improvement is evaluated using cross-validation and the resulting data point is added to the training set for the Gaussian process model. This process is repeated until a stopping criterion is met. The final model is built using the learning algorithm based on the best parameter values identified in this process. In order to test the effectiveness of this optimization method on regression and classification problems, we use it to optimize parameters of some well-known machine learning algorithms, such as decision tree learning, support vector machines and boosting with trees. Through the analysis of experimental results obtained on datasets from the UCI repository, we find that the GPO algorithm yields competitive performance compared with a brute-force approach, while exhibiting a distinct advantage in terms of training time and number of cross-validation runs. Overall, GPO is a promising approach for the optimization of parameter values in machine learning.
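
    A compact sketch of the grid-based GPO loop described above, assuming a two-parameter SVM (C and gamma) as the learner and scikit-learn's GP regressor; the kernel, grid, budget, and dataset are illustrative choices, not those used in the thesis.

        import numpy as np
        from scipy.stats import norm
        from sklearn.datasets import load_breast_cancer
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)
        log_C, log_gamma = np.meshgrid(np.linspace(-2, 3, 11), np.linspace(-5, 0, 11))
        grid = np.column_stack([log_C.ravel(), log_gamma.ravel()])  # 2-D parameter grid

        def cv_error(p):
            # quantity to minimise: cross-validated error for one grid point
            clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
            return 1.0 - cross_val_score(clf, X, y, cv=5).mean()

        tried = [0, len(grid) - 1]  # two initial evaluations
        scores = [cv_error(grid[i]) for i in tried]

        for _ in range(15):  # evaluation budget
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
            gp.fit(grid[tried], scores)
            mu, sd = gp.predict(grid, return_std=True)
            best = min(scores)
            z = (best - mu) / np.maximum(sd, 1e-9)
            ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
            ei[tried] = -np.inf  # never re-pick an evaluated point
            nxt = int(np.argmax(ei))
            tried.append(nxt)
            scores.append(cv_error(grid[nxt]))

        best_params = grid[tried[int(np.argmin(scores))]]
        print("best log10(C), log10(gamma):", best_params, "CV error:", min(scores))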

    Forecasting creditworthiness in retail banking: a comparison of cascade correlation neural networks, CART and logistic regression scoring models

    Work on modelling credit scoring systems, including their relevance to forecasting and decision making in the financial sector, has concentrated on developed countries, whilst developing countries have been largely neglected. The focus of our investigation is the Cameroonian commercial banking sector, with implications for fellow members of the Banque des Etats de L’Afrique Centrale (BEAC) family, which apply the same system. We investigate their current approaches to assessing personal loans and construct appropriate scoring models. Three statistical scoring techniques are applied, namely Logistic Regression (LR), Classification and Regression Tree (CART) and Cascade Correlation Neural Network (CCNN). To compare the scoring models’ performances we use Average Correct Classification (ACC) rates, error rates, the ROC curve and the GINI coefficient as evaluation criteria. The results demonstrate that a reduction in forecast default cases from 15.69% under the current system to 3.34% under the best scoring model, namely CART, can be achieved. The predictive capabilities of all three models are rated at least very good using the GINI coefficient, and those of CART and CCNN are rated excellent using the ROC curve. It should be emphasised that in terms of prediction rate, CCNN is superior to the other techniques investigated in this paper. Also, a sensitivity analysis identifies borrower's account functioning, previous occupation, guarantees, car ownership, and loan purpose as key variables in the forecasting and decision-making process; these variables are at the heart of overall credit policy.
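
    For illustration only, the sketch below fits logistic regression and a CART-style tree to synthetic default data and reports the correct-classification rate, ROC AUC, and GINI coefficient (GINI = 2*AUC - 1); the data are made up, and the cascade-correlation network is omitted because it has no standard scikit-learn implementation.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score, roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        # synthetic, imbalanced "default vs. non-default" data
        X, y = make_classification(n_samples=3000, n_features=8, weights=[0.85], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        for name, model in [("LR", LogisticRegression(max_iter=1000)),
                            ("CART", DecisionTreeClassifier(max_depth=5))]:
            model.fit(X_tr, y_tr)
            prob = model.predict_proba(X_te)[:, 1]
            acc = accuracy_score(y_te, model.predict(X_te))
            auc = roc_auc_score(y_te, prob)
            print(f"{name}: ACC={acc:.3f}  AUC={auc:.3f}  GINI={2 * auc - 1:.3f}")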