13 research outputs found

    PCTBagging: From inner ensembles to ensembles. A trade-off between discriminating capacity and interpretability

    The use of decision trees in ensemble classifiers considerably improves their discriminating capacity. However, the resulting classifiers are no longer interpretable, even though comprehensibility is a desired trait of decision trees. Consolidation (the consolidated tree construction algorithm, CTC) was introduced to improve the discriminating capacity of decision trees: a set of samples is used to build the consolidated tree without sacrificing transparency. In this work, PCTBagging is presented as a hybrid approach between bagging and a consolidated tree, such that part of the comprehensibility of the consolidated tree is maintained while the discriminating capacity is also improved. The consolidated tree is first developed up to a certain point, and then standard bagging is performed for each sample. The part of the consolidated tree to be initially developed is configured by setting a consolidation percentage. In this work, 11 different consolidation percentages are considered for PCTBagging to effectively analyse the trade-off between comprehensibility and discriminating capacity. The results of PCTBagging are compared to those of bagging, CTC and C4.5, which serves as the base for all the other algorithms. PCTBagging with a low consolidation percentage achieves a discriminating capacity similar to that of bagging while maintaining part of the interpretable structure of the consolidated tree. PCTBagging with a consolidation percentage of 100% offers the same comprehensibility as CTC, but achieves a significantly greater discriminating capacity.
    This work was funded by the Department of Education, Universities and Research of the Basque Government (ADIAN, IT980-16) and by the Ministry of Economy and Competitiveness of the Spanish Government and the European Regional Development Fund, ERDF (PhysComp, TIN2017-85409-P). We would also like to thank our former undergraduate student Ander Otsoa de Alda, who participated in the implementation of the PCTBagging algorithm for the WEKA platform.

    Machine Learning Classifiers Selection in Network Intrusion Detection

    The objective of this work is to select machine learning classifiers for network intrusion detection system (NIDS) problems. The selection criterion is based on hyper-parameter variation, so that the different model configurations can be evaluated and compared consistently. The models were trained and tested by cross-validation, sharing the same dataset partitions. The hyper-parameter search was performed in two ways, exhaustive and randomized, depending on the structure of the classifier, to obtain feasible results. The performance results were tested for significance with both frequentist and Bayesian significance tests. The Bayesian posterior distribution was further analysed to extract information in support of the classifier comparison. The selection of a machine learning classifier is not trivial, and it depends heavily on the dataset and the problem of interest. In this experiment, seven classes of machine learning classifiers were initially analysed, from which only three were selected for the final cross-validated comparison: decision tree, random forest and multilayer perceptron classifiers. This article explores a systematic and rigorous approach to assessing and selecting NIDS classifiers that goes beyond comparing performance scores.
    Sociedad Argentina de Informática e Investigación Operativa
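The two search strategies the abstract contrasts (exhaustive vs. randomized), with every candidate evaluated on the same cross-validation partitions, can be sketched as follows. The dataset, estimator and parameter grids are illustrative placeholders, not the paper's NIDS data or configuration:

```python
# Hyper-parameter search two ways -- exhaustive (GridSearchCV) and randomized
# (RandomizedSearchCV) -- with a single fixed StratifiedKFold object so that
# all model configurations share identical dataset partitions.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (GridSearchCV, RandomizedSearchCV,
                                     StratifiedKFold)

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Shared partitions: the same folds are reused by both searches.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [50, 100], "max_depth": [5, None]},
    cv=cv,
)
rand = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": randint(50, 200), "max_depth": [3, 5, None]},
    n_iter=8, cv=cv, random_state=0,
)

grid.fit(X, y)
rand.fit(X, y)
print("exhaustive best:", grid.best_score_, grid.best_params_)
print("randomized best:", rand.best_score_, rand.best_params_)
```

Fixing the `cv` splitter is what makes the scores directly comparable across models, which is the precondition for the significance testing the abstract then applies.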

    Statistical Models for the Analysis of Optimization Algorithms with Benchmark Functions

    Frequentist statistical methods, such as hypothesis testing, are standard practice in papers that provide benchmark comparisons. Unfortunately, these methods have often been misused, e.g., without testing the assumptions of the statistical test or without controlling for family-wise error in multiple group comparisons, among several other problems. Bayesian Data Analysis (BDA) addresses many of these shortcomings, but its use is not yet widespread in the analysis of empirical data in the evolutionary computing community. This paper provides three main contributions. First, we motivate the need for Bayesian data analysis and provide an overview of the topic. Second, we discuss the practical aspects of BDA that ensure our models are valid and the results transparent. Finally, we provide five statistical models that can be used to answer multiple research questions. The online appendix provides a step-by-step guide on how to perform the analysis of the models discussed in this paper, including the code for the statistical models, the data transformations, and the discussed tables and figures.
    Comment: In submission
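The frequentist-vs-Bayesian contrast the paper motivates can be made concrete with a toy comparison. Given win/loss counts of algorithm A over B across benchmark runs (the counts below are invented), a frequentist sign test yields only a p-value, while a simple conjugate Bayesian model yields a full posterior over the win probability — this sketch is far simpler than the paper's five models, but it shows the difference in what each analysis reports:

```python
# Frequentist sign test vs. a conjugate Beta-Binomial Bayesian analysis
# of the same win/loss data. Counts are illustrative, not from the paper.
from scipy import stats

wins, losses = 14, 6                       # A beat B on 14 of 20 benchmark runs

# Frequentist: exact two-sided binomial (sign) test against p = 0.5
p_value = stats.binomtest(wins, wins + losses, 0.5).pvalue

# Bayesian: uniform Beta(1, 1) prior -> Beta(1 + wins, 1 + losses) posterior
posterior = stats.beta(1 + wins, 1 + losses)
prob_a_better = 1 - posterior.cdf(0.5)     # P(win probability > 0.5 | data)
low, high = posterior.ppf([0.025, 0.975])  # 95% credible interval

print(f"p-value = {p_value:.3f}, P(A better) = {prob_a_better:.3f}")
print(f"95% credible interval for win rate: ({low:.2f}, {high:.2f})")
```

The posterior supports direct probability statements ("A is better with probability ~0.97") and interval estimates, whereas the p-value only quantifies surprise under the null — one of the interpretability arguments the paper develops in depth.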

    Constant optimization and feature standardization in multiobjective genetic programming

    This paper extends the numerical tuning of tree constants in genetic programming (GP) to the multiobjective domain. Using ten real-world benchmark regression datasets and employing Bayesian comparison procedures, we first consider the effects of feature standardization (without constant tuning) and conclude that standardization generally produces lower test errors, but, contrary to other recently published work, we find a much less clear trend for tree sizes. In addition, we consider the effects of constant tuning -- with and without feature standardization -- and observe that i) constant tuning invariably improves test error, and ii) it usually decreases tree size. Combined with standardization, constant tuning produces the best test error results; tree sizes, however, are increased. We also examine the effects of applying constant tuning only once, at the end of a conventional GP run, which turns out to be surprisingly promising. Finally, we consider the merits of using numerical procedures to tune tree constants and observe that for around half the datasets evolutionary search alone is superior, whereas for the remaining half parameter tuning is superior. We identify a number of open research questions that arise from this work.
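The core operation the abstract refers to — numerically tuning the constants of a fixed, already-evolved expression tree — can be sketched with a local optimizer. The expression `c0*x + c1*x**2` below stands in for an arbitrary GP tree, the data are synthetic, and the optimizer choice is an assumption; the point is only that tree structure stays fixed while its constants are fitted, after standardizing the feature as the paper discusses:

```python
# Numerical tuning of the constants of a fixed expression tree: the tree's
# structure is frozen and only its constant leaves are optimized against
# the training error. Expression, data and optimizer are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.5 * x + 0.7 * x**2 + rng.normal(scale=0.1, size=200)

x_std = (x - x.mean()) / x.std()          # feature standardization

def tree(c, x):
    """Fixed expression tree; only its constants c are tuned."""
    return c[0] * x + c[1] * x**2

def mse(c):
    return np.mean((tree(c, x_std) - y) ** 2)

result = minimize(mse, x0=[1.0, 1.0], method="Nelder-Mead")
print("tuned constants:", result.x, "MSE:", result.fun)
```

Running this once on the final tree of a GP run corresponds to the "tune only at the end" variant the paper finds surprisingly promising; running it inside the evolutionary loop corresponds to the full constant-tuning treatment.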

    A Comparative Assessment of Machine-Learning Techniques for Forest Degradation Caused by Selective Logging in an Amazon Region Using Multitemporal X-Band SAR Images.

    Abstract: The near-real-time detection of selective logging in tropical forests is essential to support actions for reducing CO2 emissions and for monitoring timber extraction from forest concessions in tropical regions. Current operating systems rely on optical data that are constrained by persistent cloud-cover conditions in tropical regions. Synthetic aperture radar data represent an alternative to this technical constraint. This study aimed to evaluate the performance of three machine learning algorithms applied to multitemporal pairs of COSMO-SkyMed images to detect timber exploitation in a forest concession located in the Jamari National Forest, Rondônia State, Brazilian Amazon. The studied algorithms included random forest (RF), AdaBoost (AB), and a multilayer perceptron artificial neural network (MLP-ANN). The geographical coordinates (latitude and longitude) of logged trees and the LiDAR point clouds before and after selective logging were used as ground truth. The best results were obtained with the MLP-ANN using 50 neurons in the hidden layer, the ReLU activation function, and the SGD weight optimizer, achieving 88% accuracy both on the image pair used to train the network (images acquired in June and October) and in the generalization test on a second dataset (images acquired in January and June). This study showed that X-band SAR images processed by applying machine learning techniques can be accurately used for detecting selective logging activities in the Brazilian Amazon.
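The best-performing configuration the abstract reports (an MLP with 50 hidden neurons, ReLU activation and an SGD optimizer) maps directly onto a standard classifier setup. As a sketch under the assumption of a scikit-learn implementation — the study's actual inputs are COSMO-SkyMed image-pair features, replaced here by a synthetic two-class dataset:

```python
# MLP configured as the abstract describes: one hidden layer of 50 neurons,
# ReLU activation, SGD weight optimizer. Data are synthetic stand-ins for
# the SAR image-pair features used in the study.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)     # scale features for SGD training
clf = MLPClassifier(hidden_layer_sizes=(50,), activation="relu",
                    solver="sgd", max_iter=500, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

acc = clf.score(scaler.transform(X_test), y_test)
print(f"held-out accuracy: {acc:.3f}")
```

The held-out split here plays the role of the study's generalization test on a second image pair: the network never sees those samples during training.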

    A Bayesian approach for comparing cross-validated algorithms on multiple data sets

    We present a Bayesian approach for making statistical inference about the accuracy (or any other score) of two competing algorithms which have been assessed via cross-validation on multiple data sets. The approach consists of two pieces. The first is a novel correlated Bayesian t test for the analysis of the cross-validation results on a single data set, which accounts for the correlation due to the overlapping training sets. The second piece merges the posterior probabilities computed by the correlated Bayesian t test on the different data sets to make inference on multiple data sets. It does so by adopting a Poisson-binomial model. The inferences on multiple data sets account for the different uncertainty of the cross-validation results on the different data sets; it is the first test able to achieve this goal. It is generally more powerful than the signed-rank test if ten runs of cross-validation are performed, as is generally recommended anyway.
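The single-data-set piece can be sketched compactly. Under the usual correction for cross-validation's overlapping training sets, the posterior of the mean score difference is a Student t distribution with n-1 degrees of freedom whose scale inflates the naive standard error by a correlation term rho = n_test / (n_train + n_test); for k-fold cross-validation this is 1/k. The fold-wise score differences below are invented, and this sketch covers only the per-data-set test, not the Poisson-binomial merging step:

```python
# Correlated Bayesian t test on one data set: posterior of the mean score
# difference is Student t(n-1) with a correlation-corrected scale.
# The fold differences are illustrative 10-fold CV results, not real data.
import numpy as np
from scipy import stats

diffs = np.array([0.02, 0.03, 0.01, 0.04, 0.02,
                  0.00, 0.03, 0.02, 0.01, 0.03])   # acc(A) - acc(B) per fold
n = len(diffs)
rho = 1.0 / n                # n_test / (n_train + n_test) for 10-fold CV

mean = diffs.mean()
# Naive scale sqrt(var/n) is inflated by the correlation term rho.
scale = np.sqrt((1.0 / n + rho) * diffs.var(ddof=1))
posterior = stats.t(df=n - 1, loc=mean, scale=scale)

prob_a_better = 1 - posterior.cdf(0.0)   # P(mean difference > 0 | data)
print(f"P(A better than B) = {prob_a_better:.3f}")
```

Without the rho term the posterior would be overconfident, because the folds' training sets overlap and their score differences are positively correlated — exactly the issue the correlated test is built to address.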