Optimization of the Regression Ensemble Size
Ensemble learning algorithms such as bagging often generate unnecessarily large models, which consume extra computational resources and may degrade the generalization ability. Pruning can potentially reduce ensemble size as well as improve performance; however, researchers have previously focused more on pruning classifiers than regressors. This is because, in general, ensemble pruning is based on two metrics: diversity and accuracy. Many diversity metrics are known for problems dealing with a finite set of classes defined by discrete labels. Therefore, most of the work on ensemble pruning focuses on such problems: classification, clustering, and feature selection. For the regression problem, it is much more difficult to introduce a diversity metric; in fact, the only such metric known to date is a correlation matrix based on regressor predictions. This study seeks to address this gap. First, we introduce a mathematical condition for checking whether a regression ensemble includes redundant estimators, i.e., estimators whose removal improves the ensemble performance. Building on this condition, we propose a new ambiguity-based pruning (AP) algorithm based on the error-ambiguity decomposition formulated for the regression problem. To assess the quality of AP, we compare it with two methods that directly minimize the error by sequentially including and excluding regressors, as well as with the state-of-the-art Ordered Aggregation algorithm. Experimental studies confirm that the proposed approach reduces the size of the regression ensemble while simultaneously improving its performance, and that it surpasses all compared methods.
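To make the error-ambiguity view concrete, the minimal Python sketch below prunes an equally weighted regression ensemble using the decomposition E_ens = mean(E_i) - mean(A_i), greedily removing the member whose individual error exceeds its ambiguity (squared deviation from the ensemble mean) by the largest margin, and accepting the removal only if the validation MSE improves. The greedy ordering, uniform weights, and stopping rule are illustrative assumptions for this sketch, not the authors' exact AP algorithm.

import numpy as np

def ambiguity_prune(member_preds, y_val, min_size=2):
    """member_preds: array of shape (n_members, n_val) with validation
    predictions of each ensemble member; y_val: validation targets.
    Returns the indices of the members kept after pruning."""
    keep = list(range(member_preds.shape[0]))

    def ens_mse(idx):
        # MSE of the equally weighted ensemble built from members in idx.
        return np.mean((member_preds[idx].mean(axis=0) - y_val) ** 2)

    best = ens_mse(keep)
    while len(keep) > min_size:
        f_ens = member_preds[keep].mean(axis=0)
        # Per-member error E_i and ambiguity A_i on the validation set.
        E = np.mean((member_preds[keep] - y_val) ** 2, axis=1)
        A = np.mean((member_preds[keep] - f_ens) ** 2, axis=1)
        # Candidate for removal: the member contributing the most error
        # relative to the diversity it adds (largest E_i - A_i).
        worst = keep[int(np.argmax(E - A))]
        trial = [i for i in keep if i != worst]
        if ens_mse(trial) < best:
            keep, best = trial, ens_mse(trial)
        else:
            break
    return keep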
Trimming Stability Selection increases variable selection robustness
Contamination can severely distort an estimator unless the estimation procedure is suitably robust. This is a well-known issue and has been addressed in Robust Statistics; however, the relationship between contamination and distorted variable selection has rarely been considered in the literature. As for variable selection, many methods for sparse model selection have been proposed, including Stability Selection, a meta-algorithm that aggregates the results of an underlying variable selection algorithm over resampled data in order to immunize against particular data configurations. We introduce the variable selection breakdown point, which quantifies the number of cases (resp. cells) that have to be contaminated in order for no relevant variable to be detected. We show that particular outlier configurations can completely mislead model selection and argue why even cell-wise robust methods cannot fix this problem. We combine the variable selection breakdown point with resampling, resulting in the Stability Selection breakdown point, which quantifies the robustness of Stability Selection. We propose a trimmed Stability Selection that aggregates only the models with the lowest in-sample losses so that, heuristically, models computed on heavily contaminated resamples are trimmed away. An extensive simulation study with non-robust regression and classification algorithms, as well as with Sparse Least Trimmed Squares, reveals both the potential of our approach to boost model selection robustness and the fragility of variable selection using non-robust algorithms, even for an extremely small cell-wise contamination rate.
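The Python sketch below illustrates the trimming idea on top of a plain Stability Selection loop, using the Lasso as a stand-in base selector. The subsample size, trimming fraction, regularization strength, and selection threshold are illustrative assumptions rather than the paper's settings.

import numpy as np
from sklearn.linear_model import Lasso

def trimmed_stability_selection(X, y, n_resamples=100, subsample=0.5,
                                trim_frac=0.25, alpha=0.05, threshold=0.6,
                                seed=0):
    """Trimmed Stability Selection sketch: fit a sparse selector on random
    subsamples, then aggregate supports only over the resamples with the
    lowest in-sample losses. Returns selected variable indices and the
    per-variable selection frequencies."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    m = int(subsample * n)
    runs = []
    for _ in range(n_resamples):
        idx = rng.choice(n, size=m, replace=False)
        est = Lasso(alpha=alpha).fit(X[idx], y[idx])
        in_sample_loss = np.mean((est.predict(X[idx]) - y[idx]) ** 2)
        runs.append((in_sample_loss, est.coef_ != 0))
    # Trim: keep only the resamples with the lowest in-sample loss, so that,
    # heuristically, models fitted on heavily contaminated subsamples are
    # discarded before aggregation.
    runs.sort(key=lambda r: r[0])
    kept = runs[: int(np.ceil((1.0 - trim_frac) * n_resamples))]
    selection_freq = np.mean([support for _, support in kept], axis=0)
    return np.where(selection_freq >= threshold)[0], selection_freq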