When Does Subagging Work?

Abstract

We study the effectiveness of subagging, or subsample aggregating, on regression trees, a popular non-parametric method in machine learning. First, we give sufficient conditions for pointwise consistency of trees. We formalize that (i) the bias depends on the diameter of cells, hence trees with few splits tend to be biased, and (ii) the variance depends on the number of observations in cells, hence trees with many splits tend to have large variance. While these statements for bias and variance are known to hold globally in the covariate space, we show that, under some constraints, they are also true locally. Second, we compare the performance of subagging to that of trees across different numbers of splits. We find that (1) for any given number of splits, subagging improves upon a single tree, and (2) this improvement is larger for many splits than it is for few splits. However, (3) a single tree grown at optimal size can outperform subagging if the size of its individual trees is not optimally chosen. This last result goes against the common practice of growing large randomized trees to eliminate bias and then averaging to reduce variance.
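As a rough illustration of the procedure being compared (not the paper's own experimental setup), the sketch below subagges regression trees on synthetic data: it fits trees on subsamples drawn without replacement and averages their predictions, then compares against a single tree of the same size. The data-generating process, `subsample_frac`, and the use of `max_leaf_nodes` as a proxy for the number of splits are illustrative assumptions.

```python
# Minimal subagging (subsample aggregating) sketch for regression trees.
# Assumptions: toy data, subsample_frac=0.5, max_leaf_nodes as a stand-in
# for the number of splits; none of these come from the paper.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy regression data: y = sin(4x) + noise
n = 500
X = rng.uniform(0, 1, size=(n, 1))
y = np.sin(4 * X[:, 0]) + rng.normal(scale=0.3, size=n)

def subagging_predict(X_train, y_train, X_test,
                      n_estimators=100, subsample_frac=0.5, max_leaf_nodes=8):
    """Average predictions of trees fit on subsamples drawn without replacement."""
    n_train = len(y_train)
    m = int(subsample_frac * n_train)
    preds = np.zeros((n_estimators, len(X_test)))
    for b in range(n_estimators):
        idx = rng.choice(n_train, size=m, replace=False)  # subsample, not bootstrap
        tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes)
        tree.fit(X_train[idx], y_train[idx])
        preds[b] = tree.predict(X_test)
    return preds.mean(axis=0)

X_test = np.linspace(0, 1, 200).reshape(-1, 1)
y_true = np.sin(4 * X_test[:, 0])

# Compare a single tree to subagging at the same tree size.
single = DecisionTreeRegressor(max_leaf_nodes=8).fit(X, y).predict(X_test)
subagged = subagging_predict(X, y, X_test, max_leaf_nodes=8)
print("single tree MSE: %.4f" % np.mean((single - y_true) ** 2))
print("subagging MSE:   %.4f" % np.mean((subagged - y_true) ** 2))
```

Varying `max_leaf_nodes` in this sketch is one way to probe the abstract's comparison across numbers of splits; the paper's actual analysis and experiments should be consulted for the precise setup.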
