Decision Trees are some of the most popular machine learning models today due
to their out-of-the-box performance and interpretability. Decision Tree models
are often constructed greedily, in a top-down fashion, via heuristic splitting
criteria such as Gini impurity or entropy. However, trees constructed in this
manner are sensitive to minor fluctuations in training data and are prone to
overfitting. In contrast, Bayesian approaches to tree construction formulate
the selection process as a posterior inference problem; such approaches are
more stable and come with stronger theoretical guarantees. However, generating
Bayesian Decision Trees usually requires sampling from complex, multimodal
posterior distributions. Current Markov Chain Monte Carlo-based approaches for
sampling Bayesian Decision Trees are prone to mode collapse and long mixing
times, which make them impractical. In this paper, we propose a new criterion
for training Bayesian Decision Trees. Our criterion gives rise to BCART-PCFG,
which can efficiently sample decision trees from the posterior distribution
over trees given the data and find the maximum a posteriori (MAP) tree.
Learning the posterior and training the sampler can be done in time that is
polynomial in the dataset size. Once the posterior has been learned, trees can
be sampled efficiently, in time linear in the number of nodes. At the core of our
method is a reduction of sampling the posterior to sampling a derivation from a
probabilistic context-free grammar. We find that trees sampled via BCART-PCFG
achieve classification accuracy comparable to or better than that of greedily
constructed Decision Trees on several datasets. Additionally, the trees sampled
via BCART-PCFG are significantly smaller -- sometimes by as much as 20x.
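
To make the core reduction concrete, below is a minimal sketch of ancestral sampling from a PCFG: each nonterminal is expanded by drawing a production rule according to its probability, so drawing a full derivation (a tree) takes time linear in the number of nodes produced, matching the sampling cost stated above. The toy grammar, symbol names, and `sample_tree` function are illustrative assumptions, not the paper's actual grammar or implementation; in BCART-PCFG the rule probabilities would instead come from the posterior over trees learned from the data.

```python
import random

# Hypothetical toy PCFG (not the paper's grammar): each nonterminal maps to
# a list of (probability, right-hand side) production rules. Nonterminals
# stand in for internal split nodes; terminals for split predicates and
# leaf labels.
GRAMMAR = {
    "Node":  [(0.4, ["Split", "Node", "Node"]), (0.6, ["Leaf"])],
    "Split": [(0.5, ["x1<t"]), (0.5, ["x2<t"])],
    "Leaf":  [(0.5, ["class_0"]), (0.5, ["class_1"])],
}

def sample_tree(symbol="Node", rng=random):
    """Ancestral sampling: recursively expand `symbol` by drawing one rule
    per nonterminal; cost is linear in the size of the sampled derivation."""
    if symbol not in GRAMMAR:          # terminal symbol: return it as a leaf
        return symbol
    probs, rhss = zip(*GRAMMAR[symbol])
    rhs = rng.choices(rhss, weights=probs, k=1)[0]   # pick one production
    return (symbol, [sample_tree(s, rng) for s in rhs])

print(sample_tree())   # e.g. ('Node', [('Leaf', ['class_0'])])
```

Because each node of the output is generated by exactly one rule draw, the sampler touches each node once, which is where the linear-time sampling guarantee comes from.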