Diverse Rule Sets
While machine-learning models are flourishing and transforming many aspects
of everyday life, the inability of humans to understand complex models makes
it difficult for these models to be fully trusted and embraced. The
interpretability of a model has therefore come to be recognized as a quality
as important as its predictive power. In particular, rule-based systems are
experiencing a renaissance owing to their intuitive if-then representation.
However, simply being rule-based does not ensure interpretability. For
example, overlapping rules spawn ambiguity and hinder interpretation. Here we
propose a novel approach to inferring diverse rule sets by minimizing the
overlap among decision rules, with a 2-approximation guarantee under the
framework of Max-Sum diversification. We formulate the problem as maximizing a
weighted sum of the discriminative quality and the diversity of a rule set.
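(The abstract does not state the objective explicitly; in the usual Max-Sum
diversification form it would read roughly as below, where the weight λ, the
per-rule quality q, and the pairwise diversity d, e.g. one minus the rules'
overlap, are illustrative symbols rather than the paper's own notation.)

```latex
% Max-Sum diversification over a candidate rule pool \mathcal{R}:
% pick k rules maximizing weighted quality plus pairwise diversity.
\max_{R \subseteq \mathcal{R},\, |R| = k} \;
  \lambda \sum_{r \in R} q(r) \;+\; \sum_{\{r, r'\} \subseteq R} d(r, r')
```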
To overcome the exponential-size search space of association rules, we
investigate several natural options for a small candidate set of high-quality
rules, including frequent and accurate rules, and examine their hardness.
Leveraging the special structure of our formulation, we then devise an
efficient randomized algorithm that samples rules that are highly
discriminative and have small overlap. The proposed sampling algorithm
analytically targets a distribution over rules that is tailored to our objective.
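(A minimal Python sketch of such a sampler, under stated assumptions: each
rule is a dict with a quality score and a 'cover' set of record ids; rules
are drawn with weight exponential in quality and accepted with probability
decaying in their overlap with rules already chosen. The names
sample_rule_set, jaccard_overlap, and the temperature beta are hypothetical,
not the paper's.)

```python
import math
import random

def jaccard_overlap(r1, r2):
    """Jaccard overlap of the record sets two rules cover (toy representation)."""
    a, b = r1["cover"], r2["cover"]
    return len(a & b) / max(1, len(a | b))

def sample_rule_set(rules, k, beta=1.0, rng=random):
    """Draw k rules: selection weight grows with quality, and a draw is
    accepted with probability shrinking in its total overlap with the rules
    already selected, favouring discriminative, non-overlapping sets."""
    weights = [math.exp(beta * r["quality"]) for r in rules]
    selected = []
    while len(selected) < k:
        (cand,) = rng.choices(rules, weights=weights, k=1)
        if any(cand is s for s in selected):
            continue  # already picked this rule
        total_overlap = sum(jaccard_overlap(cand, s) for s in selected)
        if rng.random() < math.exp(-total_overlap):
            selected.append(cand)
    return selected

# Example usage with three toy rules covering overlapping record sets:
rules = [
    {"quality": 0.9, "cover": {1, 2, 3, 4}},
    {"quality": 0.8, "cover": {3, 4, 5, 6}},
    {"quality": 0.7, "cover": {7, 8, 9}},
]
print(sample_rule_set(rules, k=2))
```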
We demonstrate the superior predictive power and interpretability of our
model in a comprehensive empirical study against strong baselines.
Interpretable multiclass classification by MDL-based rule lists
Interpretable classifiers have recently received increasing attention from
the data mining community because they are inherently easier to understand
and explain than their more complex counterparts. Examples of interpretable
classification models include decision trees, rule sets, and rule lists.
Learning such models often involves optimizing hyperparameters, which typically
requires substantial amounts of data and may result in relatively large models.
In this paper, we consider the problem of learning compact yet accurate
probabilistic rule lists for multiclass classification. Specifically, we
propose a novel formalization based on probabilistic rule lists and the
minimum description length (MDL) principle. This results in virtually
parameter-free model selection that naturally trades off model complexity
against goodness of fit, effectively avoiding overfitting and the need for
hyperparameter tuning.
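(In the standard two-part form of the MDL principle, the paper's exact
encoding aside, this selection criterion reads as follows, with L(M) the bits
needed to encode the rule list and L(D|M) the bits, i.e. the negative
log-likelihood, needed to encode the class labels given the list; minimizing
their sum is what makes the complexity-fit trade-off parameter-free.)

```latex
% Two-part MDL model selection over a class of rule lists \mathcal{M}:
% model cost penalizes complexity, data cost rewards goodness of fit.
\hat{M} \;=\; \operatorname*{arg\,min}_{M \in \mathcal{M}} \; L(M) + L(D \mid M)
```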
Finally, we introduce the Classy algorithm, which greedily finds rule lists
according to the proposed criterion. We empirically
demonstrate that Classy selects small probabilistic rule lists that outperform
state-of-the-art classifiers when it comes to the combination of predictive
performance and interpretability. We show that Classy is insensitive to its
only parameter, i.e., the candidate set, and that compression on the training
set correlates with classification performance, validating our MDL-based
selection criterion.
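(A rough Python illustration of such a greedy, compression-driven search, not
the authors' Classy implementation; the description_length function and the
candidate handling are hypothetical.)

```python
def greedy_rule_list(candidates, data, description_length):
    """Greedy MDL loop: repeatedly append the candidate rule that most
    reduces the total description length L(M) + L(D|M) of the rule list,
    and stop as soon as no candidate compresses the data any further."""
    rule_list = []
    best_len = description_length(rule_list, data)
    while candidates:
        # score every remaining candidate by the length of the extended list
        scored = [(description_length(rule_list + [r], data), r)
                  for r in candidates]
        new_len, best_rule = min(scored, key=lambda t: t[0])
        if new_len >= best_len:  # no compression gain left: stop
            break
        rule_list.append(best_rule)
        candidates = [r for r in candidates if r is not best_rule]
        best_len = new_len
    return rule_list
```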