Tree Ensemble (TE) models (e.g., Gradient Boosted Trees and Random Forests)
often achieve higher predictive performance than single decision trees.
However, TE models generally lack transparency and interpretability, as humans
have difficulty understanding their decision logic. This paper presents a novel
approach to convert a TE trained for a binary classification task into a rule
list (RL) that is globally equivalent to the TE and comprehensible to a
human. This RL captures all necessary and sufficient conditions for decision
making by the TE. Experiments on benchmark datasets demonstrate that, compared
to state-of-the-art methods, (i) predictions from the RL generated by TE2Rules
have high fidelity with respect to the original TE, (ii) the RL from TE2Rules
has high interpretability measured by the number and the length of the decision
rules, (iii) the run-time of the TE2Rules algorithm can be reduced significantly
at the cost of slightly lower fidelity, and (iv) the RL is a fast alternative to
state-of-the-art rule-based, instance-level outcome explanation techniques.