Neural Prototype Trees for Interpretable Fine-grained Image Recognition
Interpretable machine learning addresses the black-box nature of deep neural
networks. Visual prototypes have been suggested for intrinsically interpretable
image recognition, instead of generating post-hoc explanations that approximate
a trained model. However, a large number of prototypes can be overwhelming. To
reduce explanation size and improve interpretability, we propose the Neural
Prototype Tree (ProtoTree), a deep learning method that includes prototypes in
an interpretable decision tree to faithfully visualize the entire model. In
addition to global interpretability, a path in the tree explains a single
prediction. Each node in our binary tree contains a trainable prototypical
part. The presence or absence of this prototype in an image determines the
routing through a node. Decision making is therefore similar to human
reasoning: Does the bird have a red throat? And an elongated beak? Then it's a
hummingbird! We tune the accuracy-interpretability trade-off using ensembling
and pruning. We apply pruning without sacrificing accuracy, resulting in a
small tree with only 8 prototypes along a path to classify a bird from 200
species. An ensemble of 5 ProtoTrees achieves competitive accuracy on the
CUB-200-2011 and Stanford Cars data sets. Code is available at
https://github.com/M-Nauta/ProtoTree

Comment: 11 pages, and 9 pages supplementary
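As a minimal sketch of the routing idea described above, not the released implementation: each internal node compares its trainable prototypical part against the patches of a CNN feature map and softly routes to its "present" or "absent" subtree, with leaves holding class distributions. The `Node`/`Leaf` classes, the exp(-distance) presence score, the tree depth, and the 256-channel backbone features are all illustrative assumptions.

```python
# Illustrative sketch only; NOT the authors' code. The presence score,
# depth, and feature shapes below are assumptions, not the paper's choices.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Leaf(nn.Module):
    """Leaf node holding a learnable distribution over the classes."""

    def __init__(self, num_classes):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_classes))

    def forward(self, features):
        # Every sample reaching this leaf gets the same class distribution.
        return F.softmax(self.logits, dim=0).expand(features.size(0), -1)


class Node(nn.Module):
    """Internal node: routes on the presence of a trainable prototypical part."""

    def __init__(self, num_classes, depth, channels):
        super().__init__()
        # Prototypical part, compared against 1x1 patches of a CNN feature map.
        self.prototype = nn.Parameter(torch.randn(1, channels, 1, 1))
        if depth == 1:
            self.absent, self.present = Leaf(num_classes), Leaf(num_classes)
        else:
            self.absent = Node(num_classes, depth - 1, channels)
            self.present = Node(num_classes, depth - 1, channels)

    def forward(self, features):
        # Squared L2 distance between the prototype and every spatial patch.
        dists = ((features - self.prototype) ** 2).sum(dim=1)   # (B, H, W)
        min_dist = dists.flatten(1).min(dim=1).values            # closest patch
        p = torch.exp(-min_dist).unsqueeze(1)                    # presence in (0, 1]
        # Soft routing: weight the 'present' and 'absent' subtrees by p.
        return p * self.present(features) + (1 - p) * self.absent(features)


# Usage: a depth-3 tree on top of hypothetical backbone features.
features = torch.randn(4, 256, 7, 7)                  # batch of 4 feature maps
tree = Node(num_classes=200, depth=3, channels=256)
class_probs = tree(features)                           # shape (4, 200)
```

The 5-tree ensemble mentioned in the abstract would amount to averaging the `class_probs` of several independently trained trees; how pruning is applied to shorten the explanation is detailed in the paper itself.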