Recent applications of deep convolutional neural networks in medical imaging
raise concerns about their interpretability. While most explainable deep
learning applications use post hoc methods (such as GradCAM) to generate
feature attribution maps, a new family of case-based reasoning models, namely
ProtoPNet and its variants, identifies prototypes during training and compares
input image patches with those prototypes. We propose the first
medical prototype network (MProtoNet) to extend ProtoPNet to brain tumor
classification with 3D multi-parametric magnetic resonance imaging (mpMRI)
data. To address the differing requirements of 2D natural images and 3D mpMRI data,
especially in terms of localizing attention regions, a new attention module
with soft masking and online-CAM loss is introduced. Soft masking helps sharpen
attention maps, while online-CAM loss directly utilizes image-level labels when
training the attention module. MProtoNet achieves statistically significant
improvements in the interpretability metrics of both correctness and
localization coherence (with a best activation precision of 0.713±0.058) over
GradCAM and several ProtoPNet variants, without requiring human-annotated
labels during training. The source code is available at
https://github.com/aywi/mprotonet.

Comment: 15 pages, 5 figures, 1 table; accepted for oral presentation at MIDL
2023 (https://openreview.net/forum?id=6Wbj3QCo4U4); camera-ready version.
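As a rough, hypothetical illustration of the soft-masking idea mentioned in the abstract: one common way to sharpen an attention map is to pass it through a scaled sigmoid, pushing values toward 0 or 1 while keeping the operation differentiable. The function name, hyperparameters, and use of NumPy below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def soft_mask(attention, sharpness=10.0, threshold=0.5):
    """Sharpen an attention map in [0, 1] with a scaled sigmoid.

    `sharpness` and `threshold` are illustrative hyperparameters
    (not taken from the paper): values above the threshold are pushed
    toward 1, values below it toward 0, and the mapping stays smooth
    and differentiable, so it can be trained end to end.
    """
    return 1.0 / (1.0 + np.exp(-sharpness * (attention - threshold)))

# Low activations are suppressed, high activations are amplified.
attn = np.array([0.1, 0.45, 0.55, 0.9])
masked = soft_mask(attn)
```

A hard threshold (`attention > 0.5`) would sharpen the map just as well, but its zero gradient would prevent the attention module from learning; the sigmoid keeps the mask trainable.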