As information sources are usually imperfect, it is necessary to take into
account their reliability in multi-source information fusion tasks. In this
paper, we propose a new deep framework for merging multi-modality MR image
segmentation results using the formalism of Dempster-Shafer theory, while
taking into account the reliability of the different modalities relative to
different classes.
classes. The framework is composed of an encoder-decoder feature extraction
module, an evidential segmentation module that computes a belief function at
each voxel for each modality, and a multi-modality evidence fusion module,
which assigns a vector of discount rates to each modality evidence and combines
the discounted evidence using Dempster's rule. The whole framework is trained
by minimizing a new loss function based on a discounted Dice index to increase
segmentation accuracy and reliability. The method was evaluated on the BraTS
2021 database of 1251 patients with brain tumors. Quantitative and qualitative
results show that our method outperforms the state of the art and implements
an effective new approach to fusing multi-source information within deep
neural networks.
Comment: MICCAI202
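The discounting-and-combination step described above can be sketched for a single voxel as follows. This is a minimal illustration of classical Dempster-Shafer discounting and Dempster's rule, not the paper's implementation: the representation of mass functions as dicts over frozensets of class labels, and the class names used in the example, are assumptions made for the sketch.

```python
from itertools import product

def discount(m, alpha):
    """Discount a mass function m (dict: frozenset -> mass) by rate alpha.

    A fraction alpha of each focal set's mass is transferred to the whole
    frame of discernment Omega, expressing reduced trust in the source.
    """
    omega = frozenset().union(*m)  # frame of discernment (union of focal sets)
    out = {A: (1 - alpha) * v for A, v in m.items()}
    out[omega] = out.get(omega, 0.0) + alpha
    return out

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule (normalized)."""
    combined = {}
    conflict = 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        C = A & B
        if C:  # non-empty intersection: supporting evidence
            combined[C] = combined.get(C, 0.0) + v1 * v2
        else:  # empty intersection: conflicting evidence
            conflict += v1 * v2
    k = 1.0 - conflict  # normalization constant
    return {A: v / k for A, v in combined.items()}

# Hypothetical evidence from two modalities at one voxel,
# over a two-class frame {tumor, background}:
m1 = {frozenset({"tumor"}): 0.8, frozenset({"tumor", "background"}): 0.2}
m2 = {frozenset({"background"}): 0.6, frozenset({"tumor", "background"}): 0.4}

# Discount the less reliable modality, then fuse with Dempster's rule.
fused = dempster_combine(m1, discount(m2, alpha=0.5))
```

In the paper's framework the discount rates are learned per modality and per class rather than fixed, and the operations run on belief functions produced by the evidential segmentation module at every voxel; the sketch only shows the underlying algebra.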