3D CATBraTS: Channel Attention Transformer for Brain Tumour Semantic Segmentation

Abstract

Brain tumour diagnosis is a challenging yet crucial task for planning treatments that stop or slow tumour growth. In the last decade, there has been a dramatic increase in the use of convolutional neural networks (CNNs) owing to their high performance in the automatic segmentation of tumours in medical images. More recently, the Vision Transformer (ViT) has become a central focus of medical imaging research for its robustness and efficiency compared to CNNs. In this paper, we propose a novel 3D transformer named 3D CATBraTS for brain tumour semantic segmentation on magnetic resonance images (MRIs), based on the state-of-the-art Swin transformer with a modified CNN-encoder architecture that uses residual blocks and a channel attention module. The proposed approach is evaluated on the BraTS 2021 dataset and achieves a mean Dice similarity coefficient (DSC) that surpasses current state-of-the-art approaches in the validation phase.
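To illustrate the kind of channel attention module mentioned in the abstract, the following is a minimal sketch of a 3D channel-attention (squeeze-and-excitation-style) block in PyTorch. The class name, reduction ratio, and overall design are assumptions for illustration only; the exact module used in 3D CATBraTS may differ.

```python
# Hypothetical 3D channel-attention block (squeeze-and-excitation style).
# This is an illustrative sketch, not the authors' implementation.
import torch
import torch.nn as nn


class ChannelAttention3D(nn.Module):
    """Re-weights feature channels of a 3D feature map using global context."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(             # excitation: bottleneck MLP producing channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                          # scale each channel by its learned weight


if __name__ == "__main__":
    feats = torch.randn(2, 64, 16, 16, 16)    # (batch, channels, D, H, W) volumetric feature map
    print(ChannelAttention3D(64)(feats).shape)  # torch.Size([2, 64, 16, 16, 16])
```

In an encoder built from residual blocks, such a module would typically be applied to the block output so that informative channels are emphasised before features are passed to the transformer stages.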
