168 research outputs found
Overview of convolutional neural networks architectures for brain tumor segmentation
Due to the paramount importance of the medical field in people's lives, researchers have exploited advances in computing to solve many diagnostic and analytical medical problems. Brain tumor diagnosis is one of the most important computational problems studied. The tumor is delineated by segmenting brain images, using many techniques based on magnetic resonance imaging (MRI). Brain tumor segmentation methods have been developed over a long period and are still evolving, but the current trend is to use deep convolutional neural networks (CNNs), owing to the many breakthroughs and unprecedented results they have achieved in various applications and to their capacity to learn a hierarchy of progressively more complicated features from the input without manual feature extraction. Considering these results, we present this paper as a brief review of the main CNN architecture types used in brain tumor segmentation. Specifically, we focus on works that use the well-known brain tumor segmentation (BraTS) dataset.
TuNet: End-to-end Hierarchical Brain Tumor Segmentation using Cascaded Networks
Glioma is one of the most common types of brain tumors; it arises in the
glial cells in the human brain and in the spinal cord. In addition to having a
high mortality rate, glioma treatment is also very expensive. Hence, automatic
and accurate segmentation and measurement from the early stages are critical in
order to prolong the survival rates of the patients and to reduce the costs of
the treatment. In the present work, we propose a novel end-to-end cascaded
network for semantic segmentation that utilizes the hierarchical structure of
the tumor sub-regions with ResNet-like blocks and Squeeze-and-Excitation
modules after each convolution and concatenation block. By utilizing
cross-validation, an average ensemble technique, and a simple post-processing
technique, we obtained Dice scores of 88.06, 80.84, and 80.29, and Hausdorff
Distances (95th percentile) of 6.10, 5.17, and 2.21 for the whole tumor, tumor
core, and enhancing tumor, respectively, on the online test set.
Comment: Accepted at MICCAI BrainLes 201
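The Squeeze-and-Excitation modules this abstract describes recalibrate channel responses: spatial information is squeezed into one descriptor per channel, passed through a small bottleneck, and used to rescale the channels. A minimal numpy sketch under illustrative assumptions (random weights and a reduction ratio of 2, not TuNet's actual configuration):

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Channel-wise squeeze-and-excitation on a (C, H, W) feature map.

    Squeeze: global average pooling per channel.
    Excite: a two-layer bottleneck (ReLU then sigmoid) produces one
    scale factor per channel, which reweights the input channels.
    """
    squeezed = feature_map.mean(axis=(1, 2))          # (C,) channel descriptor
    hidden = np.maximum(squeezed @ w1, 0.0)           # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(hidden @ w2)))      # sigmoid gates in (0, 1)
    return feature_map * scale[:, None, None]         # rescale each channel

rng = np.random.default_rng(0)
C, r = 8, 2                                 # channels, reduction ratio (assumed)
fmap = rng.standard_normal((C, 16, 16))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
out = squeeze_excite(fmap, w1, w2)          # same shape, channels reweighted
```

Because the gates are sigmoid outputs, each channel can only be attenuated, never amplified, which is what lets the network suppress uninformative channels after each convolution block.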
Deep learning assisted MRI guided attenuation correction in PET
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Positron emission tomography (PET) is a unique imaging modality that provides physiological
and functional details of the tissue at the molecular level. However, the acquired PET images
have some limitations, such as attenuation. PET attenuation correction is an essential step to
obtain the full potential of PET quantification. With the wide use of hybrid PET/MR scanners,
magnetic resonance (MR) images are used to address the problem of PET attenuation correction.
Segmentation of the MR images is a simple and robust approach to create pseudo computed
tomography (CT) images, which are used to generate attenuation coefficient maps to correct the
PET attenuation. Recently, deep learning has been proposed and used as a promising technique
for efficient segmentation of MR and various other medical images.
In this research work, deep learning guided segmentation approaches have been proposed
to enhance the bone class segmentation of MR brain images in order to generate accurate
pseudo-CT images. The first approach introduces the combination of handcrafted features
with deep learning features to enrich the set of features. Multiresolution analysis techniques,
which generate multiscale and multidirectional coefficients of an image such as contourlet and
shearlet transforms, are applied and combined with deep convolutional neural network (CNN)
features. Different experiments have been conducted to investigate the number of selected
coefficients and the insertion location of the handcrafted features.
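The enrichment described above can be pictured as simple vector concatenation of multiresolution coefficients onto a deep feature vector. The following numpy sketch is illustrative only: the top-k magnitude selection rule, the value of k, and the stand-in inputs are assumptions, not the thesis's actual design choices.

```python
import numpy as np

def enrich_features(cnn_features, handcrafted_coeffs, k):
    """Concatenate the k largest-magnitude handcrafted coefficients
    (e.g. contourlet/shearlet responses) onto a CNN feature vector.

    Keeping only the top-k coefficients mirrors the thesis's question
    of how many coefficients to select; the rule here is an assumption.
    """
    flat = handcrafted_coeffs.ravel()
    top_k = flat[np.argsort(np.abs(flat))[-k:]]   # k coefficients of largest magnitude
    return np.concatenate([cnn_features, top_k])

cnn_feat = np.ones(64)                     # stand-in CNN feature vector
coeffs = np.arange(-50, 50, dtype=float)   # stand-in transform coefficients
enriched = enrich_features(cnn_feat, coeffs, k=16)   # 64 + 16 = 80 features
```

The "insertion location" studied in the thesis corresponds to which network layer receives the concatenated vector; here the concatenation is shown at the feature-vector level only.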
The second approach aims at reducing the segmentation algorithm’s complexity while
maintaining the segmentation performance. An attention-based convolutional encoder-decoder
network has been proposed to adaptively recalibrate the deep network features. This attention-based
network consists of two different squeeze-and-excitation blocks that excite the features
spatially and channel-wise. The two blocks are combined sequentially to decrease the number
of network parameters and reduce the model complexity.
The third approach focuses on the application of transfer learning from different MR sequences, such as T1-weighted (T1-w) and T2-weighted (T2-w) images. A
pretrained model with T1-w MR sequences is fine-tuned to perform the segmentation of T2-w
images. Multiple fine-tuning approaches and experiments have been conducted to identify the best
fine-tuning mechanism able to build an efficient segmentation model for both T1-w and
T2-w segmentation.
Clinical datasets of fifty patients with different conditions and diagnoses have been
used to carry out an objective evaluation of the segmentation performance of the results
obtained by the three proposed methods. The first and second approaches have been compared
with other studies in the literature that applied deep-network-based segmentation techniques to
perform MR-based attenuation correction for PET images. The proposed methods have shown
an enhancement in bone segmentation, with an increase in Dice similarity coefficient (DSC)
from 0.6179 to 0.6567 using an ensemble of CNNs with an improvement percentage of 6.3%.
The proposed excitation-based CNN has reduced the model complexity by decreasing the
number of trainable parameters by more than 46%, so that fewer computing resources are required
to train the model. The proposed hybrid transfer learning method has shown its superiority in
building a multi-sequence (T1-w and T2-w) segmentation approach compared to other applied
transfer learning methods, especially for the bone class, where the DSC increased from 0.3841
to 0.5393. Moreover, the hybrid transfer learning approach requires less computing time than
transfer learning using open and conservative fine-tuning.
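The Dice similarity coefficient (DSC) reported throughout this evaluation measures the overlap between a predicted and a reference mask, DSC = 2|A∩B| / (|A| + |B|). A minimal numpy sketch with a toy example (the masks are illustrative, not the thesis's data):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), in [0, 1], 1 = perfect overlap."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy example: two partially overlapping 4x4 square masks (16 pixels each).
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
score = dice_coefficient(a, b)   # overlap is 3x3 = 9, so 2*9/32 = 0.5625
```

An increase such as the reported 0.3841 to 0.5393 for the bone class is large on this scale, because the metric penalizes both missed and spurious voxels symmetrically.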
Attention Mechanisms in Medical Image Segmentation: A Survey
Medical image segmentation plays an important role in computer-aided
diagnosis. Attention mechanisms that distinguish important parts from
irrelevant parts have been widely used in medical image segmentation tasks.
This paper systematically reviews the basic principles of attention mechanisms
and their applications in medical image segmentation. First, we review the
basic concepts and formulation of attention mechanisms. Second, we survey over
300 articles related to medical image segmentation and divide them into two
groups based on their attention mechanisms, non-Transformer attention and
Transformer attention. In each group, we deeply analyze the attention
mechanisms from three aspects based on the current literature work, i.e., the
principle of the mechanism (what to use), implementation methods (how to use),
and application tasks (where to use). We also thoroughly analyze the
advantages and limitations of their applications to different tasks. Finally,
we summarize the current state of research and shortcomings in the field, and
discuss the potential challenges in the future, including task specificity,
robustness, standard evaluation, etc. We hope that this review can showcase the
overall research context of traditional and Transformer attention methods,
provide a clear reference for subsequent research, and inspire more advanced
attention research, not only in medical image segmentation, but also in other
image analysis scenarios.
Comment: Submitted to Medical Image Analysis, survey paper, 34 pages, over 300 references
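The Transformer attention family the survey covers is built on scaled dot-product attention, where each query forms a normalized weighting over all keys and aggregates the corresponding values. A minimal numpy sketch (shapes and inputs are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Core of Transformer attention: softmax(QK^T / sqrt(d)) V.
    Each query attends to all keys; weights are non-negative and
    sum to 1 per query, so the output is a convex mix of values."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                   # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v, weights

rng = np.random.default_rng(1)
q = rng.standard_normal((4, 8))    # 4 queries of dimension 8
k = rng.standard_normal((6, 8))    # 6 keys
v = rng.standard_normal((6, 8))    # 6 values
out, w = scaled_dot_product_attention(q, k, v)
```

The non-Transformer attention mechanisms the survey contrasts with this (e.g. channel or spatial gating) instead learn multiplicative masks over feature maps rather than query-key similarities.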
Acute ischemic stroke lesion segmentation in non-contrast CT images using 3D convolutional neural networks
In this paper, an automatic algorithm aimed at volumetric segmentation of
acute ischemic stroke lesion in non-contrast computed tomography brain 3D
images is proposed. Our deep-learning approach is based on the popular 3D U-Net
convolutional neural network architecture, which was modified by adding the
squeeze-and-excitation blocks and residual connections. Robust pre-processing
methods were implemented to improve the segmentation accuracy. Moreover, a
specific patch-sampling strategy was used to address the large size of
medical images, to smooth out the effect of the class imbalance problem and to
stabilize neural network training. All experiments were performed using
five-fold cross-validation on the dataset containing non-contrast computed
tomography volumetric brain scans of 81 patients diagnosed with acute ischemic
stroke. Two radiology experts manually segmented images independently and then
verified the labeling results for inconsistencies. The segmentations obtained by the
proposed algorithm were evaluated quantitatively using the Dice similarity
coefficient, sensitivity, specificity and precision metrics, showing promising
segmentation results.
Comment: 18 pages, 4 figures, 2 tables
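A patch-sampling strategy of the kind described, drawing a controlled share of patches from the minority lesion class to soften class imbalance, can be sketched as follows. The 50/50 lesion fraction and the synthetic volume are assumptions for illustration, not the paper's actual protocol.

```python
import numpy as np

def sample_patch_centers(lesion_mask, n_patches, lesion_fraction, rng):
    """Pick patch-centre coordinates so that a fixed fraction of patches
    is centred on lesion voxels, smoothing out class imbalance between
    the (rare) lesion class and the (dominant) background class."""
    lesion_idx = np.argwhere(lesion_mask)        # (N, 3) lesion voxel coords
    background_idx = np.argwhere(~lesion_mask)   # (M, 3) background coords
    n_lesion = int(round(n_patches * lesion_fraction))
    picks_l = lesion_idx[rng.integers(0, len(lesion_idx), n_lesion)]
    picks_b = background_idx[rng.integers(0, len(background_idx),
                                          n_patches - n_lesion)]
    return np.vstack([picks_l, picks_b])

rng = np.random.default_rng(2)
mask = np.zeros((32, 32, 32), dtype=bool)
mask[10:14, 10:14, 10:14] = True    # small synthetic "lesion" region
centers = sample_patch_centers(mask, n_patches=20, lesion_fraction=0.5, rng=rng)
```

Fixing the lesion fraction per batch also stabilizes training, since every batch is guaranteed to contain foreground examples regardless of how small the lesion is.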
3D CATBraTS: Channel Attention Transformer for Brain Tumour Semantic Segmentation
Brain tumour diagnosis is a challenging task yet crucial for planning treatments to stop or slow the growth of a tumour. In the last decade, there has been a dramatic increase in the use of convolutional neural networks (CNNs) owing to their high performance in the automatic segmentation of tumours in medical images. More recently, the Vision Transformer (ViT) has become a central focus of medical imaging for its robustness and efficiency compared to CNNs. In this paper, we propose a novel 3D transformer named 3D CATBraTS for brain tumour semantic segmentation on magnetic resonance images (MRIs), based on the state-of-the-art Swin transformer with a modified CNN encoder architecture using residual blocks and a channel attention module. The proposed approach is evaluated on the BraTS 2021 dataset and achieved a mean Dice similarity coefficient (DSC) that surpasses the current state-of-the-art approaches in the validation phase.
Diagnosis and Prognosis of Head and Neck Cancer Patients using Artificial Intelligence
Cancer is one of the most life-threatening diseases worldwide, and head and
neck (H&N) cancer is a prevalent type with hundreds of thousands of new cases
recorded each year. Clinicians use medical imaging modalities such as computed
tomography and positron emission tomography to detect the presence of a tumor,
and they combine that information with clinical data for patient prognosis. The
process is challenging and time-consuming. Machine learning and deep
learning can automate these tasks to help clinicians with highly promising
results. This work studies two approaches for H&N tumor segmentation: (i)
exploration and comparison of vision transformer (ViT)-based and convolutional
neural network-based models; and (ii) the proposal of a novel 2D perspective on
working with 3D data. Furthermore, this work proposes two new architectures for
the prognosis task. An ensemble of several models predicts patient outcomes
(which won the HECKTOR 2021 challenge prognosis task), and a ViT-based
framework concurrently performs patient outcome prediction and tumor
segmentation, which outperforms the ensemble model.
Comment: This is Masters thesis work submitted to MBZUA
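Ensembling several models, as in the winning prognosis entry, typically means averaging per-model probability outputs before taking the final decision. A generic numpy sketch (the toy probabilities are illustrative, not the thesis's actual models or data):

```python
import numpy as np

def ensemble_average(predictions):
    """Average a list of per-model (n_samples, n_classes) softmax
    outputs, then take the argmax as the ensemble decision."""
    mean_probs = np.mean(predictions, axis=0)     # average over models
    return mean_probs, mean_probs.argmax(axis=-1)

# Three toy "models"; they disagree on the second sample.
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.7, 0.3]])
p3 = np.array([[0.7, 0.3], [0.6, 0.4]])
probs, labels = ensemble_average([p1, p2, p3])
```

Averaging probabilities rather than hard labels lets a confident minority model outvote an uncertain majority, which is one reason ensembles tend to be more robust than any single member.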