3D Convolutional Neural Networks for Tumor Segmentation using Long-range 2D Context
We present an efficient deep learning approach for the challenging task of
tumor segmentation in multisequence MR images. In recent years, Convolutional
Neural Networks (CNN) have achieved state-of-the-art performances in a large
variety of recognition tasks in medical imaging. Because of the considerable
computational cost of CNNs, large volumes such as MRI are typically processed
by subvolumes, for instance slices (axial, coronal, sagittal) or small 3D
patches. In this paper we introduce a CNN-based model which efficiently
combines the advantages of the short-range 3D context and the long-range 2D
context. To overcome the limitations of specific choices of neural network
architectures, we also propose to merge outputs of several cascaded 2D-3D
models by a voxelwise voting strategy. Furthermore, we propose a network
architecture in which the different MR sequences are processed by separate
subnetworks in order to be more robust to the problem of missing MR sequences.
Finally, a simple and efficient algorithm for training large CNN models is
introduced. We evaluate our method on the public benchmark of the BRATS 2017
challenge on the task of multiclass segmentation of malignant brain tumors. Our
method achieves good performances and produces accurate segmentations with
median Dice scores of 0.918 (whole tumor), 0.883 (tumor core) and 0.854
(enhancing core). Our approach can be naturally applied to various tasks
involving segmentation of lesions or organs.
Comment: Submitted to the journal Computerized Medical Imaging and Graphics.
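The voxelwise voting strategy and the reported Dice scores can be made concrete. Below is a minimal NumPy sketch; the function names and the majority-vote rule are illustrative assumptions, since the abstract does not spell out the exact voting scheme:

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention: two empty masks are a perfect match
    return 2.0 * intersection / denom if denom > 0 else 1.0

def majority_vote(predictions):
    """Voxelwise voting over several model outputs.

    predictions: list of integer label volumes of identical shape;
    each voxel receives the most frequent label across models.
    """
    stacked = np.stack(predictions)  # shape: (n_models, ...)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, stacked)
```

A median Dice of 0.918 for the whole tumor, as reported above, would correspond to `dice_score` evaluated per case over the binarized whole-tumor masks.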
Brain Tumor Detection and Segmentation in Multisequence MRI
This work deals with brain tumor detection and segmentation in multisequence MR images, with particular focus on high- and low-grade gliomas. Three methods are proposed for this purpose. The first method deals with the detection of brain tumor structures in axial and coronal slices. It is based on multi-resolution symmetry analysis and was tested on T1, T2, T1C and FLAIR images. The second method deals with extraction of the whole brain tumor region, including tumor core and edema, in FLAIR and T2 images, and is able to extract the tumor region from both 2D and 3D images. It also uses the symmetry analysis approach, followed by automatic determination of the intensity threshold from the most asymmetric parts. The third method is based on local structure prediction and is able to segment the whole tumor region as well as the tumor core and the active tumor. It takes advantage of the fact that most medical images feature a high similarity in intensities of nearby pixels and a strong correlation of intensity profiles across different image modalities. One way of dealing with, and even exploiting, this correlation is the use of local image patches. In the same way, there is a high correlation between nearby labels in image annotation, a feature that has been used in the "local structure prediction" of local label patches. A convolutional neural network is chosen as the learning algorithm, as it is known to be well suited to dealing with correlation between features. All three methods were evaluated on a public data set of 254 multisequence MR volumes, reaching results comparable to state-of-the-art methods in a much shorter computing time (on the order of seconds running on a CPU), providing the means, for example, to do online updates when aiming at an interactive segmentation.
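The pairing of local image patches with local label patches, which local structure prediction learns from, can be sketched as follows; the patch radius and helper name are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def extract_patches(image, labels, centers, radius=2):
    """Collect corresponding image and label patches around given centers.

    Local structure prediction learns a mapping from an image patch to a
    *label* patch, exploiting the correlation between nearby labels rather
    than predicting each pixel's label in isolation.
    """
    img_patches, lbl_patches = [], []
    for r, c in centers:
        img_patches.append(image[r - radius:r + radius + 1,
                                 c - radius:c + radius + 1])
        lbl_patches.append(labels[r - radius:r + radius + 1,
                                  c - radius:c + radius + 1])
    return np.stack(img_patches), np.stack(lbl_patches)
```

A classifier trained on such pairs predicts a whole 5x5 label patch per location; overlapping predictions can then be averaged into the final segmentation.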
Deep Learning with Mixed Supervision for Brain Tumor Segmentation
Most of the current state-of-the-art methods for tumor segmentation are based on machine learning models trained on manually segmented images. This type of training data is particularly costly, as manual delineation of tumors is not only time-consuming but also requires medical expertise. On the other hand, images with a provided global label (indicating presence or absence of a tumor) are less informative but can be obtained at a substantially lower cost. In this paper, we propose to use both types of training data (fully-annotated and weakly-annotated) to train a deep learning model for segmentation. The idea of our approach is to extend segmentation networks with an additional branch performing image-level classification. The model is jointly trained for segmentation and classification tasks in order to exploit information contained in weakly-annotated images while preventing the network from learning features which are irrelevant for the segmentation task. We evaluate our method on the challenging task of brain tumor segmentation in Magnetic Resonance images from the BRATS 2018 challenge. We show that the proposed approach provides a significant improvement of segmentation performance compared to standard supervised learning. The observed improvement is proportional to the ratio between weakly-annotated and fully-annotated images available for training.
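The joint training objective can be sketched as a weighted sum of a voxelwise segmentation loss on fully-annotated images and an image-level classification loss on weakly-annotated ones. This is a minimal NumPy sketch with illustrative names and an assumed weighting factor `alpha`; the paper's exact loss formulation may differ:

```python
import numpy as np

def binary_cross_entropy(p, y, eps=1e-7):
    """Mean binary cross-entropy between predicted probabilities and targets."""
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def mixed_supervision_loss(seg_probs, seg_masks, cls_probs, cls_labels, alpha=0.5):
    """Joint loss for mixed supervision.

    seg_probs/seg_masks: voxelwise predictions and ground truth for the
    fully-annotated images; cls_probs/cls_labels: image-level tumor
    presence predictions and labels for the weakly-annotated images.
    """
    seg_loss = binary_cross_entropy(seg_probs, seg_masks)
    cls_loss = binary_cross_entropy(cls_probs, cls_labels)
    return seg_loss + alpha * cls_loss
```

Both branches share the network trunk during training, so gradients from the classification term also shape the features used for segmentation.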
Automated brain tumour identification using magnetic resonance imaging:a systematic review and meta-analysis
BACKGROUND: Automated brain tumor identification facilitates diagnosis and treatment planning. We evaluate the performance of traditional machine learning (TML) and deep learning (DL) in brain tumor detection and segmentation, using MRI. METHODS: A systematic literature search from January 2000 to May 8, 2021 was conducted. Study quality was assessed using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Detection meta-analysis was performed using a unified hierarchical model. Segmentation studies were evaluated using a random effects model. Sensitivity analysis was performed for externally validated studies. RESULTS: Of 224 studies included in the systematic review, 46 segmentation and 38 detection studies were eligible for meta-analysis. In detection, DL achieved a lower false positive rate compared to TML; 0.018 (95% CI, 0.011 to 0.028) and 0.048 (0.032 to 0.072) (P < .001), respectively. In segmentation, DL had a higher dice similarity coefficient (DSC), particularly for tumor core (TC); 0.80 (0.77 to 0.83) and 0.63 (0.56 to 0.71) (P < .001), persisting on sensitivity analysis. Both manual and automated whole tumor (WT) segmentation had "good" (DSC ≥ 0.70) performance. Manual TC segmentation was superior to automated; 0.78 (0.69 to 0.86) and 0.64 (0.53 to 0.74) (P = .014), respectively. Only 30% of studies reported external validation. CONCLUSIONS: The comparable performance of automated to manual WT segmentation supports its integration into clinical practice. However, manual outperformance for sub-compartmental segmentation highlights the need for further development of automated methods in this area. Compared to TML, DL provided superior performance for detection and sub-compartmental segmentation. Improvements in the quality and design of studies, including external validation, are required for the interpretability and generalizability of automated models.
Brain Tumor Classification using SLIC Segmentation with Superpixel Fusion, GoogleNet, and Linear Neighborhood Semantic Segmentation
A brain tumor is an abnormal tissue mass resulting from uncontrolled growth of cells. Brain tumors often reduce life expectancy and cause death in the later stages. Automatic detection of brain tumors is a challenging and important task in computer-aided disease diagnosis systems. This paper presents a deep learning-based approach to the classification of brain tumors. The noise in the brain MRI image is removed using Edge Directional Total Variation Denoising. The brain MRI image is segmented using SLIC segmentation with superpixel fusion. The segments are given to a trained GoogleNet model, which identifies the tumor parts in the image. Once the tumor is identified, a Convolutional Neural Network (CNN)-based modified semantic segmentation model is used to classify the pixels along the edges of the tumor segments. The modified semantic segmentation uses a linear neighborhood of the pixel for better classification. The final tumor identified is accurate, as pixels at the border are classified precisely. The experimental results show that the proposed method has produced an accuracy of 97.3% with the GoogleNet classification model, and the linear neighborhood semantic segmentation has delivered an accuracy of 98%.
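The core idea of SLIC superpixels, grouping pixels by joint intensity-and-position similarity, can be illustrated with a toy single-assignment step. This is not the paper's pipeline nor the full iterative SLIC algorithm (which alternates assignment and center updates); seed layout and the `compactness` weight are illustrative assumptions:

```python
import numpy as np

def toy_slic(image, grid=4, compactness=10.0):
    """One assignment step of a SLIC-like superpixel segmentation.

    Each pixel joins the nearest of a regular grid of seeds, where distance
    is measured jointly in intensity and spatial position; `compactness`
    trades off spatial regularity against intensity homogeneity.
    """
    h, w = image.shape
    ys = np.linspace(0, h - 1, grid).astype(int)
    xs = np.linspace(0, w - 1, grid).astype(int)
    seeds = [(y, x, image[y, x]) for y in ys for x in xs]

    yy, xx = np.mgrid[0:h, 0:w]
    best = np.full((h, w), np.inf)
    labels = np.zeros((h, w), dtype=int)
    for k, (sy, sx, sv) in enumerate(seeds):
        spatial = (yy - sy) ** 2 + (xx - sx) ** 2
        colour = (image - sv) ** 2
        dist = colour + compactness * spatial
        mask = dist < best
        labels[mask] = k
        best[mask] = dist[mask]
    return labels
```

In practice one would use `skimage.segmentation.slic`, which implements the full iterative algorithm; "superpixel fusion" would then merge adjacent superpixels with similar statistics.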
Brain extraction on MRI scans in presence of diffuse glioma: Multi-institutional performance evaluation of deep learning methods and robust modality-agnostic training
Brain extraction, or skull-stripping, is an essential pre-processing step in neuro-imaging that has a direct impact on the quality of all subsequent processing and analysis steps. It is also a key requirement in multi-institutional collaborations to comply with privacy-preserving regulations. Existing automated methods, including Deep Learning (DL) based methods that have obtained state-of-the-art results in recent years, have primarily targeted brain extraction without considering pathologically-affected brains. Accordingly, they perform sub-optimally when applied on magnetic resonance imaging (MRI) brain scans with apparent pathologies such as brain tumors. Furthermore, existing methods focus on using only T1-weighted MRI scans, even though multi-parametric MRI (mpMRI) scans are routinely acquired for patients with suspected brain tumors. In this study, we present a comprehensive performance evaluation of recent deep learning architectures for brain extraction, training models on mpMRI scans of pathologically-affected brains, with a particular focus on seeking a practically-applicable, low computational footprint approach, generalizable across multiple institutions, further facilitating collaborations. We identified a large retrospective multi-institutional dataset of n=3340 mpMRI brain tumor scans, with manually-inspected and approved gold-standard segmentations, acquired during standard clinical practice under varying acquisition protocols, both from private institutional data and public (TCIA) collections. To facilitate optimal utilization of rich mpMRI data, we further introduce and evaluate a novel "modality-agnostic training" technique that can be applied using any available modality, without the need for model retraining. Our results indicate that the modality-agnostic approach obtains accurate results, providing a generic and practical tool for brain extraction on scans with brain tumors.
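One plausible reading of "modality-agnostic training" is to sample a single modality per scan at each training step, so the model never depends on a fixed input sequence and accepts any one modality at inference. The sketch below is an illustrative assumption, not the authors' published code; modality names follow common mpMRI conventions:

```python
import random

MODALITIES = ("t1", "t1ce", "t2", "flair")  # typical mpMRI sequences

def build_batch(scans, rng=random):
    """Modality-agnostic batch construction sketch.

    scans: list of dicts mapping modality name -> image (any subset present).
    Returns (image, modality) pairs, choosing one modality at random per
    scan, so missing sequences are tolerated by design.
    """
    batch = []
    for scan in scans:
        available = [m for m in MODALITIES if m in scan]
        if not available:
            raise ValueError("scan has no usable modality")
        m = rng.choice(available)
        batch.append((scan[m], m))
    return batch
```

Because the network sees every modality during training, no retraining is needed when a site can only provide, say, FLAIR.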
Is attention all you need in medical image analysis? A review
Medical imaging is a key component in clinical diagnosis, treatment planning
and clinical trial design, accounting for almost 90% of all healthcare data.
In recent years, CNNs have achieved substantial performance gains in medical
image analysis (MIA). CNNs can efficiently model local pixel interactions and be trained on
small-scale MI data. The main disadvantage of typical CNN models is that they
ignore global pixel relationships within images, which limits their
generalisation ability to understand out-of-distribution data with different
'global' information. The recent progress of Artificial Intelligence gave rise
to Transformers, which can learn global relationships from data. However, full
Transformer models need to be trained on large-scale data and involve
tremendous computational complexity. Attention and Transformer components
(Transf/Attention), which can well maintain the properties needed for
modelling global relationships, have been proposed as lighter alternatives to
full Transformers. Recently, there has been an increasing trend to
cross-pollinate complementary local-global properties from CNN and
Transf/Attention architectures, which has led
to a new era of hybrid models. The past years have witnessed substantial growth
in hybrid CNN-Transf/Attention models across diverse MIA problems. In this
systematic review, we survey existing hybrid CNN-Transf/Attention models,
review and unravel key architectural designs, analyse breakthroughs, and
evaluate current and future opportunities as well as challenges. We also
introduce a comprehensive analysis framework on the generalisation
opportunities of scientific and clinical impact, based on which new
data-driven domain generalisation and adaptation methods can be stimulated.
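The global-relationship modelling that distinguishes Transformers from CNNs comes down to scaled dot-product self-attention, where every position attends to every other. A minimal NumPy sketch (single head, no masking; weight matrices are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence x of shape (n, d).

    Unlike a convolution, which only mixes a local neighbourhood, every
    position attends to every other position, at O(n^2) cost in the
    sequence length -- the source of the computational burden noted above.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v
```

Hybrid CNN-Transf/Attention models typically apply such attention only on coarse feature maps, keeping the quadratic term affordable while convolutions handle fine local detail.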
Investigating the role of machine learning and deep learning techniques in medical image segmentation
This work originates from the growing interest of the medical imaging community in
the application of machine learning and deep learning techniques to improve the
accuracy of cancer screening. The thesis is structured into two different tasks.
In the first part, magnetic resonance images were analysed in order to support clinical experts in the
treatment of patients with brain tumour metastases (BM). The main aim of this
study was to investigate whether BM segmentation may be approached successfully
by two supervised ML classifiers belonging to feature-based and deep learning
approaches, respectively. An SVM and a V-Net Convolutional Neural Network model
were selected from the literature as representatives of the two approaches.
The second part of this thesis illustrates the development of a deep learning
study aimed at processing and classifying lesions in mammograms with the use of
slender neural networks. Mammography has a central role in the screening and
diagnosis of breast lesions. Deep Convolutional Neural Networks have shown
great potential to address the issue of early detection of breast cancer with
an acceptable level of accuracy and reproducibility. A traditional
convolutional network was compared with a novel one obtained by making use of
much more efficient depthwise separable convolution layers.
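The efficiency gain of depthwise separable convolutions can be quantified by a parameter count: a standard k x k convolution learns k*k filters per input-output channel pair, whereas the separable variant factorizes this into a per-channel spatial filter plus a 1 x 1 pointwise mixing step. A small sketch (biases ignored; the example sizes are illustrative):

```python
def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution: k*k*c_in*c_out."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise (one k x k filter per input channel) followed by a
    pointwise 1 x 1 convolution that mixes channels: k*k*c_in + c_in*c_out."""
    return k * k * c_in + c_in * c_out

# Example: a 3x3 layer mapping 64 channels to 128
print(conv_params(3, 64, 128))                 # 73728
print(depthwise_separable_params(3, 64, 128))  # 8768
```

For this layer the separable form needs roughly 8x fewer parameters (and proportionally fewer multiply-adds), which is what makes the "slender" networks above practical.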
With the final goal of integrating the developed system into clinical practice,
for both fields studied, all the Medical Imaging and Pattern Recognition
algorithmic solutions have been integrated into a MATLAB® software package.
A Review on Computer Aided Diagnosis of Acute Brain Stroke.
Amongst the most common causes of death globally, stroke is one of the top three, affecting over 100 million people worldwide annually. There are two classes of stroke, namely ischemic stroke (due to impairment of blood supply, accounting for ~70% of all strokes) and hemorrhagic stroke (due to bleeding), both of which can result, if untreated, in permanently damaged brain tissue. The discovery that the affected brain tissue (i.e., the 'ischemic penumbra') can be salvaged from permanent damage, and the burgeoning growth in computer aided diagnosis, has led to major advances in stroke management. Abiding by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we have surveyed a total of 177 research papers published between 2010 and 2021 to highlight the current status and challenges faced by computer aided diagnosis (CAD), machine learning (ML) and deep learning (DL) based techniques for CT and MRI as prime modalities for stroke detection and lesion region segmentation. This work concludes by showcasing the current requirements of this domain, the preferred modality, and prospective research areas.
Radiomics and Deep Learning in Brain Metastases: Current Trends and Roadmap to Future Applications
Advances in radiomics and deep learning (DL) hold great potential to be at the forefront of precision medicine for the treatment of patients with brain metastases. Radiomics and DL can aid clinical decision-making by enabling accurate diagnosis, facilitating the identification of molecular markers, providing accurate prognoses, and monitoring treatment response. In this review, we summarize the clinical background, unmet needs, and current state of research of radiomics and DL for the treatment of brain metastases. The promises, pitfalls, and future roadmap of radiomics and DL in brain metastases are addressed as well.