
    A fuzzy feature fusion method for auto-segmentation of gliomas with multi-modality diffusion and perfusion magnetic resonance images in radiotherapy

    Diffusion and perfusion magnetic resonance (MR) images provide functional information about the tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images, including apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fuzzy fusion result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. Auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); and the mean sensitivity and specificity of auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
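    A minimal sketch of the fuzzy feature fusion idea described above (an illustration under assumed parameters, not the authors' implementation): each modality's voxel value is mapped to a tumour-membership value in [0, 1] by a simple trapezoidal fuzzy model, the per-modality memberships are fused with a fuzzy AND, and voxels above a cutoff form the candidate tumour region. The trapezoid parameters and cutoff below are made up for the example.

```python
def trapezoid_membership(x, a, b):
    """Membership rises linearly from 0 at `a` to 1 at `b`, then saturates."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def fuse_memberships(memberships):
    """Conservative fuzzy AND: minimum across modalities."""
    return min(memberships)

def segment(voxels, models, cutoff=0.5):
    """voxels: list of (adc, fa, rcbv) tuples; models: per-modality (a, b)."""
    mask = []
    for values in voxels:
        m = [trapezoid_membership(v, *p) for v, p in zip(values, models)]
        mask.append(fuse_memberships(m) >= cutoff)
    return mask

# Illustrative thresholds for ADC, FA and rCBV (hypothetical values).
models = [(0.8, 1.4), (0.1, 0.3), (1.0, 2.5)]
voxels = [(1.5, 0.35, 2.8), (0.6, 0.05, 0.5)]
print(segment(voxels, models))  # [True, False]
```

    The minimum fusion rule is one common choice; a product or weighted-average rule would also fit the description in the abstract.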

    Brain Tumor Detection and Segmentation in Multisequence MRI

    This work deals with brain tumor detection and segmentation in multisequence MR images, with particular focus on high- and low-grade gliomas. Three methods are proposed for this purpose. The first method detects the presence of brain tumor structures in axial and coronal slices; it is based on multi-resolution symmetry analysis and was tested on T1, T2, T1C and FLAIR images. The second method extracts the whole brain tumor region, including tumor core and edema, in FLAIR and T2 images, and is suitable for extracting the whole brain tumor region from both 2D and 3D data. It also uses the symmetry analysis approach, followed by automatic determination of the intensity threshold from the most asymmetric parts. The third method is based on local structure prediction and is able to segment the whole tumor region as well as the tumor core and active tumor. This method takes advantage of the fact that most medical images feature a high similarity in intensities of nearby pixels and a strong correlation of intensity profiles across different image modalities. One way of dealing with, and even exploiting, this correlation is the use of local image patches. In the same way, there is a high correlation between nearby labels in image annotation, a feature that has been used in the "local structure prediction" of local label patches. A convolutional neural network is chosen as the learning algorithm, as it is known to be suited to dealing with correlation between features.
    All three methods were evaluated on a public data set of 254 multisequence MR volumes and achieved results comparable to state-of-the-art methods in much shorter computing time (on the order of seconds running on a CPU), providing the means, for example, to do online updates when aiming at interactive segmentation.
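    The symmetry-analysis step at the heart of the first two methods can be sketched as follows (an illustration, not the thesis code): each row of an axial slice is mirrored about the midline, and pixels whose left/right intensity difference is large are flagged as asymmetric, i.e. candidate tumor locations from which an intensity threshold could then be derived.

```python
def asymmetry_map(slice2d):
    """Per-pixel |left - mirrored right| over the left half of each row."""
    out = []
    for row in slice2d:
        half = len(row) // 2
        out.append([abs(row[i] - row[-1 - i]) for i in range(half)])
    return out

slice2d = [
    [10, 10, 10, 10],
    [10, 90, 10, 10],   # a bright lesion on the left breaks the symmetry
]
diff = asymmetry_map(slice2d)
print(diff)  # [[0, 0], [0, 80]]
```

    In practice the analysis is done at several image resolutions and the brain midline must first be estimated; both are omitted here for brevity.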

    3D Multimodal Brain Tumor Segmentation and Grading Scheme based on Machine, Deep, and Transfer Learning Approaches

    Glioma is one of the most common tumors of the brain. The detection and grading of glioma at an early stage is critical for increasing the survival rate of patients. Computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems are essential tools that provide more accurate and systematic results to speed up the decision-making process of clinicians. In this paper, we introduce a method combining machine, deep, and transfer learning approaches for effective brain tumor (i.e., glioma) segmentation and grading on the multimodal brain tumor segmentation (BraTS) 2020 dataset. We apply the popular and efficient 3D U-Net architecture for the brain tumor segmentation phase. We also evaluate 23 different combinations of deep feature sets and machine learning/fine-tuned deep learning CNN models based on Xception, IncResNetv2, and EfficientNet, using 4 different feature sets and 6 learning models for the tumor grading phase. The experimental results demonstrate that the proposed method achieves a 99.5% accuracy rate for slice-based tumor grading on the BraTS 2020 dataset. Moreover, our method is found to have competitive performance with similar recent works.
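    Since the grading phase classifies individual 2D slices, a patient-level grade has to be derived from many slice-level predictions. A hedged sketch of one simple aggregation rule, majority vote (the rule itself is an assumption for illustration, not taken from the paper):

```python
from collections import Counter

def patient_grade(slice_predictions):
    """Return the most common slice-level label as the patient-level grade."""
    return Counter(slice_predictions).most_common(1)[0][0]

# Three of four slices are classified as high-grade glioma (HGG).
print(patient_grade(["HGG", "HGG", "LGG", "HGG"]))  # HGG
```

    Alternatives such as averaging per-slice class probabilities before taking the argmax would fit the same pipeline.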

    AI-based glioma grading for a trustworthy diagnosis: an analytical pipeline for improved reliability

    Glioma is the most common type of tumor in humans originating in the brain. According to the World Health Organization, gliomas can be graded on a four-stage scale, ranging from the most benign to the most malignant. Grading these tumors from image information is a far from trivial task for radiologists, and one in which they could be assisted by machine-learning-based decision support. However, the machine learning analytical pipeline is also fraught with perils stemming from different sources, such as inadvertent data leakage, the adequacy of 2D image sampling, or classifier assessment biases. In this paper, we analyze a glioma database sourced from multiple datasets using a simple classifier, aiming to obtain a reliable tumor grading and, on the way, we provide a few guidelines to ensure such reliability. Our results reveal that by focusing on the tumor region of interest and using data augmentation techniques we significantly enhanced the accuracy and confidence in tumor classifications. Evaluation on an independent test set resulted in an AUC-ROC of 0.932 in the discrimination of low-grade gliomas from high-grade gliomas, and an AUC-ROC of 0.893 in the classification of grades 2, 3, and 4. The study also highlights the importance of providing, beyond generic classification performance, measures of how reliable and trustworthy the model's output is, thus assessing the model's certainty and robustness. Carla Pitarch is a fellow of Eurecat's "Vicente LĂłpez" PhD grant program.
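    For the binary low- vs high-grade task, the reported AUC-ROC is equivalent to the probability that a randomly chosen high-grade case receives a higher score than a randomly chosen low-grade case. A minimal rank-based computation of that quantity (a generic sketch, not the paper's evaluation code):

```python
def auc_roc(scores, labels):
    """labels: 1 = high grade, 0 = low grade; scores: model outputs.
    Counts score-pair 'wins' of positives over negatives, ties worth 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc_roc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0 (perfect ranking)
```

    The multi-class grade 2/3/4 result would typically be reported as a macro-averaged one-vs-rest AUC built from the same pairwise idea.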

    Deep learning applications in neuro-oncology

    Deep learning (DL) is a relatively new subdomain of machine learning (ML) with incredible potential for certain applications in the medical field. Given recent advances in its use in neuro-oncology, its role in diagnosing, prognosticating, and managing the care of cancer patients has been the subject of many research studies. The gamut of studies has shown that the landscape of algorithmic methods is constantly improving with each iteration since its inception. With the increase in the availability of high-quality data, more training sets will allow for higher-fidelity models. However, logistical and ethical concerns over a prospective trial comparing the prognostic abilities of DL and physicians severely limit the ability of this technology to be widely adopted. One of the medical tenets is judgment, a facet of medical decision making that is often missing from DL because of its inherent nature as a black box. A natural distrust of newer technology, combined with a lack of the autonomy that is normally expected in our current medical practices, is just one of several important limitations to implementation. In our review, we first define and outline the different types of artificial intelligence (AI) as well as the role of AI in the current advances of clinical medicine. We briefly highlight several of the salient studies using different methods of DL in the realm of neuroradiology and summarize the key findings and challenges faced when using this nascent technology, particularly ethical challenges that could be faced by users of DL.

    GBM Volumetry using the 3D Slicer Medical Image Computing Platform

    Volumetric change in glioblastoma multiforme (GBM) over time is a critical factor in treatment decisions. Typically, the tumor volume is computed on a slice-by-slice basis using MRI scans obtained at regular intervals. 3D Slicer, a free platform for biomedical research, provides an alternative to this manual slice-by-slice segmentation process that is significantly faster and requires less user interaction. In this study, 4 physicians segmented GBMs in 10 patients, once using the competitive region-growing based GrowCut segmentation module of Slicer, and once purely by drawing boundaries completely manually on a slice-by-slice basis. Furthermore, we provide a variability analysis for three physicians for 12 GBMs. The time required for GrowCut segmentation was on average 61% of the time required for a purely manual segmentation. A comparison of Slicer-based segmentation with manual slice-by-slice segmentation resulted in a Dice Similarity Coefficient of 88.43 ± 5.23% and a Hausdorff Distance of 2.32 ± 5.23 mm.
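    The Dice Similarity Coefficient used in the comparison above reduces to set overlap between the two voxel masks, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch over flat boolean masks (generic metric code, not the study's evaluation script):

```python
def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary voxel masks
    (given as flat sequences of 0/1), DSC = 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

a = [1, 1, 1, 0, 0]   # e.g. GrowCut-based segmentation
b = [0, 1, 1, 1, 0]   # e.g. manual slice-by-slice segmentation
print(dice(a, b))  # 2*2/(3+3) ≈ 0.667
```

    A DSC of 1 means identical masks; the study's 88.43% average indicates strong but not perfect agreement between the semi-automatic and manual contours.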

    Brain Tumor Segmentation from Multi-Spectral Magnetic Resonance Image Data Using an Ensemble Learning Approach

    The automatic segmentation of medical images represents a research domain of high interest. This paper proposes an automatic procedure for the detection and segmentation of gliomas from multi-spectral MRI data. The procedure is based on a machine learning approach: it uses ensembles of binary decision trees trained to distinguish pixels belonging to gliomas from those that represent normal tissues. The classification employs 100 computed features besides the four observed ones, including morphological, gradient and Gabor wavelet features. The output of the decision ensemble is fed to morphological and structural post-processing, which regularizes the shape of the detected tumors and improves the segmentation quality. The proposed procedure was evaluated using the BraTS 2015 training data, both the high-grade (HG) and the low-grade (LG) glioma records. The highest overall Dice scores achieved were 86.5% for HG and 84.6% for LG glioma volumes.
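    A hedged illustration of the per-pixel ensemble voting idea (not the paper's trained model): each binary decision tree classifies a pixel's feature vector as tumor or normal, and the ensemble output is the majority vote. The "trees" here are trivial one-feature threshold stumps on hypothetical feature indices, standing in for the trained decision trees.

```python
def stump(feature_index, threshold):
    """A one-split decision 'tree': votes tumor if the feature exceeds
    the threshold. Indices and thresholds here are illustrative."""
    return lambda features: features[feature_index] > threshold

ensemble = [stump(0, 0.5), stump(1, 0.2), stump(2, 0.9)]

def classify(features):
    """Majority vote over the ensemble; True = tumor pixel."""
    votes = sum(tree(features) for tree in ensemble)
    return votes > len(ensemble) / 2

print(classify([0.8, 0.3, 0.1]))  # True: two of three stumps vote tumor
```

    The morphological post-processing step mentioned above would then clean up the resulting binary mask, e.g. by removing small connected components.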