
    Fully Automatic MRI Brain Tumor Segmentation

    In medical research today, the care of brain tumor patients attracts considerable attention. Brain tumor segmentation consists of separating the different brain tumor tissues from normal tissues. Many researchers in medical imaging and soft computing have surveyed the field of brain tumor segmentation, and both semiautomatic and fully automatic methods have been proposed. Clinical acceptance of segmentation techniques has depended on the simplicity of the segmentation and the degree of user supervision. Additionally, dedicated software tools for automatic segmentation and brain tumor detection reduce the time doctors spend on manual segmentation and provide more effective and efficient results. In this paper, the BraTumIA software tool is used for fully automatic segmentation of MRI brain tumor images, separating the different brain tumor tissues from normal ones.

    Segmentation of brain tumors in MRI images using three-dimensional active contour without edge

    Brain tumor segmentation in magnetic resonance imaging (MRI) is considered a complex procedure because of the variability of tumor shapes and the difficulty of determining the tumor location, size, and texture. Manual tumor segmentation is a time-consuming task that is highly prone to human error. Hence, this study proposes an automated method that can identify tumor slices and segment the tumor across all image slices in volumetric MRI brain scans. First, a set of algorithms in the pre-processing stage is used to clean and standardize the collected data. A modified gray-level co-occurrence matrix and analysis of variance (ANOVA) are employed for feature extraction and feature selection, respectively. A multi-layer perceptron neural network is adopted as the classifier, and a bounding-3D-box-based genetic algorithm is used to identify the location of pathological tissue in the MRI slices. Finally, a 3D active contour without edges is applied to segment the brain tumors in the volumetric MRI scans. The experimental dataset consists of 165 patient images collected from the MRI Unit of Al-Kadhimiya Teaching Hospital in Iraq. The tumor segmentation achieved an accuracy of 89% ± 4.7% compared with manual processes.
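    As a rough illustration of two of the stages named above, the sketch below computes gray-level co-occurrence (GLCM) texture features for one slice and runs a morphological Chan-Vese contour (an active contour without edges) inside a candidate bounding box, using scikit-image. The function names, parameter values, and array shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.segmentation import morphological_chan_vese


def glcm_features(slice_2d, levels=32):
    """Quantize one axial slice and compute a few gray-level
    co-occurrence (GLCM) texture statistics."""
    edges = np.linspace(slice_2d.min(), slice_2d.max(), levels)
    q = (np.digitize(slice_2d, edges) - 1).astype(np.uint8)  # values in 0..levels-1
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}


def segment_tumor_roi(volume, roi, iterations=100):
    """Run morphological Chan-Vese (active contour without edges) inside
    the bounding box proposed by the earlier classification stage."""
    z0, z1, y0, y1, x0, x1 = roi
    sub = volume[z0:z1, y0:y1, x0:x1].astype(float)
    sub = (sub - sub.min()) / (sub.max() - sub.min() + 1e-8)
    mask = morphological_chan_vese(sub, iterations,
                                   init_level_set="checkerboard", smoothing=2)
    full = np.zeros(volume.shape, dtype=bool)
    full[z0:z1, y0:y1, x0:x1] = mask.astype(bool)
    return full
```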

    Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels

    BACKGROUND: Accurate segmentation of brain tumours in magnetic resonance images (MRI) is a difficult task because of the variety of tumour types. Using information and features from multimodal MRI, including structural MRI and the isotropic (p) and anisotropic (q) components derived from diffusion tensor imaging (DTI), may result in a more accurate analysis of brain images. METHODS: We propose a novel 3D supervoxel-based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using the information across the multimodal MRI dataset. For each supervoxel, a variety of features is extracted, including histograms of the texton descriptor, calculated using a set of Gabor filters with different sizes and orientations, and first-order intensity statistical features. These features are fed into a random forests (RF) classifier to classify each supervoxel as tumour core, oedema or healthy brain tissue. RESULTS: The method is evaluated on two datasets: 1) our clinical dataset of 11 multimodal patient images and 2) the BRATS 2013 clinical dataset of 30 multimodal images. For our clinical dataset, the average detection sensitivity for tumour (including tumour core and oedema) using multimodal MRI is 86% with a balanced error rate (BER) of 7%, while the Dice score for automatic tumour segmentation against the ground truth is 0.84. The corresponding results on the BRATS 2013 dataset are 96%, 2% and 0.89, respectively. CONCLUSION: The method demonstrates promising results in the segmentation of brain tumour. Adding features from multimodal MRI images can substantially increase the segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
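    A minimal sketch of the supervoxel-plus-random-forest idea follows, assuming scikit-image SLIC for supervoxel generation and only first-order intensity statistics as features (the Gabor texton histograms of the paper are omitted). All names and parameter values are illustrative, not the authors' code.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier


def supervoxel_features(volume, n_segments=2000):
    """Partition a 3D volume into supervoxels and describe each one with
    first-order intensity statistics."""
    v = (volume - volume.mean()) / (volume.std() + 1e-8)   # z-score normalization
    labels = slic(v, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)        # grayscale 3D volume (skimage >= 0.19)
    _, idx = np.unique(labels, return_inverse=True)
    idx = idx.reshape(volume.shape)         # contiguous supervoxel ids 0..K-1
    feats = np.array([[v[idx == k].mean(), v[idx == k].std(),
                       np.percentile(v[idx == k], 25),
                       np.percentile(v[idx == k], 75)]
                      for k in range(idx.max() + 1)])
    return idx, feats


# Illustrative usage: y holds one label per supervoxel (core / oedema /
# healthy), e.g. obtained by majority vote over a manual ground-truth mask.
# idx, X = supervoxel_features(flair_volume)
# clf = RandomForestClassifier(n_estimators=200).fit(X, y)
# tissue_map = clf.predict(X)[idx]          # paint predictions back onto voxels
```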

    An Unpaired Cross-modality Segmentation Framework Using Data Augmentation and Hybrid Convolutional Networks for Segmenting Vestibular Schwannoma and Cochlea

    The crossMoDA challenge aims to automatically segment the vestibular schwannoma (VS) tumor and cochlea regions of unlabeled high-resolution T2 scans by leveraging labeled contrast-enhanced T1 scans. The 2022 edition extends the segmentation task by including multi-institutional scans. In this work, we propose an unpaired cross-modality segmentation framework using data augmentation and hybrid convolutional networks. Considering the heterogeneous distributions and varying image sizes of multi-institutional scans, we apply min-max normalization to scale the intensities of all scans to the range [-1, 1], and use voxel-size resampling and center cropping to obtain fixed-size sub-volumes for training. We adopt two data augmentation methods for effectively learning the semantic information and generating realistic target-domain scans: generative and online data augmentation. For generative data augmentation, we use CUT and CycleGAN to generate two groups of realistic T2 volumes with different details and appearances for supervised segmentation training. For online data augmentation, we design a random tumor-signal-reducing method that simulates the heterogeneity of VS tumor signals. Furthermore, we utilize an advanced hybrid convolutional network with multi-dimensional convolutions to adaptively learn sparse inter-slice information and dense intra-slice information for accurate volumetric segmentation of the VS tumor and cochlea regions in anisotropic scans. On the crossMoDA 2022 validation dataset, our method produces promising results, achieving mean DSC values of 72.47% and 76.48% and ASSD values of 3.42 mm and 0.53 mm for the VS tumor and cochlea regions, respectively.
    Comment: Accepted in the BrainLes MICCAI proceedings.
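    The sketch below illustrates the pre-processing and the tumor-signal-reducing augmentation described above, under assumed target spacing, crop size, and attenuation range; it is a hedged sketch, not the challenge submission itself.

```python
import numpy as np
from scipy.ndimage import zoom


def preprocess(volume, spacing, target_spacing=(1.0, 1.0, 1.0),
               crop_shape=(64, 192, 192)):
    """Scale intensities to [-1, 1], resample to a common voxel spacing,
    and center-crop (padding first if needed) to a fixed sub-volume."""
    v = volume.astype(np.float32)
    v = 2.0 * (v - v.min()) / (v.max() - v.min() + 1e-8) - 1.0
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    v = zoom(v, factors, order=1)                       # trilinear resampling
    pads = [(max(c - d, 0) // 2, max(c - d, 0) - max(c - d, 0) // 2)
            for d, c in zip(v.shape, crop_shape)]
    v = np.pad(v, pads, mode="constant", constant_values=-1.0)
    starts = [(d - c) // 2 for d, c in zip(v.shape, crop_shape)]
    return v[tuple(slice(s, s + c) for s, c in zip(starts, crop_shape))]


def reduce_tumor_signal(volume, tumor_mask, low=0.5, high=1.0, rng=None):
    """Online augmentation in the spirit of the random tumor-signal-reducing
    idea: attenuate intensities inside the tumor mask by a random factor."""
    if rng is None:
        rng = np.random.default_rng()
    out = volume.copy()
    out[tumor_mask] = out[tumor_mask] * rng.uniform(low, high)
    return out
```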

    HyperDense-Net: A hyper-densely connected CNN for multi-modal image segmentation

    Recently, dense connections have attracted substantial attention in computer vision because they facilitate gradient flow and implicit deep supervision during training. In particular, DenseNet, which connects each layer to every other layer in a feed-forward fashion, has shown impressive performance in natural image classification tasks. We propose HyperDenseNet, a 3D fully convolutional neural network that extends the definition of dense connectivity to multi-modal segmentation problems. Each imaging modality has its own path, and dense connections occur not only between pairs of layers within the same path, but also between layers across different paths. This contrasts with existing multi-modal CNN approaches, in which modeling several modalities relies entirely on a single joint layer (or level of abstraction) for fusion, typically either at the input or at the output of the network. The proposed network therefore has total freedom to learn more complex combinations between the modalities, within and between all levels of abstraction, which significantly increases its representation power. We report extensive evaluations on two different and highly competitive multi-modal brain tissue segmentation challenges, iSEG 2017 and MRBrainS 2013, with the former focusing on 6-month infant data and the latter on adult images. HyperDenseNet yielded significant improvements over many state-of-the-art segmentation networks, ranking at the top on both benchmarks. We further provide a comprehensive experimental analysis of feature re-use, which confirms the importance of hyper-dense connections in multi-modal representation learning. Our code is publicly available at https://www.github.com/josedolz/HyperDenseNet.
    Comment: Paper accepted at IEEE TMI in October 2018. The latest version of this paper updates the reference to the IEEE TMI paper that compares the submissions to the iSEG 2017 MICCAI Challenge.
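    The sketch below shows the hyper-dense connectivity pattern on a toy two-path 3D CNN in PyTorch: each convolution in either path receives the concatenation of all earlier feature maps from both paths. Layer counts, channel widths, and the number of output classes are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn


class HyperDenseBlock(nn.Module):
    """Toy 2-path hyper-dense block: every conv in each path sees the
    concatenated outputs of ALL previous convs from BOTH paths."""

    def __init__(self, in_ch=1, growth=8, num_layers=3, num_classes=4):
        super().__init__()
        self.layers_a = nn.ModuleList()
        self.layers_b = nn.ModuleList()
        for i in range(num_layers):
            c_in = 2 * in_ch + 2 * i * growth      # inputs + all earlier features
            self.layers_a.append(nn.Sequential(
                nn.Conv3d(c_in, growth, 3, padding=1), nn.ReLU(inplace=True)))
            self.layers_b.append(nn.Sequential(
                nn.Conv3d(c_in, growth, 3, padding=1), nn.ReLU(inplace=True)))
        self.head = nn.Conv3d(2 * in_ch + 2 * num_layers * growth,
                              num_classes, kernel_size=1)

    def forward(self, x_a, x_b):
        feats_a, feats_b = [x_a], [x_b]
        for layer_a, layer_b in zip(self.layers_a, self.layers_b):
            joint = torch.cat(feats_a + feats_b, dim=1)  # hyper-dense input
            feats_a.append(layer_a(joint))
            feats_b.append(layer_b(joint))
        return self.head(torch.cat(feats_a + feats_b, dim=1))


# Illustrative usage with two modality patches of identical size:
# t1  = torch.randn(1, 1, 32, 32, 32)
# t2  = torch.randn(1, 1, 32, 32, 32)
# out = HyperDenseBlock()(t1, t2)              # -> (1, 4, 32, 32, 32) logits
```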