9 research outputs found

    Brain Tumor Segmentation Methods based on MRI images: Review Paper

    Statistically, the average incidence rate of brain tumors is 26.55 per 100,000 for women and 22.37 per 100,000 for men. The most dangerous type of these tumors is the glioma. Glioblastomas, a cancerous form of glioma, are so aggressive that patients aged 40 to 64 have only a 5.3% five-year survival rate. Outcomes also depend strongly on the course of treatment: a median survival time of 331 to 529 days shows how severe this form of brain cancer commonly is. Unfortunately, the mean expenditure for glioblastoma treatment is around $100,000. Given these high mortality rates, gliomas and glioblastomas should be detected and diagnosed accurately in their early stages. Multimodal magnetic resonance imaging (MRI) is a suitable method for planning a course of treatment and for screening deterministic features of gliomas, including location, spread and volume. Progress in tumor segmentation has been driven by advances in computer vision; in particular, convolutional neural networks (CNNs) demonstrate stable and effective outcomes relative to other automated tumor segmentation methods. In this review, I present each method separately to assess its effectiveness and segmentation accuracy. Well-known techniques based on generative adversarial networks (GANs) also have an advantage in some domains for analyzing the nature of manual segmentations.

    Deep learning-based brain tumour image segmentation and its extension to stroke lesion segmentation

    Medical imaging plays a very important role in clinical cancer care, including diagnosis, treatment selection and evaluating the response to therapy. One of the best-known acquisition modalities is magnetic resonance imaging (MRI), which is widely used in the analysis of brain tumours under both conventional and advanced acquisition protocols. Due to the wide variation in the shape, location and appearance of tumours, automated segmentation in MRI is a difficult task. Although many studies have been conducted, automated segmentation remains difficult and work to improve the accuracy of tumour segmentation is still ongoing. This research aims to develop fully automated methods for segmenting the abnormal tissues associated with brain tumours (i.e. oedema, necrosis and enhancing tumour) from multimodal MRI images, to help radiologists diagnose conditions and plan treatment. In this thesis, machine-learned features from a deep convolutional neural network (CIFAR) are investigated and combined with hand-crafted histogram texture features to encode global information and local dependencies in the feature representation. The combined features are then applied in a decision tree (DT) classifier to group individual pixels into normal brain tissues and the various parts of a tumour. These features also give clinicians a good view of the texture of tumour and sub-tumour regions. To further improve the segmentation of tumour and sub-tumour tissues, 3D datasets of the four MRI modalities (i.e. FLAIR, T1, T1ce and T2) are used, and fully convolutional neural networks, called SegNet, are constructed for each of these four modalities. The outputs of these four SegNet models are then fused by choosing the one with the highest scores to construct feature maps, which, together with the pixel intensities, are input to a DT classifier that further classifies each pixel as either normal brain tissue or a component part of a tumour. To achieve high segmentation accuracy overall, deep learning (the SegNet network) and hand-crafted features are combined, in particular the grey-level co-occurrence matrix (GLCM) computed in the region of interest (ROI) that is initially detected from FLAIR modality images using the SegNet network. The methods developed in this thesis (i.e. CIFAR_PI_HIS_DT, SegNet_Max_DT and SegNet_GLCM_DT) are evaluated on two datasets: the first is the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2017 dataset, and the second is a clinical dataset. In brain tumour segmentation, an F-measure above 0.83 is accepted as useful from a clinical point of view for segmenting the whole tumour structure, which represents the brain tumour boundaries. By this standard, the proposed methods show promising results in the segmentation of brain tumour structures, providing a close match to expert delineation across all grades of glioma. To further detect brain injury, these three methods were adopted for ischemic stroke lesion segmentation. For training and evaluation, the publicly available Ischemic Stroke Lesion (ISLES 2015) dataset and a clinical dataset were used, and the performance of the three developed methods in ischemic stroke lesion segmentation was assessed. The third method (SegNet_GLCM_DT) was found to be more accurate than the other two (CIFAR_PI_HIS_DT and SegNet_Max_DT) because it exploits GLCM as a set of hand-crafted features alongside machine-learned features, which increases segmentation accuracy for ischemic stroke lesions.
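The max-score fusion and decision-tree step described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the array shapes, random score maps and the rule used to make toy labels are all assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-modality score maps (FLAIR, T1, T1ce, T2): one
# tumour-probability score per pixel from each SegNet-style model.
rng = np.random.default_rng(0)
h, w = 8, 8
score_maps = rng.random((4, h, w))   # shape: (modalities, H, W)
intensities = rng.random((4, h, w))  # raw pixel intensities per modality

# Fusion step from the abstract: keep, per pixel, the highest score
# across the four modality-specific models.
fused_scores = score_maps.max(axis=0)  # shape: (H, W)

# Per-pixel feature vector: fused score plus the four raw intensities.
features = np.concatenate(
    [fused_scores.reshape(-1, 1), intensities.reshape(4, -1).T], axis=1
)

# Toy ground-truth labels (0 = normal tissue, 1 = tumour component);
# a real pipeline would use expert delineations instead.
labels = (fused_scores.reshape(-1) > 0.5).astype(int)

# Decision-tree classifier assigns each pixel a tissue class.
dt = DecisionTreeClassifier(max_depth=4, random_state=0).fit(features, labels)
pred = dt.predict(features)
print(pred.shape)  # (64,)
```

The same pattern extends to multi-class labels (normal tissue plus several tumour sub-regions) by letting the label array take more than two values.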

    Development of a 3D Mouse Atlas Tool for Improved Non-Invasive Imaging of Orthotopic Mouse Models of Pancreatic Cancer.

    Pancreatic cancer is the 10th most common cancer in the UK, with 10,000 people a year being diagnosed. This form of cancer also has one of the lowest survival rates, with only 5% of patients surviving for 5 years (1). There has not been significant progress in the treatment of pancreatic cancer for the last 30 years (1). Recognition of this historic lack of progress has led to an increase in research effort and funding aimed at developing novel treatments for pancreatic cancer. This in turn has had an inflationary effect on the number of animals being used to study the effects of these treatments. Genetically engineered mouse models (GEMMs) are currently thought to be most appropriate for these types of studies, as the manner in which the mice develop pancreatic tumours is much closer to that seen in the clinic. One such GEMM is the K-rasLSL.G12D/+;p53R172H/+;PdxCre (KPC) model (2), in which the mouse is born with a normal pancreas and then develops PanIN lesions (one of the main lesions linked to pancreatic ductal adenocarcinoma (PDAC) (2)) at an accelerated rate. The KPC model is immune competent, and because the tumours develop orthotopically in the pancreas they have a relevant microenvironment and stromal makeup, suitable for testing new therapeutic approaches. Unlike the human pancreas, which is regular in shape, the mouse pancreas is a soft and spongy organ whose dimensions are defined to a large extent by the position of the organs that surround it, such as the kidney, stomach and spleen (3). This changes as pancreatic tumours develop, with the elasticity of the pancreas decreasing as the tissue becomes more desmoplastic. Because the tumours are deep within the body, disease burden is difficult to assess except by sacrificing groups of animals or by using non-invasive imaging. Collecting data by sacrificing groups of animals at different timepoints results in very high numbers of animals being used per study. This is in addition to the fact that in the KPC model (as in other GEMMs), fewer than 25% of mice have the desired genetic makeup, meaning that 3-4 animals are destroyed for every one that is put into a study (2). Therefore, in order to reduce the number of animals used in pancreatic research, a non-invasive imaging tool that allows accurate longitudinal assessment of pancreatic tumour burden has been developed. Magnetic resonance imaging (MRI) has been used as it is not operator dependent (allowing it to be used by non-experts) and does not use ionising radiation, a potential confounding factor when monitoring tumour development. The tool has been developed for use with a low-field (1T) instrument, which ensures its universal applicability, as it will perform even better with magnets of field strength higher than 1T. This work started from an existing 3D computational mouse atlas and developed a mathematical model that can automatically detect and segment the mouse pancreas, as well as pancreatic tumours, in MRI images. This has been achieved using multiple image analysis techniques, including thresholding, texture analysis, object detection, edge detection, multi-atlas segmentation and machine learning. Through these techniques, unnecessary information is removed from the image, the area of analysis is reduced, the pancreas is isolated (and then classified as healthy or unhealthy), and, if unhealthy, the pancreas is evaluated to identify tumour location and volume. This semi-automated approach aims to aid researchers by reducing image analysis time (especially for non-expert users) and increasing both objectivity and statistical accuracy. It facilitates the use of MRI as a method of longitudinally tracking tumour development and measuring response to therapy in the same animal, thus reducing biological variability and leading to a reduction in group size. The MR images of mice and pancreatic tumours used in this work were obtained through studies already being conducted, in order to reduce the number of animals used without compromising the validity of the results.
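The thresholding and area-of-analysis reduction steps mentioned above can be sketched roughly as follows. The synthetic slice, the intensity values, the thresholding rule and the voxel size are all hypothetical stand-ins, not values or methods from the actual 1T mouse data.

```python
import numpy as np

# Synthetic 2D "MR slice": background around intensity 100, with a
# brighter hypothetical region of interest standing in for the organ.
rng = np.random.default_rng(1)
slice_img = rng.normal(100.0, 10.0, size=(64, 64))
slice_img[20:40, 20:40] += 80.0  # illustrative bright region

# Step 1: remove background by simple global thresholding
# (a real pipeline might use Otsu's method or texture analysis).
mask = slice_img > slice_img.mean() + slice_img.std()

# Step 2: reduce the area of analysis to the mask's bounding box.
rows, cols = np.nonzero(mask)
roi = slice_img[rows.min():rows.max() + 1, cols.min():cols.max() + 1]

# Step 3: estimate burden as voxel count times voxel volume
# (hypothetical isotropic 0.1 mm voxels).
voxel_volume_mm3 = 0.1 ** 3
volume_mm3 = mask.sum() * voxel_volume_mm3
print(mask.sum(), volume_mm3)
```

In the full pipeline this step would feed into multi-atlas registration and a learned healthy/unhealthy classifier; the sketch only shows how thresholding shrinks the search space before those stages.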

    Supervised learning-based multimodal MRI brain image analysis

    Medical imaging plays an important role in clinical procedures related to cancer, such as diagnosis, treatment selection and therapy response evaluation. Magnetic resonance imaging (MRI) is one of the most popular acquisition modalities; it is widely used in brain tumour analysis and can be acquired with different acquisition protocols, e.g. conventional and advanced. Automated segmentation of brain tumours in MR images is a difficult task due to their high variation in size, shape and appearance. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of tumour segmentation is an ongoing effort. The aim of this thesis is to develop a fully automated method for the detection and segmentation of the abnormal tissue associated with brain tumours (tumour core and oedema) from multimodal MRI images. In this thesis, firstly, the whole brain tumour is segmented from fluid attenuated inversion recovery (FLAIR) MRI, which is commonly acquired in clinics. The segmentation is achieved using region-wise classification, in which regions are derived from superpixels. Several image features, including intensity-based features, Gabor textons, fractal analysis and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. Extremely randomised trees (ERT) then classify each superpixel as tumour or non-tumour. Secondly, the method is extended to 3D supervoxel-based learning for segmentation and classification of tumour tissue subtypes in multimodal MRI brain images. Supervoxels are generated using information from across the multimodal MRI data set. This is followed by a random forests (RF) classifier that classifies each supervoxel as tumour core, oedema or healthy brain tissue. Information from the advanced diffusion tensor imaging (DTI) protocol, i.e. the isotropic (p) and anisotropic (q) components, is also incorporated into the conventional MRI to improve segmentation accuracy. Thirdly, to further improve the segmentation of tumour tissue subtypes, machine-learned features from a fully convolutional neural network (FCN) are investigated and combined with hand-designed texton features to encode global information and local dependencies in the feature representation. The score map with pixel-wise predictions is used as a feature map, learned from the multimodal MRI training dataset using the FCN. The machine-learned features, along with the hand-designed texton features, are then applied to random forests to classify each MRI image voxel into normal brain tissues and the different parts of the tumour. The methods are evaluated on two datasets: 1) a clinical dataset, and 2) the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2013 and 2017 datasets. The experimental results demonstrate the high detection and segmentation performance of the single-modal (FLAIR) method. The average detection sensitivity, balanced error rate (BER) and Dice overlap measure for the segmented tumour against the ground truth are 89.48%, 6% and 0.91 respectively for the clinical data, and 88.09%, 6% and 0.88 respectively for the BRATS dataset. The corresponding results for the tumour (including tumour core and oedema) in the multimodal MRI method are 86%, 7% and 0.84 for the clinical dataset, and 96%, 2% and 0.89 for the BRATS 2013 dataset. The results of the FCN-based method show that applying the RF classifier to multimodal MRI images using machine-learned features based on the FCN and hand-designed features based on textons provides promising segmentations.
The Dice overlap measure for automatic brain tumour segmentation against the ground truth for the BRATS 2013 dataset is 0.88, 0.80 and 0.73 for the complete tumour, core and enhancing tumour respectively, which is competitive with the state-of-the-art methods. The corresponding results for the BRATS 2017 dataset are 0.86, 0.78 and 0.66 respectively. The methods demonstrate promising results in the segmentation of brain tumours, providing a close match to expert delineation across all grades of glioma and leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. In the experiments, textons have demonstrated their advantage in providing significant information to distinguish various patterns in both 2D and 3D spaces. The segmentation accuracy has also been largely increased by fusing information from multimodal MRI images. Moreover, a unified framework is presented which complementarily integrates hand-designed features with machine-learned features to produce more accurate segmentation. The hand-designed features from the shallow network (with designable filters) encode prior knowledge and context, while the machine-learned features from the deep network (with trainable filters) learn intrinsic features. Combining both global and local information through these two types of networks improves segmentation accuracy.
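A rough sketch of the superpixel-style classification with extremely randomised trees, together with the Dice overlap measure used for evaluation throughout this abstract. The toy feature vectors and labels are invented for illustration; they do not reflect the real Gabor texton, fractal or curvature features computed in the thesis.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def dice(pred, truth):
    """Dice overlap between two binary masks (1.0 = perfect match)."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

# Hypothetical per-superpixel features: mean intensity plus two
# texture-like statistics standing in for texton / fractal values.
rng = np.random.default_rng(2)
X = rng.random((200, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0.6).astype(int)  # toy tumour labels

# Extremely randomised trees classify each superpixel as
# tumour (1) or non-tumour (0).
ert = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = ert.predict(X)
print(round(dice(pred, y), 2))
```

The same `dice` function, applied voxel-wise to a predicted mask and an expert delineation, yields the 0.88/0.80/0.73-style scores reported above for the complete, core and enhancing tumour regions.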