
    Deep learning-based brain tumour image segmentation and its extension to stroke lesion segmentation

    Medical imaging plays a very important role in the clinical management of cancer, including diagnosis, treatment selection, and evaluating the response to therapy. One of the best-known acquisition modalities is magnetic resonance imaging (MRI), which is widely used in the analysis of brain tumours through both conventional and advanced acquisition protocols. Owing to the wide variation in the shape, location and appearance of tumours, automated segmentation in MRI is a difficult task, and although many studies have been conducted, work to improve the accuracy of tumour segmentation is still ongoing. This research aims to develop fully automated methods for segmenting the abnormal tissues associated with brain tumours (i.e. oedema, necrosis and enhancing tumour) from multimodal MRI images, helping radiologists to diagnose conditions and plan treatment. In this thesis, machine-learned features from a deep convolutional neural network (the CIFAR network) are investigated and joined with hand-crafted histogram texture features to encode both global information and local dependencies in the feature representation. The combined features are then applied in a decision tree (DT) classifier to group individual pixels into normal brain tissues and the various parts of a tumour. These features also give clinicians a useful view for accurately visualising the texture of tumour and sub-tumour regions. To further improve the segmentation of tumour and sub-tumour tissues, 3D datasets of the four MRI modalities (i.e. FLAIR, T1, T1ce and T2) are used, and a fully convolutional neural network, SegNet, is constructed for each of these four image modalities.
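The first method described above combines learned per-pixel features with histogram texture features and feeds them to a decision tree. A minimal sketch of that idea, using random arrays as stand-ins for the thesis's CNN features, intensities and labels (all names and sizes here are illustrative assumptions, not the actual pipeline):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def histogram_texture_features(patch, bins=8):
    # Simple histogram-based texture descriptor for one local patch:
    # normalised grey-level histogram plus mean and standard deviation.
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    hist = hist / max(patch.size, 1)
    return np.concatenate([hist, [patch.mean(), patch.std()]])

# Toy stand-ins for the inputs: per-pixel CNN feature vectors,
# raw pixel intensities, and 5x5 neighbourhoods around each pixel.
rng = np.random.default_rng(0)
n_pixels = 200
cnn_features = rng.normal(size=(n_pixels, 16))   # machine-learned features
intensities = rng.uniform(size=(n_pixels, 1))    # pixel intensities
patches = rng.uniform(size=(n_pixels, 5, 5))     # local neighbourhoods
labels = rng.integers(0, 4, size=n_pixels)       # 0 = normal, 1-3 = tumour parts

texture = np.stack([histogram_texture_features(p) for p in patches])
combined = np.hstack([cnn_features, texture, intensities])

# A decision tree groups each pixel into normal tissue or a tumour part.
clf = DecisionTreeClassifier(max_depth=10, random_state=0).fit(combined, labels)
pred = clf.predict(combined)
```

The appeal of this design is that the concatenated feature vector lets one shallow, interpretable classifier see both the global CNN representation and the local texture statistics at once.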
The outputs of these four SegNet models are then fused by choosing, for each pixel, the model with the highest score to construct feature maps, which, together with the pixel intensities, are input to a DT classifier that further classifies each pixel as either normal brain tissue or a component part of a tumour. To achieve high segmentation accuracy overall, deep learning (the SegNet network) is combined with hand-crafted features, in particular grey-level co-occurrence matrix (GLCM) features computed in a region of interest (ROI) that is initially detected from FLAIR modality images using the SegNet network. The methods developed in this thesis (i.e. CIFAR_PI_HIS_DT, SegNet_Max_DT and SegNet_GLCM_DT) are evaluated on two datasets: the first is the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2017 dataset, and the second is a clinical dataset. In brain tumour segmentation, an F-measure above 0.83 is generally accepted as clinically useful for segmenting the whole tumour structure, which represents the brain tumour boundaries. By this criterion, the proposed methods show promising results in the segmentation of brain tumour structures, and they provide a close match to expert delineation across all grades of glioma. To further detect brain injury, these three methods were adapted and exploited for ischemic stroke lesion segmentation. For training and evaluation, the publicly available Ischemic Stroke Lesion Segmentation (ISLES 2015) dataset and a clinical dataset were used, and the accuracy of the three developed methods in ischemic stroke lesion segmentation was assessed. The third method (SegNet_GLCM_DT) was found to be more accurate than the other two (CIFAR_PI_HIS_DT and SegNet_Max_DT) because it exploits GLCM as a set of hand-crafted features alongside machine-learned features, which increases segmentation accuracy for ischemic stroke lesions.
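The max-score fusion step above can be sketched in a few lines: given one class-score map per modality, pick, per pixel, the modality whose best class score is highest and keep its score vector. The shapes and random maps here are illustrative assumptions, not SegNet outputs:

```python
import numpy as np

# Hypothetical per-modality class-score maps, one (H, W, n_classes) array per
# MRI modality, as a softmax layer of each per-modality model might produce.
rng = np.random.default_rng(1)
H, W, n_classes = 4, 4, 4
score_maps = {m: rng.uniform(size=(H, W, n_classes))
              for m in ["FLAIR", "T1", "T1ce", "T2"]}

stacked = np.stack(list(score_maps.values()))   # (4, H, W, n_classes)
best_class_score = stacked.max(axis=-1)         # (4, H, W): each model's top score
winner = best_class_score.argmax(axis=0)        # (H, W): winning modality per pixel

# Fused feature map: for each pixel, the score vector of the winning modality.
fused = np.take_along_axis(
    stacked, winner[None, :, :, None], axis=0).squeeze(0)   # (H, W, n_classes)
```

In the thesis pipeline, a map fused in this spirit is then paired with the pixel intensities and passed to the DT classifier.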

    Image classification-based brain tumour tissue segmentation

    Brain tumour tissue segmentation is essential for clinical decision making. Manual segmentation is time-consuming, tedious and subjective, yet developing automatic segmentation methods is also very challenging. Deep learning with convolutional neural network (CNN) architectures has consistently outperformed previous methods on such challenging tasks; however, CNN models cannot fully reflect the local dependencies of pixel classes. In contrast, hand-crafted features such as histogram-based texture features provide robust descriptors of local pixel dependencies. In this paper, a classification-based method for automatic brain tumour tissue segmentation is proposed using combined CNN-based and hand-crafted features. The CIFAR network is modified to extract CNN-based features, and histogram-based texture features are fused in to compensate for the limitations of the CIFAR network. These features, together with the pixel intensities of the original MRI images, are sent to a decision tree that classifies the MRI image voxels into different types of tumour tissue. The method is evaluated on the BraTS 2017 dataset, and experiments show that the proposed method produces promising segmentation results.
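Both abstracts above report segmentation quality as an F-measure between a predicted mask and the expert delineation. As a reminder of the metric, a minimal sketch computing the F-measure (equivalently, the Dice coefficient) on two made-up binary whole-tumour masks, not the papers' data:

```python
import numpy as np

def f_measure(pred, truth):
    # F1 score between two binary masks: 2*TP / (2*TP + FP + FN).
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return 2 * tp / (2 * tp + fp + fn)

# Toy masks standing in for a prediction and an expert delineation.
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True          # 16 tumour pixels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True           # overlaps 12 of them, 4 false positives

score = f_measure(pred, truth)  # 2*12 / (2*12 + 4 + 4) = 0.75
```

By the clinical-usefulness criterion quoted in the first abstract, a whole-tumour score above roughly 0.83 would count as acceptable.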

    Simulation Recording of an ECG, PCG, and PPG for Feature Extractions

    Recently, the development of the field of biomedical engineering has led to renewed interest in the detection of several physiological events. In this paper, a new approach is used to detect specific parameters of, and relations between, three biomedical signals used in clinical diagnosis: the phonocardiogram (PCG), the electrocardiogram (ECG) and the photoplethysmogram (PPG), the last of which is sometimes called the carotid pulse, depending on the electrode position. Comparisons between three cases (two normal and one abnormal) are used to indicate the delay that may occur due to a deficiency of the cardiac muscle or a valve in the abnormal case. The results show that S1 and S2, the first and second heart sounds respectively, can be determined from another signal such as the ECG. Moreover, the position of the QRS complex and the end of the T wave can be estimated using the PPG signal.
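The cross-signal idea above rests on fixed timing relations, e.g. S1 follows the ECG R peak by a short interval. A minimal sketch on a synthetic toy ECG, assuming a crude threshold detector and an illustrative 40 ms R-to-S1 offset (neither is the paper's method or a measured value):

```python
import numpy as np

# Synthetic toy ECG: idealised R-peak impulses on slow baseline wander.
fs = 250                                    # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
ecg = 0.05 * np.sin(2 * np.pi * 1.0 * t)    # baseline wander
r_times = [0.5, 1.3, 2.1, 2.9, 3.7]         # known R-peak times (s)
for rt in r_times:
    ecg[round(rt * fs)] += 1.0              # one-sample R peaks

# Threshold-based R-peak detection: a crude stand-in for QRS detection.
threshold = 0.5 * ecg.max()
peaks = np.flatnonzero(ecg > threshold) / fs   # detected R-peak times (s)

# S1 normally follows the R peak by a short, roughly constant interval;
# the 40 ms used here is an illustrative assumption only.
s1_estimates = peaks + 0.04
```

On real recordings the same logic would need band-pass filtering and a proper QRS detector, but the structure, locating one signal's events from another's fiducial points, is the relation the abstract describes.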