57 research outputs found

    Quantification of tumour heterogeneity in MRI

    Cancer is the leading cause of death and touches us all, either directly or indirectly. The number of newly diagnosed cases in the Netherlands is estimated to increase to 123,000 by the year 2020. General Dutch statistics are similar to those in the UK: over the last ten years, the age-standardised incidence rate has stabilised at around 355 females and 415 males per 100,000. Figure 1 shows the cancer incidence per gender. In the UK, the lifetime risk of cancer has risen to more than one in three and depends on many factors, including age, lifestyle and genetic makeup.

    Machine Learning Models for Multiparametric Glioma Grading With Quantitative Result Interpretations

    Gliomas are the most common primary malignant brain tumors in adults. Accurate grading is crucial, as therapeutic strategies often differ between grades and may influence patient prognosis. This study aims to provide an automated glioma grading platform based on machine learning models. In this paper, we investigate the contributions of multiple parameters from multimodal data, including imaging parameters and features from Whole Slide Images (WSI) and the proliferation marker Ki-67, to automated brain tumor grading. For each WSI, we extract both visual parameters, such as morphology parameters, and sub-visual parameters, including first-order and second-order features. Based on machine learning models, our platform classifies gliomas into grades II, III, and IV. Furthermore, we quantitatively interpret and reveal the important parameters contributing to grading with the Local Interpretable Model-Agnostic Explanations (LIME) algorithm. The quantitative analysis and explanation may help clinicians better understand the disease and choose optimal treatments to improve clinical outcomes. The performance of our grading model was evaluated with cross-validation, which randomly divided the patients into non-overlapping training and testing sets and repeatedly validated the model on the different testing sets. The primary results indicate that this modular platform achieved a top grading accuracy of 0.90 ± 0.04 with the support vector machine (SVM) algorithm, with grading accuracies of 0.91 ± 0.08, 0.90 ± 0.08, and 0.90 ± 0.07 for grade II, III, and IV gliomas, respectively.
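    The evaluation protocol described above — repeated random, non-overlapping train/test splits reported as mean ± standard deviation accuracy — can be sketched as follows. This is a minimal illustration, not the authors' code: the function name is hypothetical, and a simple nearest-centroid classifier stands in for the SVM used in the paper.

```python
import numpy as np

def repeated_holdout_accuracy(X, y, n_repeats=5, test_frac=0.3, seed=0):
    """Repeatedly split cases into non-overlapping train/test sets and
    report mean/std accuracy, mirroring the paper's evaluation protocol.
    A nearest-centroid classifier stands in for the SVM here."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(y))
        n_test = int(len(y) * test_frac)
        test, train = idx[:n_test], idx[n_test:]
        # Fit: one centroid per class on the training split only.
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        classes = np.array(sorted(centroids))
        # Predict: assign each test case to its nearest class centroid.
        dists = np.stack([np.linalg.norm(X[test] - centroids[c], axis=1)
                          for c in classes])
        pred = classes[np.argmin(dists, axis=0)]
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs)), float(np.std(accs))
```

    Because splits are non-overlapping per repeat, each repeat yields an independent test-set accuracy, and the mean ± std summarizes variability across repeats, as in the reported 0.90 ± 0.04.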

    Quantitative analysis with machine learning models for multi-parametric brain imaging data

    Gliomas are the most common primary malignant brain tumors in adults. With dramatic increases in computational power and improvements in image analysis algorithms, computer-aided medical image analysis has been introduced into clinical practice. Precise tumor grading and genotyping play an indispensable role in clinical diagnosis, treatment and prognosis. Glioma diagnostic procedures include histopathological imaging tests, molecular imaging scans and tumor grading. Pathologic review of tumor morphology in histologic sections is the traditional method for cancer classification and grading, yet human review has limitations that can result in low reproducibility and poor inter-observer agreement. Compared with histopathological images, magnetic resonance (MR) images present different structural and functional features, which might serve as noninvasive surrogates for tumor genotypes. Therefore, computer-aided image analysis has been adopted in clinical applications, as it might partially overcome these shortcomings through its capacity to quantitatively and reproducibly measure multilevel features from multi-parametric medical information. Imaging features obtained from a single imaging modality do not fully represent the disease, so quantitative imaging features, including morphological, structural, cellular and molecular-level features derived from multi-modality medical images, should be integrated into computer-aided medical image analysis. The difference in image quality between multi-modality images is a challenge in the field of computer-aided medical image analysis. In this thesis, we aim to integrate quantitative imaging data obtained from multiple modalities into mathematical models of tumor response prediction to achieve additional insights into practical predictive value. Our major contributions in this thesis are:
    1. Firstly, to address the difference in imaging quality and the observer dependence of histological image diagnosis, we proposed an automated machine-learning brain tumor-grading platform to investigate the contributions of multiple parameters from multimodal data, including imaging parameters and features from Whole Slide Images (WSI) and the proliferation marker Ki-67. For each WSI, we extract both visual parameters, such as morphology parameters, and sub-visual parameters, including first-order and second-order features. A quantitative, interpretable machine learning approach (Local Interpretable Model-Agnostic Explanations) was followed to measure the contribution of features for each single case. Most grading systems based on machine learning models are considered “black boxes,” whereas with this system the clinically trusted reasoning can be revealed. The quantitative analysis and explanation may help clinicians better understand the disease and choose optimal treatments to improve clinical outcomes.
    2. Based on the automated brain tumor-grading platform we propose, multimodal Magnetic Resonance Images (MRIs) were introduced into our research. A new imaging–tissue-correlation-based approach called RA-PA-Thomics was proposed to predict the IDH genotype. Inspired by the concept of image fusion, we integrate multimodal MRIs and scans of histopathological images for indirect, fast, and cost-saving IDH genotyping. The proposed model has been verified by multiple evaluation criteria on the integrated data set and compared to results in the prior art. The experimental data set includes public data sets and image information from two hospitals. Experimental results indicate that the model improves the accuracy of glioma grading and genotyping.
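    The per-case explanation step described above (LIME) fits a proximity-weighted linear surrogate around a single case: the case's features are perturbed, the black-box model is queried on the perturbations, and the surrogate's coefficients rank each feature's local contribution. A minimal sketch of that idea, with a hypothetical function name and a Gaussian proximity kernel standing in for LIME's kernel:

```python
import numpy as np

def local_linear_explanation(predict_fn, x, n_samples=500, scale=0.5, seed=0):
    """LIME-style sketch: perturb one case, query the black-box model,
    and fit a proximity-weighted linear surrogate whose coefficients
    rank per-feature contributions for that single case."""
    rng = np.random.default_rng(seed)
    # Perturb the case of interest in feature space.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    yz = predict_fn(Z)  # black-box scores for the perturbations
    # Proximity kernel: perturbations near x get more weight.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2.0 * scale ** 2))
    # Weighted least squares for the linear surrogate (with intercept).
    A = np.hstack([Z, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], yz * sw, rcond=None)
    return coef[:-1]  # per-feature local weights (intercept dropped)
```

    The sign and magnitude of each returned weight indicate how that feature pushes the model's score for this particular case, which is the kind of single-case reasoning the platform exposes to clinicians.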

    A new feature-based wavelet completed local ternary pattern (FEAT-WCLTP) for texture and medical image classification

    Nowadays, texture image descriptors are used in many important real-life applications. The use of texture analysis in texture and medical image classification has attracted considerable attention. Local Binary Patterns (LBP) is one of the simplest yet most effective texture descriptors, but it has some limitations that may affect its accuracy. Hence, different variants of LBP were proposed to overcome LBP’s drawbacks and enhance its classification accuracy. The completed local ternary pattern (CLTP) is one of the significant LBP variants. However, CLTP suffers from two main limitations: the threshold value is selected manually, and its high dimensionality negatively affects the descriptor’s performance and leads to high computational cost. This research aims to improve the classification accuracy of CLTP and overcome the computational limitation by proposing new descriptors inspired by CLTP. Therefore, this research introduces two contributions. The first is a new descriptor that integrates the redundant discrete wavelet transform (RDWT) with the original CLTP, namely the wavelet completed local ternary pattern (WCLTP). Extracting CLTP in the wavelet domain helps increase classification accuracy thanks to the shift-invariance property of RDWT. First, the image is decomposed into four sub-bands (LL, LH, HL, HH) using RDWT; then CLTP is extracted from the LL wavelet coefficients. The second is a reduction in the dimensionality of WCLTP and a new texture descriptor, namely the feature-based wavelet completed local ternary pattern (Feat-WCLTP). The proposed Feat-WCLTP can enhance CLTP’s performance and reduce its high dimensionality: the mean and variance of the values of the selected texture pattern are used instead of the normal magnitude texture descriptor of CLTP. The performance of the proposed WCLTP and Feat-WCLTP was evaluated using four texture datasets (OuTex, CUReT, UIUC and Kylberg) and two medical datasets (2D HeLa and Breast Cancer), then compared with several well-known LBP variants. The proposed WCLTP outperformed the previous descriptors and achieved the highest classification accuracy in all experiments. The results for the texture datasets are 99.35% on OuTex, 96.57% on CUReT, 94.80% on UIUC and 99.88% on Kylberg. The results for the medical datasets are 84.19% on 2D HeLa and 92.14% on Breast Cancer. The proposed Feat-WCLTP not only overcomes the dimensionality problem but also considerably improves classification accuracy: for the texture datasets, 99.66% on OuTex, 96.89% on CUReT, 95.23% on UIUC and 99.92% on Kylberg; for the medical datasets, 84.42% on 2D HeLa and 89.12% on Breast Cancer. Moreover, the proposed Feat-WCLTP reduces the size of the feature vector for texture pattern (1,8) to 160 bins instead of the 400 bins of WCLTP. The proposed WCLTP and Feat-WCLTP offer better classification accuracy and dimensionality than the original CLTP.
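    The base descriptor that CLTP and WCLTP build on can be illustrated concretely. The sketch below computes a basic LBP(P=8, R=1) histogram: each pixel's eight neighbours are thresholded against the centre pixel, the resulting bits form a code in [0, 255], and the image is summarised as a normalised 256-bin histogram. This is a minimal, assumed implementation of plain LBP only — CLTP additionally uses a ternary threshold t that splits codes into upper/lower patterns, and WCLTP applies the extraction on the RDWT LL sub-band rather than the raw image.

```python
import numpy as np

def lbp_histogram(img):
    """Minimal LBP(P=8, R=1): threshold each pixel's 8 neighbours
    against the centre pixel, pack the bits into an 8-bit code, and
    return a normalised 256-bin histogram of the codes."""
    c = img[1:-1, 1:-1]  # interior pixels (border has no full neighbourhood)
    # Neighbour offsets, clockwise from the top-left.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)  # one bit per neighbour
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()
```

    The dimensionality issue the abstract targets is visible here: plain LBP already needs 256 bins per image region, and CLTP's ternary split multiplies the pattern count further, which is what motivates Feat-WCLTP's compact mean/variance summary (160 bins instead of 400).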

    Clinical decision support system, a potential solution for diagnostic accuracy improvement in oral squamous cell carcinoma: A systematic review

    BACKGROUND AND AIM: Oral squamous cell carcinoma (OSCC) is a rapidly progressive disease and, despite progress in the treatment of cancer, remains a life-threatening illness with a poor prognosis. Diagnostic techniques for the oral cavity are painless, non-invasive, simple and inexpensive. Clinical decision support systems (CDSSs) are among the most important diagnostic technologies used to help health professionals analyze patients’ data and make decisions. By studying CDSS applications in the process of providing care for cancer patients, this paper examines the potential of CDSSs in OSCC diagnosis. METHODS: We retrieved relevant articles indexed in the MEDLINE/PubMed database using high-quality keywords. During screening, first the titles and then the abstracts of the related articles were reviewed. Only research articles that designed a clinical decision support system for some stage of cancer care were retained, according to the inclusion criteria. RESULTS: Various studies have examined the important roles of CDSSs in health processes related to different types of cancer. According to their aims, we categorized the studies into several groups: treatment, diagnosis, risk assessment, screening, and survival estimation. CONCLUSION: Successful experiences with CDSS applications in different types of cancer indicate that machine learning methods have a high potential to manage data intelligently and accurately and to improve diagnosis in OSCC. KEYWORDS: Squamous Cell Carcinoma; Clinical Decision Support System; Neoplasm; Dental Informatics