224 research outputs found

    Full-resolution Lung Nodule Segmentation from Chest X-ray Images using Residual Encoder-Decoder Networks

    Lung cancer is the leading cause of cancer death, and early diagnosis is associated with a positive prognosis. Chest X-ray (CXR) provides an inexpensive imaging mode for lung cancer diagnosis. Suspicious nodules are difficult to distinguish from vascular and bone structures using CXR. Computer vision has previously been proposed to assist human radiologists in this task; however, leading studies use down-sampled images and computationally expensive methods with unproven generalization. Instead, this study localizes lung nodules using efficient encoder-decoder neural networks that process full-resolution images to avoid any signal loss resulting from down-sampling. Encoder-decoder networks are trained and tested using the JSRT lung nodule dataset. The networks are then used to localize lung nodules in an independent external CXR dataset. Sensitivity and false positive rates are measured using an automated framework to eliminate observer subjectivity. These experiments allow for the determination of the optimal network depth, image resolution, and pre-processing pipeline for generalized lung nodule localization. We find that nodule localization is influenced by subtlety, with more subtle nodules being detected in earlier training epochs. Therefore, we propose a novel self-ensemble model built from three consecutive epochs centered on the validation optimum. This ensemble achieved a sensitivity of 85% in 10-fold internal testing at a false positive rate of 8 per image. A sensitivity of 81% is achieved at a false positive rate of 6 following morphological false positive reduction. This result is comparable to more computationally complex systems based on linear and spatial filtering, but with a sub-second inference time that is faster than other methods. The proposed algorithm achieved excellent generalization on an external dataset, with a sensitivity of 77% at a false positive rate of 7.6.
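The self-ensemble described above can be sketched as a simple average of per-pixel probability maps from checkpoints saved at three consecutive epochs centred on the validation optimum. This is a minimal illustration of the idea, not the authors' code; the function name and array shapes are assumptions.

```python
import numpy as np

def self_ensemble(prob_maps):
    """Average per-pixel nodule probability maps produced by model
    checkpoints from three consecutive epochs centred on the validation
    optimum. `prob_maps` is a list of HxW float arrays in [0, 1]."""
    return np.stack(prob_maps, axis=0).mean(axis=0)

# Hypothetical example: three 2x2 probability maps from epochs N-1, N, N+1.
maps = [np.full((2, 2), p) for p in (0.2, 0.5, 0.8)]
ensembled = self_ensemble(maps)
print(ensembled)  # every pixel -> 0.5
```

Averaging checkpoints this way keeps the earlier-epoch sensitivity to subtle nodules while retaining the later-epoch validation optimum.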

    Ensemble Methods for Lung Cancer Gene Mutation Prediction

    Previous results from the project "Lung Cancer Screening - A non-invasive methodology for early diagnosis" and the literature suggest that the most relevant information for predicting mutation status in lung cancer might be the combination of features from the nodule and other lung structures. Quantitative features extracted from cancer nodules have been used to create predictive models for gene mutation status and screening. Novel ensemble methods will be developed to combine quantitative features from structures external to the nodule with traditional features from the nodule. The combination of relevant information by the learning models should improve the accuracy of diagnosis.
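A late-fusion ensemble of the kind proposed above could, in its simplest form, weight the mutation-status probability from a model trained on nodule features against one trained on features of external lung structures. This is a hedged sketch of the concept only; the function name and weighting scheme are assumptions, not the project's method.

```python
def soft_vote(prob_nodule, prob_external, w=0.5):
    """Late-fusion ensemble sketch: combine the mutation-status probability
    from a model trained on nodule features (`prob_nodule`) with one trained
    on features of structures external to the nodule (`prob_external`).
    `w` is the weight given to the nodule model."""
    return w * prob_nodule + (1.0 - w) * prob_external

# Hypothetical probabilities from the two models:
print(soft_vote(0.9, 0.6, w=0.7))
```

More sophisticated ensembles (stacking, boosting) would replace the fixed weight with a learned combiner, but the principle of merging nodule and external-structure information is the same.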

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on the one hand and the challenges of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into the segmentation networks to improve segmentation accuracy.
The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV seeks to compare the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features into radiomic features for boosting the classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify the changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of the patients two years after the last session of treatment.
The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics, and contribute to the essential procedures of cancer diagnosis and prognosis.
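The fusion of learned deep features with conventional radiomic features (as explored in Study IV) commonly amounts to normalising each feature block and concatenating them into one vector per nodule before classification. The sketch below shows only this generic pattern under assumed array shapes; it is not the thesis implementation.

```python
import numpy as np

def fuse_features(deep_feats, radiomic_feats):
    """Feature-level fusion sketch: z-normalise each feature block so
    neither scale dominates, then concatenate deep features with
    conventional radiomic features into one vector per nodule."""
    def znorm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.concatenate([znorm(deep_feats), znorm(radiomic_feats)], axis=1)

# Hypothetical shapes: 4 nodules, 8 deep features, 3 radiomic features.
fused = fuse_features(np.random.rand(4, 8), np.random.rand(4, 3))
print(fused.shape)  # (4, 11)
```

The fused matrix would then feed any standard classifier; the normalisation step matters because deep activations and radiomic measurements live on very different scales.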

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, far too much to be fully exploited by radiologists and physicians. Therefore, the design of a computer-aided diagnostic (CAD) system, which can be used as an assistive tool by the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy treatment.
This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions, followed by registration of consecutive respiratory phases to estimate elasticity, ventilation, and texture features, provides discriminatory descriptors that can be used for early detection of radiation-induced lung injury. The proposed methodologies will lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed from three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, making it possible to accurately extract the functionality features for the lung fields. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functional features are computed from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These functional features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
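The ventilation measure described above, the Jacobian determinant of the deformation field, can be sketched directly: with displacement field u, the deformation gradient is F = I + grad(u), and det(F) > 1 indicates local expansion (inhalation) while det(F) < 1 indicates contraction. The code below is a minimal NumPy illustration assuming unit voxel spacing, not the dissertation's implementation.

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of a 3-D displacement field `disp`
    of shape (3, X, Y, Z). Values > 1 indicate local volume expansion,
    values < 1 contraction. Unit voxel spacing is assumed."""
    grads = [np.gradient(disp[i]) for i in range(3)]  # grads[i][j] = d u_i / d x_j
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            # Deformation gradient F = I + grad(u)
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Zero displacement -> determinant 1 everywhere (no volume change).
det = jacobian_determinant(np.zeros((3, 4, 4, 4)))
print(det.mean())  # 1.0
```

The strain components mentioned in the abstract come from the same gradient: the symmetric part of grad(u), (grad(u) + grad(u)^T)/2, gives the voxel-wise strain tensor used as the elasticity descriptor.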

    Automated decision support system for lung cancer detection and classification via enhanced RFCN with multilayer fusion RPN

    Detection of lung cancer at early stages is critical; radiologists read computed tomography (CT) images to prescribe follow-up treatment. The conventional method for detecting nodule presence in CT images is tedious. We propose an enhanced multidimensional Region-based Fully Convolutional Network (mRFCN)-based automated decision support system for lung nodule detection and classification. The mRFCN is used as an image classifier backbone for feature extraction, and a novel multi-Layer fusion Region Proposal Network (mLRPN) with position-sensitive score maps (PSSM) is explored. We applied a median intensity projection to leverage three-dimensional information from CT scans and introduced a deconvolutional layer to adopt the proposed mLRPN in our architecture to automatically select potential regions of interest. Our system has been trained and evaluated using the LIDC dataset, and the experimental results showed promising detection performance in comparison to state-of-the-art nodule detection/classification methods, achieving a sensitivity of 98.1% and a classification accuracy of 97.91%.
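The median intensity projection mentioned above collapses a 3-D CT volume to a 2-D image by taking the per-pixel median along the slice axis. A minimal sketch (array shapes assumed, not the paper's code):

```python
import numpy as np

def median_intensity_projection(volume, axis=0):
    """Collapse a CT volume of shape (slices, H, W) to a single 2-D image
    by taking the per-pixel median along the slice axis. Unlike a maximum
    intensity projection, the median suppresses structures that appear in
    only a few slices."""
    return np.median(volume, axis=axis)

# Hypothetical 3-slice volume with constant slices 0, 10, 100:
vol = np.stack([np.full((2, 2), v) for v in (0.0, 10.0, 100.0)])
print(median_intensity_projection(vol))  # every pixel -> 10.0
```

The resulting 2-D projection is what the mRFCN backbone would consume in place of the full volume, which is how the system leverages 3-D information at 2-D cost.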

    Deep learning for lung cancer analysis

    This thesis describes the development and evaluation of two novel deep learning applications that tackle two cancers that affect the lungs. The first, lung cancer, is the largest cause of cancer-related deaths in the United Kingdom. It accounts for more than 1 in 5 cancer deaths; around 35,000 people every year. Lung cancer is curable provided it is detected very early. Computed tomography (CT) X-ray imaging has been shown to be effective for screening. However, the resulting 3D medical images are laborious for humans to read, and widespread adoption would put a huge strain on already stretched radiology departments. I developed a novel deep learning-based approach for the automatic detection of lung nodules, potential early lung cancer, that has the potential to reduce human workloads. It was evaluated on two independent datasets, and achieves performance competitive with published state-of-the-art tools, with average sensitivity of 84% to 92% at 8 false positives per scan. I developed a related invention which allows hierarchical relationships to be leveraged to improve the performance of CAD tools like this for detection and segmentation tasks. The second cancer is malignant pleural mesothelioma. It is very different from lung cancer: rather than growing within the lung, mesothelioma grows around the outside of the lung in the pleural cavity, like the rind on an orange. It is a rare cancer, caused by exposure to asbestos fibres. It can take decades from exposure to symptoms developing. In Glasgow many mesothelioma patients worked in the ship-building industry, which relied heavily on asbestos. Although asbestos has been banned in the UK since 1999, its presence in buildings and equipment built before then means that mesothelioma will remain a problem for years to come. Sadly, asbestos is still being mined, and many countries, including the United States, have still not instigated a complete ban.
For mesothelioma the main challenge is not detection, but accurate measurement: without the ability to measure tumour size it is difficult to evaluate potential treatments. We therefore developed a fully automated volumetric assessment of malignant pleural mesothelioma. Performance of the algorithm is shown on a multi-centre test set, where volumetric predictions are highly correlated with an expert annotator (r=0.851, p<0.0001). Region overlap scores between the automated method and an expert annotator exceed those for inter-annotator agreement across a subset of cases. Dice overlap scores of 0.64 and 0.55, by cross-validation and independent testing respectively, were achieved. Future work will progress this algorithm towards clinical deployment for the automated assessment of longitudinal tumour development.
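The Dice overlap score reported above is a standard region-overlap metric between a predicted and a reference binary mask: twice the intersection divided by the sum of the two mask sizes, so 1.0 means perfect overlap and 0.0 means disjoint masks. A minimal sketch:

```python
import numpy as np

def dice_score(pred, truth, eps=1e-8):
    """Dice overlap between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for identical masks,
    0.0 for disjoint ones."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

# Two toy masks overlapping in one of two foreground pixels each:
a = np.array([[1, 1, 0, 0]])
b = np.array([[0, 1, 1, 0]])
print(round(dice_score(a, b), 2))  # 0.5
```

Dice is preferred over plain pixel accuracy for tumour segmentation because mesothelioma occupies a small fraction of the scan, so accuracy would be dominated by background agreement.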

    Capsule Network-based Radiomics: From Diagnosis to Treatment

    Recent advancements in signal processing and machine learning, coupled with developments in electronic medical record keeping in hospitals, have resulted in a surge of interest in "radiomics". Radiomics is an emerging research field, which refers to semi-quantitative and/or quantitative features extracted from medical images with the goal of developing predictive and/or prognostic models. Radiomics is expected to become a critical component for the integration of image-derived information for personalized treatment in the near future. The conventional radiomics workflow is typically based on extracting pre-designed features (also referred to as hand-crafted or engineered features) from a segmented region of interest. Clinical application of hand-crafted radiomics is, however, limited by the fact that features are pre-defined and extracted without taking the desired outcome into account. This drawback has motivated trends towards the development of deep learning-based radiomics (also referred to as discovery radiomics), which has the advantage of learning the desired features on its own in an end-to-end fashion, with several applications in disease prediction/diagnosis. Through this Ph.D. thesis, we develop deep learning-based architectures to address the following critical challenges identified within the radiomics domain. First, we cover the tumor type classification problem, which is of high importance for treatment selection. We address this problem by designing a Capsule network-based architecture that has several advantages over existing solutions, such as a reduced need for large amounts of training data and the capability to learn input transformations on its own. We apply different modifications to the Capsule network architecture to make it more suitable for radiomics.
On the one hand, we equip the proposed architecture with access to the tumor bounding box; on the other hand, a multi-scale Capsule network architecture is designed. Furthermore, capitalizing on the advantages of ensemble learning paradigms, we design a boosting and also a mixture-of-experts capsule network. A Bayesian capsule network is also developed to capture the uncertainty of the tumor classification. Besides knowing the tumor type (through classification), predicting the patient's response to treatment plays an important role in treatment design. Predicting patient response, including survival and tumor recurrence, is another goal of this thesis, which we address by designing a deep learning-based model that takes not only the medical images, but also different clinical factors (such as age and gender) as inputs. Finally, COVID-19 diagnosis, another challenging and crucial problem within the radiomics domain, is dealt with using both X-ray and Computed Tomography (CT) images (in particular low-dose ones), where two in-house datasets are collected for the latter and different capsule network-based models are developed for COVID-19 diagnosis.
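A defining ingredient of the Capsule networks used throughout this thesis is the "squash" non-linearity from the original Capsule network formulation: it rescales a capsule's output vector so that its length lies in [0, 1) while preserving its direction, letting the length act as the probability that the entity (here, a tumour class) is present. The NumPy sketch below shows only this standard function, not the thesis's modified architectures.

```python
import numpy as np

def squash(s, eps=1e-8):
    """Capsule 'squash' non-linearity: v = (|s|^2 / (1 + |s|^2)) * s / |s|.
    Short vectors shrink towards zero; long vectors approach unit length,
    so the output length can be read as an existence probability."""
    norm2 = np.sum(s * s, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

# A capsule output of length 5 is squashed to length 25/26 ~ 0.96:
v = squash(np.array([3.0, 4.0]))
print(np.linalg.norm(v))
```

The thesis's modifications (multi-scale inputs, boosting, mixture of experts, Bayesian variants) all sit on top of this same vector-output, length-as-probability idea.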