
    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important technology that has been used intensively over the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too large to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA; in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of lung cancer is complex, and nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, will have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissue may be affected and lose functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases, used to estimate elasticity, ventilation, and texture features, provide discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies will lead to novel indices for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heart beats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately.
    The developed registration framework also supports the evaluation and gated control of radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with a feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately models the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention.
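    As a rough illustration of the functionality features described above, the sketch below computes a voxel-wise ventilation surrogate (the Jacobian determinant of the deformation) and a simple elasticity surrogate (the maximal principal Green-Lagrange strain) from a dense displacement field. This is a minimal NumPy sketch, not the dissertation's implementation; the function name, the (3, Z, Y, X) displacement layout, and the voxel spacing are assumptions.

```python
import numpy as np

def functionality_features(displacement, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise ventilation and elasticity surrogates from a dense 3D
    displacement field of shape (3, Z, Y, X), e.g. mapping peak-exhale to
    peak-inhale, with `spacing` giving the voxel size along z, y, x."""
    # Gradient of each displacement component along z, y, x (physical units),
    # giving grads[i, j] = d u_i / d x_j at every voxel.
    grads = np.stack(
        [np.stack(np.gradient(displacement[i], *spacing), axis=0) for i in range(3)],
        axis=0,
    )                                                     # (3, 3, Z, Y, X)

    # Deformation gradient F = I + du/dx, rearranged so the 3x3 matrices sit last.
    F = np.eye(3).reshape(3, 3, 1, 1, 1) + grads
    F = np.moveaxis(F, (0, 1), (-2, -1))                  # (Z, Y, X, 3, 3)

    # Jacobian determinant: >1 means local expansion (a common ventilation
    # surrogate), <1 means local compression.
    jacobian = np.linalg.det(F)

    # Green-Lagrange strain E = 0.5 * (F^T F - I); its largest eigenvalue is
    # used here as a simple voxel-wise elasticity/deformability descriptor.
    E = 0.5 * (np.swapaxes(F, -1, -2) @ F - np.eye(3))
    max_principal_strain = np.linalg.eigvalsh(E)[..., -1]

    return jacobian, max_principal_strain

# Example on a synthetic displacement field for a 16^3 volume:
# a small uniform stretch along x gives a Jacobian of about 1.05 everywhere.
u = np.zeros((3, 16, 16, 16))
u[2] = 0.05 * np.arange(16)
jac, strain = functionality_features(u)
print(jac.mean(), strain.mean())
```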

    Radiomics for Response Assessment after Stereotactic Radiotherapy for Lung Cancer

    Stereotactic ablative radiotherapy (SABR) is a guideline-specified treatment option for patients with early stage non-small cell lung cancer. After treatment, patients are followed up regularly with computed tomography (CT) imaging to determine treatment response. However, benign radiographic changes to the lung known as radiation-induced lung injury (RILI) frequently occur. Due to the large doses delivered with SABR, these changes can mimic the appearance of a recurring tumour and confound response assessment. The objective of this work was to evaluate the accuracy of radiomics for predicting eventual local recurrence based on CT images acquired within 6 months of treatment. A semi-automatic decision support system was developed to segment and sample regions of common post-SABR changes, extract radiomic features, and classify images as local recurrence or benign injury. Physicians' ability to detect local recurrence in a timely manner was also measured on CT imaging and compared with that of the radiomics tool. Within 6 months post-SABR, physicians assessed the majority of images as showing no recurrence and had a lower overall accuracy than the radiomics system. These results suggest that radiomics can detect early changes associated with local recurrence that are not typically considered by physicians, and that such appearances may be early indicators of promotion and progression to local recurrence. This has the potential to lead to a clinically useful computer-aided decision support tool based on routinely acquired CT imaging, which could enable earlier salvage opportunities for patients with recurrence and fewer invasive investigations of patients with only benign injury.
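    To make the radiomics workflow above concrete, the following is a minimal sketch of the two core steps: extracting a handful of first-order features from a post-SABR region of interest and estimating cross-validated classification performance for recurrence versus benign injury. The feature set, the random-forest classifier, and the synthetic data are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def first_order_features(ct_patch, roi_mask):
    """A few first-order radiomic features from a post-SABR region of interest.
    ct_patch: 3D array of Hounsfield units; roi_mask: boolean array, same shape."""
    v = ct_patch[roi_mask].astype(float)
    hist, _ = np.histogram(v, bins=64)
    p = hist / max(hist.sum(), 1)
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return np.array([v.mean(), v.std(), np.median(v), v.max() - v.min(), entropy])

# Synthetic stand-ins for a cohort of post-SABR regions (placeholders, not real data):
# y = 1 marks eventual local recurrence, y = 0 marks benign radiation-induced injury.
rng = np.random.default_rng(0)
cohort = [(rng.normal(-700, 120, size=(32, 32, 32)),   # CT patch in HU
           rng.random((32, 32, 32)) > 0.5)             # region-of-interest mask
          for _ in range(40)]
y = np.array([0, 1] * 20)

X = np.stack([first_order_features(ct, mask) for ct, mask in cohort])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```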

    3D Lung Nodule Classification in Computed Tomography Images

    Lung cancer is the leading cause of cancer death worldwide. One of the reasons is the absence of symptoms at an early stage, which means the disease is only discovered at a later stage, when treatment is more difficult [1]. Furthermore, diagnosis, frequently done by reading computed tomography (CT) scans, is regularly associated with errors; one reason is the variation in doctors' opinions regarding the diagnosis of the same nodule [2,3]. Computer-aided diagnosis (CADx) systems can be a great help for this problem by assisting doctors with a second opinion. Although their effectiveness has already been demonstrated [4], such systems often end up not being used because doctors cannot understand the "how and why" of CADx diagnostic results, and ultimately do not trust them [5]. To increase radiologists' confidence in the CADx system, it is proposed that the malignancy prediction be accompanied by evidence that explains it. Some visible features of lung nodules are correlated with malignancy. Since humans are able to visually identify these characteristics and correlate them with nodule malignancy, one way to present such evidence is to predict those characteristics as well. Deep learning approaches are proposed for these predictions, as convolutional neural networks have been shown to outperform state-of-the-art results in medical image analysis [6]. To predict the characteristics and malignancy in the CADx system, the HSCNN architecture, a deep hierarchical semantic convolutional neural network proposed by Shen et al. [7], will be used. The Lung Image Database Consortium image collection (LIDC-IDRI), a public dataset, is frequently used as input for lung cancer CADx systems. LIDC-IDRI consists of thoracic CT scans with considerable quantity and variability, and for most nodules it includes doctors' evaluations of 9 different characteristics. A recurrent problem in those evaluations is the subjectivity of the doctors' interpretation of what each characteristic is. For some characteristics, this can result in great divergence between evaluations of the same nodule, which makes including those evaluations as input to CADx systems less useful than it could be. To reduce this subjectivity, the creation of a metric that makes the classification of characteristics more objective is proposed, based on reviews of the literature and of the LIDC-IDRI dataset. Taking this new metric into account, and after validation by doctors from Hospital de São João, the LIDC-IDRI dataset will be reclassified so that all the relevant characteristics can be used as input. The principal objective of this dissertation is to develop a lung nodule CADx methodology that promotes specialists' confidence in its use, by classifying lung nodules according to characteristics relevant to diagnosis as well as to malignancy. The reclassified LIDC-IDRI dataset will be used as input to the CADx system, and the HSCNN architecture will be used to predict the characteristics and malignancy. Classification will be evaluated using sensitivity, specificity, and the area under the receiver operating characteristic (ROC) curve.
    The proposed solution may be used to improve a CADx system, LNDetector, currently under development by the Center for Biomedical Engineering Research (C-BER) group at INESC-TEC, within which this work will be developed.
    [1] M. Hasegawa, S. Sone, and S. Takashima. Growth rate of small lung cancers detected on mass CT screening. The British Journal of Radiology, pages 1252-1259.
    [2] B. Zhao, Y. Tan, D. J. Bell, S. E. Marley, P. Guo, H. Mann, M. L. Scott, L. H. Schwartz, and D. C. Ghiorghiu. Exploring intra- and inter-reader variability in uni-dimensional, bi-dimensional, and volumetric measurements of solid tumors on CT scans reconstructed at different slice intervals. European Journal of Radiology, 82, pages 959-968, 2013.
    [3] H. T. Winer-Muram. The solitary pulmonary nodule. Radiology, 239, pages 39-49, 2006.
    [4] P. Huang, S. Park, R. Yan, J. Lee, L. C. Chu, C. T. Lin, A. Hussien, J. Rathmell, B. Thomas, C. Chen, et al. Added value of computer-aided CT image features for early lung cancer diagnosis with small pulmonary nodules: a matched case-control study. Radiology, 286, pages 286-295, 2017.
    [5] W. Jorritsma, F. Cnossen, and P. van Ooijen. Improving the radiologist-CAD interaction: designing for appropriate trust. Clinical Radiology, 70, 2014.
    [6] T. Brosch, Y. Yoo, D. Li, A. Traboulsee, and R. Tam. Modeling the variability in brain morphology and lesion distribution in multiple sclerosis by deep learning. Volume 17, 2014.
    [7] S. Shen, S. X. Han, D. Aberle, A. A. T. Bui, and W. Hsu. An interpretable deep hierarchical semantic convolutional neural network for lung nodule malignancy classification. June 2018.
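    As a sketch of the hierarchical multi-task idea behind HSCNN (a shared encoder, low-level heads for the semantic characteristics, and a high-level malignancy head that also sees those predictions), the following minimal PyTorch example shows the shape such a model can take. The layer sizes, the number of characteristic heads, and the binarised ratings are illustrative assumptions and do not reproduce the architecture of Shen et al. [7].

```python
import torch
import torch.nn as nn

class HierarchicalNoduleNet(nn.Module):
    """Sketch of a hierarchical semantic CNN: a shared 3D encoder feeds
    per-characteristic heads (e.g. margin, sphericity, texture), whose logits
    are concatenated with the encoding to predict malignancy."""
    def __init__(self, n_characteristics=9):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        # One binary head per semantic characteristic (assumed binarised ratings).
        self.characteristic_heads = nn.ModuleList(
            [nn.Linear(32, 1) for _ in range(n_characteristics)]
        )
        # The malignancy head sees the shared encoding plus the characteristic logits.
        self.malignancy_head = nn.Sequential(
            nn.Linear(32 + n_characteristics, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, x):                       # x: (batch, 1, D, H, W) nodule patch
        z = self.encoder(x)
        char_logits = torch.cat([head(z) for head in self.characteristic_heads], dim=1)
        malignancy_logit = self.malignancy_head(torch.cat([z, char_logits], dim=1))
        return char_logits, malignancy_logit

# Example forward pass on dummy 32^3 nodule patches.
model = HierarchicalNoduleNet()
chars, mal = model(torch.randn(2, 1, 32, 32, 32))
print(chars.shape, mal.shape)   # torch.Size([2, 9]) torch.Size([2, 1])
```

    Training such a model would combine per-characteristic losses with the malignancy loss, which is the usual way to make the intermediate semantic predictions available as explanatory evidence.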

    Detection and description of pulmonary nodules through 2D and 3D clustering

    Precise 3D automated detection, description and classification of pulmonary nodules offer the potential for early diagnosis of cancer and greater efficiency in the reading of computerised tomography (CT) images. CT scan centres are currently experiencing high workloads and a shortage of experts, especially in developing countries such as Iraq, where the results of the current research will be used. This motivates researchers to address these problems and challenges by developing automated processes for the early detection and efficient description of cancer cases. This research attempts to reduce workloads, enhance patient throughput and improve diagnostic performance. To achieve this goal, the study selects techniques for segmentation, classification and detection, and implements the best candidates alongside a novel automated approach. Techniques for each stage in the process are quantitatively evaluated against standard lung cancer data to select the best performer. In addition, the ideal approach is identified by comparing the results against other works on detecting and describing pulmonary nodules. This work detects and describes nodules and their characteristics in several stages: automated lung segmentation from the background, automated 2D and 3D clustering of vessels and nodules, application of shape and texture features, classification, and automatic measurement of nodule characteristics. The work is tested on standard CT lung image data and shows promising results, matching or close to experts' diagnoses in the number of nodules and their features (size/volume, location), and in terms of accuracy and automation. It also achieved a classification accuracy of 98% and efficient results in automatically measuring nodule volume.
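    The following is a minimal sketch of the 3D clustering and automatic volume measurement steps described above, using a density threshold inside the segmented lung and 3D connected components as a stand-in for the thesis's clustering pipeline. The threshold, voxel spacing, and minimum-volume filter are assumed values, not those used in the research.

```python
import numpy as np
from scipy import ndimage

def cluster_candidates(lung_ct_hu, lung_mask, spacing_mm=(1.0, 0.7, 0.7),
                       density_threshold_hu=-400, min_volume_mm3=30.0):
    """Group high-density voxels inside the segmented lung into 3D clusters
    (candidate nodules or vessel segments) and measure each cluster's volume
    and centroid location."""
    candidates = (lung_ct_hu > density_threshold_hu) & lung_mask
    labels, n_clusters = ndimage.label(candidates)        # 3D connected components

    voxel_volume = float(np.prod(spacing_mm))              # mm^3 per voxel
    results = []
    for idx in range(1, n_clusters + 1):
        component = labels == idx
        volume = int(component.sum()) * voxel_volume
        if volume < min_volume_mm3:                        # discard tiny clusters
            continue
        centroid = ndimage.center_of_mass(component)
        results.append({"label": idx, "volume_mm3": volume, "centroid_voxel": centroid})
    return results
```

    In a full pipeline, shape and texture features computed per cluster would then feed a classifier that separates nodules from vessels and other structures.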

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into segmentation networks to improve segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV seeks to compare the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features into radiomic features for boosting the classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically.
    This feature set was employed to quantify the changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of the patients two years after the last session of treatment. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics, and contribute to the essential procedures of cancer diagnosis and prognosis.
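    As an illustration of the deep/radiomic feature fusion explored in Study IV, the minimal sketch below concatenates two feature families and compares the cross-validated performance of each family against the fused set. The placeholder feature arrays, the logistic-regression classifier, and the synthetic labels are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder arrays: in practice `deep_features` would come from a trained CNN's
# penultimate layer and `radiomic_features` from a radiomics toolkit, one row per nodule.
rng = np.random.default_rng(0)
n_nodules = 80
deep_features = rng.normal(size=(n_nodules, 128))
radiomic_features = rng.normal(size=(n_nodules, 40))
labels = np.array([0, 1] * (n_nodules // 2))      # 0 = benign, 1 = malignant (synthetic)

# Fusion: simple concatenation of the two feature families.
fused = np.concatenate([deep_features, radiomic_features], axis=1)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
for name, X in [("radiomics only", radiomic_features),
                ("deep only", deep_features),
                ("fused", fused)]:
    auc = cross_val_score(clf, X, labels, cv=5, scoring="roc_auc").mean()
    print(f"{name:>14}: cross-validated AUC = {auc:.2f}")
```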

    A Modular Approach to Lung Nodule Detection from Computed Tomography Images Using Artificial Neural Networks and Content Based Image Representation

    Lung cancer is one of the most lethal cancer types. Research in computer-aided detection (CAD) and diagnosis for lung cancer aims at providing effective tools to assist physicians in cancer diagnosis and treatment to save lives. In this dissertation, we focus on developing a CAD framework for automated lung cancer nodule detection from 3D lung computed tomography (CT) images. Nodule detection is a challenging task in which no machine intelligence has surpassed human capability to date. At the same time, human recognition power is limited by vision capacity and may suffer from work overload and fatigue, whereas automated nodule detection systems can complement experts' efforts to achieve better detection performance. The proposed CAD framework encompasses several desirable properties, such as mimicking physicians by means of geometric multi-perspective analysis, computational efficiency, and, most importantly, high detection accuracy. As the central part of the framework, we develop a novel hierarchical modular decision engine implemented with artificial neural networks. One advantage of this decision engine is that it supports the combination of spatial-level and feature-level information analysis in an efficient way. Our methodology overcomes some of the limitations of current lung nodule detection techniques by combining geometric multi-perspective analysis with global and local feature analysis. The proposed modular decision engine design is flexible to modifications in the decision modules; the engine structure can adopt modifications without having to re-design the entire system. The engine can easily accommodate multi-learning schemes and parallel implementation, so that each information type can be processed (in parallel) by the most adequate learning technique of its own. We have also developed a novel shape representation technique that is invariant under rigid-body transformation, and we derived new features based on this shape representation for nodule detection. We implemented a prototype nodule detection system as a demonstration of the proposed framework. Experiments have been conducted to assess the performance of the proposed methodologies using real-world lung CT data. Several performance measures for detection accuracy are used in the assessment. The results show that the decision engine is able to classify patterns efficiently with very good classification performance.
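    To give a sense of what a rigid-body-invariant shape representation can look like, the sketch below computes a simple descriptor for a binary nodule mask: the normalised histogram of voxel distances from the nodule centroid, which is unchanged by translation and (up to grid resampling) by rotation. This is an illustrative stand-in, not the shape representation actually developed in the dissertation.

```python
import numpy as np

def shape_signature(nodule_mask, n_bins=16):
    """A simple shape descriptor for a 3D binary nodule mask that is invariant
    under translation and approximately invariant under rotation: the
    normalised histogram of voxel distances from the nodule centroid."""
    coords = np.argwhere(nodule_mask)                    # (N, 3) voxel coordinates
    centroid = coords.mean(axis=0)
    dists = np.linalg.norm(coords - centroid, axis=1)    # translation-invariant
    # Rotations permute voxels but leave the distance distribution (nearly) unchanged.
    dists = dists / (dists.max() + 1e-8)                 # scale-normalised for robustness
    hist, _ = np.histogram(dists, bins=n_bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

# Example: a solid sphere yields a signature weighted toward the larger radii,
# since the number of voxels at radius r grows roughly with r^2.
zz, yy, xx = np.mgrid[:32, :32, :32]
sphere = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 10 ** 2
print(shape_signature(sphere))
```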