Topology polymorphism graph for lung tumor segmentation in PET-CT images
Accurate lung tumor segmentation is problematic when the tumor boundary, which reflects the advancing edge of the tumor, is difficult to discern on chest CT or PET. We propose a ‘topo-poly’ graph model to improve identification of the tumor extent. Our model incorporates an intensity graph and a topology graph. The intensity graph provides the joint PET-CT foreground similarity to differentiate the tumor from surrounding tissues. The topology graph is defined on the basis of a contour tree to reflect the inclusion and exclusion relationships of regions. By taking into account different topology relations, the edges in our model exhibit topological polymorphism. These polymorphic edges in turn affect the energy cost when crossing different topology regions under a random walk framework, and hence contribute to appropriate tumor delineation. We validated our method on 40 patients with non-small cell lung cancer where the tumors were manually delineated by a clinical expert. The studies were separated into an ‘isolated’ group (n = 20), where the lung tumor was located in the lung parenchyma and away from associated structures/tissues in the thorax, and a ‘complex’ group (n = 20), where the tumor abutted/involved a variety of adjacent structures and had heterogeneous FDG uptake. The methods were validated using Dice’s similarity coefficient (DSC) to measure the spatial volume overlap and the Hausdorff distance (HD) to compare shape similarity, calculated as the maximum surface distance between the segmentation results and the manual delineations. Our method achieved an average DSC of 0.881 ± 0.046 and HD of 5.311 ± 3.022 mm for the isolated cases and DSC of 0.870 ± 0.038 and HD of 9.370 ± 3.169 mm for the complex cases. Student’s t-test showed that our model outperformed the other methods (p-values < 0.05).
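The two evaluation metrics used above, Dice's similarity coefficient (DSC) and the Hausdorff distance (HD), can be computed directly from binary masks. The following is a minimal sketch on synthetic 2-D masks (the shapes and values are illustrative, not from the study; the abstract computes HD between surfaces, whereas this sketch uses all mask voxels for brevity, which gives the same value for these solid shapes):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# two overlapping 10x10 squares, offset by one pixel along each axis
seg = np.zeros((20, 20), dtype=bool); seg[5:15, 5:15] = True
ref = np.zeros((20, 20), dtype=bool); ref[6:16, 6:16] = True
d = dice(seg, ref)        # 2*81 / (100 + 100) = 0.81
h = hausdorff(seg, ref)   # sqrt(2): the worst corner voxels are diagonal neighbours
```

In practice both metrics are reported together, as in the abstract, because DSC captures volume overlap while HD penalizes even a single badly placed boundary point.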
Learning Algorithms for Fat Quantification and Tumor Characterization
Obesity is one of the most prevalent health conditions. About 30% of the world's and over 70% of the United States' adult populations are either overweight or obese, causing an increased risk for cardiovascular diseases, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis among all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools in order to aid clinicians in establishing the quantitative relationship between obesity and cancers. With respect to obesity and metabolism, in the first part of the dissertation, we specifically focus on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we perform analysis on two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. This dissertation proposes an automatic body region detection method trained with only a single example. Then a new fat quantification approach is proposed which is based on geometric and appearance characteristics. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. With different variants of Convolutional Neural Networks (CNN), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. In order to address the unavailability of a large number of labeled examples required for training, unsupervised learning approaches for cancer diagnosis without explicit labeling are proposed. We evaluate our proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges: lung and pancreas with 1018 CT and 171 MRI scans, respectively.
The proposed segmentation, quantification and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision making in routine clinical practice.
Unsupervised supervoxel-based lung tumor segmentation across patient scans in hybrid PET/MRI
Tumor segmentation is a crucial but difficult task in treatment planning and follow-up of cancerous patients. The
challenge of automating the tumor segmentation has recently received a lot of attention, but the potential of
utilizing hybrid positron emission tomography (PET)/magnetic resonance imaging (MRI), a novel and promising
imaging modality in oncology, is still under-explored. Recent approaches have either relied on manual user input
and/or performed the segmentation patient-by-patient, whereas a fully unsupervised segmentation framework
that exploits the available information from all patients is still lacking.
We present an unsupervised across-patients supervoxel-based clustering framework for lung tumor segmentation in hybrid PET/MRI. The method consists of two steps: First, each patient is represented by a set of PET/
MRI supervoxel-features. Then the data points from all patients are transformed and clustered on a population
level into tumor and non-tumor supervoxels. The proposed framework is tested on the scans of 18 non-small cell
lung cancer patients with a total of 19 tumors and evaluated with respect to manual delineations provided by
clinicians. Experiments study the performance of several commonly used clustering algorithms within the
framework and provide analysis of (i) the effect of tumor size, (ii) the segmentation errors, (iii) the benefit of
across-patient clustering, and (iv) the noise robustness.
The proposed framework detected 15 out of 19 tumors in an unsupervised manner. Moreover, performance
increased considerably by segmenting across patients, with the mean Dice score increasing from 0.169 ± 0.295
(patient-by-patient) to 0.470 ± 0.308 (across-patients). Results demonstrate that both spectral clustering and
Manhattan hierarchical clustering have the potential to segment tumors in PET/MRI with a low number of
missed tumors and a low number of false positives, but that spectral clustering seems to be more robust to noise.
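The across-patients step can be sketched by pooling per-supervoxel features from all patients and clustering them on the population level, here with the Manhattan (cityblock) hierarchical clustering mentioned above. Everything in this sketch is hypothetical: the two features, their values, and the smaller-cluster-is-tumor rule are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# hypothetical 2-D supervoxel features (e.g. mean PET uptake, mean MRI intensity),
# pooled across all patients: 30 tumor and 120 non-tumor supervoxels
tumor_feats = rng.normal(loc=[8.0, 0.7], scale=0.5, size=(30, 2))
other_feats = rng.normal(loc=[1.0, 0.3], scale=0.5, size=(120, 2))
X = np.vstack([tumor_feats, other_feats])

# transform the pooled features to zero mean / unit variance on the population level
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Manhattan (cityblock) hierarchical clustering into two population-level clusters
Z = linkage(X, method="average", metric="cityblock")
labels = fcluster(Z, t=2, criterion="maxclust")

# assume the smaller cluster is the tumor class (tumors occupy little volume)
tumor_label = min(set(labels), key=lambda l: int((labels == l).sum()))
```

Pooling across patients is what allows a patient whose tumor is ambiguous in isolation to be resolved by the population structure, which is the effect behind the Dice improvement reported above.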
Advanced machine learning methods for oncological image analysis
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow.
This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis.
The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into the segmentation networks to improve segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head and neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy.
Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV seeks to compare the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep-feature-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features into radiomic features for boosting the classification power.
Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify the changes in the tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of the patients two years after the last session of treatments. The discriminative power of the introduced imaging biomarkers was compared against the conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head and neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features show more predictive power in inter-dataset analyses.
In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics and contribute to the essential procedures of cancer diagnosis and prognosis.
The Liver Tumor Segmentation Benchmark (LiTS)
In this work, we report the set-up and results of the Liver Tumor
Segmentation Benchmark (LiTS) organized in conjunction with the IEEE
International Symposium on Biomedical Imaging (ISBI) 2016 and the
International Conference on Medical Image Computing and Computer Assisted
Intervention (MICCAI) 2017. Twenty-four valid state-of-the-art liver and
liver tumor segmentation algorithms were applied to a set of 131 computed
tomography (CT) volumes with different types of tumor contrast
(hyper-/hypo-intense), tissue abnormalities (e.g. after metastasectomy),
tumor sizes, and varying numbers of lesions. The submitted algorithms were
tested on 70 undisclosed volumes. The dataset was created in collaboration
with seven hospitals and research institutions and manually reviewed by
three independent radiologists. We found that no single algorithm performed
best for both the liver and tumors. The best liver segmentation algorithm
achieved a Dice score of 0.96 (MICCAI), whereas for tumor segmentation the
best algorithms evaluated at 0.67 (ISBI) and 0.70 (MICCAI). The LiTS image
data and manual annotations continue to be publicly available through an
online evaluation system as an ongoing benchmarking resource.
Co-Segmentation Methods for Improving Tumor Target Delineation in PET-CT Images
Positron emission tomography (PET)-Computed tomography (CT) plays an important role in
cancer management. As a multi-modal imaging technique it provides both functional and anatomical
information of tumor spread. Such information improves cancer treatment in many ways. One
important usage of PET-CT in cancer treatment is to facilitate radiotherapy planning, for the information
it provides helps radiation oncologists to better target the tumor region. However, currently
most tumor delineations in radiotherapy planning are performed by manual segmentation, which
is time-consuming and labor-intensive. Most computer-aided algorithms need a knowledgeable
user to roughly locate the tumor area as a starting point. This is because, in PET-CT imaging,
organs such as the heart and kidneys may also exhibit a high level of activity similar to
that of a tumor region. In
order to address this issue, a novel co-segmentation method is proposed in this work to enhance
the accuracy of tumor segmentation using PET-CT, and a localization algorithm is developed to
differentiate and segment tumor regions from normal regions. On a combined dataset containing
29 patients with lung tumor, the combined method shows good segmentation results as well as
a good tumor recognition rate.
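The localization problem described above, distinguishing tumor candidates from other high-uptake organs such as the heart, can be illustrated on a toy aligned PET/CT slice. All intensities, thresholds, and the size-based rejection rule here are hypothetical; the actual work proposes a co-segmentation algorithm, not this simple heuristic:

```python
import numpy as np
from scipy import ndimage

# toy aligned PET/CT slice; all values are made up for illustration
pet = np.zeros((64, 64))
ct = np.full((64, 64), -800.0)   # air-like lung background (HU)
pet[10:18, 10:18] = 9.0          # tumor: high uptake ...
ct[10:18, 10:18] = 40.0          # ... and soft-tissue density
pet[40:52, 40:52] = 8.0          # heart-like organ: also high uptake
ct[40:52, 40:52] = 45.0

# step 1: candidate regions wherever PET uptake is high
candidates = pet > 2.5
labels, n = ndimage.label(candidates)

# step 2: a crude, purely illustrative localization rule: keep only small
# high-uptake regions with soft-tissue CT density, rejecting large organs
sizes = ndimage.sum(candidates, labels, index=range(1, n + 1))
means = ndimage.mean(ct, labels, index=range(1, n + 1))
keep = [i + 1 for i, (s, m) in enumerate(zip(sizes, means))
        if s < 100 and m > -100.0]
tumor_mask = np.isin(labels, keep)
```

The point of combining the modalities is visible even in this toy example: PET alone flags both regions as candidates, and only the additional anatomical context lets a localization rule separate the tumor from a physiologically active organ.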