Esophageal tumor segmentation in CT images using a Dilated Dense Attention Unet (DDAUnet)
Manual or automatic delineation of the esophageal tumor in CT images is known to be very challenging. This is due to the low contrast between the tumor and adjacent tissues, the anatomical variation of the esophagus, as well as the occasional presence of foreign bodies (e.g. feeding tubes). Physicians therefore usually exploit additional knowledge such as endoscopic findings, clinical history, and additional imaging modalities like PET scans. Obtaining this additional information is time-consuming, while the results are error-prone and might lead to non-deterministic results. In this paper we aim to investigate if and to what extent a simplified clinical workflow based on CT alone allows one to automatically segment the esophageal tumor with sufficient quality. For this purpose, we present a fully automatic end-to-end esophageal tumor segmentation method based on convolutional neural networks (CNNs). The proposed network, called Dilated Dense Attention Unet (DDAUnet), leverages spatial and channel attention gates in each dense block to selectively concentrate on determinant feature maps and regions. Dilated convolutional layers are used to manage GPU memory and increase the network receptive field. We collected a dataset of 792 scans from 288 distinct patients, including varying anatomies with air pockets, feeding tubes and proximal tumors. Repeatability and reproducibility studies were conducted for three distinct splits of training and validation sets. The proposed network achieved a DSC value of 0.79 ± 0.20, a mean surface distance of 5.4 ± 20.2 mm and a 95% Hausdorff distance of 14.7 ± 25.0 mm for 287 test scans, demonstrating promising results with a simplified clinical workflow based on CT alone. Our code is publicly available via https://github.com/yousefis/DenseUnet_Esophagus_Segmentation.
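As a rough picture of the building block the abstract names, here is a minimal PyTorch sketch combining dense connectivity, dilated convolutions, and channel/spatial attention gates. The 2D setting, layer widths, and exact gating forms are illustrative assumptions, not the published DDAUnet design.

```python
# Minimal sketch of a dilated dense block with channel and spatial
# attention gates, in the spirit of DDAUnet. All sizes are illustrative.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel gate (assumed form)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class SpatialAttention(nn.Module):
    """1x1 conv collapses channels to a per-pixel gate (assumed form)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)

class DilatedDenseAttentionBlock(nn.Module):
    """Two dilated conv layers with dense (concatenated) connectivity,
    followed by channel and spatial attention gates."""
    def __init__(self, in_ch, growth=32, dilation=2):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_ch, growth, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(growth), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(
            nn.Conv2d(in_ch + growth, growth, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(growth), nn.ReLU(inplace=True))
        out_ch = in_ch + 2 * growth
        self.channel_att = ChannelAttention(out_ch)
        self.spatial_att = SpatialAttention(out_ch)

    def forward(self, x):
        x = torch.cat([x, self.conv1(x)], dim=1)   # dense connection 1
        x = torch.cat([x, self.conv2(x)], dim=1)   # dense connection 2
        return self.spatial_att(self.channel_att(x))

block = DilatedDenseAttentionBlock(in_ch=64)
print(block(torch.randn(1, 64, 96, 96)).shape)  # torch.Size([1, 128, 96, 96])
```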
DEEP LEARNING FOR VOLUMETRIC MEDICAL IMAGE SEGMENTATION
Over the past few decades, medical imaging techniques, e.g., computed tomography (CT) and positron emission tomography (PET), have been widely used to improve the state of diagnosis, prognosis, and treatment of diseases. However, reading medical images and making diagnoses or treatment plans requires well-trained medical specialists, and is labor-intensive, time-consuming, costly and error-prone. With the emergence of deep learning, doctors and researchers have started to benefit from automated medical image analysis in various applications, e.g., medical image registration, classification, detection and segmentation. Among these tasks, segmentation is the most common area in which deep learning is applied to medical imaging. How to improve medical diagnosis by advancing segmentation in computer-aided diagnosis systems has become an active research topic.
In this dissertation, we address this topic in the following aspects. (i) We propose a 3D-based coarse-to-fine framework to effectively and efficiently tackle the challenges of a limited amount of annotated 3D data and limited computational resources in the field of volumetric medical image segmentation; a minimal sketch of this two-stage inference follows this paragraph. (ii) We extend the 3D coarse-to-fine framework to multiple scales for early detection of the small but clinically important pancreatic ductal adenocarcinoma (PDAC) tumors, and provide radiologists with interpretable abnormality locations by segmentation-for-classification. (iii) We extend the segmentation-for-classification to screen for pancreatic neuroendocrine tumors (PNETs) by incorporating dual-phase information and the dilated pancreatic duct, which is regarded as a sign of high risk for pancreatic cancer. (iv) Going further, we investigate the mainstream methodology in the segmentation area and then explore the novel idea of AutoML in the medical imaging field to automatically search for neural network architectures tailored to the segmentation task, which further advances the medical image segmentation field. (v) Moving beyond pancreatic tumors, we are the first to address the clinically critical task of detecting, identifying and characterizing suspicious cancer-metastasized lymph nodes (LNs) by proposing a 3D distance stratification strategy to simulate and simplify the high-level reasoning protocols conducted by radiation oncologists in a divide-and-conquer manner. (vi) The 3D distance stratification strategy is upgraded by our proposed multi-branch detection-by-segmentation, which further advances the detection, identification and segmentation of metastasis-suspicious LNs.
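To make the coarse-to-fine idea in (i) concrete, here is a minimal inference sketch: a coarse model localizes the target on a downsampled volume, and a fine model segments a padded crop at full resolution. The two callables are hypothetical stand-ins for trained networks; only the two-stage control flow reflects the abstract.

```python
# Minimal sketch of coarse-to-fine volumetric segmentation inference.
import numpy as np
from scipy import ndimage

def coarse_to_fine_segment(volume, coarse_model, fine_model, margin=16):
    """Two-stage inference: locate coarsely, then segment a tight crop finely."""
    # Stage 1: coarse probability map on a 2x-downsampled volume.
    coarse_mask = coarse_model(volume[::2, ::2, ::2]) > 0.5
    # Upsample the coarse mask back to the full-resolution grid.
    zoom = np.array(volume.shape) / np.array(coarse_mask.shape)
    coarse_full = ndimage.zoom(coarse_mask.astype(float), zoom, order=0) > 0.5
    out = np.zeros(volume.shape, dtype=bool)
    idx = np.argwhere(coarse_full)
    if idx.size == 0:          # coarse stage found nothing
        return out
    # Bounding box of the coarse prediction, padded by a safety margin.
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    # Stage 2: fine segmentation restricted to the cropped ROI.
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_model(crop) > 0.5
    return out
```

The crop keeps the fine model's memory footprint small while letting it run at full resolution, which is the efficiency argument the dissertation makes for the two-stage design.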
A Review on Advances in Intra-operative Imaging for Surgery and Therapy: Imagining the Operating Room of the Future
Zaffino, Paolo; Moccia, Sara; De Momi, Elena; Spadea, Maria Francesca
Advanced machine learning methods for oncological image analysis
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in the field of image acquisition and hardware development over the past three decades have resulted in the development of modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Therefore, the applications of medical imaging have become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other side, the development of computerized tools to automatically or semi-automatically examine the image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow.
This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis.
The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and integrating this pipeline into segmentation networks to improve segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach is proposed based on the concept of image inpainting to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy.
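A minimal sketch of the autoinpainting idea in Study II, under the assumption that a model trained only on tumor-free anatomy reconstructs healthy tissue well and tumors poorly; `inpaint_model` is a hypothetical stand-in for such a trained network, and the residual thresholding is the illustrative part.

```python
# Minimal sketch of inpainting-based unsupervised tumor segmentation:
# reconstruct each patch from its context, then flag voxels whose
# reconstruction residual is large (anatomy the model cannot explain).
import numpy as np

def segment_by_inpainting(image, inpaint_model, patch=32, thresh=0.25):
    """image: 2D slice in [0, 1]; inpaint_model: fills masked regions."""
    recon = np.copy(image)
    h, w = image.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            masked = np.copy(image)
            masked[y:y+patch, x:x+patch] = 0.0  # hide this patch
            recon[y:y+patch, x:x+patch] = \
                inpaint_model(masked)[y:y+patch, x:x+patch]
    residual = np.abs(image - recon)
    return residual > thresh  # large residual = candidate tumor region
```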
Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account the local intra-nodule heterogeneities and the global contextual information. Study IV seeks to compare the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep-feature-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features with radiomic features for boosting the classification power.
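The fusion experiment in Study IV can be pictured as concatenating CNN embeddings with radiomic feature vectors and training a conventional classifier on the joint representation. The sketch below assumes precomputed features (e.g. from a trained CNN and a radiomics toolkit); the extractors themselves are out of scope.

```python
# Minimal sketch of fusing learned deep features with hand-crafted
# radiomic features for benign/malignant nodule classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuse_and_classify(deep_feats, radiomic_feats, labels):
    """deep_feats: (n, d1) CNN embeddings; radiomic_feats: (n, d2)."""
    X = np.concatenate([deep_feats, radiomic_feats], axis=1)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)  # benign = 0, malignant = 1
    return clf

# Toy shapes only; real inputs come from trained feature extractors.
clf = fuse_and_classify(np.random.rand(100, 64), np.random.rand(100, 20),
                        np.random.randint(0, 2, size=100))
```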
Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify the changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of the patients two years after the last treatment session. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical features show more predictive power in inter-dataset analyses.
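One simple way to picture the longitudinal analysis in Study V is as "delta" features: descriptors computed at baseline and follow-up, with their relative change fed to the survival model. The three descriptors below are generic stand-ins, not the physiologically interpretable feature set the study proposes.

```python
# Minimal sketch of longitudinal "delta" features from two PET time points.
import numpy as np

def tumor_descriptors(pet, mask):
    """pet: 3D uptake volume; mask: boolean tumor mask of the same shape."""
    vals = pet[mask]
    return np.array([
        mask.sum(),    # tumor volume (in voxels)
        vals.mean(),   # mean uptake
        vals.max(),    # peak uptake
    ], dtype=float)

def delta_features(pet_t0, mask_t0, pet_t1, mask_t1):
    """Relative change of each descriptor between baseline and follow-up."""
    f0 = tumor_descriptors(pet_t0, mask_t0)
    f1 = tumor_descriptors(pet_t1, mask_t1)
    return (f1 - f0) / (f0 + 1e-8)
```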
In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics, and contribute to the essential procedures of cancer diagnosis and prognosis.
Incorporating Cardiac Substructures Into Radiation Therapy For Improved Cardiac Sparing
Growing evidence suggests that radiation therapy (RT) doses to the heart and cardiac substructures (CS) are strongly linked to cardiac toxicities, though only the heart is considered clinically. This work aimed to utilize the superior soft-tissue contrast of magnetic resonance (MR) to segment CS, quantify uncertainties in their position, and assess their effect on treatment planning and on an MR-guided environment.
Automatic substructure segmentation of 12 CS was completed using a novel hybrid MR/computed tomography (CT) atlas method and was improved upon using a 3-dimensional deep learning neural network (U-Net). Intra-fraction motion due to respiration was then quantified. Inter-fraction setup uncertainties on a novel MR-linear accelerator were also quantified. Treatment planning comparisons were performed with and without substructure inclusion, and methods to reduce radiation dose to sensitive CS were evaluated. Lastly, these technologies (the deep learning U-Net) were translated to an MR-linear accelerator and a segmentation pipeline was created.
The hybrid MR/CT atlas generated accurate segmentations for the chambers and great vessels (Dice similarity coefficient (DSC) > 0.75), but coronary artery segmentations were unsuccessful (DSC < 0.3). After implementing deep learning, the DSC for the chambers and great vessels was ≥ 0.85, along with an improvement in the coronary arteries (DSC > 0.5). Similar accuracy was achieved when implementing deep learning for MR-guided RT. On average, atlas-based automatic segmentations required ~10 minutes to generate per patient, whereas deep learning required only 14 seconds. The inclusion of CS in the treatment planning process did not yield statistically significant changes in plan complexity, PTV, or OAR dose.
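For reference, the two metrics quoted throughout these results can be computed from boolean masks as below; the 95% Hausdorff variant shown uses the common surface-to-mask approximation, with distances in voxels unless scaled by the spacing.

```python
# Minimal sketch of the Dice similarity coefficient and the
# 95th-percentile Hausdorff distance between two boolean masks.
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient of two non-empty boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff95(a, b):
    """95th percentile of symmetric surface-to-mask distances (voxels)."""
    surf_a = np.logical_xor(a, ndimage.binary_erosion(a))  # surface of a
    surf_b = np.logical_xor(b, ndimage.binary_erosion(b))  # surface of b
    dist_to_a = ndimage.distance_transform_edt(~a)  # distance to mask a
    dist_to_b = ndimage.distance_transform_edt(~b)  # distance to mask b
    d = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return np.percentile(d, 95)
```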
Automatic segmentation results from deep learning demonstrate major efficiency and accuracy gains for CS segmentation, offering high potential for rapid implementation into radiation therapy planning for improved cardiac sparing. Introducing CS into RT planning for MR-guided RT presented an opportunity for more effective sparing with a limited increase in plan complexity.
Deep-Learning-Based Automatic Segmentation of Head and Neck Organs for Radiation Therapy in Dogs
Purpose: This study was conducted to develop a deep-learning-based automatic segmentation (DLBAS) model of head and neck organs for radiotherapy (RT) in dogs, and to evaluate its feasibility for delineation in RT planning.
Materials and Methods: Fifteen organs at risk (OARs) in the head and neck of dogs were defined for segmentation. Post-contrast computed tomography (CT) was performed in 90 dogs. The training and validation sets comprised 80 CT data sets, including 20 test sets. The accuracy of the segmentation was assessed using both the Dice similarity coefficient (DSC) and the Hausdorff distance (HD), referencing expert contours as the ground truth. An additional 10 clinical test sets with relatively large displacement or deformation of organs were selected for verification in cancer patients. To evaluate the applicability in cancer patients and the impact of expert intervention, three methods were compared: HA, DLBAS, and the readjustment of the predictions obtained via the DLBAS of the clinical test sets (HA_DLBAS).
Results: The DLBAS model (in the 20 test sets) showed reliable DSC and HD values and a short contouring time of ~3 s. The average (mean ± standard deviation) DSC (0.83 ± 0.04) and HD (2.71 ± 1.01 mm) values were similar to those of previous human studies. The DLBAS was highly accurate in cases without large displacement of the head and neck organs. However, in the 10 clinical test sets the DLBAS showed lower DSC (0.78 ± 0.11) and higher HD (4.30 ± 3.69 mm) values than in the test sets. The HA_DLBAS (DSC: 0.94 ± 0.03 and HD: 2.30 ± 0.41 mm) was comparable to the HA (DSC: 0.85 ± 0.06 and HD: 2.74 ± 1.18 mm) and presented better metrics and smaller statistical deviations than the DLBAS. In addition, the contouring time of HA_DLBAS (30 min) was shorter than that of HA (80 min).
Conclusion: The HA_DLBAS method and the proposed DLBAS were highly consistent and robust in their performance. Thus, DLBAS has great potential as a standalone or supportive tool in the key processes of RT planning.