
    Co-Segmentation Methods for Improving Tumor Target Delineation in PET-CT Images

    Positron emission tomography (PET)-computed tomography (CT) plays an important role in cancer management. As a multi-modal imaging technique, it provides both functional and anatomical information about tumor spread. Such information improves cancer treatment in many ways. One important use of PET-CT in cancer treatment is to facilitate radiotherapy planning, because the information it provides helps radiation oncologists to better target the tumor region. However, most tumor delineation in radiotherapy planning is currently performed by manual segmentation, which is time-consuming and labor-intensive. Most computer-aided algorithms need a knowledgeable user to roughly locate the tumor area as a starting point, because in PET-CT imaging some tissues, such as the heart and kidneys, may also exhibit a level of activity similar to that of a tumor region. To address this issue, a novel co-segmentation method is proposed in this work to enhance the accuracy of tumor segmentation in PET-CT, and a localization algorithm is developed to differentiate and segment tumor regions from normal regions. On a combined dataset containing 29 patients with lung tumors, the combined method shows good segmentation results as well as a good tumor recognition rate.

    A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging


    Assessment of various strategies for 18F-FET PET-guided delineation of target volumes in high-grade glioma patients

    Purpose: The purpose of this study is to assess the contribution of 18F-fluoro-ethyl-tyrosine (18F-FET) positron emission tomography (PET) to the delineation of gross tumor volume (GTV) in patients with high-grade gliomas, compared with magnetic resonance imaging (MRI) alone. Materials and methods: The study population consisted of 18 patients with high-grade gliomas. Seven image segmentation techniques were used to delineate 18F-FET PET GTVs, and the results were compared to the manual MRI-derived GTV (GTVMRI). The PET image segmentation techniques included manual delineation of contours (GTVman), a 2.5 standardized uptake value (SUV) cutoff (GTV2.5), fixed thresholds of 40% and 50% of the maximum signal intensity (GTV40% and GTV50%), signal-to-background ratio (SBR)-based adaptive thresholding (GTVSBR), gradient find (GTVGF), and region growing (GTVRG). Overlap analysis was also conducted to assess geographic mismatch between the GTVs delineated using the different techniques. Results: Contours defined using GTV2.5 failed to provide successful delineation technically in three patients (18% of cases), as SUVmax < 2.5, and clinically in 14 patients (78% of cases). Overall, the majority of PET-based GTVs were smaller than GTVMRI (67% of cases). Yet PET frequently detected tumor that was not visible on MRI and added substantial tumor extension outside the GTVMRI in six patients (33% of cases). Conclusions: The selection of the most appropriate 18F-FET PET-based segmentation algorithm is crucial, since it impacts both the volume and shape of the resulting GTV. The 2.5 SUV isocontour and GF segmentation techniques performed poorly and should not be used for GTV delineation. With adequate settings, the SBR-based PET technique may add considerably to conventional MRI-guided GTV delineation.
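
For readers unfamiliar with the threshold-based techniques compared above, the sketch below shows how a fixed-percentage threshold (GTV40%/GTV50%), an absolute SUV cutoff (GTV2.5), and an SBR-based adaptive threshold can be expressed on a PET SUV volume. This is a minimal illustration, not the study's implementation; the array names and the adaptive-threshold calibration constants `a` and `b` are placeholders that would normally come from scanner-specific phantom calibration.

```python
import numpy as np

def fixed_threshold_mask(suv, fraction=0.40):
    """Segment voxels above a fixed fraction of SUVmax (e.g. GTV40%)."""
    return suv >= fraction * suv.max()

def suv_cutoff_mask(suv, cutoff=2.5):
    """Segment voxels above an absolute SUV cutoff (e.g. GTV2.5)."""
    return suv >= cutoff

def sbr_adaptive_threshold_mask(suv, background_mean, a=0.30, b=0.60):
    """Signal-to-background-ratio adaptive threshold.

    The threshold fraction is a linear function of 1/SBR; the coefficients
    a and b are placeholders, normally obtained from phantom calibration
    of the specific scanner.
    """
    sbr = suv.max() / background_mean
    fraction = a + b / sbr
    return suv >= fraction * suv.max()

# usage on a toy PET volume (simulated data, for illustration only)
suv = np.random.gamma(shape=2.0, scale=1.5, size=(64, 64, 32))
gtv40 = fixed_threshold_mask(suv, 0.40)
gtv25 = suv_cutoff_mask(suv, 2.5)
gtv_sbr = sbr_adaptive_threshold_mask(suv, background_mean=1.0)
```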

    PET-guided delineation of radiation therapy treatment volumes: a survey of image segmentation techniques

    Historically, anatomical CT and MR images were used to delineate the gross tumour volumes (GTVs) for radiotherapy treatment planning. The capabilities offered by modern radiation therapy units and the widespread availability of combined PET/CT scanners stimulated the development of biological PET imaging-guided radiation therapy treatment planning, with the aim of producing highly conformal radiation dose distributions to the tumour. One of the most difficult issues facing PET-based treatment planning is the accurate delineation of target regions from typically blurred and noisy functional images. The major problems encountered are image segmentation and the imperfect system response function. Image segmentation is defined as the process of classifying the voxels of an image into a set of distinct classes. The difficulty in PET image segmentation is compounded by the low spatial resolution and high noise characteristics of PET images. Despite the difficulties and known limitations, several image segmentation approaches have been proposed and used in the clinical setting, including thresholding, edge detection, region growing, clustering, stochastic models, deformable models, classifiers and several other approaches. A detailed description of the various approaches proposed in the literature is given. Moreover, we also briefly discuss some important considerations and limitations of the widely used techniques to guide practitioners in the field of radiation oncology. The strategies followed for validation and comparative assessment of various PET segmentation approaches are described. Future opportunities and the current challenges facing the adoption of PET-guided delineation of target volumes, and its role in basic and clinical research, are also addressed.
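
As one concrete example of the region-growing family of approaches mentioned in this survey, the following sketch grows a 3D region from a user-supplied seed voxel using a simple intensity tolerance. It is a generic illustration rather than code from any reviewed method; the 6-connectivity and fixed tolerance are assumptions.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, tolerance):
    """Grow a region from `seed`, adding 6-connected neighbours whose
    intensity stays within `tolerance` of the seed intensity."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_value = volume[seed]
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - seed_value) <= tolerance):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# toy usage: a bright synthetic "lesion" in a random background
volume = np.random.rand(32, 32, 32)
volume[10:20, 10:20, 10:20] += 2.0
lesion_mask = region_grow(volume, seed=(15, 15, 15), tolerance=1.0)
```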

    IMAGE PROCESSING, SEGMENTATION AND MACHINE LEARNING MODELS TO CLASSIFY AND DELINEATE TUMOR VOLUMES TO SUPPORT MEDICAL DECISION

    Techniques for processing and analysing images and medical data have become the main translational applications and research areas in clinical and pre-clinical environments. The advantages of these techniques are the improvement of diagnostic accuracy and the efficient assessment of treatment response by means of quantitative biomarkers. In the era of personalized medicine, an early and effective prediction of therapy response in patients is still a critical issue. In radiation therapy planning, Magnetic Resonance Imaging (MRI) provides high-quality detailed images and excellent soft-tissue contrast, while Computerized Tomography (CT) provides attenuation maps and very good hard-tissue contrast. In this context, Positron Emission Tomography (PET) is a non-invasive imaging technique which has the advantage, over morphological imaging techniques, of providing functional information about the patient's disease. In the last few years, several criteria to assess therapy response in oncological patients have been proposed, ranging from anatomical to functional assessments. Changes in tumour size are not necessarily correlated with changes in tumour viability and outcome. In addition, morphological changes resulting from therapy occur more slowly than functional changes. Inclusion of PET images in radiotherapy protocols is desirable because it is predictive of treatment response and provides crucial information to accurately target the oncological lesion and to escalate the radiation dose without increasing normal tissue injury. For this reason, PET may be used for improving the Planning Treatment Volume (PTV). Nevertheless, due to the nature of PET images (low spatial resolution, high noise and weak boundaries), metabolic image processing is a critical task. The aim of this Ph.D. thesis is to develop smart methodologies applied to the medical imaging field to analyse different kinds of problems related to medical images and data analysis, working closely with radiologist physicians. Various issues in the clinical environment have been addressed and improvements have been produced in various fields, such as organ and tissue segmentation and classification to delineate tumour volumes using machine learning techniques to support medical decisions. In particular, the following topics have been the object of this study:
    • Technique for Crohn's Disease Classification using a Kernel-Based Support Vector Machine;
    • Automatic Multi-Seed Detection For MR Breast Image Segmentation;
    • Tissue Classification in PET Oncological Studies;
    • KSVM-Based System for the Definition, Validation and Identification of the Incisional Hernia Recurrence Risk Factors;
    • A smart and operator independent system to delineate tumours in Positron Emission Tomography scans;
    • Active Contour Algorithm with Discriminant Analysis for Delineating Tumors in Positron Emission Tomography;
    • K-Nearest Neighbor driving Active Contours to Delineate Biological Tumor Volumes;
    • Tissue Classification to Support Local Active Delineation of Brain Tumors;
    • A fully automatic system for Positron Emission Tomography study segmentation.
This work has been developed in collaboration with the medical staff and colleagues at:
    • Dipartimento di Biopatologia e Biotecnologie Mediche e Forensi (DIBIMED), University of Palermo;
    • Cannizzaro Hospital of Catania;
    • Istituto di Bioimmagini e Fisiologia Molecolare (IBFM), Consiglio Nazionale delle Ricerche (CNR) of Cefalù;
    • School of Electrical and Computer Engineering at Georgia Institute of Technology.
The proposed contributions have produced scientific publications in indexed computer science and medical journals and conferences. They are very useful in terms of PET and MRI image segmentation and may be used daily as Medical Decision Support Systems to enhance the current methodology performed by healthcare operators in radiotherapy treatments. The future developments of this research concern the integration of data acquired by image analysis with the management and processing of big data coming from a wide range of heterogeneous sources.
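
To give a flavour of the tissue-classification step underlying several of the listed topics (for example, k-nearest-neighbour classification driving active contours), here is a hedged sketch that classifies PET voxels as tumour or background from simple intensity features. The feature set, the simulated training data and the use of scikit-learn's KNeighborsClassifier are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# toy training data: one row per voxel, columns = [SUV, local mean, local std]
# labels: 1 = tumour, 0 = background (assumed feature set, simulated values)
rng = np.random.default_rng(0)
tumour = rng.normal(loc=[6.0, 5.5, 1.0], scale=0.8, size=(200, 3))
background = rng.normal(loc=[1.5, 1.4, 0.3], scale=0.4, size=(200, 3))
X = np.vstack([tumour, background])
y = np.array([1] * 200 + [0] * 200)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# classify new voxels; in the thesis this kind of label map would then
# drive a local active-contour refinement of the tumour boundary
new_voxels = np.array([[5.8, 5.2, 0.9], [1.2, 1.1, 0.2]])
print(clf.predict(new_voxels))   # e.g. [1 0]
```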

    Multi-Modality Automatic Lung Tumor Segmentation Method Using Deep Learning and Radiomics

    Delineation of the tumor volume is the initial and fundamental step in the radiotherapy planning process. The current clinical practice of manual delineation is time-consuming and suffers from observer variability. This work seeks to develop an effective automatic framework to produce clinically usable lung tumor segmentations. First, to facilitate the development and validation of our methodology, an expansive database of planning CTs, diagnostic PETs, and manual tumor segmentations was curated, and an image registration and preprocessing pipeline was established. Then a deep learning neural network was constructed and optimized to utilize dual-modality PET and CT images for lung tumor segmentation. The feasibility of incorporating radiomics and other mechanisms, such as a tumor volume-based stratification scheme for training/validation/testing, was investigated to improve the segmentation performance. The proposed methodology was evaluated both quantitatively, with similarity metrics, and clinically, with physician reviews. In addition, external validation with an independent database was also conducted. Our work addressed some of the major limitations that restricted the clinical applicability of existing approaches and produced automatic segmentations that were consistent with the manually contoured ground truth and highly clinically acceptable according to both the quantitative and clinical evaluations. Both novel approaches, the tumor volume-based training/validation/testing stratification strategy and the incorporation of voxel-wise radiomics feature images, were shown to improve segmentation performance. The results showed that the proposed method is effective and robust, producing automatic lung tumor segmentations that could potentially improve both the quality and consistency of manual tumor delineation.
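
The dual-modality idea described above, feeding co-registered PET and CT volumes to a single network, can be sketched as a two-channel 3D CNN. This is a minimal PyTorch illustration only: the layer sizes and architecture are placeholders, and the actual work's network, radiomics feature channels and volume-based stratification are not reproduced.

```python
import torch
import torch.nn as nn

class DualModalitySegNet(nn.Module):
    """Toy two-channel 3D CNN: PET and CT volumes are stacked as input
    channels and mapped to a voxel-wise tumour probability.
    Layer sizes are placeholders, not the published architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),
        )

    def forward(self, pet, ct):
        x = torch.cat([pet, ct], dim=1)          # (N, 2, D, H, W)
        return torch.sigmoid(self.features(x))   # voxel-wise probability

# toy forward pass on random co-registered volumes
net = DualModalitySegNet()
pet = torch.randn(1, 1, 32, 64, 64)
ct = torch.randn(1, 1, 32, 64, 64)
prob = net(pet, ct)          # (1, 1, 32, 64, 64)
mask = prob > 0.5            # binary segmentation from the probability map
```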

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too large to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer, which is detected through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases, used to estimate elasticity, ventilation, and texture features, provide discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed that requires three basic components: lung field segmentation, lung registration, and feature extraction and tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heart beats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately.
The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov Gibbs random field (MGRF) model that is able to accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are computed from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
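
The ventilation and elasticity descriptors mentioned above can be illustrated with a short sketch: the Jacobian determinant of the deformation gradient approximates local volume change (ventilation), and the Green-Lagrange strain tensor is built from the same gradient. This is a generic NumPy sketch under the assumption of a dense voxel-wise displacement field; it is not the dissertation's implementation.

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of a displacement field.

    disp has shape (3, D, H, W): displacement components (in voxels)
    along z, y, x. J = det(I + grad(u)); values > 1 indicate local
    expansion (inhalation), values < 1 local contraction.
    """
    grads = np.array([np.gradient(disp[i]) for i in range(3)])  # (3, 3, D, H, W)
    identity = np.eye(3).reshape(3, 3, 1, 1, 1)
    F = np.moveaxis(identity + grads, (0, 1), (-2, -1))         # (D, H, W, 3, 3)
    return np.linalg.det(F)

def green_lagrange_strain(disp):
    """Green-Lagrange strain tensor E = 0.5 * (F^T F - I) per voxel."""
    grads = np.array([np.gradient(disp[i]) for i in range(3)])
    identity = np.eye(3).reshape(3, 3, 1, 1, 1)
    F = np.moveaxis(identity + grads, (0, 1), (-2, -1))
    return 0.5 * (np.swapaxes(F, -2, -1) @ F - np.eye(3))

# toy (zero) deformation field, for illustration only
disp = np.zeros((3, 16, 32, 32))
ventilation = jacobian_determinant(disp)    # all ones for zero displacement
strain = green_lagrange_strain(disp)        # all zeros for zero displacement
```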

    Automatic Segmentation of Lung Carcinoma Using 3D Texture Features in 18-FDG PET/CT


    Investigation of intra-tumour heterogeneity to identify texture features to characterise and quantify neoplastic lesions on imaging

    The aim of this work was to further our knowledge of using imaging data to discover image-derived biomarkers and other information about the imaged tumour. Using scans obtained from multiple centres to discover and validate the models has advanced earlier research and provided a platform for further, larger prospective multi-centre studies. This work consists of two major studies, which are described separately. STUDY 1: NSCLC. Purpose: The aim of this multi-centre study was to discover and validate radiomics classifiers as image-derived biomarkers for risk stratification of non-small-cell lung cancer (NSCLC). Patients and methods: Pre-therapy PET scans from 358 Stage I–III NSCLC patients scheduled for radical radiotherapy/chemoradiotherapy, acquired between October 2008 and December 2013, were included in this seven-institution study. Using a semiautomatic threshold method to segment the primary tumors, radiomics predictive classifiers were derived from a training set of 133 scans using TexLAB v2. Least absolute shrinkage and selection operator (LASSO) regression analysis allowed data dimension reduction and radiomics feature vector (FV) discovery. Multivariable analysis was performed to establish the relationship between FV, stage and overall survival (OS). Performance of the optimal FV was tested in an independent validation set of 204 patients, and a further independent set of 21 patients (TESTI). Results: Of 358 patients, 249 died within the follow-up period [median 22 (range 0–85) months]. From each primary tumor, 665 three-dimensional radiomics features were extracted at each of seven gray levels. The most predictive feature vector discovered (FVX) was independent of known prognostic factors, such as stage and tumor volume, and, of interest to multi-centre studies, invariant to the type of PET/CT manufacturer. Using the median cut-off, FVX predicted a 14-month survival difference in the validation cohort (N = 204, p = 0.00465; HR = 1.61, 95% CI 1.16–2.24). In the TESTI cohort, a smaller cohort that presented with unusually poor survival of stage I cancers, FVX correctly indicated a lack of survival difference (N = 21, p = 0.501). In contrast to the radiomics classifier, clinically routine PET variables including SUVmax, SUVmean and SUVpeak lacked any prognostic information. Conclusion: PET-based radiomics classifiers derived from routine pre-treatment imaging possess intrinsic prognostic information for risk stratification of NSCLC patients to radiotherapy/chemoradiotherapy. STUDY 2: Ovarian Cancer. Purpose: The 5-year survival of epithelial ovarian cancer (EOC) is approximately 35-40%, prompting the need to develop additional methods, such as biomarkers, for personalised treatment. Patients and methods: 657 texture features were extracted from the CT scans of 364 untreated EOC patients. A 4-texture-feature 'Radiomic Prognostic Vector (RPV)' was developed using machine learning methods on the training set. Results: The RPV was able to identify the 5% of patients with the worst prognosis, significantly improving on established prognostic methods, and was further validated in two independent, multi-centre cohorts. In addition, genetic, transcriptomic and proteomic analysis from two independent datasets demonstrated that stromal and DNA damage response pathways are activated in RPV-stratified tumours. Conclusion: RPV could be used to guide personalised therapy of EOC.
Overall, the two large datasets of different imaging modalities have increased our knowledge of texture analysis, improved the models currently available, and provided us with more areas in which to implement these tools in the clinical setting.
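
The LASSO-based dimension reduction used to discover the radiomics feature vector in Study 1 can be sketched as follows. The sketch uses simulated data and plain LassoCV on a continuous surrogate outcome rather than the survival endpoint (a Cox-penalized model would be closer to the study); the feature counts mirror the abstract, but everything else is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# toy radiomics matrix: 133 "patients" x 665 texture features, plus a
# continuous outcome standing in for the survival endpoint (all simulated)
rng = np.random.default_rng(42)
X = rng.normal(size=(133, 665))
true_coef = np.zeros(665)
true_coef[:4] = [1.5, -2.0, 1.0, 0.8]          # only a few informative features
y = X @ true_coef + rng.normal(scale=0.5, size=133)

X_scaled = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(X_scaled, y)

selected = np.flatnonzero(lasso.coef_)          # indices of retained features
print(f"{selected.size} features kept out of {X.shape[1]}")
```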

    Techniques and software tool for 3D multimodality medical image segmentation

    The era of noninvasive diagnostic radiology and image-guided radiotherapy has witnessed burgeoning interest in applying different imaging modalities to stage and localize complex diseases such as atherosclerosis or cancer. It has been observed that using complementary information from multimodality images often significantly improves the robustness and accuracy of target volume definitions in radiotherapy treatment of cancer. In this work, we present techniques and an interactive software tool to support this new framework for 3D multimodality medical image segmentation. To demonstrate this methodology, we have designed and developed a dedicated open source software tool for multimodality image analysis, MIASYS. The software tool aims to provide a needed solution for 3D image segmentation by integrating automatic algorithms, manual contouring methods, image preprocessing filters, post-processing procedures, user interactive features and evaluation metrics. The presented methods and the accompanying software tool have been successfully evaluated for different radiation therapy and diagnostic radiology applications.
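
Among the evaluation metrics such a segmentation tool typically exposes, overlap measures like the Dice coefficient and Jaccard index are standard; the sketch below shows both on toy boolean masks. It is a generic illustration of these metrics, not MIASYS code.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity between two boolean masks (1.0 = perfect overlap)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def jaccard_index(a, b):
    """Jaccard (intersection over union) between two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# toy masks standing in for an automatic and a manual contour
auto = np.zeros((32, 32, 32), bool); auto[8:20, 8:20, 8:20] = True
manual = np.zeros((32, 32, 32), bool); manual[10:22, 10:22, 10:22] = True
print(dice_coefficient(auto, manual), jaccard_index(auto, manual))
```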