    Unsupervised supervoxel-based lung tumor segmentation across patient scans in hybrid PET/MRI

    Tumor segmentation is a crucial but difficult task in treatment planning and follow-up of cancer patients. The challenge of automating tumor segmentation has recently received a lot of attention, but the potential of utilizing hybrid positron emission tomography (PET)/magnetic resonance imaging (MRI), a novel and promising imaging modality in oncology, is still under-explored. Recent approaches have either relied on manual user input and/or performed the segmentation patient-by-patient, whereas a fully unsupervised segmentation framework that exploits the available information from all patients is still lacking. We present an unsupervised across-patients supervoxel-based clustering framework for lung tumor segmentation in hybrid PET/MRI. The method consists of two steps: First, each patient is represented by a set of PET/MRI supervoxel features. Then the data points from all patients are transformed and clustered on a population level into tumor and non-tumor supervoxels. The proposed framework is tested on the scans of 18 non-small cell lung cancer patients with a total of 19 tumors and evaluated with respect to manual delineations provided by clinicians. Experiments study the performance of several commonly used clustering algorithms within the framework and provide analysis of (i) the effect of tumor size, (ii) the segmentation errors, (iii) the benefit of across-patient clustering, and (iv) the noise robustness. The proposed framework detected 15 out of 19 tumors in an unsupervised manner. Moreover, performance increased considerably by segmenting across patients, with the mean Dice score increasing from 0.169 ± 0.295 (patient-by-patient) to 0.470 ± 0.308 (across-patients). Results demonstrate that both spectral clustering and Manhattan hierarchical clustering have the potential to segment tumors in PET/MRI with a low number of missed tumors and a low number of false positives, but that spectral clustering seems to be more robust to noise.
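The Dice score used for evaluation above measures the overlap between a predicted and a reference mask as 2|A∩B| / (|A| + |B|). A minimal sketch (the toy masks are illustrative, not data from the paper):

```python
import numpy as np

def dice_score(pred, ref):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy example: two overlapping binary "masks"
pred = np.array([0, 1, 1, 1, 0])
ref  = np.array([0, 0, 1, 1, 1])
print(round(dice_score(pred, ref), 3))  # 2*2 / (3+3) = 0.667
```

A Dice score of 0.470 thus corresponds to slightly less than half of the combined tumor volume being correctly overlapped, on average.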

    Leveraging Supervoxels for Medical Image Volume Segmentation With Limited Supervision

    The majority of existing methods for machine learning-based medical image segmentation are supervised models that require large amounts of fully annotated images. These types of datasets are typically not available in the medical domain and are difficult and expensive to generate. Widespread use of machine learning-based models for medical image segmentation therefore requires the development of data-efficient algorithms that only require limited supervision. To address these challenges, this thesis presents new machine learning methodology for unsupervised lung tumor segmentation and few-shot learning-based organ segmentation. When working in the limited-supervision paradigm, exploiting the available information in the data is key. The methodology developed in this thesis leverages automatically generated supervoxels in various ways to exploit the structural information in the images. The work on unsupervised tumor segmentation explores the opportunity of performing clustering on a population level in order to provide the algorithm with as much information as possible. To facilitate this population-level across-patient clustering, supervoxel representations are exploited to reduce the number of samples, and thereby the computational cost. In the work on few-shot learning-based organ segmentation, supervoxels are used to generate pseudo-labels for self-supervised training. Further, to obtain a model that is robust to the typically large and inhomogeneous background class, a novel anomaly detection-inspired classifier is proposed to ease the modelling of the background. To encourage the resulting segmentation maps to respect edges defined in the input space, a supervoxel-informed feature refinement module is proposed to refine the embedded feature vectors during inference. Finally, to improve trustworthiness, an architecture-agnostic mechanism to estimate model uncertainty in few-shot segmentation is developed.
Results demonstrate that supervoxels are versatile tools for leveraging structural information in medical data when training segmentation models with limited supervision.
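The pseudo-label idea above assigns every voxel to a region and then treats sampled regions as training targets. Real pipelines typically use an intensity-adaptive supervoxel method such as SLIC; as a simplified, hypothetical stand-in, a regular block partition already illustrates the mechanism:

```python
import numpy as np

def block_pseudo_labels(volume, block=4):
    """Partition a 3-D volume into regular blocks as crude 'supervoxels'.

    Each voxel gets an integer region label; in self-supervised few-shot
    training, sampled regions then serve as pseudo foreground classes.
    An intensity-adaptive method (e.g. SLIC) would respect image edges;
    this grid version only illustrates the pseudo-label idea."""
    z, y, x = volume.shape
    zi, yi, xi = np.indices((z, y, x))
    by = -(-y // block)  # number of blocks along y (ceiling division)
    bx = -(-x // block)  # number of blocks along x
    return (zi // block) * by * bx + (yi // block) * bx + (xi // block)

vol = np.random.rand(8, 8, 8)      # toy volume, not real image data
labels = block_pseudo_labels(vol, block=4)
print(labels.max() + 1)            # 2*2*2 = 8 pseudo-label regions
```

Because every voxel receives a label without any annotation, the supervision signal comes entirely from the image structure itself.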

    Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review

    The medical image analysis field has traditionally focused on the development of organ- and disease-specific methods. Recently, interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art on multi-organ analysis and the associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multi-organ and multi-anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare.
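The point distribution models mentioned above represent a shape as a mean landmark configuration plus a few principal modes of variation. A minimal sketch with synthetic landmarks (all data and dimensions here are illustrative):

```python
import numpy as np

# Toy point distribution model (PDM): each row is one training shape,
# flattened landmark coordinates (x1, y1, x2, y2, ...).
rng = np.random.default_rng(0)
mean_square = np.array([0., 0., 1., 0., 1., 1., 0., 1.])  # 4 landmarks
shapes = mean_square + 0.05 * rng.standard_normal((20, 8))

mu = shapes.mean(axis=0)
centered = shapes - mu
# Principal modes of shape variation via SVD (the core of a PDM)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
eigvals = s**2 / (len(shapes) - 1)  # variance explained per mode

# New plausible shapes are generated as mu + sum_k b_k * phi_k, with each
# coefficient b_k typically bounded by ~3*sqrt(lambda_k) so the result
# stays within the learned shape space.
b = np.array([2 * np.sqrt(eigvals[0])])
new_shape = mu + b @ Vt[:1]
print(new_shape.shape)  # (8,) — same landmark layout as the training shapes
```

Multi-organ extensions concatenate the landmarks of several organs into one shape vector, so the modes capture inter-organ spatial correlations as well.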

    Functional and structural MRI image analysis for brain glial tumors treatment

    This Ph.D. Thesis is the outcome of a close collaboration between the Center for Research in Image Analysis and Medical Informatics (CRAIIM) of the Insubria University and the Operative Unit of Neurosurgery, Neuroradiology and Health Physics of the University Hospital "Circolo Fondazione Macchi", Varese. The project aim is to investigate new methodologies by means of which to develop an integrated framework able to enhance the use of Magnetic Resonance Images, in order to support clinical experts in the treatment of patients with brain glial tumors. Both of the most common uses of MRI technology for non-invasive brain inspection were analyzed. From the functional point of view, the goal has been to provide tools for an objective, reliable, and non-presumptive assessment of the locations of functional brain areas, so that they can be preserved as much as possible during surgery. From the structural point of view, methodologies for fully automatic brain segmentation and recognition of tumoral areas have been studied, for evaluating tumor volume and spatial distribution, inferring correlations with other clinical data, and tracing growth trends. Each of the proposed methods has been thoroughly assessed both qualitatively and quantitatively. All the Medical Imaging and Pattern Recognition algorithmic solutions studied for this Ph.D. Thesis have been integrated in GliCInE: Glioma Computerized Inspection Environment, which is a MATLAB prototype of an integrated analysis environment that offers, in addition to all the functionality specifically described in this Thesis, a set of tools needed to manage Functional and Structural Magnetic Resonance Volumes and ancillary data related to the acquisition and the patient.

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been intensively used in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information that is too large to be fully exploited by radiologists and physicians. Therefore, the design of a computer-aided diagnostic (CAD) system, which can be used as an assistive tool for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer, which remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence to prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer decreased functionality as a side effect of radiation therapy treatment.
This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions, together with registration of consecutive respiratory phases, can estimate elasticity, ventilation, and texture features that provide discriminatory descriptors for early detection of radiation-induced lung injury. The proposed methodologies will lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury comprising three basic components has been developed. These components are lung field segmentation, lung registration, and feature extraction and tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by introducing an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, in order to accurately extract the functionality features for the lung fields. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately models the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
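The ventilation surrogate described above is the voxel-wise Jacobian determinant of the deformation: values above 1 indicate local expansion, values below 1 local contraction. A minimal sketch, assuming a dense displacement field stored as a NumPy array (array layout and spacing are illustrative choices, not the dissertation's implementation):

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of a dense 3-D displacement field.

    disp has shape (3, Z, Y, X): displacement along each axis.
    J = det(I + grad(u)); J > 1 means local expansion (inhalation),
    J < 1 local contraction — used here as a ventilation surrogate."""
    # grads[i, j] = d u_i / d x_j, each of shape (Z, Y, X)
    grads = np.stack([np.stack(np.gradient(disp[i], *spacing), axis=0)
                      for i in range(3)], axis=0)          # (3, 3, Z, Y, X)
    jac = grads + np.eye(3)[:, :, None, None, None]        # add identity
    # Move the 3x3 matrix axes last so det is applied voxel-wise
    jac = np.moveaxis(jac, (0, 1), (-2, -1))               # (Z, Y, X, 3, 3)
    return np.linalg.det(jac)

# Zero displacement: the Jacobian determinant is 1 everywhere (no volume change)
disp = np.zeros((3, 4, 4, 4))
detJ = jacobian_determinant(disp)
print(np.allclose(detJ, 1.0))  # True
```

The strain-based elasticity features mentioned in the text come from the same gradient tensor (its symmetric part), so both feature families share one registration result.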

    PET-guided delineation of radiation therapy treatment volumes: a survey of image segmentation techniques

    Historically, anatomical CT and MR images were used to delineate the gross tumour volumes (GTVs) for radiotherapy treatment planning. The capabilities offered by modern radiation therapy units and the widespread availability of combined PET/CT scanners stimulated the development of biological PET imaging-guided radiation therapy treatment planning, with the aim to produce a highly conformal radiation dose distribution to the tumour. One of the most difficult issues facing PET-based treatment planning is the accurate delineation of target regions from typically blurred and noisy functional images. The major problems encountered are image segmentation and the imperfect system response function. Image segmentation is defined as the process of classifying the voxels of an image into a set of distinct classes. The difficulty in PET image segmentation is compounded by the low spatial resolution and high noise characteristics of PET images. Despite the difficulties and known limitations, several image segmentation approaches have been proposed and used in the clinical setting, including thresholding, edge detection, region growing, clustering, stochastic models, deformable models, classifiers, and several other approaches. A detailed description of the various approaches proposed in the literature is reviewed. Moreover, we also briefly discuss some important considerations and limitations of the widely used techniques to guide practitioners in the field of radiation oncology. The strategies followed for validation and comparative assessment of various PET segmentation approaches are described. Future opportunities and the current challenges facing the adoption of PET-guided delineation of target volumes and its role in basic and clinical research are also addressed.
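Of the approaches listed, thresholding is the simplest: keep every voxel whose uptake exceeds a fraction of the maximum standardized uptake value (SUVmax), with fractions around 40-50% commonly reported. A hedged sketch (values and variable names are illustrative, not a clinical tool):

```python
import numpy as np

def fixed_threshold_gtv(suv, fraction=0.40):
    """Classic fixed-threshold PET delineation: keep voxels whose uptake
    exceeds a fraction of SUVmax. Real use requires calibration, since the
    optimal fraction depends on lesion size, contrast, and scanner blur."""
    return suv >= fraction * suv.max()

# Toy uptake map: background ~1, a hot 'lesion' with SUVmax = 10
suv = np.ones((5, 5))
suv[2, 2] = 10.0
suv[2, 1] = 5.0
mask = fixed_threshold_gtv(suv, fraction=0.40)
print(mask.sum())  # 2 voxels: the peak (10) and its neighbor (5 >= 4)
```

The survey's more elaborate methods (region growing, stochastic and deformable models) exist precisely because a single global fraction handles blur and heterogeneous uptake poorly.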

    Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images

    The medical diagnosis process starts with an interview with the patient, and continues with the physical exam. In practice, the medical professional may require additional screenings to diagnose precisely. Medical imaging is one of the most frequently used non-invasive screening methods to gain insight into the human body. Medical imaging is not only essential for accurate diagnosis, but can also enable early prevention. Medical data visualization refers to projecting the medical data into a human-understandable format on media such as 2D or head-mounted displays, without itself performing any interpretation that may lead to clinical intervention. In contrast to visualization, quantification refers to extracting the information in the medical scan to enable clinicians to make fast and accurate decisions. Despite the extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often performed independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to be adopted in routine clinics due to the lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms in the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere, and performing fast, accurate, and fully automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods.
Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering, (2) robust, accurate, and efficient learning-based algorithms to segment highly pathological medical data, (3) automatic landmarking for aiding diagnosis and surgical planning, and (4) novel artificial intelligence algorithms to determine the data sufficient and necessary to solve large-scale problems.