550 research outputs found

    Segmentation, tracking, and kinematics of lung parenchyma and lung tumors from 4D CT with application to radiation treatment planning.

    This thesis is concerned with the development of techniques for efficient computerized analysis of 4-D CT data. The goal is a highly automated approach to segmentation of the lung boundary and of lung nodules inside the lung. Determining the exact lung tumor location over space and time by image segmentation is an essential step in tracking thoracic malignancies, and accurate segmentation helps clinical experts examine the anatomy and structure and assess disease progress. Since 4-D CT provides structural and anatomical information during tidal breathing, we use the same data to also measure mechanical properties related to deformation of the lung tissue, including the Jacobian and strain, at high resolution and as a function of time. Radiation treatment of patients with lung cancer can benefit from knowledge of these measures of regional ventilation. Graph-cuts techniques have been popular for image segmentation because they can handle highly textured data via robust global optimization, avoiding local minima in graph-based optimization. Graph-cuts methods extract globally optimal boundaries from images via an s/t cut, with an energy function based on model-specific visual cues and useful topological constraints, making N-dimensional globally optimal segmentation possible with good computational efficiency. Even though the graph-cuts method can extract objects where there is a clear intensity difference, segmentation of organs or tumors poses a challenge. For organ segmentation, many methods using a shape prior have been proposed; in the case of lung tumors, however, the shape varies from patient to patient and with location. In this thesis, we build a shape prior for tumors through a training step and principal component analysis (PCA) based on the Active Shape Model (ASM). The method has been tested on real patient data from the Brown Cancer Center at the University of Louisville. We performed temporal B-spline deformable registration of the 4-D CT data; this yielded 3-D deformation fields between successive respiratory phases from which measures of regional lung function were determined. During the respiratory cycle, the lung volume changes and the five lobes of the lung (two in the left and three in the right lung) deform differently, yielding different strain and Jacobian maps. In this thesis, we determine the regional lung mechanics in the Lagrangian frame of reference across respiratory phases, for example, Phase10 to 20, Phase10 to 30, Phase10 to 40, and Phase10 to 50. Single photon emission computed tomography (SPECT) lung imaging using radioactive tracers, with SPECT ventilation and SPECT perfusion imaging, also provides functional information. As part of an IRB-approved study, we therefore registered the max-inhale CT volume to both VSPECT and QSPECT data sets using the Demons non-rigid registration algorithm in patient subjects. Subsequently, statistical correlation of CT ventilation images (Jacobian and strain values) with both VSPECT and QSPECT was undertaken. Through statistical analysis with Spearman's rank correlation coefficient, we found that Jacobian values have the highest correlation with both VSPECT and QSPECT.
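
    The Jacobian map mentioned above measures local volume change of the registration transform. Below is a minimal sketch of how such a map can be computed with NumPy, assuming a dense displacement field sampled on a regular voxel grid (for instance, resampled from the B-spline registration); the array layout, the `spacing` argument, and the function name are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of the transform x -> x + disp(x).

    disp: array of shape (3, Z, Y, X) holding the z, y, x displacement
    components in the same physical units as `spacing`.
    Values > 1 indicate local expansion, < 1 local contraction.
    """
    # Spatial gradients of each displacement component along z, y, x.
    grads = [np.gradient(disp[c], *spacing) for c in range(3)]
    # Jacobian of the full transform: identity plus displacement gradients.
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Sanity check: an identity (zero-displacement) field has Jacobian 1 everywhere.
disp = np.zeros((3, 8, 8, 8))
print(np.allclose(jacobian_determinant(disp), 1.0))  # True
```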

    Co-Segmentation Methods for Improving Tumor Target Delineation in PET-CT Images

    Positron emission tomography (PET)-computed tomography (CT) plays an important role in cancer management. As a multi-modal imaging technique it provides both functional and anatomical information about tumor spread, which improves cancer treatment in many ways. One important use of PET-CT in cancer treatment is to facilitate radiotherapy planning, since the information it provides helps radiation oncologists better target the tumor region. However, most tumor delineation in radiotherapy planning is currently performed by manual segmentation, which is time-consuming and labor-intensive. Most computer-aided algorithms need a knowledgeable user to roughly locate the tumor area as a starting point, because in PET-CT imaging some tissues, such as the heart and kidneys, may exhibit a level of activity similar to that of a tumor region. To address this issue, a novel co-segmentation method is proposed in this work to enhance the accuracy of tumor segmentation in PET-CT, and a localization algorithm is developed to differentiate and segment tumor regions from normal regions. On a combined dataset containing 29 patients with lung tumors, the combined method shows good segmentation results as well as a good tumor recognition rate.
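
    As a rough illustration of the PET-driven localization idea discussed above, the sketch below thresholds the PET volume at a fraction of its peak uptake and keeps the largest connected component as a tumor candidate, which could then seed a finer PET-CT co-segmentation. The 40% threshold, the largest-component rule, and the use of scipy.ndimage are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy import ndimage

def localize_candidate(pet, frac=0.4):
    """Return a rough tumor-candidate mask from a PET (SUV) volume.

    Voxels above `frac` of the maximum uptake are kept, and only the
    largest connected component is retained to suppress other
    high-uptake regions (heart, kidneys, ...). This simple rule can fail
    when a normal organ is the brightest structure, hence the need for
    the more careful localization the paper proposes.
    """
    mask = pet >= frac * pet.max()
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Toy demo: a bright 3x3x3 blob inside a low-uptake background.
pet = np.zeros((16, 16, 16)); pet[5:8, 5:8, 5:8] = 10.0
print(localize_candidate(pet).sum())  # 27 voxels recovered
```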

    A Semi-Automated Approach to Medical Image Segmentation using Conditional Random Field Inference

    Medical image segmentation plays a crucial role in delivering effective patient care across diagnostic and treatment modalities. Manual delineation of target volumes and all critical structures is a tedious and highly time-consuming process that introduces uncertainty into patients' treatment outcomes. Fully automatic methods hold great promise for reducing cost and time while improving accuracy and eliminating expert variability, yet great challenges remain. Legally and ethically, human oversight must be integrated with "smart tools", favoring a semi-automatic technique that can leverage the best aspects of both human and computer. In this work we formulate a semi-automatic framework for the segmentation problem as an energy minimization problem in a Conditional Random Field (CRF). We show that human input can be used as adaptive training data to condition a probabilistic boundary term modeled for the heterogeneous boundary characteristics of anatomical structures. We demonstrate that our method can effortlessly adapt to multiple structures and image modalities using a single CRF framework and tools to learn probabilistic terms interactively. To tackle the more difficult multi-class segmentation problem, we developed a new ensemble one-vs-rest graph cut algorithm. Each graph in the ensemble performs a simple and efficient bi-class segmentation (a target class versus the rest of the classes), and the final segmentation is obtained by majority vote. Our algorithm is both faster and more accurate than the prior multi-class method, which iteratively swaps classes. In this thesis, we also include novel volumetric segmentation algorithms that employ deep learning, and we indicate how to synthesize our CRF framework with convolutional neural networks (CNNs), which would allow user guidance to be incorporated into CNN-based deep learning for this task. We believe a deep learning based method interactively guided by a human expert is the ideal solution for medical image segmentation.
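
    The following sketch shows one way to read the majority-vote fusion of one-vs-rest binary segmentations described above: each binary cut votes for its own class where it is foreground and for every other class where it is background, and the voxel takes the class with the most votes. The exact vote-counting scheme in the thesis is not spelled out in the abstract, so this combiner is an assumption; the binary masks would come from the per-class graph cuts.

```python
import numpy as np

def majority_vote(onevsrest_masks):
    """Fuse one-vs-rest binary segmentations into a multi-class label map.

    onevsrest_masks: list of K boolean arrays; mask k is the foreground of
    the k-th binary (class-k vs rest) segmentation. Each cut votes for its
    own class where foreground and for every other class where background.
    """
    masks = np.stack(onevsrest_masks, axis=-1).astype(np.int32)   # (..., K)
    # For each class k: votes it receives from the other cuts' backgrounds.
    background_votes = (1 - masks).sum(axis=-1, keepdims=True) - (1 - masks)
    votes = masks + background_votes
    return np.argmax(votes, axis=-1)                              # class per voxel

# Toy demo: three overlapping one-vs-rest "cuts" on a 1-D signal.
m0 = np.array([1, 1, 0, 0, 0], dtype=bool)
m1 = np.array([0, 1, 1, 1, 0], dtype=bool)
m2 = np.array([0, 0, 0, 1, 1], dtype=bool)
print(majority_vote([m0, m1, m2]))  # [0 0 1 1 2] (ties resolved to the lower index)
```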

    Combining Shape and Learning for Medical Image Analysis

    Automatic methods that can make accurate, fast and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and by an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods succeed in meeting these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. in computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities: pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound, and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.

    Biomedical Image Processing and Classification

    Biomedical image processing is an interdisciplinary field involving a variety of disciplines, e.g., electronics, computer science, physics, mathematics, physiology, and medicine. Several imaging techniques have been developed, providing many approaches to the study of the human body. Biomedical image processing is finding an increasing number of important applications in, for example, the study of the internal structure or function of an organ and the diagnosis or treatment of a disease. When combined with classification methods, it can support the development of computer-aided diagnosis (CAD) systems, which could help medical doctors refine their clinical picture.

    Enhancing Semantic Segmentation: Design and Analysis of Improved U-Net Based Deep Convolutional Neural Networks

    In this research, we present a state-of-the-art method for semantic segmentation that makes use of a modified version of the U-Net architecture, itself based on deep convolutional neural networks (CNNs). The work examines this approach in detail in an effort to improve its accuracy and efficiency. Semantic segmentation, a crucial operation in computer vision, requires assigning each pixel in an image to one of several predefined object classes. The proposed Improved U-Net architecture uses deep CNNs to efficiently capture complex spatial characteristics while preserving the associated context. Through thorough experimentation and evaluation, the study illustrates the efficacy of the Improved U-Net in a variety of real-world scenarios. The network's design combines intricate feature extraction, down-sampling, and up-sampling to produce high-quality segmentation results. The study presents comparative evaluations against the classic U-Net and other state-of-the-art models and emphasizes the importance of hyperparameter fine-tuning. The proposed architecture shows excellent performance in terms of accuracy and generalization, demonstrating its promise for a variety of applications. Overall, the problem of semantic segmentation is addressed in a novel way, and the experimental findings validate the relevance of the architecture's design decisions and its potential to advance computer vision by enhancing segmentation precision and efficiency.
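
    For readers unfamiliar with the encoder-decoder pattern the abstract refers to, here is a compact PyTorch sketch of a generic U-Net-style network: convolutional blocks, max-pool down-sampling, transposed-convolution up-sampling, and skip connections that concatenate encoder features into the decoder. This is not the authors' Improved U-Net; the depth, channel widths, and 2-D input shape are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 conv + ReLU layers, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)   # skip concat doubles channels
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)    # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)                            # full resolution
        e2 = self.enc2(self.pool(e1))                # 1/2 resolution
        b = self.bottleneck(self.pool(e2))           # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                         # (N, n_classes, H, W)

# One forward pass on a dummy 64x64 single-channel image.
print(TinyUNet()(torch.zeros(1, 1, 64, 64)).shape)   # torch.Size([1, 2, 64, 64])
```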

    The optimal connection model for blood vessels segmentation and the MEA-Net

    Vascular diseases have long been regarded as a significant health concern, and accurately detecting the location, shape, and afflicted regions of blood vessels from a diverse range of medical images has proven to be a major challenge. Obtaining segmented blood vessels that retain their correct topological structure is currently a crucial research issue. Numerous efforts have sought to reinforce neural networks' learning of vascular geometric features, including measures to ensure the correct topological structure of the vessel centerline in the segmentation result. Typically, these methods extract topological features from the network's segmentation result and then apply regularity constraints to reinforce the accuracy of critical components and of the overall topological structure. However, as blood vessels are three-dimensional structures, it is essential to achieve complete local vessel segmentation, which necessitates improving the segmentation of vessel boundaries; furthermore, current methods are limited to handling 2D blood vessel fragmentation cases. Our proposed boundary attention module directly extracts boundary voxels from the network's segmentation result. Additionally, we establish an optimal connection model based on minimal surfaces to determine the connection order between blood vessels. Our method achieves state-of-the-art performance in 3D multi-class vascular segmentation tasks, as evidenced by high Dice Similarity Coefficient (DSC) and Normalized Surface Dice (NSD) values. Furthermore, our approach improves the Betti error, LR error, and BR error indicators of vessel richness and structural integrity by more than 10% compared with other methods, effectively addresses vessel fragmentation, and yields blood vessels with a more precise topological structure.
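
    As a hedged illustration of the basic operation a boundary-focused module builds on, the sketch below extracts boundary voxels from a binary 3-D segmentation with a single morphological erosion using scipy.ndimage. It shows the generic idea only, not the paper's boundary attention module.

```python
import numpy as np
from scipy import ndimage

def boundary_voxels(seg):
    """Boundary of a binary 3-D segmentation: foreground voxels with at
    least one background voxel in their 6-neighbourhood, i.e. the voxels
    removed by a single erosion step."""
    seg = seg.astype(bool)
    cross = ndimage.generate_binary_structure(3, 1)   # 6-connected structuring element
    eroded = ndimage.binary_erosion(seg, structure=cross)
    return seg & ~eroded

# Toy example: a solid 5x5x5 cube; its boundary is the outer shell.
cube = np.zeros((7, 7, 7), dtype=bool)
cube[1:6, 1:6, 1:6] = True
print(boundary_voxels(cube).sum())  # 98 = 5**3 - 3**3 shell voxels
```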

    Deep learning for image-based liver analysis — A comprehensive review focusing on malignant lesions

    Deep learning-based methods, in particular convolutional neural networks and fully convolutional networks, are now widely used in the medical image analysis domain. This review focuses on deep learning-based analysis of focal liver lesions, with a special interest in hepatocellular carcinoma and metastatic cancer, and of structures such as the parenchyma and the vascular system. We address several neural network architectures used for analyzing anatomical structures and lesions in the liver from various imaging modalities such as computed tomography, magnetic resonance imaging and ultrasound. Image analysis tasks such as segmentation, object detection and classification for the liver, liver vessels and liver lesions are discussed. Based on the qualitative search, 91 papers were selected for the survey, including journal publications and conference proceedings. The papers reviewed in this work are grouped into eight categories based on the methodologies used. Comparing the evaluation metrics, hybrid models performed better for both the liver and the lesion segmentation tasks, ensemble classifiers performed better for the vessel segmentation tasks, and combined approaches performed better for both the lesion classification and detection tasks. Performance was measured with the Dice score for segmentation and with accuracy for the classification and detection tasks, which are the most commonly used metrics.
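
    Since the surveyed segmentation methods are compared primarily by Dice score, a minimal reference implementation of that metric on binary masks is sketched below; the epsilon smoothing for the empty-mask case is an illustrative convention, not taken from the review.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|), in [0, 1]."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two half-overlapping 4x4 squares give Dice = 2*8 / (16 + 16) = 0.5.
a = np.zeros((8, 8), dtype=bool); a[:4, :4] = True
b = np.zeros((8, 8), dtype=bool); b[:4, 2:6] = True
print(round(dice_score(a, b), 3))  # 0.5
```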