
    Automated analysis of small animal PET studies through deformable registration to an atlas

    Purpose: This work aims to develop a methodology for automated atlas-guided analysis of small animal positron emission tomography (PET) data through deformable registration to an anatomical mouse model. Methods: A non-rigid registration technique is used to bring relevant anatomical regions of rodent CT images from combined PET/CT studies into correspondence with the corresponding CT images of the Digimouse anatomical mouse model. The latter provides a pre-segmented atlas consisting of 21 anatomical regions suitable for automated quantitative analysis. Image registration is performed using a package based on the Insight Toolkit, which allows the implementation of various image registration algorithms. The optimal parameters obtained for deformable registration were applied to simulated and experimental mouse PET/CT studies. The accuracy of the image registration procedure was assessed by segmenting mouse CT images into seven regions: brain, lungs, heart, kidneys, bladder, skeleton and the rest of the body. This was accomplished prior to image registration using a semi-automated algorithm. Each mouse segmentation was transformed using the parameters obtained during CT-to-CT image registration. The resulting segmentation was compared with the original Digimouse atlas to quantify image registration accuracy using established metrics such as the Dice coefficient and Hausdorff distance. PET images were then transformed using the same technique and automated quantitative analysis of tracer uptake was performed. Results: The Dice coefficient and Hausdorff distance show fair to excellent agreement, with a mean registration mismatch distance of about 6 mm. The results demonstrate good quantification accuracy in most of the regions, especially the brain, but not in the bladder, as expected. Normalized mean activity estimates were preserved between the reference and automated quantification techniques, with relative errors below 10% in most of the organs considered.
    Conclusion: The proposed automated quantification technique is reliable, robust and suitable for fast quantification of preclinical PET data in large serial studies.
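The two registration-accuracy metrics used in the abstract above, the Dice coefficient and the Hausdorff distance, can be sketched in a few lines on binary masks. This is a minimal NumPy/SciPy illustration, not the evaluation code used in the study; the toy masks and voxel spacing are hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_coefficient(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff_distance(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between two binary masks.

    distance_transform_edt(~m) gives, at every voxel, the distance to the
    nearest foreground voxel of m; the directed Hausdorff distance is then
    the maximum of that map over the other mask's foreground.
    """
    a, b = a.astype(bool), b.astype(bool)
    da = distance_transform_edt(~a, sampling=spacing)
    db = distance_transform_edt(~b, sampling=spacing)
    return max(da[b].max(), db[a].max())

# Toy example: two overlapping squares in a 2-D slice.
a = np.zeros((32, 32), bool); a[8:20, 8:20] = True
b = np.zeros((32, 32), bool); b[10:22, 10:22] = True
print(dice_coefficient(a, b), hausdorff_distance(a, b))
```

With anisotropic voxels, `spacing` would be the per-axis voxel size so both metrics come out in millimetres.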

    Stacked fully convolutional networks with multi-channel learning: application to medical image segmentation

    The automated segmentation of regions of interest (ROIs) in medical imaging is a fundamental requirement for deriving the high-level semantics needed for image analysis in clinical decision support systems. Traditional segmentation approaches, such as region-based methods, depend heavily upon hand-crafted features and a priori knowledge of the user. As such, these methods are difficult to adopt within a clinical environment. Recently, methods based on fully convolutional networks (FCN) have achieved great success in the segmentation of general images. FCNs leverage a large labeled dataset to hierarchically learn the features that best correspond to the shallow appearance as well as the deep semantics of the images. However, when applied to medical images, FCNs usually produce coarse ROI detection and poor boundary definition, primarily due to the limited amount of labeled training data and the limited enforcement of label agreement among neighboring similar pixels. In this paper, we propose a new stacked FCN architecture with multi-channel learning (SFCN-ML). We embed the FCN in a stacked architecture to learn the foreground ROI features and background non-ROI features separately, and then integrate these different channels to produce the final segmentation result. In contrast to traditional FCN methods, our SFCN-ML architecture enables the visual attributes and semantics derived from both the fore- and background channels to be iteratively learned and inferred. We conducted extensive experiments on three public datasets with a variety of visual challenges. Our results show that our SFCN-ML is more effective and robust than a routine FCN and its variants, as well as other state-of-the-art methods.
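The final step described above, integrating a foreground (ROI) channel with a background (non-ROI) channel into one segmentation, can be illustrated as a per-pixel softmax over the two channel scores. This is a toy NumPy sketch of the channel-fusion idea only, not the SFCN-ML network itself; the logit arrays are hypothetical.

```python
import numpy as np

def fuse_channels(fg_logits, bg_logits):
    """Fuse per-pixel foreground and background channel scores into a final
    binary segmentation: a pixel is foreground when the ROI channel wins
    the two-way softmax."""
    logits = np.stack([bg_logits, fg_logits])           # (2, H, W)
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)            # per-pixel softmax
    return probs[1] > 0.5                               # foreground mask

# Toy 2x2 example: each channel scores every pixel; fusion picks the winner.
fg = np.array([[2.0, -1.0], [0.5, -2.0]])
bg = np.array([[-1.0, 1.0], [0.0, 3.0]])
print(fuse_channels(fg, bg))
```

In a trained stacked network the two logit maps would come from the separately learned channels; here they are hand-picked to show the fusion rule.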

    Statistical Shape Modelling and Segmentation of the Respiratory Airway

    The human respiratory airway consists of the upper (nasal cavity, pharynx) and lower (trachea, bronchi) respiratory tracts. Accurate segmentation of these two airway tracts can lead to better diagnosis and interpretation of airway-specific diseases, and to improved localization of abnormal metabolic or pathological sites found within and/or surrounding the respiratory regions. Due to the complexity and variability of the anatomical structure of the upper respiratory airway, along with the challenge of distinguishing the nasal cavity from non-respiratory regions such as the paranasal sinuses, it is difficult for existing algorithms to accurately segment the upper airway without manual intervention. This thesis presents an implicit non-parametric framework for constructing a statistical shape model (SSM) of the upper and lower respiratory tracts, capable of generating distinct shapes and of being adapted for segmentation. An SSM of the nasal cavity was successfully constructed using 50 nasal CT scans. The performance of the SSM was evaluated for compactness, specificity and generality; an average distance error of 1.47 mm was measured in the generality assessment. The constructed SSM was further adapted with a modified locally constrained random walk algorithm to segment the nasal cavity. The proposed algorithm was evaluated on 30 CT images and outperformed comparative state-of-the-art and conventional algorithms. For the lower airway, a separate algorithm was proposed to automatically segment the trachea and bronchi, designed to tolerate the image characteristics inherent in low-contrast CT images. The algorithm was evaluated on 20 clinical low-contrast CT images from PET-CT patient studies and demonstrated better segmentation performance (87.1±2.8 DSC and a distance error of 0.37±0.08 mm) than comparative state-of-the-art algorithms.
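A statistical shape model of the kind evaluated above can, in its simplest linear (point-distribution) form, be sketched as PCA over aligned landmark vectors: the mean shape plus weighted principal modes generates new plausible shapes. This is a generic sketch under the assumption of pre-aligned landmarks, not the thesis's implicit non-parametric framework; the toy square contour is hypothetical.

```python
import numpy as np

def build_ssm(shapes):
    """Fit a linear point-distribution model via PCA.
    shapes: (n_samples, n_points*dim) array of pre-aligned landmark vectors."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal modes of shape variation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s ** 2 / (len(shapes) - 1)   # per-mode variance (eigenvalues)
    return mean, vt, variances

def generate_shape(mean, modes, variances, coeffs):
    """Synthesize a shape as mean + sum_k b_k * sqrt(lambda_k) * mode_k."""
    b = np.asarray(coeffs, float)
    return mean + (b * np.sqrt(variances[: len(b)])) @ modes[: len(b)]

# Toy training set: 5 noisy variations of a 4-point square contour (x,y pairs).
rng = np.random.default_rng(0)
base = np.array([0, 0, 1, 0, 1, 1, 0, 1], float)
shapes = base + 0.05 * rng.standard_normal((5, 8))
mean, modes, var = build_ssm(shapes)
new_shape = generate_shape(mean, modes, var, [1.0, -0.5])
print(new_shape.shape)
```

Compactness is read off the same decomposition: the fewer modes needed to capture most of the total variance, the more compact the model.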

    Evaluating and Improving 4D-CT Image Segmentation for Lung Cancer Radiotherapy

    Lung cancer is a high-incidence disease with low survival despite surgical advances and concurrent chemo-radiotherapy strategies. Image-guided radiotherapy provides a means of treatment; however, significant challenges remain in imaging, treatment planning, and radiation delivery due to the influence of respiratory motion. 4D-CT imaging is capable of improving the image quality of thoracic target volumes influenced by respiratory motion. 4D-CT-based treatment planning strategies require highly accurate anatomical segmentation of tumour volumes for radiotherapy treatment plan optimization. Variable segmentation of tumour volumes contributes significantly to uncertainty in radiotherapy planning, owing to a lack of knowledge regarding the exact shape of the lesion and the difficulty of quantifying this variability. As image segmentation is one of the earliest tasks in the radiotherapy process, inherent geometric uncertainties propagate to subsequent stages, potentially jeopardizing patient outcomes. Thus, this work assesses segmentation-related geometric uncertainties in 4D-CT-based lung cancer radiotherapy and suggests strategies for their mitigation at the pre- and post-treatment planning stages.
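One simple way to quantify the segmentation variability discussed above is a pairwise Dice matrix over several delineations of the same target volume; the off-diagonal spread summarizes inter-observer (or inter-phase) disagreement. A minimal NumPy sketch with hypothetical observer contours:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def pairwise_dice(contours):
    """Symmetric matrix of Dice overlaps between all pairs of delineations
    of the same target; low off-diagonal values flag high variability."""
    n = len(contours)
    m = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            m[i, j] = m[j, i] = dice(contours[i], contours[j])
    return m

# Toy example: three observers contour the same square lesion,
# each shifted by one pixel relative to the previous one.
observers = []
for offset in (0, 1, 2):
    c = np.zeros((20, 20), bool)
    c[5 + offset:15 + offset, 5:15] = True
    observers.append(c)

M = pairwise_dice(observers)
print(M.round(2))
```

The same matrix computed per respiratory phase of a 4D-CT series would show how segmentation agreement varies across the breathing cycle.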

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important technology that has been used intensively over the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too large to be fully exploited by radiologists and physicians alone. Therefore, the design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissue may be affected and suffer a decrease in functionality as a side effect of radiation therapy.
    This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions, followed by registration of consecutive respiratory phases to estimate their elasticity, ventilation, and texture features, provides discriminatory descriptors that can be used for early detection of radiation-induced lung injury. The proposed methodologies will lead to novel indexes for distinguishing between normal/healthy and injured lung tissue in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed that comprises three basic components: lung field segmentation, lung registration, and feature extraction and tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of ensuring the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
    Finally, the radiation-induced lung injury detection framework is introduced, combining the two preceding medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately captures the texture of healthy and injured lung tissue by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are computed from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissue using the Jacobian of the deformation field, and the tissue's elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention.
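The ventilation surrogate named above, the Jacobian determinant of the deformation field, can be sketched for a 2-D displacement field with NumPy: the deformation is identity plus displacement, so its Jacobian has 1 added on the diagonal, and a determinant above 1 indicates local expansion (inhalation). This is a generic illustration of the Jacobian-based ventilation measure, not the dissertation's implementation; the uniform-expansion field is a hypothetical test case.

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Per-pixel Jacobian determinant of a 2-D displacement field.

    disp has shape (2, H, W): disp[0] is the y-displacement, disp[1] the
    x-displacement. Values > 1 mark local expansion, < 1 local compression.
    """
    dy_dy, dy_dx = np.gradient(disp[0])   # partials of u_y w.r.t. (y, x)
    dx_dy, dx_dx = np.gradient(disp[1])   # partials of u_x w.r.t. (y, x)
    # Deformation = identity + displacement, hence the +1 on the diagonal.
    return (1 + dy_dy) * (1 + dx_dx) - dy_dx * dx_dy

# Toy example: a uniform 10% expansion along both axes,
# whose Jacobian determinant is 1.1 * 1.1 = 1.21 everywhere.
h = w = 16
yy, xx = np.mgrid[0:h, 0:w].astype(float)
disp = np.stack([0.1 * yy, 0.1 * xx])
J = jacobian_determinant_2d(disp)
print(round(float(J.mean()), 2))
```

The strain-based elasticity features mentioned in the abstract come from the same gradient tensor, using its symmetric part instead of its determinant.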