
    Regmentation: A New View of Image Segmentation and Registration

    Image segmentation and registration have been, and remain, two major areas of research in the medical imaging community. In the context of radiation oncology, segmentation and registration methods are widely used to define target structures such as the prostate or head and neck lymph node areas. In the past two years, 45% of all articles published in the most important medical imaging journals and conferences have presented either segmentation or registration methods. In the literature, the two categories are treated largely separately even though they have much in common: registration techniques are used to solve segmentation tasks (e.g. atlas-based methods) and vice versa (e.g. segmentation of structures used in landmark-based registration). This article reviews the literature on image segmentation methods by introducing a novel taxonomy based on the amount of shape knowledge incorporated in the segmentation process. On that basis, we argue that all global-shape-prior segmentation methods are identical to image registration methods, and that such methods therefore cannot be characterized as purely segmentation or purely registration. We therefore propose a new class of methods able to solve both segmentation and registration tasks, which we call regmentation. Quantified on a survey of the current state-of-the-art medical imaging literature, 25% of the methods turn out to be pure registration methods, 46% pure segmentation methods, and 29% regmentation methods. This new view of image segmentation and registration provides a consistent taxonomy and emphasizes the importance of regmentation in current medical image processing research and radiation oncology image-guided applications.

    Deformable models for adaptive radiotherapy planning

    Radiotherapy is the most widely used treatment for cancer, with 4 out of 10 cancer patients receiving radiotherapy as part of their treatment. The delineation of gross tumour volume (GTV) is crucial in radiotherapy. An automatic contouring system would be beneficial in radiotherapy planning in order to generate objective, accurate and reproducible GTV contours. Image-guided radiotherapy (IGRT) acquires patient images just before treatment delivery to allow any necessary positional correction. Consequently, a real-time contouring system provides an opportunity to adapt radiotherapy on the treatment day. In this thesis, freely deformable models (FDMs) and shape-constrained deformable models (SCDMs) were used to automatically delineate the GTV for brain cancer and prostate cancer. The level set method (LSM) is a typical FDM and was used to contour gliomas on brain MRI. A series of low-level image segmentation methodologies are cascaded to form a case-wise, fully automatic initialisation pipeline for the level set function. Dice similarity coefficients (DSCs) were used to evaluate the contours. Results showed good agreement between clinical contours and LSM contours; in 93% of cases the DSC was between 60% and 80%. The second significant contribution is a novel development of the active shape model (ASM): instead of using conventional image intensity, a profile feature was selected from pre-computed texture features by minimising the Mahalanobis distance (MD) to obtain the most distinct feature for each landmark. A new group-wise registration scheme was applied to solve the correspondence definition within the training data. This ASM model was used to delineate the prostate GTV on CT. DSCs for this case ranged from 0.75 to 0.91, with a mean of 0.81. The last contribution is a fully automatic active appearance model (AAM) which captures image appearance near the GTV boundary.
    The image appearance inside the GTV was discarded to avoid potential disruption caused by brachytherapy seeds or gold markers. This model outperforms the conventional AAM at the prostate base and apex by involving surrounding organs. The overall mean DSC for this case is 0.85.
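    The Dice similarity coefficient (DSC) used throughout these evaluations measures the overlap between two binary masks. A minimal sketch in Python/NumPy (the toy masks are illustrative, not taken from the thesis):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)

# toy 1-D masks: 3 overlapping voxels, 4 voxels in each mask -> DSC = 2*3/8 = 0.75
m1 = np.array([1, 1, 1, 1, 0, 0])
m2 = np.array([0, 1, 1, 1, 1, 0])
print(dice(m1, m2))  # 0.75
```

The same function applies unchanged to 2-D or 3-D segmentation masks, since the sums run over all voxels.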

    Automatic Segmentation of the Mandible for Three-Dimensional Virtual Surgical Planning

    Three-dimensional (3D) medical imaging techniques have a fundamental role in the field of oral and maxillofacial surgery (OMFS). 3D images are used to guide diagnosis, assess the severity of disease, and support pre-operative planning, per-operative guidance and virtual surgical planning (VSP). In the field of oral cancer, where surgical resection requiring partial removal of the mandible is a common treatment, resection surgery is often based on 3D VSP to accurately design a resection plan around tumor margins. In orthognathic surgery and dental implant surgery, 3D VSP is also used extensively to precisely guide mandibular surgery. Image segmentation of head and neck radiographic images, the process of creating a 3D volume of the target tissue, is a useful tool to visualize the mandible and quantify geometric parameters. Studies have shown that 3D VSP requires accurate segmentation of the mandible, which is currently performed manually by medical technicians, a time-consuming and poorly reproducible process. This thesis presents four algorithms for mandible segmentation from CT and CBCT and contributes novel ideas for the development of automatic mandible segmentation for 3D VSP. We implement the segmentation approaches on head and neck CT/CBCT datasets and evaluate their performance. Experimental results show that our proposed approaches exhibit high accuracy for mandible segmentation in CT/CBCT datasets.

    Incorporating Cardiac Substructures Into Radiation Therapy For Improved Cardiac Sparing

    Growing evidence suggests that radiation therapy (RT) doses to the heart and cardiac substructures (CS) are strongly linked to cardiac toxicities, though only the whole heart is considered clinically. This work aimed to utilize the superior soft-tissue contrast of magnetic resonance (MR) to segment CS, quantify uncertainties in their position, and assess their effect on treatment planning, including in an MR-guided environment. Automatic segmentation of 12 CS was completed using a novel hybrid MR/computed tomography (CT) atlas method and was improved upon using a 3-dimensional deep learning neural network (U-Net). Intra-fraction motion due to respiration was then quantified, as were inter-fraction setup uncertainties on a novel MR-linear accelerator. Treatment planning comparisons were performed with and without substructure inclusion, and methods to reduce radiation dose to sensitive CS were evaluated. Lastly, these technologies (the deep learning U-Net) were translated to an MR-linear accelerator and a segmentation pipeline was created. The hybrid MR/CT atlas generated accurate segmentations for the chambers and great vessels (Dice similarity coefficient (DSC) > 0.75), but coronary artery segmentations were unsuccessful (DSC < 0.3). After implementing deep learning, DSC for the chambers and great vessels was ≥ 0.85, along with an improvement in the coronary arteries (DSC > 0.5). Similar accuracy was achieved when implementing deep learning for MR-guided RT. On average, atlas-based automatic segmentations required ~10 minutes per patient, whereas deep learning required only 14 seconds. The inclusion of CS in the treatment planning process did not yield statistically significant changes in plan complexity, PTV, or OAR dose.
    Automatic segmentation with deep learning offers major efficiency and accuracy gains for CS segmentation, with high potential for rapid implementation into radiation therapy planning for improved cardiac sparing. Introducing CS into RT planning for MR-guided RT presented an opportunity for more effective sparing with limited increase in plan complexity.

    Whole heart segmentation from CT images using 3D U-Net architecture

    Recent studies have demonstrated the importance of neural networks in medical image processing and analysis. However, their great efficiency in segmentation tasks is highly dependent on the amount of training data, and when these networks are used on small datasets, data augmentation can be very significant. We propose a convolutional neural network approach for whole heart segmentation which is based upon the 3D U-Net architecture and incorporates principal component analysis as an additional data augmentation technique. The network is trained end-to-end, i.e. no pre-trained network is required. Evaluation of the proposed approach is performed on 20 3D CT images from the MICCAI 2017 Multi-Modality Whole Heart Segmentation Challenge dataset, divided into 15 training and 5 validation images. Final segmentation results show a high Dice coefficient overlap with the ground truth, indicating that the proposed approach is competitive with the state of the art. Additionally, we discuss the influence of different learning rates on the final segmentation results.
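    The abstract does not detail its PCA-based augmentation, but a common variant fits principal components to the training samples and synthesizes new samples by perturbing along those directions. A minimal NumPy sketch under that assumption (all shapes, data, and the `pca_augment` helper are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical training set: 15 samples, each flattened to a 64-dim feature vector
X = rng.normal(size=(15, 64))

# PCA via SVD of the mean-centred data matrix
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:5]                        # top-5 principal directions
stddev = S[:5] / np.sqrt(X.shape[0] - 1)   # standard deviation along each direction

def pca_augment(x: np.ndarray, scale: float = 0.5) -> np.ndarray:
    """Synthesise a new sample by perturbing x along the principal components."""
    coeffs = rng.normal(scale=scale, size=len(stddev)) * stddev
    return x + coeffs @ components

new_sample = pca_augment(X[0])
print(new_sample.shape)  # (64,)
```

Sampling coefficients proportional to each component's standard deviation keeps the synthetic samples within the variability observed in the training set.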

    Semiautomated 3D liver segmentation using computed tomography and magnetic resonance imaging

    The liver is a vital abdominal organ known for its remarkable regenerative capacity and fundamental role in organism viability. Assessment of liver volume is an important tool which physicians use as a biomarker of disease severity. Liver volumetry is clinically indicated prior to major hepatectomy, portal vein embolization and transplantation. The most popular method to determine liver volume from computed tomography (CT) and magnetic resonance imaging (MRI) examinations involves contouring the liver on consecutive imaging slices, a process called "segmentation". Segmentation can be performed either manually or in an automated fashion. We present the design concept and validation strategy for an innovative semiautomated liver segmentation method developed at our institution. Our method represents a model-based approach using variational shape interpolation and Laplacian mesh optimization techniques. It is independent of training data, requires limited user interaction and is robust to a variety of pathological cases. Further, it was designed for compatibility with both CT and MRI examinations. We evaluated the repeatability, agreement and efficiency of our semiautomated method in two retrospective cross-sectional studies. The results of our validation studies suggest that semiautomated liver segmentation can provide strong agreement and repeatability when compared to manual segmentation. Further, segmentation automation significantly shortens interaction time, thus making it suitable for daily clinical practice. Future studies may incorporate liver volumetry to determine volume-averaged biomarkers of liver disease, such as fat, iron or fibrosis measurements per unit volume. Segmental volumetry could also be assessed based on subsegmentation of vascular anatomy.
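    The Laplacian mesh optimization mentioned above builds on the mesh Laplacian, which relates each vertex to its neighbours. The thesis method is more elaborate, but the core idea can be illustrated with plain uniform-weight Laplacian smoothing; a minimal sketch (the toy polyline "mesh" and adjacency lists are illustrative):

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Uniform-weight Laplacian smoothing: each vertex moves a fraction lam
    toward the centroid of its neighbours, relaxing the mesh."""
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        centroids = np.array([v[n].mean(axis=0) for n in neighbors])
        v += lam * (centroids - v)
    return v

# toy 2-D "mesh": a noisy polyline; each vertex lists its neighbour indices
verts = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, -0.5], [3.0, 0.0]])
nbrs = [[1], [0, 2], [1, 3], [2]]
smoothed = laplacian_smooth(verts, nbrs)
print(smoothed.shape)  # (4, 2)
```

Each iteration damps high-frequency noise on the surface; optimization-based variants add data-attachment terms so the mesh does not shrink away from the image evidence.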

    Segmentation Of Intracranial Structures From Noncontrast Ct Images With Deep Learning

    Presented in this work is an investigation of the application of artificially intelligent algorithms, namely deep learning, to generate segmentations for use in functional avoidance radiotherapy treatment planning. Specific applications of deep learning for functional avoidance include generating hippocampus segmentations from computed tomography (CT) images and generating synthetic pulmonary perfusion images from four-dimensional CT (4DCT). A single-institution dataset of 390 patients treated with Gamma Knife stereotactic radiosurgery was created. From these patients, the hippocampus was manually segmented on the high-resolution MR image and used for the development of the data processing methodology and for model testing. An attention-gated 3D residual network performed best, with 80.2% of contours meeting the clinical trial acceptability criteria. After determining the highest-performing model architecture, the model was tested on data from the RTOG-0933 Phase II multi-institutional clinical trial for hippocampal-avoidance whole brain radiotherapy. From the RTOG-0933 data, an institutional observer (IO) generated contours to compare the deep learning style with the style of the physicians participating in the Phase II trial. The deep learning model's performance was evaluated through contour comparison and radiotherapy treatment planning. Results showed that the deep learning contours generated plans comparable to the IO style but differed significantly from the Phase II contours, indicating that further investigation is required before this technology can be applied clinically. Additionally, motivated by the observed deviation in contouring styles of the trial's participating treating physicians, the utility of applying deep learning as a first-pass quality assurance measure was investigated.
    To simulate a central review, the IO contours were compared to the treating physician contours in an attempt to identify unacceptable deviations. The deep learning model was found to have an AUC of 0.80 for the left and 0.79 for the right hippocampus, indicating the potential of deep learning as a first-pass quality assurance tool. The methods developed for the hippocampal segmentation task were then translated to the generation of synthetic pulmonary perfusion imaging for use in functional lung avoidance radiotherapy. A clinical dataset of 58 pre- and post-radiotherapy SPECT perfusion studies (32 patients) with contemporaneous 4DCT studies was collected. Of these, 50 studies were used to train a 3D residual network, with five-fold validation used to select the highest-performing model instances (N=5). The highest-performing instances were tested on a 5-patient (8-study) hold-out test set. From these predictions, 50th-percentile contours of well-perfused lung were generated and compared to contours from the clinical SPECT perfusion images. On the test set the Spearman correlation coefficient was strong (0.70, IQR: 0.61-0.76) and the functional avoidance contours agreed well (Dice 0.803, IQR: 0.750-0.810; average surface distance 5.92 mm, IQR: 5.68-7.55 mm). This study indicates the potential of deep learning for the generation of synthetic pulmonary perfusion images, but an expanded dataset is required for additional model testing.
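    The Spearman correlation reported above is simply the Pearson correlation computed on rank-transformed values. A minimal NumPy sketch, assuming no tied values (tie handling requires average ranks; the sample data is illustrative):

```python
import numpy as np

def spearman(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman rank correlation: Pearson correlation of the ranks (no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each element of x
    ry = np.argsort(np.argsort(y)).astype(float)  # rank of each element of y
    return float(np.corrcoef(rx, ry)[0, 1])

# any strictly monotone relationship gives a correlation of exactly 1.0
x = np.array([0.2, 0.9, 0.5])
y = np.array([10.0, 80.0, 30.0])
print(spearman(x, y))  # 1.0
```

Because it depends only on ranks, this measure is well suited to comparing predicted and measured perfusion values whose intensity scales differ between modalities.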

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively over the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too large to be fully exploited by radiologists and physicians alone. Therefore, the design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA, with approximately 224,210 new cases and 159,260 related deaths in 2014. The process begins with the detection of lung cancer through the diagnosis of lung nodules, a manifestation of lung cancer. These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of lung cancer is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Accurately detecting, and hence helping to prevent, lung injury at an early stage would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypotheses that irradiated lung tissues may be affected and suffer decreased functionality as a side effect of radiation therapy treatment.
    These hypotheses have been validated by demonstrating that automatic segmentation of the lung regions, together with registration of consecutive respiratory phases, can estimate elasticity, ventilation, and texture features that provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
    Finally, radiation-induced lung injury detection is introduced, which combines the previous two medical image processing and analysis steps with feature estimation and classification. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are computed from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in a classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
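    The Jacobian-based ventilation measure described above can be sketched concretely: for a transform x + u(x), the voxel-wise determinant of the Jacobian quantifies local volume change. A minimal NumPy sketch (the zero-displacement field is a toy input for illustration, not clinical data):

```python
import numpy as np

def jacobian_determinant(disp: np.ndarray) -> np.ndarray:
    """Voxel-wise Jacobian determinant of the transform x + u(x).

    disp has shape (3, X, Y, Z): the displacement u along each axis.
    det > 1 indicates local expansion (inhalation), det < 1 local contraction."""
    # grads[i, j] = d u_i / d x_j, each component of shape (X, Y, Z)
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)])
    jac = grads + np.eye(3).reshape(3, 3, 1, 1, 1)  # identity for x + u(x)
    return np.linalg.det(jac.transpose(2, 3, 4, 0, 1))

# a zero displacement field leaves every voxel unchanged: determinant 1 everywhere
det_map = jacobian_determinant(np.zeros((3, 4, 4, 4)))
print(np.allclose(det_map, 1.0))  # True
```

The strain-based elasticity features follow from the same displacement gradient, e.g. the symmetric part (grads + grads.transpose(1, 0, 2, 3, 4)) / 2 in this notation.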