
    State of the art: iterative CT reconstruction techniques

    Owing to recent advances in computing power, iterative reconstruction (IR) algorithms have become a clinically viable option in computed tomographic (CT) imaging. Substantial evidence is accumulating about the advantages of IR algorithms over established analytical methods, such as filtered back projection. IR improves image quality through cyclic image processing. Although all available solutions share the common mechanism of artifact reduction and/or potential for radiation dose savings, chiefly due to image noise suppression, the magnitude of these effects depends on the specific IR algorithm. In the first section of this contribution, the technical bases of IR are briefly reviewed and the currently available algorithms released by the major CT manufacturers are described. In the second part, the current status of their clinical implementation is surveyed. Regardless of the applied IR algorithm, the available evidence attests to the substantial potential of IR algorithms for overcoming traditional limitations in CT imaging.
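
    The abstract describes iterative reconstruction only at a high level ("cyclic image processing"). As a purely illustrative sketch of that idea, the following Python snippet runs a SIRT-style update on a toy linear system; the system matrix, noise level, step weighting, and iteration count are assumptions for demonstration and do not correspond to any vendor algorithm.

        # SIRT-style iterative reconstruction sketch (illustrative only).
        # A toy matrix stands in for the CT forward projector; real IR algorithms
        # use vendor-specific system models, regularizers, and noise statistics.
        import numpy as np

        rng = np.random.default_rng(0)
        n_pixels, n_rays = 64, 128                # toy image size and number of measurements

        A = rng.random((n_rays, n_pixels))        # toy forward projection operator
        x_true = rng.random(n_pixels)             # toy ground-truth image (flattened)
        b = A @ x_true + 0.01 * rng.standard_normal(n_rays)   # noisy "sinogram"

        R = 1.0 / A.sum(axis=1)                   # inverse row sums (ray weighting)
        C = 1.0 / A.sum(axis=0)                   # inverse column sums (pixel weighting)

        x = np.zeros(n_pixels)                    # start from an empty image
        for _ in range(200):
            residual = b - A @ x                  # measurements vs. current forward projection
            x = x + C * (A.T @ (R * residual))    # back-project the weighted residual
            x = np.clip(x, 0, None)               # enforce non-negativity

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))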

    Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography

    PURPOSE: Classic encoder-decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance, the condyles and coronoids of the mandible, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that is able to accurately segment detailed anatomical structures. METHODS: Unlike classic EDCNNs, which need to slice or crop the whole CT scan into 2D slices or 3D patches during the segmentation process, our proposed approach can perform mandible segmentation on complete 3D CT scans. The proposed method, named RCNNSeg, adopts the structure of recurrent neural networks to form a directed acyclic graph, in which recurrent connections between adjacent nodes retain their connectivity. Each node then functions as a classic EDCNN to segment a single slice of the CT scan. The proposed approach can perform 3D mandible segmentation on sequential data of varying length and does not incur a large computational cost. The proposed RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the public PDDCA dataset. The accuracy of the proposed RCNNSeg was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. RESULTS: The proposed RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performance compared to the state-of-the-art approaches on the PDDCA dataset. The proposed RCNNSeg generated the most accurate segmentations, with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on the 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. CONCLUSIONS: The proposed RCNNSeg method generated more accurate automated segmentations than the other classic EDCNN segmentation techniques in terms of quantitative and qualitative evaluation. The proposed RCNNSeg has potential for automatic mandible segmentation by learning spatially structured information.
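
    The evaluation metrics cited above (DSC, ASD, and 95HD) are standard segmentation-quality measures. As a hedged illustration of how such a comparison between a reference mask and an automated mask can be computed, here is a short Python sketch using NumPy and SciPy; the surface extraction, isotropic voxel spacing, and toy spherical masks are simplifying assumptions, not the authors' evaluation code.

        # Sketch of segmentation-overlap and surface-distance metrics
        # (Dice, average symmetric surface distance, 95% Hausdorff distance).
        # Illustrative only; assumes isotropic voxel spacing.
        import numpy as np
        from scipy import ndimage

        def dice(ref, seg):
            """Dice similarity coefficient between two binary masks."""
            ref, seg = ref.astype(bool), seg.astype(bool)
            inter = np.logical_and(ref, seg).sum()
            return 2.0 * inter / (ref.sum() + seg.sum())

        def surface_distances(ref, seg, spacing=1.0):
            """Distances from each surface voxel of one mask to the other mask's surface."""
            ref, seg = ref.astype(bool), seg.astype(bool)
            ref_surf = ref ^ ndimage.binary_erosion(ref)
            seg_surf = seg ^ ndimage.binary_erosion(seg)
            # Distance maps to each surface, sampled on the opposite surface.
            dist_to_ref = ndimage.distance_transform_edt(~ref_surf, sampling=spacing)
            dist_to_seg = ndimage.distance_transform_edt(~seg_surf, sampling=spacing)
            return np.concatenate([dist_to_ref[seg_surf], dist_to_seg[ref_surf]])

        # Toy example: two overlapping spheres stand in for reference/automated masks.
        zz, yy, xx = np.mgrid[:64, :64, :64]
        ref = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
        seg = (zz - 34) ** 2 + (yy - 32) ** 2 + (xx - 31) ** 2 < 15 ** 2

        d = surface_distances(ref, seg)
        print(f"DSC  = {dice(ref, seg):.4f}")
        print(f"ASD  = {d.mean():.4f} mm")
        print(f"95HD = {np.percentile(d, 95):.4f} mm")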

    Automatic Segmentation of the Mandible for Three-Dimensional Virtual Surgical Planning

    Three-dimensional (3D) medical imaging techniques have a fundamental role in the field of oral and maxillofacial surgery (OMFS). 3D images are used to guide diagnosis, assess the severity of disease, and support pre-operative planning, per-operative guidance, and virtual surgical planning (VSP). In the field of oral cancer, where surgical resection requiring partial removal of the mandible is a common treatment, resection surgery is often based on 3D VSP to accurately design a resection plan around the tumor margins. In orthognathic surgery and dental implant surgery, 3D VSP is also extensively used to precisely guide mandibular surgery. Image segmentation of head and neck radiographic images, the process of creating a 3D volume of the target tissue, is a useful tool for visualizing the mandible and quantifying geometric parameters. Studies have shown that 3D VSP requires accurate segmentation of the mandible, which is currently performed by medical technicians. Mandible segmentation is usually done manually, which is a time-consuming and poorly reproducible process. This thesis presents four algorithms for mandible segmentation from CT and CBCT and contributes novel ideas toward the development of automatic mandible segmentation for 3D VSP. We implement the segmentation approaches on head and neck CT/CBCT datasets and evaluate their performance. Experimental results show that our proposed approaches for mandible segmentation in CT/CBCT datasets exhibit high accuracy.

    Medical Image Analytics (Radiomics) with Machine/Deep Learning for Outcome Modeling in Radiation Oncology

    Image-based quantitative analysis (radiomics) has gained great attention recently. Radiomics has promising potential for application in the clinical practice of radiotherapy and for providing personalized healthcare to cancer patients. However, several challenges stand in the way, which this thesis attempts to address. Specifically, this thesis focuses on the investigation of the repeatability and reproducibility of radiomics features, the development of new machine/deep learning models, and the combination of these for robust outcome modeling and its applications in radiotherapy. Radiomics features suffer from robustness issues when applied to outcome modeling problems, especially in head and neck computed tomography (CT) images, which tend to contain streak artifacts due to patients’ dental implants. To investigate the influence of artifacts on radiomics modeling performance, we first developed an automatic artifact detection algorithm using gradient-based hand-crafted features, and then compared radiomics models trained on ‘clean’ and ‘contaminated’ datasets. The second project focused on using hand-crafted radiomics features and conventional machine learning methods to predict overall response and progression-free survival for Y90-treated liver cancer patients. By embedding prior knowledge in the engineered radiomics features and using bootstrapped LASSO to select robust features, we trained imaging- and dose-based models for the desired clinical endpoints, highlighting the complementary nature of this information in Y90 outcome prediction. Combining hand-crafted and machine-learnt features can take advantage of both expert domain knowledge and advanced data-driven approaches (e.g., deep learning). Thus, in the third project we proposed a new variational autoencoder network framework that models radiomics features, clinical factors, and raw CT images for the prediction of intrahepatic recurrence-free and overall survival for hepatocellular carcinoma (HCC) patients. The proposed approach was compared with the widely used Cox proportional hazards model for survival analysis. Our proposed methods achieved significant improvement in prediction, measured by the c-index, highlighting the value of advanced modeling techniques for learning from limited and heterogeneous information in actuarial prediction of outcomes. Advances in stereotactic body radiation therapy (SBRT) have led to excellent local tumor control with limited toxicities for HCC patients, but intrahepatic recurrence remains prevalent. As an extension of the third project, we aimed to predict not only the time to intrahepatic recurrence but also the location where the tumor might recur. This will be clinically beneficial for better intervention and for optimizing decision making during radiotherapy treatment planning. To address this challenging task, we first proposed an unsupervised registration neural network to register an atlas CT to each patient's simulation CT and obtain the liver's Couinaud segments for the entire patient cohort. Second, a new attention convolutional neural network was applied to multimodality images (CT, MR, and the 3D dose distribution) for the prediction of high-risk segments. The results showed much improved efficiency in obtaining the segments compared with conventional registration methods, and the prediction showed promising accuracy in anticipating the recurrence location as well.
    Overall, this thesis contributed new methods and techniques to improve the use of radiomics for personalized radiotherapy. These contributions included a new algorithm for detecting artifacts, a joint model of dose and image heterogeneity, the combination of hand-crafted and machine-learnt features for actuarial radiomics modeling, and a novel approach for predicting the location of treatment failure. PhD, Applied Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163092/1/liswei_1.pd
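
    The abstract mentions bootstrapped LASSO as the mechanism for selecting robust radiomics features. The sketch below illustrates that general idea with scikit-learn on synthetic data; the endpoint, regularization strength, resample count, and 80% selection-frequency threshold are illustrative assumptions rather than the settings used in the thesis.

        # Bootstrapped LASSO feature selection sketch for radiomics-style data
        # (illustrative only; synthetic features and endpoint).
        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.preprocessing import StandardScaler
        from sklearn.utils import resample

        rng = np.random.default_rng(0)
        n_patients, n_features = 120, 40
        X = rng.standard_normal((n_patients, n_features))   # stand-in radiomics features
        # Synthetic endpoint driven by a few "true" features plus noise.
        y = X[:, 0] - 0.8 * X[:, 3] + 0.5 * X[:, 7] + 0.3 * rng.standard_normal(n_patients)

        X = StandardScaler().fit_transform(X)

        n_boot = 200
        selected = np.zeros(n_features)
        for _ in range(n_boot):
            Xb, yb = resample(X, y, random_state=int(rng.integers(1 << 31)))
            coef = Lasso(alpha=0.1, max_iter=10000).fit(Xb, yb).coef_
            selected += coef != 0          # count how often each feature gets a non-zero weight

        frequency = selected / n_boot
        robust = np.flatnonzero(frequency >= 0.8)   # keep features chosen in >=80% of resamples
        print("robust feature indices:", robust)
        print("their selection frequencies:", frequency[robust])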

    Focal Spot, Winter 1986


    Focal Spot, Spring 1999


    QUANTITATIVE IMAGING FOR PRECISION MEDICINE IN HEAD AND NECK CANCER PATIENTS

    The purpose of this work was to determine whether prediction models using quantitative imaging measures in head and neck squamous cell carcinoma (HNSCC) patients could be improved when noise due to imaging was reduced. This was investigated separately for salivary gland function using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), overall survival using computed tomography (CT)-based radiomics, and overall survival using positron emission tomography (PET)-based radiomics. From DCE-MRI, in which T1-weighted images are serially acquired after injection of contrast, quantitative measures of diffusion can be obtained from the series of images. Radiomics is the study of the relationships of voxels to one another, providing measures of texture from the area of interest. Quantitative information obtained from imaging could help in radiation treatment planning by providing quantifiable spatial information, used with computational models for assigning dose to regions, to improve patient outcomes, both survival and quality of life. By reducing the noise within the quantitative data, prediction accuracy could improve, moving this type of work closer to clinical practice. For each imaging modality, sources of noise that could impact the patient analysis were identified, quantified, and, if possible, minimized during the patient analysis. In MRI, a large potential source of uncertainty was the image registration. To evaluate this, both physical and synthetic phantoms were used, which showed that registration accuracy of MR images was high, with all root mean square errors below 3 mm. Then, 15 HNSCC patients with pre-, mid-, and post-treatment DCE-MRI scans were evaluated. However, differences in algorithm output were found to be a large source of noise, as different algorithms could not consistently rank patients as above or below the median for quantitative metrics from DCE-MRI. Therefore, further analysis using this modality was not pursued. In CT, a large potential source of noise that could impact patient analysis was inter-scanner variability. To investigate this, a controlled protocol was designed and used, along with the local head and chest protocols, to image a radiomics phantom on 100 CT scanners. This demonstrated that inter-scanner variability could be reduced by over 50% using a controlled protocol compared to local protocols. Additionally, it was shown that the reconstruction parameters impact feature values while most acquisition parameters do not; therefore, most of this benefit can be achieved using a radiomics reconstruction with no additional dose to the patient. Then, to evaluate this impact in patient studies, 726 HNSCC patients with CT images were used to create and test a Cox proportional hazards model for overall survival. The patients imaged with the same protocol were subset, and a new Cox proportional hazards model was created and tested to determine whether the reduction in noise from controlling the imaging protocol translated into improved prediction. However, noise between patient populations from different institutions was shown to be larger than the reduction in noise due to a controlled imaging protocol. In PET, a large potential source of noise that could impact patient analysis was the imaging protocol. A phantom scanned on three scanners from different vendors demonstrated that, within a single vendor, imaging parameter choices did not affect radiomics feature values, but inter-scanner variance could be large.
    Then, 686 HNSCC patients with PET images were used to create and test a Cox proportional hazards model for overall survival. The patients imaged with the same protocol were subset, and a new Cox proportional hazards model was created and tested to determine whether the reduction in noise from controlling the imaging protocol on a single vendor translated into improved prediction. However, no predictive radiomics signature could be determined for any subset of the patient cohort that resulted in significant stratification of patients into high- and low-risk groups. This study demonstrated that imaging variability could be quantified and controlled for in each modality. However, for each modality, larger sources of noise were identified that did not allow for improvement in prediction modeling of salivary gland function or overall survival using quantitative imaging metrics for MRI, CT, or PET.
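
    As an illustration of the survival-analysis workflow referred to above (a Cox proportional hazards model on imaging features, followed by stratification of patients into high- and low-risk groups), here is a short sketch using the lifelines package on synthetic data; the features, hazard structure, median risk-score split, and log-rank test are assumptions for demonstration and are not the cohorts or models from this study.

        # Cox proportional hazards sketch on synthetic radiomics-style features,
        # followed by a median risk split and a log-rank test (illustrative only).
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter
        from lifelines.statistics import logrank_test

        rng = np.random.default_rng(0)
        n = 300
        df = pd.DataFrame({
            "feature_1": rng.standard_normal(n),   # stand-in radiomics features
            "feature_2": rng.standard_normal(n),
        })
        # Synthetic survival times whose hazard depends on feature_1, with random censoring.
        event_time = rng.exponential(scale=np.exp(-0.7 * df["feature_1"].to_numpy()), size=n) * 24.0
        censor_time = rng.exponential(scale=30.0, size=n)
        df["duration"] = np.minimum(event_time, censor_time)
        df["event"] = (event_time <= censor_time).astype(int)

        cph = CoxPHFitter()
        cph.fit(df, duration_col="duration", event_col="event")
        print("concordance index:", round(cph.concordance_index_, 3))

        # Stratify patients by the median predicted risk score and compare survival.
        risk = np.asarray(cph.predict_partial_hazard(df)).ravel()
        high = risk >= np.median(risk)
        result = logrank_test(df.loc[high, "duration"], df.loc[~high, "duration"],
                              event_observed_A=df.loc[high, "event"],
                              event_observed_B=df.loc[~high, "event"])
        print("log-rank p-value:", round(result.p_value, 4))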

    Focal Spot, Fall/Winter 2000
