5,867 research outputs found

    Predicting Task-specific Performance for Iterative Reconstruction in Computed Tomography

    The cross-sectional images of computed tomography (CT) are calculated from a series of projections using reconstruction methods. Recently introduced on clinical CT scanners, iterative reconstruction (IR) methods enable potential patient dose reduction by significantly reducing image noise, but are limited by their "waxy" texture and nonlinear nature. To balance the advantages and disadvantages of IR, evaluations are needed with diagnostic accuracy as the endpoint. Moreover, such evaluations need to take into account the type of imaging task (detection or quantification), the properties of the task (lesion size, contrast, edge profile, etc.), and other acquisition and reconstruction parameters.

    To evaluate detection tasks, the most accepted method is the observer study, which involves image preparation, graphical user interface setup, manual detection and scoring, and statistical analysis. Because such evaluation can be time-consuming, mathematical models have been proposed to efficiently predict observer performance in terms of a detectability index (d'). However, these models may require assumptions such as system linearity, limiting their application to potentially nonlinear IR. For evaluating quantification tasks, the conventional method can also be time-consuming, as it usually involves experiments with anthropomorphic phantoms. A mathematical model similar to d' was therefore proposed for predicting volume quantification performance, named the estimability index (e'). However, this prior model was limited in its modeling of the task, its modeling of the volume segmentation process, and its assumption of system linearity.

    To extend the prior d' and e' models to the evaluation of IR performance, the first part of this dissertation developed an experimental methodology to characterize image noise and resolution in a manner relevant to nonlinear IR. Results showed that this method was efficient and meaningful in characterizing system performance while accounting for the nonlinearity of IR at multiple contrast and noise levels. It was also shown that when certain criteria were met, the measurement error could be kept below 10%, allowing challenging measurement conditions with low object contrast and high image noise.

    The second part of this dissertation incorporated the noise and resolution characterizations developed in the first part into the d' calculations and evaluated the performance of IR and conventional filtered backprojection (FBP) for detection tasks. Results showed that, compared to FBP, IR required less dose to achieve a threshold accuracy level, therefore potentially reducing the required dose. The dose-saving potential of IR was not constant but depended on the task properties, with subtle tasks (small size and low contrast) enabling more dose saving than conspicuous tasks. Results also showed that at a fixed dose level, IR allowed more subtle tasks to exceed a threshold performance level, demonstrating the overall superior performance of IR for detection tasks.

    The third part of this dissertation evaluated IR performance in volume quantification tasks with a conventional experimental method. The volume quantification performance of IR was measured using an anthropomorphic chest phantom and compared to FBP in terms of accuracy and precision. Results showed that across a wide range of dose and slice thickness, IR led to accuracy significantly different from that of FBP, highlighting the importance of calibrating or extending current segmentation software to incorporate the image characteristics of IR. Results also showed that despite IR's substantial noise reduction in uniform regions, IR in general had quantification precision similar to that of FBP, possibly due to IR's diminished noise reduction at edges (such as nodule boundaries) and its loss of resolution at low dose levels.

    The last part of this dissertation mathematically predicted IR performance in volume quantification tasks with an e' model extended in three respects: the task modeling, the segmentation software modeling, and the characterization of noise and resolution properties. Results showed that the extended e' model correlated with experimental precision across a range of image acquisition protocols, nodule sizes, and segmentation software. In addition, compared to experimental assessments of quantification performance, e' required far less computational time, such that it can be readily employed in clinical studies to verify quantitative compliance and to optimize clinical protocols for CT volumetry.

    The research in this dissertation has two important clinical implications. First, because d' values reflect detection accuracy and e' values reflect quantification precision, this work provides a framework for evaluating IR with diagnostic accuracy as the endpoint. Second, because the d' and e' calculations are far more efficient than conventional observer studies, clinical protocols with IR can be optimized in a timely fashion, and the compliance of clinical performance can be examined routinely.

    Dissertation
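
    For context, a common closed form for a Fourier-domain detectability index of this kind is the non-prewhitening (NPW) model observer, which combines the task function W(u,v), the task transfer function (TTF), and the noise power spectrum (NPS). This is a standard expression from the CT image-quality literature, not necessarily the exact model used in this dissertation:

        d'^2_{\mathrm{NPW}} = \frac{\left[\iint |W(u,v)|^2\, \mathrm{TTF}^2(u,v)\, du\, dv\right]^2}{\iint |W(u,v)|^2\, \mathrm{TTF}^2(u,v)\, \mathrm{NPS}(u,v)\, du\, dv}

    Larger d' corresponds to higher predicted detection performance; the nonlinearity of IR enters through TTF and NPS measurements made at multiple contrast and noise levels.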

    Task adapted reconstruction for inverse problems

    The paper considers the problem of performing a task defined on a model parameter that is only observed indirectly through noisy data in an ill-posed inverse problem. A key aspect is to formalize the steps of reconstruction and task as appropriate estimators (non-randomized decision rules) in statistical estimation problems. The implementation uses (deep) neural networks to provide a differentiable parametrization of the family of estimators for both steps. These networks are combined and jointly trained against suitable supervised training data to minimize a joint differentiable loss function, resulting in an end-to-end task-adapted reconstruction method. The suggested framework is generic yet adaptable, with a plug-and-play structure for adjusting both the inverse problem and the task at hand. More precisely, the data model (forward operator and statistical model of the noise) associated with the inverse problem is exchangeable, e.g., by using a neural network architecture given by a learned iterative method. Furthermore, any task that is encodable as a trainable neural network can be used. The approach is demonstrated on joint tomographic image reconstruction and classification, and on joint tomographic image reconstruction and segmentation.
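
    As a rough illustration of the joint-training idea (a minimal sketch with toy stand-in networks and a hypothetical weighting parameter lam; the paper's actual estimators are learned iterative schemes, not these stand-ins):

        import torch
        import torch.nn.functional as F

        # Toy stand-ins for the two estimators: recon_net maps data y to a
        # model-parameter estimate x_hat; task_net maps x_hat to a task output.
        recon_net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 64))
        task_net = torch.nn.Linear(64, 10)

        opt = torch.optim.Adam(list(recon_net.parameters()) + list(task_net.parameters()))
        lam = 0.5  # hypothetical weight trading off reconstruction vs. task loss

        # One synthetic supervised batch: (data, true parameter, task label).
        loader = [(torch.randn(8, 1, 8, 8), torch.randn(8, 64), torch.randint(0, 10, (8,)))]

        for y, x_true, label in loader:
            x_hat = recon_net(y)     # reconstruction step
            out = task_net(x_hat)    # task step (here: classification)
            # Joint differentiable loss, trained end-to-end through both networks.
            loss = (1 - lam) * F.mse_loss(x_hat, x_true) + lam * F.cross_entropy(out, label)
            opt.zero_grad()
            loss.backward()
            opt.step()

    Setting lam to 0 or 1 recovers a pure reconstruction or a fully task-driven pipeline, which mirrors the trade-off the framework makes adjustable.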

    Review of the mathematical foundations of data fusion techniques in surface metrology

    The recent proliferation of engineered surfaces, including freeform and structured surfaces, is challenging current metrology techniques. Measurement using multiple sensors has been proposed to achieve enhanced benefits, mainly in terms of spatial frequency bandwidth, that a single sensor cannot provide. When using data from different sensors, a process of data fusion is required, and there is much active research in this area. In this paper, current data fusion methods and applications are reviewed, with a focus on the mathematical foundations of the subject. Common research questions in the fusion of surface metrology data are raised, and potential fusion algorithms are discussed.
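
    As one elementary example of the kind of mathematical foundation involved (a textbook fusion rule, not a result specific to this review), two independent measurements x_1 and x_2 of the same surface height, with noise variances \sigma_1^2 and \sigma_2^2, combine optimally by inverse-variance weighting:

        \hat{x} = \frac{\sigma_2^2\, x_1 + \sigma_1^2\, x_2}{\sigma_1^2 + \sigma_2^2},
        \qquad
        \operatorname{Var}(\hat{x}) = \frac{\sigma_1^2\, \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \le \min(\sigma_1^2, \sigma_2^2)

    The fused variance is never worse than that of the better single sensor, which is the basic motivation for multi-sensor fusion.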

    Coronary computed tomography angiography using model-based iterative reconstruction algorithms in the detection of significant coronary stenosis: how the plaque type influences the diagnostic performance

    Purpose: To evaluate the ability of coronary computed tomography angiography (CCTA) with a model-based iterative reconstruction (MBIR) algorithm to detect significant coronary artery stenosis, compared with invasive coronary angiography (ICA). Material and methods: We retrospectively identified 55 patients who underwent CCTA using the MBIR algorithm with evidence of at least one significant stenosis (≄ 50%) and an ICA within three months. Patients were stratified based on calcium score (CS); stenoses were classified by type and by the coronary segment involved. Dose-length product was compared with literature data obtained with previous reconstruction algorithms. Coronary artery stenosis was estimated on ICA using a qualitative method. Results: CCTA findings were confirmed by ICA in 89% of subjects overall, and in 73% and 94% of patients with CS < 400 and CS ≄ 400, respectively. ICA confirmed 81% of calcific stenoses, 91% of mixed plaques, and 67% of soft plaques. Both the dose exposure of patients with prospective acquisition (n = 34) and the exposure of the whole population were significantly lower than the standard of reference (p < 0.001 and p = 0.007, respectively). Conclusions: CCTA with MBIR is valuable in detecting significant coronary artery stenosis with a solid reduction of radiation dose. Diagnostic performance was influenced by plaque composition, being lower (compared with ICA) for patients with lower CS and soft plaques; the visualisation of an intraluminal hypodensity could cause false positives, particularly in the D1 and MO segments.

    Measurement Variability in Treatment Response Determination for Non-Small Cell Lung Cancer: Improvements using Radiomics

    Multimodality imaging measurements of treatment response are critical for clinical practice, oncology trials, and the evaluation of new treatment modalities. The current standard for determining treatment response in non-small cell lung cancer (NSCLC) is based on tumor size, using the RECIST criteria. Molecular targeted agents and immunotherapies often cause morphological change without reduction of tumor size, making therapeutic response difficult to evaluate by conventional methods. Radiomics is the study of quantitative imaging features of cancer, extracted with machine learning alongside semantic features. It can provide comprehensive information on tumor phenotypes and can be used to assess therapeutic response in this new age of immunotherapy. Delta radiomics, which evaluates the longitudinal changes in radiomics features, shows potential for gauging treatment response in NSCLC. It is well known that quantitative measurement methods may be subject to substantial variability due to differences in technical factors and require standardization. In this review, we describe measurement variability in the evaluation of NSCLC and the emerging role of radiomics.
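
    To make the delta-radiomics idea concrete (a minimal sketch; the feature names and the relative-change definition are illustrative assumptions, not a standard fixed by this review):

        # Hypothetical radiomics feature vectors extracted at baseline and follow-up.
        features_pre = {"volume": 4200.0, "entropy": 5.1, "sphericity": 0.62}
        features_post = {"volume": 3500.0, "entropy": 4.7, "sphericity": 0.65}

        # Delta radiomics: relative longitudinal change of each feature,
        # capturing response signals beyond a simple size criterion.
        delta = {name: (features_post[name] - features_pre[name]) / features_pre[name]
                 for name in features_pre}
        print(delta)  # e.g. volume shrinks ~17% while entropy and sphericity also shift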

    Dual-Domain Coarse-to-Fine Progressive Estimation Network for Simultaneous Denoising, Limited-View Reconstruction, and Attenuation Correction of Cardiac SPECT

    Single-Photon Emission Computed Tomography (SPECT) is widely applied for the diagnosis of coronary artery diseases. Low-dose (LD) SPECT aims to minimize radiation exposure but leads to increased image noise. Limited-view (LV) SPECT, such as the latest GE MyoSPECT ES system, enables accelerated scanning and reduces hardware expenses but degrades reconstruction accuracy. Additionally, Computed Tomography (CT) is commonly used to derive attenuation maps (ÎŒ-maps) for attenuation correction (AC) of cardiac SPECT, but it introduces additional radiation exposure and SPECT-CT misalignments. Although various methods have been developed to focus solely on LD denoising, LV reconstruction, or CT-free AC in SPECT, a solution that simultaneously addresses these tasks remains challenging and under-explored. Furthermore, it is essential to explore the potential of fusing cross-domain and cross-modality information across these interrelated tasks to further enhance the accuracy of each task. Thus, we propose a Dual-Domain Coarse-to-Fine Progressive Network (DuDoCFNet), a multi-task learning method for simultaneous LD denoising, LV reconstruction, and CT-free ÎŒ-map generation of cardiac SPECT. Paired dual-domain networks in DuDoCFNet are cascaded using a multi-layer fusion mechanism for cross-domain and cross-modality feature fusion. Two-stage progressive learning strategies are applied in both the projection and image domains to achieve coarse-to-fine estimations of SPECT projections and CT-derived ÎŒ-maps. Our experiments demonstrate DuDoCFNet's superior accuracy in estimating projections, generating ÎŒ-maps, and AC reconstructions compared to existing single- or multi-task learning methods, under various iterations and LD levels. The source code of this work is available at https://github.com/XiongchaoChen/DuDoCFNet-MultiTask. Comment: 11 pages, 10 figures, 4 tables.
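
    A highly simplified sketch of the coarse-to-fine cascading pattern described above (toy modules with assumed shapes; the real DuDoCFNet pairs projection- and image-domain networks with a multi-layer fusion mechanism, which this stand-in does not reproduce):

        import torch

        class Stage(torch.nn.Module):
            """Toy refinement stage: a small CNN producing a one-channel estimate."""
            def __init__(self, in_ch):
                super().__init__()
                self.body = torch.nn.Sequential(
                    torch.nn.Conv2d(in_ch, 16, 3, padding=1), torch.nn.ReLU(),
                    torch.nn.Conv2d(16, 1, 3, padding=1))
            def forward(self, x):
                return self.body(x)

        coarse, fine = Stage(2), Stage(3)
        ld_input = torch.randn(1, 1, 64, 64)   # low-dose, limited-view input
        prior = torch.randn(1, 1, 64, 64)      # cross-modality prior channel

        est_coarse = coarse(torch.cat([ld_input, prior], dim=1))
        # The fine stage re-uses the inputs plus the coarse estimate, so later
        # stages progressively refine earlier ones rather than starting over.
        est_fine = fine(torch.cat([ld_input, prior, est_coarse], dim=1))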

    Fast Variance Prediction for Iteratively Reconstructed CT with Applications to Tube Current Modulation.

    X-ray computed tomography (CT) is an important, widely used medical imaging modality. A primary concern with the increasing use of CT is the ionizing radiation dose incurred by the patient. Statistical reconstruction methods are able to improve noise and resolution in CT images compared to traditional filtered backprojection (FBP) based reconstruction methods, which allows for a reduced radiation dose. Compared to FBP-based methods, statistical reconstruction requires greater computational time, and the statistical properties of the resulting images are more difficult to analyze. Statistical reconstruction has parameters that must be correctly chosen to produce high-quality images. The variance of the reconstructed image has been used to choose these parameters, but it has previously been very time-consuming to compute. In this work, we use approximations to the local frequency response (LFR) of CT projection and backprojection to predict the variance of statistically reconstructed CT images. Compared to the empirical variance derived from multiple simulated reconstruction realizations, our method is as accurate as currently available methods of variance prediction while being computable for thousands of voxels per second, faster than those previous methods by a factor of over ten thousand. We also compare our method to empirical variance maps produced from an ensemble of reconstructions from real sinogram data. The LFR can also be used to predict the power spectrum of the noise and the local frequency response of the reconstruction. Tube current modulation (TCM), the redistribution of X-ray dose in CT between different views of a patient, has been demonstrated to reduce dose when the modulation is well designed. TCM methods currently in use were designed assuming FBP-based image reconstruction. We use our LFR approximation to derive fast methods for predicting the SNR of linear observers of a statistically reconstructed CT image. Using these fast observability and variance prediction methods, we derive TCM methods specific to statistical reconstruction that, in theory, can reduce radiation dose by 20% compared to FBP-specific TCM methods.

    PhD dissertation, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111463/1/smschm_1.pd
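
    For reference, fast variance predictors of this kind approximate diagonal entries of the standard covariance expression for penalized weighted least-squares reconstruction (a well-known result in the statistical CT literature; the thesis's LFR derivation is a fast route to quantities of this type):

        \operatorname{Cov}(\hat{x}) \approx \left(A^{T} W A + \beta R\right)^{-1} A^{T} W A \left(A^{T} W A + \beta R\right)^{-1}

    Here A is the system (projection) matrix, W the statistical weighting (ideally the inverse of the data covariance), R the Hessian of the regularizer, and \beta the regularization strength. Forming even a single diagonal entry directly requires solving large linear systems, which is what makes brute-force variance computation so slow.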