
    Denoising method for dynamic contrast-enhanced CT perfusion studies using three-dimensional deep image prior as a simultaneous spatial and temporal regularizer

    This study aimed to propose a denoising method for dynamic contrast-enhanced computed tomography (DCE-CT) perfusion studies using a three-dimensional deep image prior (DIP), and to investigate its usefulness in comparison with total variation (TV)-based methods with different regularization parameter (alpha) values through simulation studies. In the proposed DIP method, the DIP was incorporated into the constrained optimization problem for image denoising as a simultaneous spatial and temporal regularizer, which was solved using the alternating direction method of multipliers. In the simulation studies, DCE-CT images were generated using a digital brain phantom, and their noise level was varied using the X-ray exposure noise model with different exposures (15, 30, 50, 75, and 100 mAs). Cerebral blood flow (CBF) images were generated from the original contrast enhancement (CE) images and from those obtained by the DIP and TV methods using block-circulant singular value decomposition. The quality of the CE images was evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). To compare the CBF images obtained by the different methods with those generated from the ground truth images, linear regression analysis was performed. When using the DIP method, the PSNR and SSIM were not significantly dependent on the exposure, and the SSIM was the highest for all exposures. When using the TV methods, both metrics were significantly dependent on the exposure and alpha values. The results of the linear regression analysis suggested that the linearity of the CBF images obtained by the DIP method was superior to that of the images generated from the original CE images and by the TV methods. Our preliminary results suggest that the DIP method is useful for denoising DCE-CT images at ultra-low to low exposures and for improving the accuracy of the CBF images generated from them.
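    The final step described above, generating CBF maps from contrast enhancement curves by block-circulant singular value decomposition (bSVD), is a standard deconvolution technique. A minimal sketch of how it might be implemented (the function name, the truncation threshold `lam`, and the simulated curves are our own illustrative assumptions, not details from the paper):

```python
import numpy as np

def bsvd_cbf(aif, tissue, dt, lam=0.1):
    """Estimate CBF by block-circulant SVD deconvolution.

    aif, tissue : arterial input and tissue enhancement curves (same length)
    dt          : sampling interval in seconds
    lam         : truncation threshold as a fraction of the largest
                  singular value (an illustrative regularization choice)
    """
    n = len(aif)
    L = 2 * n                     # zero-pad so circular convolution = linear
    a = np.zeros(L); a[:n] = aif
    c = np.zeros(L); c[:n] = tissue
    # Block-circulant convolution matrix: column j is the AIF shifted by j.
    A = dt * np.stack([np.roll(a, j) for j in range(L)], axis=1)
    U, S, Vt = np.linalg.svd(A)
    # Zero out small singular values to stabilize the ill-posed inversion.
    S_inv = np.where(S > lam * S.max(), 1.0 / S, 0.0)
    r = Vt.T @ (S_inv * (U.T @ c))   # scaled impulse residue function
    return r[:n].max()               # CBF = peak of the residue function
```

    On noiseless simulated curves the peak of the recovered residue function approximates the true flow; in practice the truncation threshold trades noise suppression against underestimation, which is exactly the sensitivity the DIP and TV comparisons probe.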

    Quantitative evaluation of simultaneous spatial and temporal regularization in liver perfusion studies using low-dose dynamic contrast-enhanced CT

    The purpose of this study was to quantitatively evaluate the performance of different simultaneous spatial and temporal regularizers in liver perfusion studies using low-dose dynamic contrast-enhanced computed tomography (DCE-CT). A digital liver phantom was used to simulate chronic liver disease (CLD) and hepatocellular carcinoma (HCC) based on clinical data. Low-dose DCE-CT images were reconstructed using regularizers and a primal-dual algorithm. Subsequently, hepatic perfusion parameter (HPP) images were generated using a dual-input single-compartment model and a linear least-squares method. In the CLD model, the effect of the regularizers on the input functions (IFs) was examined by calculating the areas under the curves (AUCs) of the IFs, and the HPP estimation accuracy was evaluated by calculating the error and coefficient of variation (CV) between the HPP values obtained by the above methods and the true values. In the HCC model, the ratios of the mean HPP values inside and outside the tumor were calculated. The AUCs of the IFs decreased with increasing regularization parameter (RP) values. Although the AUC of the arterial IF did not significantly depend on the regularizers, that of the portal IF did. The error and CV were reduced using low-rank and sparse decomposition (LRSD). Total generalized variation (TGV) combined with LRSD (LTGV) was generally superior to the other regularizers in terms of HPP estimation accuracy and range of available RP values in both the CLD and HCC models. However, striped artifacts were more pronounced in the HPP images obtained by TGV and LTGV than in those obtained by the other regularizers. The results suggest that LRSD and LTGV are useful for improving the accuracy of HPP estimation using low-dose DCE-CT and for enhancing its practicality. This study will help select a suitable regularizer and/or RP value for low-dose DCE-CT liver perfusion studies.

    Comment: 32 pages, 1 table, 10 figures
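    The hepatic perfusion parameters above come from a dual-input single-compartment model fitted by linear least squares: integrating the compartment equation makes it linear in the parameters, which is what permits the least-squares solution. A hedged sketch of that estimation step (the symbols k1a, k1p, k2 and all names are our illustrative choices; the paper's exact formulation may differ):

```python
import numpy as np

def fit_dual_input(t, c_art, c_port, c_tis):
    """Fit a dual-input single-compartment model by linear least squares.

    Model:      dC/dt = k1a*Ca(t) + k1p*Cp(t) - k2*C(t)
    Integrated: C(t)  = k1a*int(Ca) + k1p*int(Cp) - k2*int(C),
    which is linear in (k1a, k1p, k2) given the measured curves.
    """
    def cumint(y):  # cumulative trapezoidal integral from t[0]
        return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))
    X = np.column_stack([cumint(c_art), cumint(c_port), -cumint(c_tis)])
    k1a, k1p, k2 = np.linalg.lstsq(X, c_tis, rcond=None)[0]
    return k1a, k1p, k2
```

    Here k1a and k1p play the role of arterial and portal inflow coefficients and k2 the outflow rate. With noisy low-dose data the integrals in the design matrix become unreliable, which is where the spatial and temporal regularizers evaluated in the study matter.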

    Cerebral Blood Flow Measurement Using MRI: Mathematical Regularization and Phantom Evaluation.

    Stroke is the third most prevalent cause of death in developed countries and the second most prevalent worldwide, and ischemic strokes are by far the most common type. Verifying the extent and severity of brain damage may be the most challenging problem in the diagnosis and treatment of stroke. Magnetic resonance imaging provides important indicators, such as cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT), for tissues at risk in acute stroke. These perfusion-related parameters can be estimated using MR techniques, specifically dynamic susceptibility contrast (DSC). The DSC technique measures the change in MR signal during the passage of a non-diffusible tracer through the brain tissue. The signal change can be related to blood flow through a mathematical convolution model, originally suggested by Meier and Zierler, based on indicator-dilution theory. There have been many attempts to find a deconvolution algorithm that overcomes the many limitations of this ill-posed problem, especially its instability. We have suggested a new approach based on the framework of Tikhonov regularization, which we refer to as "Generalized Tikhonov". In computer simulations, this method proved promising for blood flow estimation in the presence of the major sources of error: noise, tracer delay and dispersion. In comparison with standard Tikhonov regularization, our method showed less sensitivity to changes in the regularization parameters that determine the extent of the regularization. To investigate the model, we designed a perfusion phantom that closely resembles actual tissue in terms of perfusion-related parameters such as blood volume, blood flow and transit time. Because of the similarity in flow volume, the signal-to-noise ratio is comparable to that of actual perfusion measurements.
The phantom can include or exclude tracer delay and dispersion depending on the desired nature of the experiments. Flow at every point of the phantom can be calculated using finite element methods. The perfusion phantom was used to verify the accuracy of the Generalized Tikhonov method and to compare it with conventional methods.

    Ph.D., Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/61616/4/Ebrahimi.pd
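    The Generalized Tikhonov method itself is not spelled out in the abstract, but the standard Tikhonov regularization it is compared against has a closed form through the SVD of the convolution matrix, applying the filter s_i / (s_i^2 + alpha^2) to the data. A sketch of that conventional baseline (not the thesis's generalized variant; names are illustrative):

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Standard Tikhonov-regularized solution of A x ~= b.

    Minimizes ||A x - b||^2 + alpha^2 ||x||^2; via the SVD A = U S V^T,
    the solution applies the filter s_i / (s_i^2 + alpha^2) to U^T b.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ (s / (s**2 + alpha**2) * (U.T @ b))
```

    As alpha approaches 0 this reduces to the pseudo-inverse solution, while larger alpha damps the small singular values responsible for the instability of the deconvolution, at the cost of bias. The thesis's claim is precisely that the estimated flow should be less sensitive to this choice of alpha.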

    Integration of magnetic resonance spectroscopic imaging into the radiotherapy treatment planning

    The aim of this thesis is to propose new algorithms to overcome the current limitations and to address the open challenges in the processing of magnetic resonance spectroscopic imaging (MRSI) data. MRSI is a non-invasive modality able to provide the spatial distribution of relevant biochemical compounds (metabolites) commonly used as biomarkers of disease.
Information provided by MRSI can be used as a valuable insight for the diagnosis, treatment and follow-up of several diseases such as cancer or neurological disorders. Obtaining accurate and reliable information from in vivo MRSI signals is a crucial requirement for the clinical utility of this technique. Despite the numerous publications on the topic, the interpretation of MRSI data is still a challenging problem due to different factors such as the low signal-to-noise ratio (SNR) of the signals, the overlap of spectral lines or the presence of nuisance components. This thesis addresses the problem of interpreting MRSI data and characterizing recurrence in brain tumor patients. These objectives are addressed through a methodological approach based on novel processing methods that incorporate prior knowledge on the MRSI data using a spatio-spectral regularization. As an application, the thesis addresses the integration of MRSI into the radiotherapy treatment workflow within the context of the European project SUMMER (Software for the Use of Multi-Modality images in External Radiotherapy), funded by the European Commission (FP7-PEOPLE-ITN framework).

    Ultrasound Imaging

    In this book, we present a dozen state-of-the-art developments in ultrasound imaging, covering, for example, hardware implementation, transducers, beamforming, signal processing, elasticity measurement and diagnosis. The editors would like to thank all the chapter authors for their dedication to the publication of this book.

    Cross-Modality Feature Learning for Three-Dimensional Brain Image Synthesis

    Multi-modality medical imaging is increasingly used for the comprehensive assessment of complex diseases, either in diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors such as patient discomfort, increased cost, prolonged scanning time and scanner unavailability. In addition, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. Moreover, regardless of how well an imaging system is engineered, its performance is ultimately limited by its physical components. Additional constraints, particularly in medical imaging, such as limited acquisition times, sophisticated and costly equipment, and patients with severe medical conditions, also cause image degradation. The acquisitions can thus be considered degraded versions of the original high-quality images. In this dissertation, we explore the problems of image super-resolution and cross-modality synthesis, in which one Magnetic Resonance Imaging (MRI) modality is synthesized from an image of another MRI modality of the same subject, using an image synthesis framework to reconstruct the missing or complex modality data. We develop models and techniques that allow us to connect the domain of source modality data and the domain of target modality data, enabling transformation between elements of the two domains. In particular, we first introduce models that project both source and target modality data into a common multi-modality feature space in a supervised setting.
This common space allows us to connect related cross-modality features and to apply the learned association function to synthesize any target modality image. Moreover, we develop a weakly-supervised method that takes a few registered multi-modality image pairs as training data and generates the desired modality data without requiring a large collection of well-processed (e.g., skull-stripped and strictly registered) multi-modality brain data. Finally, we propose an approach that provides a generic way of learning a dual mapping between source and target domains while considering both visually high-fidelity synthesis and task practicability. We demonstrate that this model can take an arbitrary modality and efficiently synthesize the desired modality data in an unsupervised manner. We show that the proposed models advance the state of the art on image super-resolution and cross-modality synthesis tasks that require joint processing of multi-modality images, and that the algorithms can be designed to generate data that are practically beneficial to medical image analysis.