730 research outputs found

    Retrospective correction of Rigid and Non-Rigid MR motion artifacts using GANs

    Motion artifacts are a primary source of magnetic resonance (MR) image quality deterioration, with strong repercussions on diagnostic performance. Currently, MR motion correction is carried out either prospectively, with the help of motion tracking systems, or retrospectively, mainly by means of computationally expensive iterative algorithms. In this paper, we utilize a new adversarial framework, titled MedGAN, for the joint retrospective correction of rigid and non-rigid motion artifacts in different body regions and without the need for a reference image. MedGAN utilizes a unique combination of non-adversarial losses and a new generator architecture to capture the textures and fine-detailed structures of the desired artifact-free MR images. Quantitative and qualitative comparisons with other adversarial techniques demonstrate the performance of the proposed model. Comment: 5 pages, 2 figures, under review for the IEEE International Symposium on Biomedical Imaging
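As a rough illustration of the kind of objective such frameworks optimize, the sketch below combines an adversarial term with non-adversarial pixel and perceptual terms. The weights, the feature extractor, and the specific loss forms are illustrative assumptions, not MedGAN's published configuration.

```python
import numpy as np

def generator_loss(fake_logits, fake_img, target_img,
                   fake_feats, target_feats,
                   lambda_pix=20.0, lambda_perc=1.0):
    """Toy combined generator objective in the spirit of adversarial
    motion correction: adversarial term plus non-adversarial (pixel
    and perceptual) terms. All weights are illustrative assumptions."""
    # Non-saturating adversarial term: softplus(-x) = -log sigmoid(x),
    # small when the discriminator logits for the fake image are high.
    adv = np.mean(np.log1p(np.exp(-fake_logits)))
    # Pixel-wise L1 fidelity to the artifact-free reference image.
    pix = np.mean(np.abs(fake_img - target_img))
    # Perceptual term: L2 distance between feature maps of a fixed
    # (hypothetical) feature extractor applied to both images.
    perc = np.mean((fake_feats - target_feats) ** 2)
    return adv + lambda_pix * pix + lambda_perc * perc
```

A generator trained against this objective is rewarded both for fooling the discriminator and for staying close to the reference in pixel and feature space.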

    Potentials and caveats of AI in Hybrid Imaging

    State-of-the-art patient management frequently mandates investigation of both the anatomy and the physiology of the patient. Hybrid imaging modalities such as PET/MRI, PET/CT and SPECT/CT can provide both structural and functional information on the investigated tissues in a single examination. With the introduction of such advanced hardware fusion, new problems arise, such as the exceedingly large amount of multi-modality data, which requires novel approaches to extracting the maximum of clinical information from large sets of multi-dimensional imaging data. Artificial intelligence (AI) has emerged as one of the leading technologies with promise for facilitating highly integrative analysis of multi-parametric data. Specifically, the usefulness of AI algorithms in medical imaging has been heavily investigated in the realms of (1) image acquisition and reconstruction, (2) post-processing and (3) data mining and modelling. Here, we aim to provide an overview of the challenges encountered in hybrid imaging and discuss how AI algorithms can facilitate potential solutions. In addition, we highlight the pitfalls and challenges of using advanced AI algorithms in the context of hybrid imaging and provide suggestions for building robust AI solutions that enable reproducible and transparent research.

    Comparative methods for PET image segmentation in pharyngolaryngeal squamous cell carcinoma

    Purpose: Several methods have been proposed for the segmentation of 18F-FDG uptake in PET. In this study, we assessed the performance of four categories of 18F-FDG PET image segmentation techniques in pharyngolaryngeal squamous cell carcinoma using clinical studies where the surgical specimen served as the benchmark. Methods: Nine PET image segmentation techniques were compared, including five thresholding methods; the level set technique (active contour); the stochastic expectation-maximization approach; fuzzy clustering-based segmentation (FCM); and a variant of FCM, the spatial wavelet-based algorithm (FCM-SW), which incorporates spatial information during the segmentation process and thus allows the handling of uptake in heterogeneous lesions. These algorithms were evaluated using clinical studies in which the segmentation results were compared to the 3-D biological tumour volume (BTV) defined by histology in PET images of seven patients with T3-T4 laryngeal squamous cell carcinoma who underwent a total laryngectomy. The macroscopic tumour specimens were collected "en bloc", frozen, cut into 1.7- to 2-mm-thick slices, and then digitized for use as reference. Results: The clinical results suggested that four of the thresholding methods and the expectation-maximization approach overestimated the average tumour volume, while a contrast-oriented thresholding method, the level set technique and the FCM-SW algorithm underestimated it, with the FCM-SW algorithm providing the highest accuracy in terms of volume determination (−5.9 ± 11.9%) and overlap index. The mean overlap index varied between 0.27 and 0.54 for the different image segmentation techniques. The FCM-SW segmentation technique showed the best compromise in terms of 3-D overlap index and statistical analysis results, with an overlap index of 0.54 (range 0.26-0.72). 
Conclusion: The BTVs delineated using the FCM-SW segmentation technique were seemingly the most accurate and approximated closely the 3-D BTVs defined using the surgical specimens. Adaptive thresholding techniques need to be calibrated for each PET scanner and acquisition/processing protocol, and should not be used without optimization.
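The fuzzy clustering step underlying FCM can be sketched on raw voxel intensities. This toy 1-D version omits the spatial/wavelet extension of FCM-SW, so it is plain FCM only; cluster count, fuzzifier and iteration budget are illustrative assumptions.

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy c-means on a 1-D array of voxel intensities.
    Returns cluster centers and the (n, c) fuzzy membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m                            # fuzzified memberships
        centers = um.T @ x / um.sum(axis=0)    # membership-weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Standard FCM membership update: u_ik ∝ d_ik^(-2/(m-1)).
        u = 1.0 / (d ** (2 / (m - 1)) *
                   np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, u
```

Hard segmentation labels are then obtained by taking the cluster of maximal membership for each voxel.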

    ํ•ด๋ถ€ํ•™์  ์œ ๋„ PET ์žฌ๊ตฌ์„ฑ: ๋งค๋„๋Ÿฝ์ง€ ์•Š์€ ์‚ฌ์ „ ํ•จ์ˆ˜๋ถ€ํ„ฐ ๋”ฅ๋Ÿฌ๋‹ ์ ‘๊ทผ๊นŒ์ง€

    Thesis (Ph.D.) -- Seoul National University Graduate School, College of Medicine, Department of Medicine, February 2021. Advisor: Jae Sung Lee. Advances in simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) technology have led to active investigation of anatomy-guided regularized PET image reconstruction algorithms based on MR images. Among the various priors proposed for anatomy-guided regularized PET image reconstruction, Bowsher's method, based on second-order smoothing priors, sometimes suffers from over-smoothing of detailed structures. In this study, we therefore propose a Bowsher prior based on the l1 norm, together with an iterative reweighting scheme, to overcome the limitations of the original Bowsher method. In addition, we derive a closed-form solution for iterative image reconstruction based on this non-smooth prior. A comparison study between the original l2 and the proposed l1 Bowsher priors was conducted using computer simulations and real human data. In both the simulations and the real-data application, small lesions with abnormal PET uptake were better detected by the proposed l1 Bowsher prior methods than by the original Bowsher prior. The original l2 Bowsher prior leads to decreased PET intensity in small lesions when there is no clear separation between the lesions and the surrounding tissue in the anatomical prior. The proposed l1 Bowsher prior methods, however, showed better contrast between the tumors and surrounding tissues owing to the intrinsic edge-preserving property of the prior, which is attributed to the sparseness induced by the l1 norm, especially in the iterative reweighting scheme. The proposed methods also demonstrated lower bias and less hyper-parameter dependency in PET intensity estimation in regions with matched anatomical boundaries in PET and MRI. Moreover, based on the formulation of the l1 Bowsher prior, an unrolled network containing the conventional maximum-likelihood expectation-maximization (ML-EM) module was also proposed. 
The convolutional layers successfully learned the distribution of anatomically guided PET images, and the EM module corrected the intermediate outputs by comparing them with the measured sinograms. The proposed unrolled network showed better performance than an ordinary U-Net, with regional uptake that is less biased and less deviating. These methods will therefore help improve PET image quality based on anatomical side information.

    Generative Models for Preprocessing of Hospital Brain Scans

    I will in this thesis present novel computational methods for processing routine clinical brain scans. Such scans were originally acquired for qualitative assessment by trained radiologists, and present a number of difficulties for computational models, such as those within common neuroimaging analysis software. The overarching objective of this work is to enable efficient and fully automated analysis of large neuroimaging datasets, of the type currently present in many hospitals worldwide. The methods presented are based on probabilistic, generative models of the observed imaging data, and therefore rely on informative priors and realistic forward models. The first part of the thesis will present a model for image quality improvement, whose key component is a novel prior for multimodal datasets. I will demonstrate its effectiveness for super-resolving thick-sliced clinical MR scans and for denoising CT images and MR-based, multi-parametric mapping acquisitions. I will then show how the same prior can be used for within-subject, intermodal image registration, for more robustly registering large numbers of clinical scans. The second part of the thesis focusses on improved, automatic segmentation and spatial normalisation of routine clinical brain scans. I propose two extensions to a widely used segmentation technique. First, a method for this model to handle missing data, which allows me to predict entirely missing modalities from one, or a few, MR contrasts. Second, a principled way of combining the strengths of probabilistic, generative models with the unprecedented discriminative capability of deep learning. By introducing a convolutional neural network as a Markov random field prior, I can model nonlinear class interactions and learn these using backpropagation. I show that this model is robust to sequence and scanner variability. 
Finally, I show examples of fitting a population-level, generative model to various neuroimaging data, which can model, e.g., CT scans with haemorrhagic lesions.

    PET Reconstruction With an Anatomical MRI Prior Using Parallel Level Sets.

    The combination of positron emission tomography (PET) and magnetic resonance imaging (MRI) offers unique possibilities. In this paper we aim to exploit the high spatial resolution of MRI to enhance the reconstruction of simultaneously acquired PET data. We propose a new prior to incorporate structural side information into a maximum a posteriori reconstruction. The new prior combines the strengths of previously proposed priors for the same problem: it is very efficient in guiding the reconstruction at edges available from the side information, and it reduces locally to edge-preserving total variation in the degenerate case when no structural information is available. In addition, this prior is segmentation-free, convex, and makes no a priori assumptions on the correlation of edge directions of the PET and MRI images. We present results for a simulated brain phantom and for real data acquired by the Siemens Biograph mMR for a hardware phantom and a clinical scan. The results from simulations show that the new prior has a better trade-off between enhancing common anatomical boundaries and preserving unique features than several other priors. Moreover, it has a better mean absolute bias-to-mean standard deviation trade-off and yields reconstructions with superior relative l2-error and structural similarity index. These findings are underpinned by the real data results from a hardware phantom and a clinical patient, confirming that the new prior is capable of promoting well-defined anatomical boundaries. This research was funded by EPSRC grants EP/K005278/1 and EP/H046410/1 and supported by the National Institute for Health Research University College London Hospitals Biomedical Research Centre. M.J.E. was supported by an IMPACT studentship funded jointly by Siemens and the UCL Faculty of Engineering Sciences. K.T. and D.A. are partially supported by EPSRC grant EP/M022587/1. This is the author accepted manuscript. 
The final version is available from IEEE via http://dx.doi.org/10.1109/TMI.2016.254960
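A simplified coupling term in the spirit of parallel level sets can be written down directly: it vanishes when the PET and MR gradients are parallel and grows with the non-parallel component. The exact functional used in the paper differs (including its smoothing parameters); this is an illustrative variant.

```python
import numpy as np

def parallel_level_sets_penalty(u, v, eps=1e-6):
    """Sketch of a parallel-level-sets-style coupling between a PET
    image u and an MR image v: penalise |∇u||∇v| - |<∇u, ∇v>|, which
    is zero exactly when the two gradients are parallel."""
    gu = np.stack(np.gradient(u))   # (2, H, W) finite-difference gradient of u
    gv = np.stack(np.gradient(v))   # (2, H, W) gradient of v
    dot = np.sum(gu * gv, axis=0)
    norm_u = np.sqrt(np.sum(gu ** 2, axis=0) + eps)  # eps keeps it smooth
    norm_v = np.sqrt(np.sum(gv ** 2, axis=0) + eps)
    return np.sum(norm_u * norm_v - np.abs(dot))
```

Because the term depends only on gradient directions, it needs no segmentation of the MR image, mirroring the segmentation-free property emphasised in the abstract.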

    Multi-Level Canonical Correlation Analysis for Standard-Dose PET Image Estimation

    Positron emission tomography (PET) images are widely used in many clinical applications such as tumor detection and brain disorder diagnosis. To obtain PET images of diagnostic quality, a sufficient amount of radioactive tracer has to be injected into a living body, which inevitably increases the risk of radiation exposure. On the other hand, if the tracer dose is considerably reduced, the quality of the resulting images is significantly degraded. It is therefore of great interest to estimate a standard-dose PET (S-PET) image from a low-dose one, in order to reduce the risk of radiation exposure while preserving image quality. This may be achieved by mapping both standard-dose and low-dose PET data into a common space and then performing patch-based sparse representation. However, a one-size-fits-all common space built from all training patches is unlikely to be optimal for each target S-PET patch, which limits the estimation accuracy. In this paper, we propose a data-driven multi-level Canonical Correlation Analysis (mCCA) scheme to solve this problem. Specifically, a subset of training data that is most useful in estimating a target S-PET patch is identified in each level, and then used in the next level to update the common space and improve estimation. Additionally, we use multi-modal magnetic resonance images to help improve the estimation with complementary information. Validations on phantom and real human brain datasets show that our method effectively estimates S-PET images and well preserves critical clinical quantification measures, such as standard uptake values (SUVs).
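The common-space idea at the core of (m)CCA can be sketched with plain canonical correlation analysis on patch matrices; the multi-level subset selection described in the paper is omitted, so this is single-level CCA only, and the regularisation constant is an assumption for numerical stability.

```python
import numpy as np

def cca(X, Y, k=1, reg=1e-6):
    """Minimal canonical correlation analysis via whitened SVD.
    X is (n, p) low-dose patch features, Y is (n, q) standard-dose
    patch features. Returns projection matrices A (p, k), B (q, k)
    and the top-k canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def invsqrt(C):
        # Symmetric inverse square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = invsqrt(Cxx), invsqrt(Cyy)
    # Singular values of the whitened cross-covariance are the
    # canonical correlations between the two views.
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    A = Wx @ U[:, :k]
    B = Wy @ Vt[:k].T
    return A, B, s[:k]
```

Projecting corresponding low-dose and standard-dose patches with `A` and `B` places them in a shared space where a sparse-coding estimator can be trained, which is the role the common space plays in the abstract.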