25 research outputs found

    Faster PET reconstruction with non-smooth priors by randomization and preconditioning

    Get PDF
    Uncompressed clinical data from modern positron emission tomography (PET) scanners are very large, exceeding 350 million data points (projection bins). Recent decades have seen tremendous advances in mathematical imaging tools, many of which lead to non-smooth (i.e. non-differentiable) optimisation problems that are much harder to solve than smooth ones. Most of these tools have not been translated to clinical PET data, as the state-of-the-art algorithms for non-smooth problems do not scale well to large data. In this work, inspired by big-data machine learning applications, we use advanced randomized optimisation algorithms to solve the PET reconstruction problem for a very large class of non-smooth priors, including total variation, total generalised variation, directional total variation and various physical constraints. The proposed algorithm randomly selects subsets of the data and updates only the variables associated with them. While this idea often leads to divergent algorithms, we show that the proposed algorithm does indeed converge for any proper subset selection. Numerically, we show on real PET data (FDG and florbetapir) from a Siemens Biograph mMR that about ten projections and backprojections suffice to solve the MAP optimisation problem for many popular non-smooth priors, demonstrating that the proposed algorithm is fast enough to bring these models into routine clinical practice.
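    The subset idea, updating only the variables associated with a randomly chosen part of the data, can be illustrated with randomized Kaczmarz, one of the simplest members of this family of methods. The sketch below uses a toy consistent linear system as a stand-in for the Poisson PET model; it is not the paper's algorithm, only an illustration of why random subset updates can still converge.

```python
import numpy as np

# Toy stand-in for a PET system model: a small consistent linear system.
# Each update touches a single row (one "subset" of the projection data),
# yet the iterates converge to the solution for any proper row selection.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
b = A @ x_true                       # consistent data, no noise

x = np.zeros(10)
for k in range(2000):
    i = rng.integers(40)             # pick a random subset (here: one row)
    r = b[i] - A[i] @ x              # residual of that subset only
    x += (r / (A[i] @ A[i])) * A[i]  # project onto the row's hyperplane

subset_error = np.linalg.norm(x - x_true)
```

For a consistent system, randomized Kaczmarz converges linearly in expectation, so after a few thousand single-row updates the error is negligible.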

    Versatile regularisation toolkit for iterative image reconstruction with proximal splitting algorithms

    Get PDF
    Ill-posed image recovery requires regularisation to ensure stability. The presented open-source regularisation toolkit consists of state-of-the-art variational algorithms that can be embedded in a plug-and-play fashion into the general framework of proximal splitting methods. The packaged regularisers aim to satisfy various prior expectations about the investigated objects, e.g. their structural characteristics or smooth versus non-smooth surface morphology. The flexibility of the toolkit helps with the design of more advanced model-based iterative reconstruction methods for different imaging modalities while operating with simpler building blocks. The toolkit is written for CPU and GPU architectures and wrapped for Python/MATLAB. We demonstrate the functionality of the toolkit in application to Positron Emission Tomography (PET) and X-ray synchrotron computed tomography (CT).
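    As a minimal illustration of the plug-and-play principle (not the toolkit's actual API), the sketch below implements a generic proximal gradient solver in which the regulariser enters only through its proximal operator; swapping the prox swaps the prior, while the solver itself is unchanged.

```python
import numpy as np

def prox_l1(v, t):
    """Soft-thresholding: the proximal operator of t*||.||_1 (a sparsity prior)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_nonneg(v, t):
    """Proximal operator of the non-negativity indicator (a physical constraint)."""
    return np.maximum(v, 0.0)

def prox_grad(b, prox, lam, step=1.0, iters=50):
    """Proximal gradient (ISTA) for min_x 0.5*||x - b||^2 + lam*R(x).
    The regulariser R is supplied only through its prox - plug and play."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = prox(x - step * (x - b), step * lam)  # gradient step, then prox
    return x

b = np.array([3.0, -0.5, 0.2, -2.0])
x_l1 = prox_grad(b, prox_l1, lam=1.0)       # sparse solution
x_pos = prox_grad(b, prox_nonneg, lam=1.0)  # non-negative solution
```

With an identity data term the fixed point is simply the prox of the data, which makes the behaviour of each plugged-in regulariser easy to inspect.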

    Anatomically Guided PET Reconstruction: From Non-smooth Priors to Deep Learning Approaches

    Get PDF
    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Medicine, Department of Medicine, February 2021. Jae Sung Lee. Advances in simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) technology have led to active investigation of anatomy-guided regularized PET image reconstruction algorithms based on MR images. Among the various priors proposed for anatomy-guided regularized PET image reconstruction, Bowsher's method, based on second-order smoothing priors, sometimes suffers from over-smoothing of detailed structures. Therefore, in this study, we propose a Bowsher prior based on the l1 norm, together with an iterative reweighting scheme, to overcome the limitation of the original Bowsher method. In addition, we derive a closed-form solution for iterative image reconstruction based on this non-smooth prior. A comparison study between the original l2 and the proposed l1 Bowsher priors was conducted using computer simulations and real human data. In both the simulations and the real data, small lesions with abnormal PET uptake were better detected by the proposed l1 Bowsher prior methods than by the original Bowsher prior. The original l2 Bowsher prior leads to decreased PET intensity in small lesions when there is no clear separation between the lesions and the surrounding tissue in the anatomical prior. However, the proposed l1 Bowsher prior methods showed better contrast between the tumors and surrounding tissues owing to the intrinsic edge-preserving property of the prior, attributable to the sparsity induced by the l1 norm, especially with the iterative reweighting scheme. The proposed methods also demonstrated lower bias and less hyper-parameter dependency in PET intensity estimation in regions with matched anatomical boundaries in PET and MRI. Moreover, based on the formulation of the l1 Bowsher prior, an unrolled network containing the conventional maximum-likelihood expectation-maximization (ML-EM) module was also proposed. The convolutional layers successfully learned the distribution of anatomically guided PET images, and the EM module corrected the intermediate outputs by comparing them with the sinograms. The proposed unrolled network showed better performance than an ordinary U-Net, with regional uptake that is less biased and less variable. Therefore, these methods will help improve PET image quality based on anatomical side information.

    Stochastic Optimisation Methods Applied to PET Image Reconstruction

    Get PDF
    Positron Emission Tomography (PET) is a medical imaging technique that is used to provide functional information regarding physiological processes. Statistical PET reconstruction attempts to estimate the distribution of radiotracer in the body, but this methodology is generally computationally demanding because of the use of iterative algorithms. These algorithms are often accelerated by the use of data subsets, which may result in convergence to a limit set rather than the unique solution. Methods exist to relax the update step sizes of subset algorithms, but they introduce additional heuristic parameters that may result in extended reconstruction times. This work investigates novel methods to modify subset algorithms to converge to the unique solution while maintaining the acceleration benefits of subset methods. This work begins with a study of an automatic method for increasing subset sizes, called AutoSubsets. This algorithm measures the divergence between two distinct data subset update directions and, if it is significant, the subset size is increased for future updates. The algorithm is evaluated using both projection and list-mode data. The algorithm's use of small initial subsets benefits early reconstruction but, unfortunately, at later updates the subset size increases too early, which impedes convergence rates. The main part of this work investigates the application of stochastic variance reduction optimisation algorithms to PET image reconstruction. These algorithms reduce the variance due to the use of subsets by incorporating previously computed subset gradients into the update direction. The algorithms are adapted for application to PET reconstruction. This study evaluates the reconstruction performance of these algorithms when applied to various 3D non-TOF PET simulated, phantom and patient data sets. The impact of a number of algorithm parameters is explored, including subset selection methodologies, the number of subsets, step size methodologies and preconditioners. The results indicate that these stochastic variance reduction algorithms demonstrate superior performance after only a few epochs when compared to a standard PET reconstruction algorithm.
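    The variance-reduction idea can be sketched with an SVRG-style update on a toy least-squares stand-in for the PET objective. The anchor-point snapshot, the rescaling by the number of subsets, and the step size below are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

# SVRG-style variance reduction: each update combines two fresh subset
# gradients with a stale full gradient ("anchor"), so the variance of the
# update direction vanishes as the iterates converge - unlike plain
# subset (SGD-type) updates, which only reach a limit set.
rng = np.random.default_rng(2)
n_sub, rows, dim = 10, 100, 20
A = rng.standard_normal((rows, dim)) / np.sqrt(rows)
x_true = rng.standard_normal(dim)
b = A @ x_true
subsets = np.array_split(np.arange(rows), n_sub)

def subset_grad(x, s):
    # gradient of the s-th term of 0.5*||Ax - b||^2, rescaled to be unbiased
    return n_sub * (A[s].T @ (A[s] @ x - b[s]))

x = np.zeros(dim)
step = 0.05
for epoch in range(50):
    x_anchor = x.copy()
    full_grad = A.T @ (A @ x_anchor - b)        # snapshot of the full gradient
    for _ in range(2 * n_sub):
        s = subsets[rng.integers(n_sub)]
        g = subset_grad(x, s) - subset_grad(x_anchor, s) + full_grad
        x -= step * g

svrg_error = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Because the correction term cancels the subset noise near the anchor, the iterates converge to the unique least-squares solution rather than oscillating in a limit set.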

    On the convergence and sampling of randomized primal-dual algorithms and their application to parallel MRI reconstruction

    Get PDF
    Stochastic Primal-Dual Hybrid Gradient (SPDHG) is an algorithm to efficiently solve a wide class of nonsmooth large-scale optimization problems. In this paper we contribute to its theoretical foundations and prove its almost sure convergence for convex but not necessarily strongly convex or smooth functionals. We also prove its convergence for any sampling. In addition, we study SPDHG for parallel Magnetic Resonance Imaging reconstruction, where data from different coils are randomly selected at each iteration. We apply SPDHG using a wide range of random sampling methods and compare its performance across a range of settings, including mini-batch size and step size parameters. We show that the sampling can significantly affect the convergence speed of SPDHG and that in many cases an optimal sampling can be identified.
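    A minimal SPDHG sketch for a least-squares data term, with the rows of a toy operator split into blocks playing the role of coils and one random block updated per iteration. The step-size rule sigma_i = gamma/||A_i||, tau = gamma * min_i(p_i) / max_i(||A_i||) is a commonly used choice satisfying the sampling condition; the toy operator and uniform sampling are assumptions, not the paper's MRI setup.

```python
import numpy as np

# SPDHG for min_x 0.5*sum_i ||A_i x - b_i||^2 (g = 0), one dual block per step.
rng = np.random.default_rng(3)
n_blocks, dim = 4, 12
blocks = [rng.standard_normal((8, dim)) for _ in range(n_blocks)]
x_true = rng.standard_normal(dim)
data = [Ai @ x_true for Ai in blocks]
bnorm = np.linalg.norm(np.concatenate(data))

norms = [np.linalg.norm(Ai, 2) for Ai in blocks]
gamma = 0.99
p = np.full(n_blocks, 1.0 / n_blocks)       # uniform sampling probabilities
sigma = [gamma / ni for ni in norms]
tau = gamma * p.min() / max(norms)          # satisfies tau*sigma_i*||A_i||^2 <= p_i

x = np.zeros(dim)
y = [np.zeros(8) for _ in range(n_blocks)]
z = np.zeros(dim)                           # running z = sum_i A_i^T y_i
zbar = z.copy()
for _ in range(4000):
    x = x - tau * zbar                      # primal step (prox of g = 0)
    i = rng.choice(n_blocks, p=p)           # sample one block ("coil")
    v = y[i] + sigma[i] * (blocks[i] @ x)
    y_new = (v - sigma[i] * data[i]) / (1 + sigma[i])  # prox of sigma*f_i^*
    delta = blocks[i].T @ (y_new - y[i])
    y[i] = y_new
    z = z + delta
    zbar = z + delta / p[i]                 # extrapolated dual aggregate

residual = np.linalg.norm(np.concatenate([Ai @ x - di for Ai, di in zip(blocks, data)]))
```

Changing the probability vector `p` (while keeping the step-size rule) is exactly the sampling knob whose effect on convergence speed the paper studies.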

    A Fast Convergent Ordered-Subsets Algorithm with Subiteration-Dependent Preconditioners for PET Image Reconstruction

    Full text link
    We investigated the imaging performance of a fast convergent ordered-subsets algorithm with subiteration-dependent preconditioners (SDPs) for positron emission tomography (PET) image reconstruction. In particular, we considered the use of SDPs with the block sequential regularized expectation maximization (BSREM) approach with the relative difference prior (RDP) regularizer, owing to its prior clinical adoption by vendors. Because RDP regularization promotes smoothness in the reconstructed image, the directions of the gradients in smooth areas point toward the objective function's minimizer more accurately than those in variable areas. Motivated by this observation, two SDPs have been designed to increase iteration step sizes in the smooth areas and reduce iteration step sizes in the variable areas relative to a conventional expectation maximization preconditioner. The momentum technique used for convergence acceleration can be viewed as a special case of SDP. We have proved the global convergence of SDP-BSREM algorithms by assuming certain characteristics of the preconditioner. By means of numerical experiments using both simulated and clinical PET data, we have shown that the SDP-BSREM algorithms substantially improve the convergence rate compared to conventional BSREM and a vendor's implementation, Q.Clear. Specifically, SDP-BSREM algorithms converge 35%-50% faster in reaching the same objective function value than conventional BSREM and the commercial Q.Clear algorithm. Moreover, we showed in phantoms with hot, cold and background regions that the SDP-BSREM algorithms approached the values of a highly converged reference image faster than conventional BSREM and the commercial Q.Clear algorithm.
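    The starting point for the SDPs can be sketched as a BSREM-style relaxed, preconditioned subset update with the classic EM preconditioner D(x) = x / (A^T 1). The subiteration-dependent reshaping of D (larger steps in smooth regions) described in the paper is omitted here; the toy system, relaxation schedule, and noiseless data are illustrative assumptions.

```python
import numpy as np

# BSREM-style sketch: ordered-subset ascent of a Poisson log-likelihood,
# preconditioned by the EM-type diagonal D(x) = x / (A_s^T 1) and relaxed
# by a decaying step size alpha_k for convergence.
rng = np.random.default_rng(4)
A = rng.uniform(0.2, 1.0, size=(24, 6))
x_true = rng.uniform(0.5, 2.0, size=6)
y = A @ x_true                                   # noiseless "sinogram"
subsets = np.array_split(np.arange(24), 4)       # 4 ordered subsets

x = np.ones(6)
for k in range(100):
    alpha = 1.0 / (1.0 + 0.05 * k)               # relaxation schedule
    for s in subsets:
        grad = A[s].T @ (y[s] / (A[s] @ x) - 1.0)    # subset Poisson score
        D = x / (A[s].T @ np.ones(len(s)))           # EM-type preconditioner
        x = np.maximum(x + alpha * D * grad, 1e-9)   # keep iterate positive

bsrem_residual = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

With alpha = 1 and a single subset this update reduces to the multiplicative EM step; an SDP would replace the fixed D with a preconditioner reshaped at each subiteration.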

    The Practicality of Stochastic Optimization in Imaging Inverse Problems

    Get PDF
    In this work we investigate the practicality of stochastic gradient descent and recently introduced variants with variance-reduction techniques in imaging inverse problems. Such algorithms have been shown in the machine learning literature to have optimal complexities in theory, and to provide great improvement empirically over deterministic gradient methods. Surprisingly, in some tasks such as image deblurring, many such methods fail to converge faster than accelerated deterministic gradient methods, even in terms of epoch counts. We investigate this phenomenon and propose a theory-inspired mechanism that lets practitioners efficiently characterize whether an inverse problem will benefit from stochastic optimization techniques. Using standard tools in numerical linear algebra, we derive conditions on the spectral structure of the inverse problem under which it is a suitable application for stochastic gradient methods. In particular, we show that stochastic gradient methods can be more advantageous than deterministic methods for solving an imaging inverse problem if and only if its Hessian matrix has a fast-decaying eigenspectrum. Our results also provide guidance on choosing appropriate partition minibatch schemes, showing that a good minibatch scheme typically has relatively low correlation within each of the minibatches. Finally, we propose an accelerated primal-dual SGD algorithm to tackle another key bottleneck of stochastic optimization: the heavy computation of proximal operators. The proposed method has a fast convergence rate in practice and is able to efficiently handle non-smooth regularization terms that are coupled with linear operators.
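    The spectral diagnostic can be sketched as follows, given the eigenvalues of the Hessian of the data term (for a least-squares term, e.g. from np.linalg.eigvalsh(A.T @ A)). The decay measure used here (fraction of spectral mass in the top 10% of eigenvalues) and the 0.9 threshold are illustrative choices, not the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(5)
dim = 50

# "Deblurring-like": slowly decaying spectrum -> stochastic methods less helpful.
flat_eigs = 1.0 / (1.0 + 0.01 * np.arange(dim))
# "Tomography-like": fast-decaying spectrum -> stochastic methods attractive.
fast_eigs = 1.0 / (1.0 + np.arange(dim)) ** 3

def top_mass(eigs, frac=0.1):
    """Fraction of total spectral mass carried by the top `frac` eigenvalues."""
    e = np.sort(eigs)[::-1]
    k = max(1, int(frac * len(e)))
    return e[:k].sum() / e.sum()

flat_mass = top_mass(flat_eigs)
fast_mass = top_mass(fast_eigs)
prefer_stochastic = fast_mass > 0.9   # most curvature lives in a few modes
```

A flat spectrum means every minibatch sees roughly the same curvature, so subset gradients buy little over full gradients; a fast-decaying spectrum concentrates the work in a few directions that cheap stochastic steps can resolve quickly.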
