25 research outputs found
Faster PET reconstruction with non-smooth priors by randomization and preconditioning
Uncompressed clinical data from modern positron emission tomography (PET) scanners are very large, exceeding 350 million data points (projection bins). The last decades have seen tremendous advances in mathematical imaging tools, many of which lead to non-smooth (i.e. non-differentiable) optimization problems that are much harder to solve than smooth ones. Most of these tools have not been translated to clinical PET data, as the state-of-the-art algorithms for non-smooth problems do not scale well to large data. In this work, inspired by big-data machine learning applications, we use advanced randomized optimization algorithms to solve the PET reconstruction problem for a very large class of non-smooth priors, including for example total variation, total generalized variation, directional total variation and various physical constraints. The proposed algorithm randomly uses subsets of the data and updates only the variables associated with them. While this idea often leads to divergent algorithms, we show that the proposed algorithm does indeed converge for any proper subset selection. Numerically, we show on real PET data (FDG and florbetapir) from a Siemens Biograph mMR that about ten projections and backprojections are sufficient to solve the MAP optimisation problem for many popular non-smooth priors, showing that the proposed algorithm is fast enough to bring these models into routine clinical practice.
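The subset-randomized update described above can be sketched with a toy SPDHG-style solver. Everything here (the least-squares data term, the l1 prior, problem sizes and step-size choices) is illustrative rather than the paper's actual PET setup:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def spdhg_lasso(A_blocks, b_blocks, lam, n_iter=2000, gamma=0.99, seed=0):
    """SPDHG-style sketch for min_x sum_i 0.5||A_i x - b_i||^2 + lam*||x||_1.

    Each iteration touches a single randomly chosen data subset (A_i, b_i)
    and updates only the dual variable associated with it, mimicking the
    random use of projection subsets described in the abstract.
    """
    rng = np.random.default_rng(seed)
    n = len(A_blocks)
    d = A_blocks[0].shape[1]
    norms = [np.linalg.norm(A, 2) for A in A_blocks]
    sigmas = [gamma / L for L in norms]          # dual step sizes, per block
    tau = gamma / (n * max(norms))               # primal step size
    p = 1.0 / n                                  # uniform subset probabilities
    x = np.zeros(d)
    ys = [np.zeros(A.shape[0]) for A in A_blocks]
    z = np.zeros(d)                              # running z = sum_i A_i^T y_i
    zbar = z.copy()
    for _ in range(n_iter):
        x = soft_threshold(x - tau * zbar, tau * lam)
        i = rng.integers(n)                      # pick one subset at random
        w = ys[i] + sigmas[i] * (A_blocks[i] @ x)
        y_new = (w - sigmas[i] * b_blocks[i]) / (1.0 + sigmas[i])  # prox of f_i^*
        dz = A_blocks[i].T @ (y_new - ys[i])
        ys[i] = y_new
        z += dz
        zbar = z + dz / p                        # extrapolation step
    return x
```

On a small synthetic problem, a few thousand of these cheap single-subset updates already drive the objective well below its value at zero.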
Versatile regularisation toolkit for iterative image reconstruction with proximal splitting algorithms
Ill-posed image recovery requires regularisation to ensure stability. The presented open-source regularisation toolkit consists of state-of-the-art variational algorithms which can be embedded in a plug-and-play fashion into the general framework of proximal splitting methods. The packaged regularisers aim to satisfy various prior expectations about the investigated objects, e.g. their structural characteristics or smooth or non-smooth surface morphology. The flexibility of the toolkit helps with the design of more advanced model-based iterative reconstruction methods for different imaging modalities while operating with simpler building blocks. The toolkit is written for CPU and GPU architectures and wrapped for Python/MATLAB. We demonstrate the functionality of the toolkit in application to Positron Emission Tomography (PET) and X-ray synchrotron computed tomography (CT).
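The plug-and-play idea, a generic proximal splitting loop into which any regulariser's proximal operator can be dropped, can be illustrated with a minimal sketch. The function names and prox choices below are invented for illustration and are not the toolkit's actual API:

```python
import numpy as np

def prox_gradient(grad_f, prox_g, x0, step, n_iter=200):
    """Generic proximal-gradient loop: the regulariser enters only through
    its proximal operator `prox_g`, so priors can be swapped plug-and-play."""
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Two interchangeable "regulariser" proximal operators, standing in for the
# toolkit's packaged regularisers:
def prox_l1(v, t, lam=0.1):
    # Sparsity prior: soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def prox_nonneg(v, t):
    # Physical non-negativity constraint
    return np.maximum(v, 0.0)
```

For a denoising data term 0.5*||x - y||^2 (gradient x - y), swapping `prox_nonneg` for `prox_l1` changes the prior without touching the solver loop, which is the design point the abstract makes.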
Anatomically-guided PET reconstruction: from non-smooth priors to deep learning approaches
Thesis (Ph.D.) -- Seoul National University Graduate School, Department of Biomedical Sciences, College of Medicine, February 2021. Jae Sung Lee.
Advances in simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) technology have led to active investigation of anatomy-guided regularized PET image reconstruction algorithms based on MR images. Among the various priors proposed for anatomy-guided regularized PET image reconstruction, Bowsher's method, based on second-order smoothing priors, sometimes suffers from over-smoothing of detailed structures. Therefore, in this study, we propose a Bowsher prior based on the l1 norm, together with an iterative reweighting scheme, to overcome the limitations of the original Bowsher method. In addition, we derive a closed-form solution for iterative image reconstruction based on this non-smooth prior. A comparison study between the original l2 and the proposed l1 Bowsher priors was conducted using computer simulations and real human data. In both the simulations and the real-data application, small lesions with abnormal PET uptake were better detected by the proposed l1 Bowsher prior methods than by the original Bowsher prior. The original l2 Bowsher prior leads to decreased PET intensity in small lesions when there is no clear separation between the lesions and the surrounding tissue in the anatomical prior. However, the proposed l1 Bowsher prior methods showed better contrast between the tumors and surrounding tissues owing to the intrinsic edge-preserving property of the prior, which is attributed to the sparseness induced by the l1 norm, especially in the iterative reweighting scheme. Besides, the proposed methods demonstrated lower bias and less hyper-parameter dependency in PET intensity estimation in regions with matched anatomical boundaries in PET and MRI.
Moreover, based on the formulation of the l1 Bowsher prior, an unrolled network containing the conventional maximum-likelihood expectation-maximization (ML-EM) module is also proposed. The convolutional layers successfully learned the distribution of anatomically-guided PET images, and the EM module corrected the intermediate outputs by comparing them with the sinograms. The proposed unrolled network showed better performance than an ordinary U-Net, with regional uptake that is less biased and less deviated. Therefore, these methods will help improve PET image quality based on anatomical side information.
Chapter 1. Introduction
1.1. Backgrounds
1.1.1. Positron Emission Tomography
1.1.2. Maximum a Posteriori Reconstruction
1.1.3. Anatomical Prior
1.1.4. Proposed l_1 Bowsher Prior
1.1.5. Deep Learning for MR-less Application
1.2. Purpose of the Research
Chapter 2. Anatomically-guided PET Reconstruction Using Bowsher Prior
2.1. Backgrounds
2.1.1. PET Data Model
2.1.2. Original Bowsher Prior
2.2. Methods and Materials
2.2.1. Proposed l_1 Bowsher Prior
2.2.2. Iterative Reweighting
2.2.3. Computer Simulations
2.2.4. Human Data
2.2.5. Image Analysis
2.3. Results
2.3.1. Simulation with Brain Phantom
2.3.2. Human Data
2.4. Discussions
Chapter 3. Deep Learning Approach for Anatomically-guided PET Reconstruction
3.1. Backgrounds
3.2. Methods and Materials
3.2.1. Douglas-Rachford Splitting
3.2.2. Network Architecture
3.2.3. Dataset and Training Details
3.2.4. Image Analysis
3.3. Results
3.4. Discussions
Chapter 4. Conclusions
Bibliography
Abstract in Korean
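The Bowsher selection and the proposed l1 penalty with iterative reweighting, as described in the abstract above, can be sketched in 1-D. The neighbourhood radius, the number of selected neighbours B, and the reweighting eps below are illustrative assumptions, not the thesis's parameters:

```python
import numpy as np

def bowsher_neighbours(mr, radius=2, B=2):
    """For each voxel, pick the B neighbours (within `radius`; 1-D here for
    brevity) whose MR intensities are most similar -- the Bowsher selection."""
    n = len(mr)
    sel = []
    for j in range(n):
        idx = [k for k in range(max(0, j - radius), min(n, j + radius + 1))
               if k != j]
        idx.sort(key=lambda k: abs(mr[k] - mr[j]))   # most MR-similar first
        sel.append(idx[:B])
    return sel

def l1_bowsher(x, sel):
    """Proposed l1 Bowsher penalty: sum of absolute differences over the
    anatomically selected neighbour pairs (edge-preserving, non-smooth)."""
    return sum(abs(x[j] - x[k]) for j, nb in enumerate(sel) for k in nb)

def reweights(x, sel, eps=1e-3):
    """One iterative-reweighting step: weight ~ 1/(|difference| + eps), which
    further sharpens the sparsity-induced edge preservation of the l1 norm."""
    return [[1.0 / (abs(x[j] - x[k]) + eps) for k in nb]
            for j, nb in enumerate(sel)]
```

On a piecewise-constant MR signal, a PET image sharing the same plateaus incurs zero penalty, while intensity variation inside a plateau is penalised, which is the mechanism behind the contrast preservation reported in the abstract.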
Stochastic Optimisation Methods Applied to PET Image Reconstruction
Positron Emission Tomography (PET) is a medical imaging technique used to provide functional information regarding physiological processes. Statistical PET reconstruction attempts to estimate the distribution of radiotracer in the body, but this methodology is generally computationally demanding because of the use of iterative algorithms. These algorithms are often accelerated by the use of data subsets, which may result in convergence to a limit set rather than to the unique solution. Methods exist to relax the update step sizes of subset algorithms, but they introduce additional heuristic parameters that may result in extended reconstruction times. This work investigates novel methods to modify subset algorithms so that they converge to the unique solution while maintaining the acceleration benefits of subset methods.
This work begins with a study of an automatic method for increasing subset sizes, called AutoSubsets. This algorithm measures the divergence between two distinct data-subset update directions and, if it is significant, increases the subset size for future updates. The algorithm is evaluated using both projection and list-mode data. The algorithm's use of small initial subsets benefits early reconstruction but, unfortunately, at later updates the subset size increases too early, which impedes convergence rates.
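The AutoSubsets divergence test might be sketched as follows. The cosine-similarity criterion and the threshold are assumptions chosen for illustration, not necessarily the thesis's exact divergence measure:

```python
import numpy as np

def maybe_grow_subset(grad_a, grad_b, subset_size, n_total, threshold=0.5):
    """AutoSubsets-style check (illustrative): compare the update directions
    computed from two distinct data subsets. If they disagree too much (low
    cosine similarity), the subset gradient is dominated by subset noise, so
    the subset size is grown for future updates, capped at the full data."""
    cos = grad_a @ grad_b / (
        np.linalg.norm(grad_a) * np.linalg.norm(grad_b) + 1e-12)
    if cos < threshold:
        subset_size = min(2 * subset_size, n_total)
    return subset_size, cos
```

Agreeing directions keep the subset small (fast early iterations); disagreeing directions grow it, trading speed for a more reliable update direction.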
The main part of this work investigates the application of stochastic variance-reduction optimisation algorithms to PET image reconstruction. These algorithms reduce the variance due to the use of subsets by incorporating previously computed subset gradients into the update direction. The algorithms are adapted for application to PET reconstruction. This study evaluates the reconstruction performance of these algorithms when applied to various 3D non-TOF PET simulated, phantom and patient data sets. The impact of a number of algorithm parameters is explored, including subset selection methodologies, the number of subsets, step-size methodologies and preconditioners. The results indicate that these stochastic variance-reduction algorithms demonstrate superior performance after only a few epochs when compared to a standard PET reconstruction algorithm.
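One such variance-reduction scheme can be sketched in SVRG style on a toy least-squares problem standing in for the PET likelihood; the step size and epoch count are illustrative:

```python
import numpy as np

def svrg_least_squares(A, b, step, n_epochs=20, seed=0):
    """SVRG-style sketch for min_x (1/m) * 0.5*||Ax - b||^2: each single-row
    (subset) gradient is corrected by a stored full-gradient snapshot, which
    removes the variance that plain subset updates accumulate."""
    rng = np.random.default_rng(seed)
    m, d = A.shape
    x = np.zeros(d)
    for _ in range(n_epochs):
        x_snap = x.copy()
        mu = A.T @ (A @ x_snap - b) / m          # full gradient at the snapshot
        for _ in range(m):
            i = rng.integers(m)
            gi = A[i] * (A[i] @ x - b[i])        # subset gradient at x
            gi_snap = A[i] * (A[i] @ x_snap - b[i])  # same subset at snapshot
            x = x - step * (gi - gi_snap + mu)   # variance-reduced update
    return x
```

Each inner update costs one subset-gradient evaluation (plus a cheap stored correction), yet the iterates converge to the unique solution rather than to a limit set, which is the behaviour the thesis seeks for PET subset algorithms.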
On the convergence and sampling of randomized primal-dual algorithms and their application to parallel MRI reconstruction
Stochastic Primal-Dual Hybrid Gradient (SPDHG) is an algorithm to efficiently solve a wide class of nonsmooth large-scale optimization problems. In this paper we contribute to its theoretical foundations and prove its almost-sure convergence for convex but not necessarily strongly convex or smooth functionals. We also prove its convergence for any sampling. In addition, we study SPDHG for parallel Magnetic Resonance Imaging reconstruction, where data from different coils are randomly selected at each iteration. We apply SPDHG using a wide range of random sampling methods and compare its performance across a range of settings, including mini-batch size and step-size parameters. We show that the sampling can significantly affect the convergence speed of SPDHG and that in many cases an optimal sampling can be identified.
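The coupling between sampling probabilities and admissible step sizes, which is why the sampling affects convergence speed, can be sketched as below. The exact rule used in the paper may differ; this choice merely satisfies one sufficient condition of the usual form:

```python
import numpy as np

def spdhg_step_sizes(block_norms, probs, gamma=0.99):
    """Illustrative SPDHG-style step-size rule: choose tau and sigma_i so that
    tau * sigma_i * ||A_i||^2 <= gamma^2 * p_i for every block i. The
    per-block sampling probabilities p_i (e.g. one per MRI coil) directly
    shape the admissible primal step size, so changing the sampling changes
    the achievable progress per iteration."""
    sigmas = [gamma / L for L in block_norms]
    tau = gamma * min(p / L for p, L in zip(probs, block_norms))
    return tau, sigmas
```

Blocks sampled more often (larger p_i) permit a larger primal step, which is one concrete way in which a well-chosen sampling speeds up SPDHG.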
A Fast Convergent Ordered-Subsets Algorithm with Subiteration-Dependent Preconditioners for PET Image Reconstruction
We investigated the imaging performance of a fast convergent ordered-subsets algorithm with subiteration-dependent preconditioners (SDPs) for positron emission tomography (PET) image reconstruction. In particular, we considered the use of SDPs with the block sequential regularized expectation maximization (BSREM) approach and the relative difference prior (RDP) regularizer, due to its prior clinical adoption by vendors. Because RDP regularization promotes smoothness in the reconstructed image, the directions of the gradients in smooth areas point toward the objective function's minimizer more accurately than those in variable areas. Motivated by this observation, two SDPs have been designed to increase iteration step sizes in the smooth areas and reduce iteration step sizes in the variable areas relative to a conventional expectation-maximization preconditioner. The momentum technique used for convergence acceleration can be viewed as a special case of SDP. We have proved the global convergence of SDP-BSREM algorithms by assuming certain characteristics of the preconditioner. By means of numerical experiments using both simulated and clinical PET data, we have shown that the SDP-BSREM algorithms substantially improve the convergence rate compared to conventional BSREM and a vendor's implementation, Q.Clear. Specifically, SDP-BSREM algorithms converge 35%-50% faster in reaching the same objective function value than conventional BSREM and commercial Q.Clear. Moreover, we showed in phantoms with hot, cold and background regions that the SDP-BSREM algorithms approached the values of a highly converged reference image faster than conventional BSREM and commercial Q.Clear.
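The idea of boosting steps in smooth regions and damping them in variable regions, relative to the EM preconditioner, can be sketched as follows. The smoothness measure and the modulation function here are invented for illustration and are not the paper's SDP formulas:

```python
import numpy as np

def sdp_em_preconditioner(x, sens, smooth_boost=2.0, eps=1e-8):
    """Illustrative subiteration-dependent preconditioner: start from the
    usual EM preconditioner x / (A^T 1) (with `sens` the sensitivity image
    A^T 1) and modulate it by local image variation. Where the image is
    locally flat (|gradient| ~ 0) the step is boosted by `smooth_boost`;
    where it varies strongly the step is damped below the EM step."""
    em = x / (sens + eps)                    # conventional EM preconditioner
    local_var = np.abs(np.gradient(x))       # crude 1-D smoothness measure
    s = smooth_boost / (1.0 + local_var)     # >1 in flat areas, <1 in variable
    return em * s
```

This reproduces the qualitative behaviour the abstract describes: larger effective step sizes where RDP smoothing makes the gradient direction trustworthy, smaller ones where the image varies.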
The Practicality of Stochastic Optimization in Imaging Inverse Problems
In this work we investigate the practicality of stochastic gradient descent and recently introduced variants with variance-reduction techniques in imaging inverse problems. Such algorithms have been shown in the machine learning literature to have optimal complexities in theory, and to provide great improvement empirically over deterministic gradient methods. Surprisingly, in some tasks such as image deblurring, many such methods fail to converge faster than accelerated deterministic gradient methods, even in terms of epoch counts. We investigate this phenomenon and propose a theory-inspired mechanism for practitioners to efficiently characterize whether an inverse problem would benefit from being solved by stochastic optimization techniques. Using standard tools in numerical linear algebra, we derive conditions on the spectral structure of the inverse problem under which it is a suitable application for stochastic gradient methods. In particular, we show that, for an imaging inverse problem, stochastic gradient methods can be more advantageous than deterministic methods if and only if its Hessian matrix has a fast-decaying eigenspectrum. Our results also provide guidance on choosing appropriate partition minibatch schemes, showing that a good minibatch scheme typically has relatively low correlation within each of the minibatches. Finally, we propose an accelerated primal-dual SGD algorithm to tackle another key bottleneck of stochastic optimization: the heavy computation of proximal operators. The proposed method has a fast convergence rate in practice, and is able to efficiently handle non-smooth regularization terms that are coupled with linear operators.
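The fast-decaying-eigenspectrum criterion can be turned into a rough practical diagnostic. The 10% fraction and the 0.9 energy threshold below are arbitrary illustrative cut-offs, not values from the paper:

```python
import numpy as np

def hessian_spectrum_decay(A, frac=0.1, energy_cutoff=0.9):
    """Rough diagnostic in the spirit of the paper: examine the eigenspectrum
    of the Hessian A^T A of the least-squares data term, via the singular
    values of the forward operator A. If a small fraction of eigenvalues
    carries most of the spectral energy (fast decay), stochastic gradient
    methods are predicted to pay off; a flat spectrum instead favours
    accelerated deterministic methods."""
    s = np.linalg.svd(A, compute_uv=False) ** 2   # eigenvalues of A^T A, sorted
    k = max(1, int(frac * len(s)))
    energy_topk = s[:k].sum() / s.sum()           # energy in the top-k modes
    return energy_topk, energy_topk > energy_cutoff
```

A nearly rank-deficient blur-like operator passes the test, while a well-conditioned (flat-spectrum) operator fails it, matching the deblurring observation in the abstract.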