
    An Investigation of Stochastic Variance Reduction Algorithms for Relative Difference Penalised 3D PET Image Reconstruction

    Penalised PET image reconstruction algorithms are often accelerated during early iterations with the use of subsets. However, these methods may exhibit limit-cycle behaviour at later iterations due to variations between subsets. Desirable converged images can be achieved for a subclass of these algorithms via the implementation of a relaxed step-size sequence, but the heuristic selection of parameters will impact the quality of the image sequence and the algorithm's convergence rate. In this work, we demonstrate the adaptation and application of a class of stochastic variance reduction gradient algorithms for PET image reconstruction using the relative difference penalty and numerically compare convergence performance to BSREM. The two investigated algorithms are SAGA and SVRG. These algorithms require the retention in memory of recently computed subset gradients, which are utilised in subsequent updates. We present several numerical studies based on Monte Carlo simulated data and a patient data set for fully 3D PET acquisitions. The impact of the number of subsets, different preconditioners and step-size methods on the convergence of region-of-interest values within the reconstructed images is explored. We observe that when using constant preconditioning, SAGA and SVRG demonstrate reduced variations in voxel values between subsequent updates and are less reliant on step-size hyper-parameter selection than BSREM reconstructions. Furthermore, SAGA and SVRG can converge significantly faster to the penalised maximum-likelihood solution than BSREM, particularly for low-count data.
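
    The variance-reduction mechanism described above can be sketched in a few lines. The following is a minimal, generic SVRG-style loop for a subset-divided penalised objective; the `subset_grad` callable, the plain gradient step and the simple non-negativity projection are illustrative assumptions and do not reproduce the preconditioners or step-size rules studied in the paper.

```python
import numpy as np

def svrg_reconstruct(x0, subset_grad, n_subsets, n_epochs, step, seed=None):
    """Schematic SVRG loop for penalised reconstruction.

    subset_grad(x, m) is assumed to return the gradient of the m-th subset
    objective, scaled so that its average over subsets equals the full
    gradient.  Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_epochs):
        x_anchor = x.copy()  # anchor point: full gradient stored once per epoch
        g_full = np.mean([subset_grad(x_anchor, m) for m in range(n_subsets)], axis=0)
        for _ in range(n_subsets):
            m = rng.integers(n_subsets)
            # Variance-reduced gradient estimate for the sampled subset.
            g = subset_grad(x, m) - subset_grad(x_anchor, m) + g_full
            x = np.maximum(x - step * g, 0.0)  # gradient step + non-negativity
    return x
```

    SAGA differs mainly in keeping one stored gradient per subset and refreshing it lazily, instead of recomputing a full-gradient anchor at the start of each epoch.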

    A General Framework of Large-Scale Convex Optimization Using Jensen Surrogates and Acceleration Techniques

    In a world where data rates are growing faster than computing power, algorithmic acceleration based on developments in mathematical optimization plays a crucial role in narrowing the gap between the two. As the scale of optimization problems in many fields grows, we need faster optimization methods that not only work well in theory, but also work well in practice by exploiting the underlying state-of-the-art computing technology. In this document, we introduce a unified framework of large-scale convex optimization using Jensen surrogates, an iterative optimization technique that has been used in different fields since the 1970s. After this general treatment, we present a non-asymptotic convergence analysis of this family of methods and the motivation behind developing accelerated variants. Moreover, we discuss widely used acceleration techniques for convex optimization and then investigate acceleration techniques that can be used within the Jensen surrogate framework, proposing several novel acceleration methods. Furthermore, we show that the proposed methods perform competitively with or better than state-of-the-art algorithms for several applications, including Sparse Linear Regression (Image Deblurring), Positron Emission Tomography, X-Ray Transmission Tomography, Logistic Regression, Sparse Logistic Regression and Automatic Relevance Determination for X-Ray Transmission Tomography.
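
    A concrete instance of the Jensen-surrogate idea is the classic ML-EM update for Poisson data, obtained by applying Jensen's inequality to the log-likelihood so that each iteration maximises a separable surrogate in closed form. The dense-matrix sketch below is illustrative only and assumes a generic non-negative system matrix `A`.

```python
import numpy as np

def mlem(A, y, n_iters=50, eps=1e-12):
    """ML-EM for y ~ Poisson(A x): the textbook example of a Jensen
    (concavity-based) surrogate giving a closed-form multiplicative update.

    A : (n_meas, n_vox) non-negative system matrix
    y : (n_meas,) measured counts
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image (column sums)
    for _ in range(n_iters):
        yhat = A @ x + eps                   # expected data under current estimate
        x *= (A.T @ (y / yhat)) / (sens + eps)
    return x
```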

    Tomographic Image Reconstruction: Implementation, Optimization and Comparison in Digital Breast Tomosynthesis

    Conventional 2D mammography has been the most effective approach to detecting early-stage breast cancer over the past decades. Tomosynthetic breast imaging is a potentially more valuable 3D technique for breast cancer detection. The limitations of current tomosynthesis systems include a longer scanning time than a conventional digital X-ray modality and a low spatial resolution due to the movement of the single X-ray source. Dr. Otto Zhou's group proposed the concept of stationary digital breast tomosynthesis (s-DBT) using a carbon nanotube (CNT) based X-ray source array. Instead of mechanically moving a single X-ray tube, s-DBT uses a stationary X-ray source array, which generates X-ray beams from different view angles by electronically activating the individual source pre-positioned at the corresponding view angle, thereby eliminating focal-spot motion blur. The scanning speed is determined only by the detector readout time and the number of sources, regardless of the angular coverage, so that blur from patient motion can be reduced thanks to the quick scan. S-DBT is thus a promising modality for improving early breast cancer detection by providing good image quality with a fast scan and a low radiation dose. A DBT system acquires a limited number of noisy 2D projections over a limited angular range and then mathematically reconstructs a 3D breast volume. 3D reconstruction faces the challenges of cone-beam and flat-panel geometry, highly incomplete sampling and a very large reconstructed volume. In this research, we investigated several representative reconstruction methods, such as filtered backprojection (FBP), the simultaneous algebraic reconstruction technique (SART) and maximum likelihood (ML). We also compared our proposed statistical iterative reconstruction (IR), with its particular prior and computational techniques, to these representative methods. Of all the reconstruction methods considered in this research, our proposed statistical IR appears particularly promising since it provides the flexibility of accurate physical noise modeling and geometric system description. In the following chapters, we present multiple key techniques for applying statistical IR to tomosynthesis imaging data that demonstrate significant image quality improvement over conventional techniques. These techniques include physical modeling with a local voxel-pair prior whose parameters offer the flexibility to fine-tune image quality, a pre-computed parameter κ incorporated into the prior to remove the data dependence and achieve a predictable resolution property, an effective ray-driven technique to compute the forward and backprojection, and an over-sampled ray-driven method to perform high-resolution reconstruction with a practical region-of-interest (ROI) technique. In addition, to solve the estimation problem with fast computation, we also present a semi-quantitative method to optimize the relaxation parameter in a relaxed ordered-subsets framework and an optimization-transfer-based algorithm framework which potentially requires fewer iterations to reach acceptable convergence. Phantom data were acquired with the s-DBT prototype system to assess the performance of these techniques and to compare our proposed method to the representative ones. The value of statistical IR is demonstrated in improved detectability of low-contrast objects and tiny microcalcifications, reduced cross-plane artifacts, improved resolution and lower noise in the reconstructed images. In particular, noise power spectrum (NPS) analysis indicates superior noise spectral properties for the proposed statistical IR, especially in the high-frequency range. With this noise advantage, statistical IR also provides a strong reconstruction MTF overall and in different areas within a focus plane. Although computational load remains a significant challenge for practical development, combined with advancing computational techniques such as graphics (GPU) computing, the superior image quality provided by statistical IR can be realized to benefit diagnostics in real clinical applications.
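
    To make the statistical IR ingredients above concrete, the sketch below takes a single gradient step on a penalized weighted least-squares objective with a first-neighbor Huber voxel-pair penalty. The dense operators, the Huber potential and the fixed step size are illustrative assumptions; they stand in for, but do not reproduce, the specific voxel-pair prior, κ map and ray-driven projectors described in the thesis.

```python
import numpy as np

def huber_pair_penalty_grad(img, delta):
    """Gradient of a first-neighbor voxel-pair Huber penalty on a 2-D image."""
    grad = np.zeros_like(img)
    for axis in (0, 1):
        d = np.diff(img, axis=axis)          # differences between neighboring voxels
        psi = np.clip(d, -delta, delta)      # Huber influence function (potential derivative)
        fwd = [slice(None)] * img.ndim
        bwd = [slice(None)] * img.ndim
        fwd[axis] = slice(1, None)
        bwd[axis] = slice(None, -1)
        grad[tuple(fwd)] += psi
        grad[tuple(bwd)] -= psi
    return grad

def pwls_gradient_step(x, A, y, w, beta, delta, step):
    """One (unpreconditioned) gradient step on the PWLS objective
    0.5 * (A x - y)' W (A x - y) + beta * R(x), with non-negativity."""
    resid = A @ x.ravel() - y
    data_grad = (A.T @ (w * resid)).reshape(x.shape)
    return np.maximum(x - step * (data_grad + beta * huber_pair_penalty_grad(x, delta)), 0.0)
```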

    Improving Statistical Image Reconstruction for Cardiac X-ray Computed Tomography.

    Technological advances in CT imaging pose new challenges such as increased X-ray radiation dose and complexity of image reconstruction. Statistical image reconstruction methods use realistic models that incorporate the physics of the measurements and the statistical properties of the measurement noise, and they have the potential to provide better image quality and dose reduction compared to the conventional filtered back-projection (FBP) method. However, statistical methods face several challenges that should be addressed before they can replace the FBP method universally. In this thesis, we develop various methods to overcome these challenges of statistical image reconstruction methods. Rigorous regularization design methods in the Fourier domain were proposed to achieve more isotropic and uniform spatial resolution or noise properties. The design framework is general, so users can control the spatial resolution and the noise characteristics of the estimator. In addition, a regularization design method based on the hypothetical geometry concept was introduced to improve resolution or noise uniformity. Proposed designs using the new concept effectively improved the spatial resolution or noise uniformity in the reconstructed image, and the hypothetical geometry idea is general enough to be applied to other scan geometries. A statistical weighting modification, based on how much each detector element affects the insufficiently sampled region, was proposed to reduce artifacts without degrading the temporal resolution within the region of interest (ROI). Another approach, using an additional regularization term that exploits information from the prior image, was also investigated. Both methods effectively removed short-scan artifacts in the reconstructed image. We accelerated the family of ordered-subsets algorithms by introducing a double surrogate so that faster convergence can be achieved. Furthermore, we present a variable-splitting-based algorithm for the motion-compensated image reconstruction (MCIR) problem that provides faster convergence compared to the conjugate gradient (CG) method. A sinogram-based motion estimation method that does not require any additional measurements beyond the short-scan amount of data was introduced to provide decent initial estimates for the joint estimation. The proposed methods were evaluated using simulation and real patient data, and showed promising results for solving each challenge; some of these methods can be combined to form more complete solutions for CT imaging. (PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/110319/1/janghcho_1.pd)
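
    Several of the contributions above build on the ordered-subsets (OS) idea of replacing the full data-fit gradient with a scaled subset gradient at each sub-iteration. The sketch below shows this basic mechanism for a weighted least-squares data term with interleaved view subsets; it is a generic illustration and does not include the double-surrogate acceleration or the regularization designs developed in the thesis.

```python
import numpy as np

def os_wls_epoch(x, A, y, w, n_subsets, step):
    """One epoch of plain ordered-subsets gradient updates for the weighted
    least-squares data term 0.5 * (A x - y)' W (A x - y), with measurement
    rows (views) split into interleaved subsets.  Generic illustration only."""
    n_rows = A.shape[0]
    for m in range(n_subsets):
        idx = np.arange(m, n_rows, n_subsets)            # interleaved view subset
        Am, ym, wm = A[idx], y[idx], w[idx]
        g = Am.T @ (wm * (Am @ x - ym))                  # subset gradient
        x = np.maximum(x - step * n_subsets * g, 0.0)    # scale by number of subsets
    return x
```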

    Anatomically Guided PET Reconstruction: From Non-Smooth Priors to Deep Learning Approaches

    Dissertation (Ph.D.) -- Seoul National University Graduate School: Department of Biomedical Sciences, College of Medicine, February 2021. Jae Sung Lee. Advances in simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) technology have led to active investigation of anatomy-guided regularized PET image reconstruction algorithms based on MR images. Among the various priors proposed for anatomy-guided regularized PET image reconstruction, Bowsher's method, based on second-order smoothing priors, sometimes suffers from over-smoothing of detailed structures. Therefore, in this study, we propose a Bowsher prior based on the l1 norm, together with an iterative reweighting scheme, to overcome the limitation of the original Bowsher method. In addition, we derive a closed-form solution for iterative image reconstruction based on this non-smooth prior. A comparison study between the original l2 and proposed l1 Bowsher priors was conducted using computer simulation and real human data. In both the simulation and the real-data application, small lesions with abnormal PET uptake were better detected by the proposed l1 Bowsher prior methods than by the original Bowsher prior. The original l2 Bowsher prior leads to decreased PET intensity in small lesions when there is no clear separation between the lesions and the surrounding tissue in the anatomical prior. In contrast, the proposed l1 Bowsher prior methods showed better contrast between the tumors and surrounding tissues owing to the intrinsic edge-preserving property of the prior, which stems from the sparseness induced by the l1 norm, especially with the iterative reweighting scheme. In addition, the proposed methods demonstrated lower bias and less hyper-parameter dependency in PET intensity estimation within regions whose anatomical boundaries match between PET and MRI. Moreover, based on the formulation of the l1 Bowsher prior, an unrolled network containing the conventional maximum-likelihood expectation-maximization (ML-EM) module was also proposed. The convolutional layers successfully learned the distribution of anatomically guided PET images, and the EM module corrected the intermediate outputs by comparing them with the sinograms. The proposed unrolled network showed better performance than an ordinary U-Net, with less biased and less variable regional uptake. Therefore, these methods will help improve PET image quality based on anatomical side information.
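
    The Bowsher prior discussed above can be sketched compactly: for each PET voxel, only the neighbors whose MR values are most similar are penalized, so edges present in the anatomy are not smoothed across. The 2-D, 8-neighbor version below with an l1 potential is a simplified illustration; the neighborhood size, the exact potential and the closed-form reweighted updates derived in the thesis are not reproduced.

```python
import numpy as np

def bowsher_neighbours(mr, n_select=4):
    """For each pixel of the MR image, keep the n_select 8-neighbors with the
    most similar MR values (binary Bowsher weights).  Boundary pixels simply
    have fewer candidates.  Illustrative 2-D sketch."""
    rows, cols = mr.shape
    offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    selected = {}
    for i in range(rows):
        for j in range(cols):
            cand = []
            for di, dj in offsets:
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    cand.append((abs(float(mr[ni, nj]) - float(mr[i, j])), (ni, nj)))
            cand.sort(key=lambda c: c[0])
            selected[(i, j)] = [pos for _, pos in cand[:n_select]]
    return selected

def bowsher_l1_penalty(x, selected):
    """Anatomy-guided l1 penalty: sum of |x_j - x_k| over Bowsher-selected pairs."""
    return sum(abs(float(x[j]) - float(x[k])) for j, nbrs in selected.items() for k in nbrs)
```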

    Model-based X-ray CT Image and Light Field Reconstruction Using Variable Splitting Methods.

    Model-based image reconstruction (MBIR) is a powerful technique for solving ill-posed inverse problems. Compared with direct methods, it can provide better estimates from noisy measurements and from incomplete data, at the cost of much longer computation time. In this work, we focus on accelerating and applying MBIR to reconstruction problems, including X-ray computed tomography (CT) image reconstruction and light field reconstruction, using variable splitting based on augmented Lagrangian (AL) methods. For X-ray CT image reconstruction, we combine the AL method and ordered subsets (OS), a well-known technique in the medical imaging literature for accelerating tomographic reconstruction, by considering a linearized variant of the AL method, and propose a fast splitting-based ordered-subsets algorithm, OS-LALM, for solving X-ray CT image reconstruction problems with a penalized weighted least-squares (PWLS) criterion. Practical issues, such as the non-trivial parameter selection of AL methods and the considerable memory overhead when using a finite-difference image variable splitting, are carefully studied, and several variants of the proposed algorithm are investigated for solving practical model-based X-ray CT image reconstruction problems. Experimental results show that the proposed algorithm significantly accelerates the convergence of X-ray CT image reconstruction with negligible overhead and greatly reduces the noise-like OS artifacts in the reconstructed image when many subsets are used for OS acceleration. For light field reconstruction, by decomposing the camera imaging process into a linear convolution and a non-linear slicing operation for faster forward projection, we propose to reconstruct the light field from a sequence of photos taken with different focus settings, i.e., a focal stack, using an alternating direction method of multipliers (ADMM). To improve the quality of the reconstructed light field, we also propose a signal-independent sparsifying transform that exploits the elongated structure of light fields. Flatland simulation results show that our proposed sparse light field prior produces high-resolution light fields with fine details compared with other existing sparse priors for natural images. (PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/108981/1/hungnien_1.pd)
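
    A generic variable-splitting iteration illustrates the machinery that both the CT and light-field problems above rely on. The dense ADMM sketch below solves a small regularized least-squares problem with the split u = Dx; it is a didactic stand-in, not the linearized AL (OS-LALM) or focal-stack formulations of the thesis, and the direct solve in the x-update would be replaced by structured or iterative updates at CT scale.

```python
import numpy as np

def admm_l1(A, y, D, beta, rho, n_iters=100):
    """ADMM (scaled form) for  min_x 0.5*||A x - y||^2 + beta*||D x||_1
    using the split u = D x.  Dense, small-scale illustration only."""
    x = np.zeros(A.shape[1])
    u = np.zeros(D.shape[0])
    eta = np.zeros_like(u)                       # scaled dual variable
    H = A.T @ A + rho * (D.T @ D)                # normal matrix for the x-update
    for _ in range(n_iters):
        x = np.linalg.solve(H, A.T @ y + rho * D.T @ (u - eta))
        v = D @ x + eta
        u = np.sign(v) * np.maximum(np.abs(v) - beta / rho, 0.0)  # soft-threshold
        eta += D @ x - u                         # dual update
    return x
```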

    Acceleration of a Regularized X-ray Tomography Reconstruction Approach with Metal Artifact Reduction

    This work is concerned with X-ray tomographic imaging of peripheral vessels that have undergone angioplasty with implantation of an endovascular metal stent. We seek to detect the development of restenosis by measuring the lumen of the imaged blood vessel. This application requires the reconstruction of high-resolution images. In addition, the presence of the metal stent causes streak artifacts that compromise the accuracy of lumen measurements in images obtained with the usual algorithms, such as those implemented in clinical scanners. A regularized statistical reconstruction algorithm, based on penalized maximization of the conditional log-likelihood of the image, is therefore preferable in this case. We choose a variant derived from a data formation model that accounts for the nonlinear variation of X-ray photon attenuation with photon energy, as well as the polychromatic character of the X-ray beam. This algorithm effectively reduces the artifacts specifically caused by the metal stent. Moreover, it can be configured to reach a satisfactory compromise between image resolution and the variance of the reconstructed image, according to the noise level of the data. This reconstruction method is known to yield images of excellent quality; however, the computation time needed for convergence is excessively long. The goal of this work is therefore to reduce the runtime of this iterative reconstruction algorithm, through a critical review of the problem formulation and the reconstruction method, as well as the implementation of alternative approaches.
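
    The polychromatic data-formation model referred to above amounts to averaging Beer-Lambert attenuation over the source spectrum, which is what makes metal-induced beam hardening representable. The sketch below, using a small material-decomposed ray model with made-up illustrative numbers, ignores detector response and scatter.

```python
import numpy as np

def polychromatic_projection(lengths, mu, spectrum):
    """Expected detector signal for one ray under a polychromatic source.

    lengths  : (n_materials,) intersection lengths of the ray with each material
    mu       : (n_energies, n_materials) linear attenuation coefficients
    spectrum : (n_energies,) normalized source spectrum (sums to 1)
    """
    line_integrals = mu @ lengths                  # energy-dependent line integrals
    return float(spectrum @ np.exp(-line_integrals))

# Example: a ray crossing 20 cm of tissue and 0.2 cm of metal, two-bin spectrum
# (values are purely illustrative, not calibrated attenuation coefficients).
lengths = np.array([20.0, 0.2])                    # cm
mu = np.array([[0.25, 5.0],                        # low-energy bin
               [0.18, 1.5]])                       # high-energy bin
spectrum = np.array([0.6, 0.4])
print(polychromatic_projection(lengths, mu, spectrum))
```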

    4-D Tomographic Inference: Application to SPECT and MR-driven PET

    Emission tomographic imaging is framed in the Bayesian and information-theoretic framework. The first part of the thesis is inspired by the new possibilities offered by PET-MR systems, formulating models and algorithms for 4-D tomography and for the integration of information from multiple imaging modalities. The second part of the thesis extends the models described in the first part, focusing on the imaging hardware. Three key aspects for the design of new imaging systems are investigated: criteria and efficient algorithms for the optimisation and real-time adaptation of the parameters of the imaging hardware; learning the characteristics of the imaging hardware; and exploiting the rich information provided by depth-of-interaction (DOI) and energy-resolving devices. The document concludes with the description of the NiftyRec software toolkit, developed to enable 4-D multi-modal tomographic inference.

    Non-uniform resolution and partial volume recovery in tomographic image reconstruction methods

    Acquired data in tomographic imaging systems are subject to physical or detector-based image-degrading effects. These effects need to be considered and modeled in order to optimize resolution recovery. However, even accurate modeling of the physics of the data and acquisition processes still leads to an ill-posed reconstruction problem, because real data are incomplete and noisy. Real images are always a compromise between resolution and noise; therefore, noise processes also need to be fully considered for an optimum bias-variance trade-off. Image-degrading effects and noise are generally modeled within the reconstruction method, and statistical iterative methods can model these effects, together with the noise processes, better than analytical methods. Regularization is used to condition the problem, and explicit regularization methods are considered better at modeling various noise processes while offering extended control over the reconstructed image quality. Emission physics, through object distribution properties, is modeled in the form of a prior function. Smoothing and edge-preserving priors have been investigated in detail, and it has been shown that smoothing priors over-smooth images in high-count areas and result in a spatially non-uniform and nonlinear resolution response. A uniform resolution response is desirable for image comparison and other image processing tasks, such as segmentation and registration. This work proposes methods, based on median root priors (MRPs) in maximum a posteriori (MAP) estimators, to obtain images with almost uniform and linear resolution characteristics, using the nonlinearity of MRPs as a correction tool. Results indicate that MRPs perform better in terms of response linearity, spatial uniformity and parameter sensitivity, compared to quadratic priors (QPs) and total variation (TV) priors. Hybrid priors, composed of MRPs and QPs, have been developed and analyzed for their activity-recovery performance in two popular partial volume correction (PVC) methods and in an analysis of list-mode data reconstruction methods, showing that MRPs perform better than QPs in different situations.
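
    The median root prior (MRP) mentioned above is typically applied in a one-step-late fashion: the plain EM update is damped by a term that compares each voxel to the median of its neighborhood, so locally monotonic regions are preserved while noise is penalized. The dense-matrix 2-D sketch below follows that standard formulation; it does not reproduce the hybrid MRP/QP priors or the partial-volume and list-mode analyses of the thesis.

```python
import numpy as np
from scipy.ndimage import median_filter

def mrp_osl_em(A, y, shape, beta=0.3, n_iters=30, size=3, eps=1e-12):
    """One-step-late EM with a median root prior: the EM update is divided by
    1 + beta*(x - med(x))/med(x), where med is a local median filter.

    A : (n_meas, n_vox) non-negative system matrix, y : (n_meas,) counts,
    shape : 2-D image shape with prod(shape) == n_vox.  Schematic only."""
    x = np.ones(int(np.prod(shape)))
    sens = A.sum(axis=0)
    for _ in range(n_iters):
        em = x * (A.T @ (y / (A @ x + eps))) / (sens + eps)   # plain EM update
        med = median_filter(x.reshape(shape), size=size).ravel() + eps
        x = em / (1.0 + beta * (x - med) / med)               # MRP damping
    return x.reshape(shape)
```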

    Improving quantification in non-TOF 3D PET/MR by incorporating photon energy information

    Hybrid PET/MR systems combine functional information obtained from positron emission tomography (PET) with anatomical information from magnetic resonance (MR) imaging. In spite of the advantages that such systems can offer, PET attenuation correction still represents one of the biggest challenges for imaging in the thorax. This is because the MR signal is not directly correlated with gamma-photon attenuation. In current practice, pre-defined population-based attenuation values are used. However, this approach is prone to errors in tissues such as the lung, where attenuation values vary both within and between patients. A way to overcome this limitation is to exploit the fact that stand-alone PET emission data contain information on both the distribution of the radiotracer and photon attenuation. However, attempts to estimate the attenuation map from emission data only have shown limited success unless time-of-flight PET data are available. Several groups have investigated the possibility of using scattered data as an additional source of information to overcome reconstruction ambiguities. This thesis presents work to extend these previous methods by using PET emission data acquired at multiple energy windows and incorporating prior information derived from MR. The thesis is organised as follows. We first cover the literature and mathematical theory relevant to the framework. Then, we present and discuss results for the case of attenuation estimation from scattered data only, when the activity distribution is known. We then give an overview of several candidates for joint reconstruction, which estimate both the activity and the attenuation from scattered and unscattered data. We present extensive results using simulated data and compare the proposed methods to state-of-the-art MLAA from a single-energy-window acquisition. We conclude with suggestions for future work to bring the proposed method into clinical practice.
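
    A small forward-model sketch shows why the emission data constrain both unknowns that MLAA-type joint reconstructions estimate: the expected non-TOF data are attenuated activity projections plus scatter. The dense projector `R` and the additive `scatter` term below are illustrative assumptions; the multi-energy-window scatter model developed in the thesis is not reproduced.

```python
import numpy as np

def expected_pet_data(R, lam, mu, scatter=None):
    """Expected (noiseless) non-TOF PET data for activity lam and attenuation mu:
    y_bar = exp(-R mu) * (R lam) + scatter, with R a (n_lor, n_vox) projector.
    Shows how emission data couple the two unknowns estimated jointly by
    MLAA-type methods.  Schematic only."""
    atten = np.exp(-(R @ mu))          # attenuation factor per line of response
    y_bar = atten * (R @ lam)
    if scatter is not None:
        y_bar = y_bar + scatter
    return y_bar
```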