734 research outputs found

    Task adapted reconstruction for inverse problems

    The paper considers the problem of performing a task defined on a model parameter that is only observed indirectly through noisy data in an ill-posed inverse problem. A key aspect is to formalize the steps of reconstruction and task as appropriate estimators (non-randomized decision rules) in statistical estimation problems. The implementation makes use of (deep) neural networks to provide a differentiable parametrization of the family of estimators for both steps. These networks are combined and jointly trained against suitable supervised training data in order to minimize a joint differentiable loss function, resulting in an end-to-end task-adapted reconstruction method. The suggested framework is generic, yet adaptable, with a plug-and-play structure for adjusting both the inverse problem and the task at hand. More precisely, the data model (forward operator and statistical model of the noise) associated with the inverse problem is exchangeable, e.g., by using a neural network architecture given by a learned iterative method. Furthermore, any task that is encodable as a trainable neural network can be used. The approach is demonstrated on joint tomographic image reconstruction and classification, and on joint tomographic image reconstruction and segmentation.
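The joint-training idea can be illustrated with a toy stand-in. In this minimal NumPy sketch, a Tikhonov-regularized estimator plays the role of the reconstruction network (its regularization strength `alpha` standing in for trainable weights), a fixed linear functional plays the role of the task network, and the end-to-end objective is a convex combination of reconstruction and task losses with weight `C`. All operators, the grid search in place of gradient-based training, and the specific loss weighting are illustrative assumptions, not the paper's trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-posed inverse problem: y = A x + noise,
# with A a wide (underdetermined) forward operator.
n, m = 8, 20
A = rng.normal(size=(n, m))
x_true = rng.normal(size=m)
y = A @ x_true + 0.01 * rng.normal(size=n)

def reconstruct(y, alpha):
    """Tikhonov-regularized estimator standing in for the learned
    reconstruction step; alpha plays the role of its parameters."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(m), A.T @ y)

def task(x):
    """Toy task operator (a fixed linear feature), standing in for
    the trainable task network."""
    return x[:5].sum()

def joint_loss(alpha, C):
    """End-to-end objective: convex combination of reconstruction
    and task losses, with task weight C in [0, 1]."""
    x_hat = reconstruct(y, alpha)
    L_recon = np.mean((x_hat - x_true) ** 2)
    L_task = (task(x_hat) - task(x_true)) ** 2
    return (1 - C) * L_recon + C * L_task

# "Training" by grid search over the single parameter alpha: a pure
# reconstruction objective (C=0) and a task-weighted one (C=0.9)
# generally select different estimators.
alphas = np.logspace(-3, 1, 50)
best_recon = min(alphas, key=lambda a: joint_loss(a, 0.0))
best_task = min(alphas, key=lambda a: joint_loss(a, 0.9))
```

The point of the sketch is only that the selected estimator depends on the downstream task weight, which is the motivation for training reconstruction and task jointly rather than sequentially.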

    ํ•ด๋ถ€ํ•™์  ์œ ๋„ PET ์žฌ๊ตฌ์„ฑ: ๋งค๋„๋Ÿฝ์ง€ ์•Š์€ ์‚ฌ์ „ ํ•จ์ˆ˜๋ถ€ํ„ฐ ๋”ฅ๋Ÿฌ๋‹ ์ ‘๊ทผ๊นŒ์ง€

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Medicine, Department of Medicine, 2021. 2. ์ด์žฌ์„ฑ.
    Advances in simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) technology have led to active investigation of anatomy-guided regularized PET image reconstruction algorithms based on MR images. Among the various priors proposed for anatomy-guided regularized PET image reconstruction, Bowsher's method, based on second-order smoothing priors, sometimes suffers from over-smoothing of detailed structures. Therefore, in this study, we propose a Bowsher prior based on the l1 norm, together with an iterative reweighting scheme, to overcome the limitations of the original Bowsher method. In addition, we derive a closed-form solution for iterative image reconstruction based on this non-smooth prior. A comparison study between the original l2 and the proposed l1 Bowsher priors was conducted using computer simulations and real human data. In both the simulations and the real-data application, small lesions with abnormal PET uptake were better detected by the proposed l1 Bowsher prior methods than by the original Bowsher prior. The original l2 Bowsher prior leads to decreased PET intensity in small lesions when there is no clear separation between the lesions and the surrounding tissue in the anatomical prior. In contrast, the proposed l1 Bowsher prior methods showed better contrast between the tumors and surrounding tissues owing to the intrinsic edge-preserving property of the prior, which is attributed to the sparseness induced by the l1 norm, especially in the iterative reweighting scheme. The proposed methods also demonstrated lower bias and less hyper-parameter dependency in PET intensity estimation in regions with matched anatomical boundaries in PET and MRI.
    Moreover, based on the formulation of the l1 Bowsher prior, an unrolled network containing the conventional maximum-likelihood expectation-maximization (ML-EM) module is also proposed. The convolutional layers successfully learned the distribution of anatomically-guided PET images, and the EM module corrected the intermediate outputs by comparing them with the sinograms. The proposed unrolled network showed better performance than an ordinary U-Net, with regional uptake that is less biased and less deviated. Therefore, these methods will help improve PET image quality based on anatomical side information.

    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can directly convolve with naturally blurred images for restoration. The problem of optical blurring is a common disadvantage to many imaging applications that suffer from optical imperfections. Despite numerous deconvolution methods that blindly estimate blurring in either inclusive or exclusive forms, they are practically challenging due to high computational cost and low image reconstruction quality. Both conditions of high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving. In such platforms, deblurring is required after image acquisition before images are stored, previewed, or processed for high-level interpretation. Therefore, on-the-fly correction of such images is important to avoid possible time delays, mitigate computational expenses, and increase image perception quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for two PSF models, Gaussian and Laplacian, that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods. Comment: 15 pages, for publication in IEEE Transactions on Image Processing.
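The idea of a deconvolution kernel built from even-derivative FIR filters can be sketched in 1D. For a Gaussian PSF, the inverse frequency response expands as 1/G(w) = exp(sigma^2 w^2 / 2) ~ 1 + (sigma^2/2) w^2 + (sigma^4/8) w^4, and the discrete second-derivative filter [1, -2, 1] has response -4 sin^2(w/2) ~ -w^2, so a truncated inverse is a linear combination of the identity, second-, and fourth-derivative filters. The truncation order, the 1D setting, and the noiseless step edge are assumptions for illustration; the paper's actual kernel design and its Gaussian low-pass denoising stage are not reproduced here.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized discrete Gaussian PSF."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def deblur_kernel(sigma):
    """One-shot deconvolution kernel as a linear combination of
    even-derivative FIR filters, with coefficients from a truncated
    Taylor expansion of the inverse Gaussian response."""
    d2 = np.array([1.0, -2.0, 1.0])   # 2nd-derivative FIR, response ~ -w^2
    d4 = np.convolve(d2, d2)          # 4th-derivative FIR, response ~ +w^4
    h = np.zeros(5); h[2] = 1.0       # identity (delta)
    h += -(sigma**2 / 2) * np.pad(d2, 1)   # contributes +(sigma^2/2) w^2
    h += (sigma**4 / 8) * d4               # contributes +(sigma^4/8) w^4
    return h

sigma = 1.0
blur = gaussian_kernel(sigma, 6)
h = deblur_kernel(sigma)

# Blur a step edge, then restore it with a single convolution:
# the edge transition becomes steeper again.
edge = np.concatenate([np.zeros(20), np.ones(20)])
blurred = np.convolve(edge, blur, mode="same")
restored = np.convolve(blurred, h, mode="same")
```

Because the restoration is one fixed convolution, it runs at the same cost as any linear filter, which is the motivation for using it in high-throughput pipelines instead of iterative deconvolution.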

    DEQ-MPI: A Deep Equilibrium Reconstruction with Learned Consistency for Magnetic Particle Imaging

    Magnetic particle imaging (MPI) offers unparalleled contrast and resolution for tracing magnetic nanoparticles. A common imaging procedure calibrates a system matrix (SM) that is used to reconstruct data from subsequent scans. The ill-posed reconstruction problem can be solved by simultaneously enforcing data consistency based on the SM and regularizing the solution based on an image prior. Traditional hand-crafted priors cannot capture the complex attributes of MPI images, whereas recent MPI methods based on learned priors can suffer from extensive inference times or limited generalization performance. Here, we introduce a novel physics-driven method for MPI reconstruction based on a deep equilibrium model with learned data consistency (DEQ-MPI). DEQ-MPI reconstructs images by augmenting neural networks into an iterative optimization, as inspired by unrolling methods in deep learning. Yet, conventional unrolling methods are computationally restricted to few iterations, resulting in non-convergent solutions, and they use hand-crafted consistency measures that can yield suboptimal capture of the data distribution. DEQ-MPI instead trains an implicit mapping to maximize the quality of a convergent solution, and it incorporates a learned consistency measure to better account for the data distribution. Demonstrations on simulated and experimental data indicate that DEQ-MPI achieves superior image quality and competitive inference time compared to state-of-the-art MPI reconstruction methods.
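The contrast between a fixed-depth unrolled network and an equilibrium model can be sketched with NumPy. Here one "layer" combines a gradient data-consistency step on a toy linear system with a hand-crafted contractive map standing in for the learned prior and learned consistency of DEQ-MPI (the 0.9 contraction, the step size, and the random system matrix are all illustrative assumptions). The equilibrium model is defined by the fixed point x* = f(x*, y) rather than by a fixed number of layers, so the same layer is simply iterated to convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MPI-style linear model: y = A x, with A a calibrated
# system matrix (here just a random stand-in).
n, m = 30, 20
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = rng.normal(size=m)
y = A @ x_true

def f(x, y, step=0.1):
    """One equilibrium-model layer: a gradient data-consistency step
    followed by a contractive 'prior' map standing in for the
    learned components of DEQ-MPI."""
    x = x - step * A.T @ (A @ x - y)   # data consistency w.r.t. the SM
    return 0.9 * x                      # toy contraction as the prior

def deq_solve(y, tol=1e-10, max_iter=5000):
    """Find the fixed point x* = f(x*, y) by Picard iteration; the
    output is a convergent solution, unlike a truncated unroll."""
    x = np.zeros(m)
    for _ in range(max_iter):
        x_next = f(x, y)
        if np.linalg.norm(x_next - x) < tol:
            break
        x = x_next
    return x

x_star = deq_solve(y)
# At the fixed point, applying the layer again changes (almost) nothing.
```

In the actual method the fixed point is found with an accelerated solver and the layer is trained end-to-end through the equilibrium via implicit differentiation; the sketch only shows the defining fixed-point structure.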
    • โ€ฆ
    corecore