9 research outputs found

    Image Recovery Using Partitioned-Separable Paraboloidal Surrogate Coordinate Ascent Algorithms

    Full text link
    Iterative coordinate ascent algorithms have been shown to be useful for image recovery, but are poorly suited to parallel computing due to their sequential nature. This paper presents a new fast-converging, parallelizable algorithm for image recovery that can be applied to a very broad class of objective functions. The method is based on paraboloidal surrogate functions and a concavity technique. The paraboloidal surrogates simplify the optimization problem, while the concavity technique partitions pixels into subsets that can be updated in parallel to reduce computation time. For fast convergence, pixels within each subset are updated sequentially using a coordinate ascent algorithm. The proposed algorithm is guaranteed to monotonically increase the objective function and intrinsically accommodates nonnegativity constraints. A global convergence proof is summarized. Simulation results show that the proposed algorithm requires less elapsed time for convergence than iterative coordinate ascent algorithms. With four parallel processors, the proposed algorithm yields a speedup factor of 3.77 relative to single-processor coordinate ascent algorithms for a three-dimensional (3-D) confocal image restoration problem.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86024/1/Fessler72.pd
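    The core mechanics the abstract describes, partitioning pixels across processors while updating pixels within each subset sequentially against a 1-D paraboloidal (quadratic) surrogate, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function names, the generic `grad`/`curv` callables, and the full-gradient recomputation are assumptions made for brevity.

```python
import numpy as np

def psca_step(x, grad, curv, subsets):
    """One outer pass of a partitioned-separable paraboloidal-surrogate
    coordinate-ascent sketch (illustrative, not the paper's exact method).

    x       : current image as a 1-D pixel array
    grad    : callable returning the objective's gradient at x
    curv    : callable returning per-pixel surrogate curvatures (> 0)
    subsets : index arrays partitioning the pixels; in the paper each
              subset would be dispatched to its own processor
    """
    for subset in subsets:       # conceptually parallel across subsets
        for j in subset:         # sequential coordinate ascent within a subset
            g = grad(x)[j]       # partial derivative at pixel j (a real
            c = curv(x)[j]       # implementation updates this incrementally)
            # Maximize the concave 1-D quadratic surrogate
            #   q(t) = g*(t - x[j]) - (c/2)*(t - x[j])**2;
            # clamping at zero enforces the nonnegativity constraint.
            x[j] = max(0.0, x[j] + g / c)
    return x
```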

    Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed ℓ1/ℓ2 Regularization

    Get PDF
    The ℓ1/ℓ2 ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works on blind deconvolution. Indeed, it benefits from a scale-invariance property that is highly desirable in the blind context. However, the ℓ1/ℓ2 function raises some difficulties when solving the nonconvex and nonsmooth minimization problems that result from using such a penalty term in current restoration methods. In this paper, we propose a new penalty based on a smooth approximation to the ℓ1/ℓ2 function. In addition, we develop a proximal-based algorithm to solve variational problems involving this function, and we derive theoretical convergence results. We demonstrate the effectiveness of our method through a comparison with a recent alternating optimization strategy dealing with the exact ℓ1/ℓ2 term, on an application to blind deconvolution of seismic data.
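    The paper's exact smoothing is not reproduced here, but the idea is easy to illustrate. A common construction, assumed below purely for illustration, smooths each norm separately (|x_i| ≈ sqrt(x_i² + α²) and ‖x‖₂ ≈ sqrt(‖x‖² + β²)) so the ratio becomes differentiable everywhere while remaining approximately scale-invariant:

```python
import numpy as np

def smoothed_l1_l2(x, alpha=1e-3, beta=1e-3):
    """Illustrative smooth surrogate for the l1/l2 ratio (an assumed
    smoothing, not the paper's exact definition)."""
    l1 = np.sum(np.sqrt(x**2 + alpha**2) - alpha)   # smoothed |x_i| terms
    l2 = np.sqrt(np.sum(x**2) + beta**2)            # smoothed ||x||_2
    return l1 / l2

def smoothed_l1_l2_grad(x, alpha=1e-3, beta=1e-3):
    """Gradient of the smoothed ratio via the quotient rule."""
    l1 = np.sum(np.sqrt(x**2 + alpha**2) - alpha)
    l2 = np.sqrt(np.sum(x**2) + beta**2)
    dl1 = x / np.sqrt(x**2 + alpha**2)              # d(l1)/dx, elementwise
    dl2 = x / l2                                    # d(l2)/dx
    return dl1 / l2 - l1 * dl2 / l2**2
```

    Because the smoothed ratio is differentiable, it can be handled by gradient-based or proximal splitting schemes that would not apply to the raw, nonsmooth ℓ1/ℓ2 term.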

    A Noise-Robust Method with Smoothed ℓ1/ℓ2 Regularization for Sparse Moving-Source Mapping

    Full text link
    The method described here performs blind deconvolution of the beamforming output in the frequency domain. To provide accurate blind deconvolution, sparsity priors are introduced through a smoothed ℓ1/ℓ2 regularization term. Because the mean of the noise in the power-spectrum domain depends on its variance in the time domain, the proposed method includes a variance estimation step, which makes the blind deconvolution more robust. The method is validated on both simulated and real data, and its performance is compared with two well-known methods from the literature: the deconvolution approach for the mapping of acoustic sources, and sound density modeling.
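    The debiasing idea is simple to illustrate: for white Gaussian noise, the time-domain variance fixes the mean noise floor added to the power spectrum (up to the periodogram normalization), so an estimate of that variance can be subtracted before deconvolution. The toy sketch below is an assumption, not the paper's estimator; in particular, it presumes access to a signal-free noise stretch.

```python
import numpy as np

def debias_power_spectrum(power, noise_samples):
    """Subtract an estimated noise floor from a beamforming power
    spectrum. For white Gaussian noise, the time-domain variance sets
    the mean of the noise contribution in the power-spectrum domain
    (up to the periodogram normalization, ignored in this sketch).
    `noise_samples` is assumed to be a signal-free stretch of data."""
    sigma2 = np.var(noise_samples)       # time-domain variance estimate
    debiased = power - sigma2            # remove the mean noise floor
    return np.maximum(debiased, 0.0)     # keep power estimates nonnegative
```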

    Effective Image Restorations Using a Novel Spatial Adaptive Prior

    Get PDF
    Bayesian or maximum a posteriori (MAP) approaches can effectively overcome the ill-posedness of image restoration or deconvolution by incorporating a priori image information. Many restoration methods with edge-preserving and noise-removing properties have been proposed, such as nonquadratic-prior Bayesian restoration and total variation regularization. However, these methods are often inefficient at restoring regions of continuous variation and at suppressing block artifacts. To address this, this paper proposes a Bayesian restoration approach with a novel spatial adaptive (SA) prior. By selectively and adaptively incorporating nonlocal image information into the SA prior model, the proposed method effectively suppresses the negative disturbance from irrelevant neighboring pixels and exploits the positive regularization from the relevant ones. A two-step restoration algorithm for the proposed approach is also given. Comparative experiments and analysis demonstrate that, in addition to high-quality edge-preserving and noise-removing properties, the proposed restoration also has good deblocking properties.
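    One common way to realize such a prior, assumed here purely for illustration (the paper's exact weighting is not reproduced), is to weight each neighbor by patch similarity, so relevant neighbors regularize strongly and irrelevant ones barely at all:

```python
import numpy as np

def nonlocal_weights(img, i, j, neighbors, patch=1, h=0.1):
    """Weight each neighbor of pixel (i, j) by patch similarity.
    Assumes (i, j) and all neighbors lie away from the image border."""
    def patch_at(r, c):
        return img[r - patch:r + patch + 1, c - patch:c + patch + 1]
    p0 = patch_at(i, j)
    w = np.array([np.exp(-np.sum((p0 - patch_at(r, c))**2) / h**2)
                  for r, c in neighbors])
    return w / w.sum()

def sa_prior_energy(img, i, j, neighbors, **kw):
    """Spatially adaptive quadratic penalty at pixel (i, j): similar
    neighbors pull strongly, dissimilar ones (e.g., across an edge)
    contribute little, which is what preserves edges."""
    w = nonlocal_weights(img, i, j, neighbors, **kw)
    diffs = np.array([img[i, j] - img[r, c] for r, c in neighbors])
    return np.sum(w * diffs**2)
```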

    Expectation Propagation for Poisson Data

    Get PDF
    The Poisson distribution arises naturally when dealing with data involving counts, and it has found many applications in inverse problems and imaging. In this work, we develop an approximate Bayesian inference technique based on expectation propagation for approximating the posterior distribution formed from the Poisson likelihood function and a Laplace type prior distribution, e.g., the anisotropic total variation prior. The approach iteratively yields a Gaussian approximation, and at each iteration, it updates the Gaussian approximation to one factor of the posterior distribution by moment matching. We derive explicit update formulas in terms of one-dimensional integrals, and also discuss stable and efficient quadrature rules for evaluating these integrals. The method is showcased on two-dimensional PET images.
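    The heart of the method is the per-factor moment match. A minimal sketch follows, assuming the update has already been reduced to one dimension along the factor's linear predictor s with Gaussian cavity N(m, v); the reduction itself, the fixed quadrature grid, and the assumption that the cavity places its mass on s > 0 are simplifications, and the paper's stabler quadrature rules are not reproduced.

```python
import numpy as np
from scipy.special import gammaln

def match_moments_poisson(y, m, v, ngrid=2001, width=10.0):
    """Moment-match one Poisson factor against a Gaussian cavity N(m, v)
    on the 1-D predictor s > 0. Tilted density:
        p(s) ∝ Poisson(y | s) * N(s; m, v).
    Returns the mean and variance of p via fixed-grid quadrature."""
    s = np.linspace(max(1e-12, m - width * np.sqrt(v)),
                    m + width * np.sqrt(v), ngrid)
    # Log of the unnormalized tilted density; gammaln avoids overflow.
    logp = -s + y * np.log(s) - gammaln(y + 1) - 0.5 * (s - m)**2 / v
    p = np.exp(logp - logp.max())        # stabilize before normalizing
    p /= np.trapz(p, s)
    mean = np.trapz(s * p, s)
    var = np.trapz((s - mean)**2 * p, s)
    return mean, var
```

    Each EP iteration replaces one factor's Gaussian approximation so that the resulting Gaussian matches these tilted moments, then moves to the next factor.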

    Quantitative Image Reconstruction Methods for Low Signal-To-Noise Ratio Emission Tomography

    Full text link
    Novel internal radionuclide therapies such as radioembolization (RE) with Y-90 loaded microspheres and targeted therapies labeled with Lu-177 offer a unique promise for personalized treatment of cancer, because imaging-based pre-treatment dosimetry can be used to determine administered activities that deliver tumoricidal absorbed doses to lesions while sparing critical organs. At present, however, such therapies are administered with fixed or empiric activities and little or no dosimetry planning. The main obstacles to dosimetry-guided personalized treatment in radionuclide therapies are the challenges and impracticality of quantitative emission tomography imaging and the lack of well-established dose-effect relationships, the latter potentially due to inaccuracies in quantitative imaging. While radionuclides for therapy have been chosen for their attractive characteristics for cancer treatment, their suitability for emission tomography imaging is less than ideal. For example, imaging of the almost pure beta emitter Y-90 involves SPECT via bremsstrahlung photons, which have a low and tissue-dependent yield, or PET via a very low-abundance positron emission (32 out of 1 million decays) that leads to a very low true-coincidence rate in the presence of a high singles rate from bremsstrahlung photons. Lu-177 emits gamma-rays suitable for SPECT, but they are low in intensity (113 keV: 6%; 208 keV: 10%), and only the higher-energy emission is generally used because of the large downscatter component associated with the lower-energy gamma-ray.
    The main aim of the research in this thesis is to improve the accuracy of quantitative PET and SPECT imaging of therapy radionuclides for dosimetry applications. Although PET is generally considered superior to SPECT for quantitative imaging, PET imaging of `non-pure' positron emitters can be complex. We focus on quantitative SPECT and PET imaging of two widely used therapy radionuclides, Lu-177 and Y-90, both of which pose challenges associated with low count-rates. The long-term goal of our work is to apply the methods we develop to patient imaging for dosimetry-based planning, to optimize the treatment either before therapy or after each cycle of therapy.
    For Y-90 PET/CT, we developed an image reconstruction formulation that relaxes the conventional image-domain nonnegativity constraint by instead imposing a positivity constraint on the predicted measurement mean; it demonstrated improved quantification in simulated patient studies. For Y-90 SPECT/CT, we propose a new reconstruction formulation that includes tissue-dependent probabilities for bremsstrahlung generation in the system matrix. In addition to these quantitative reconstruction methods developed specifically for each modality in Y-90 imaging, we propose a general image reconstruction method using a trained regularizer for low-count PET and SPECT, which we test on Y-90 and Lu-177 imaging. Our approach starts from the raw projection data and uses trained networks within the iterative image formation process. Specifically, we take a mathematics-based approach in which convolutional neural networks are embedded within the iterative reconstruction process arising from an optimization problem. We further extend the trained-regularization method with anatomical side information: the regularizer incorporates a segmentation mask generated by a trained segmentation network whose input is the co-registered CT image.
    Overall, the emission tomography methods proposed in this work are expected to enhance low-count PET and SPECT imaging of therapy radionuclides in patient studies, which will have value in establishing dose-response relationships and in developing imaging-based, dosimetry-guided treatment planning strategies in the future.
    PhD, Electrical and Computer Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/155171/1/hongki_1.pd
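    The Y-90 PET/CT relaxation mentioned above can be written compactly. The notation below (system matrix A, mean background r̄, Poisson log-likelihood L, floor ε) is standard emission tomography notation assumed here rather than copied from the thesis:

```latex
% Conventional Poisson reconstruction: nonnegativity in the image domain
\hat{x} = \arg\max_{x \ge 0} \; L\bigl(y;\, Ax + \bar{r}\bigr)

% Relaxed formulation: voxels may go negative, but the predicted
% measurement mean must stay positive
\hat{x} = \arg\max_{x} \; L\bigl(y;\, Ax + \bar{r}\bigr)
\quad \text{s.t.} \quad [Ax + \bar{r}]_i \ge \varepsilon > 0 \;\; \text{for all } i
```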

    Accelerated Optimization Algorithms for Statistical 3D X-ray Computed Tomography Image Reconstruction.

    Full text link
    X-ray computed tomography (CT) has been widely celebrated for its ability to visualize patient anatomy, but increasing radiation exposure to patients is a concern. Statistical image reconstruction algorithms for X-ray CT can provide improved image quality at reduced dose levels compared with conventional filtered back-projection (FBP) methods. However, the statistical approach requires substantial computation time, so this dissertation focuses on developing fast iterative algorithms for statistical reconstruction.
    Ordered subsets (OS) methods have been used widely in tomography problems because they reduce the computational cost by using only a subset of the measurement data per iteration, and they are already used in commercial PET and SPECT products. However, OS methods still require too long a reconstruction time in X-ray CT to be used routinely for every clinical scan. This dissertation proposes two main approaches for accelerating OS algorithms: new optimization transfer approaches, and combinations with momentum algorithms. First, the separable quadratic surrogate (SQS) method, an optimization transfer method widely used with OS methods, is accelerated in three different ways; among these, a nonuniform (NU) SQS method that encourages larger step sizes for the voxels expected to change most greatly accelerates OS methods. Second, combining OS methods with momentum approaches (OS-momentum), in a way that reuses previous updates at almost negligible added computation, yields a very fast convergence rate. This version builds on Nesterov's widely celebrated momentum methods. Because OS-momentum algorithms sometimes encountered instability, a diminishing step-size rule is adopted that improves stability while preserving the fast convergence rate. To further accelerate OS-momentum algorithms, this dissertation proposes novel momentum methods that are twice as fast yet have remarkably simple implementations comparable to Nesterov's methods.
    In addition to OS-type algorithms, a variant of the block coordinate descent (BCD) algorithm, called axial BCD (ABCD), is investigated; it is specifically designed for 3D CT geometry. Simulated and real patient 3D CT scans are used to examine the acceleration provided by the proposed algorithms.
    PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/109007/1/kimdongh_1.pd
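    The OS-momentum combination has a compact structure worth sketching. The skeleton below is illustrative, assuming scaled subset gradients and a precomputed diagonal SQS majorizer D; it uses the standard FISTA-style momentum sequence and omits the diminishing-step safeguard discussed above.

```python
import numpy as np

def os_sqs_momentum(x0, subset_grads, D, n_iters):
    """Sketch of ordered subsets + Nesterov-style momentum (OS-momentum).

    x0           : initial image (e.g., an FBP reconstruction)
    subset_grads : list of callables; subset_grads[m](x) returns the
                   m-th subset's cost gradient scaled by the number of
                   subsets, so it approximates the full gradient
    D            : diagonal SQS majorizer (per-voxel curvatures, > 0)
    """
    x = x0.copy()
    z = x0.copy()
    t = 1.0
    M = len(subset_grads)
    for _ in range(n_iters):
        for m in range(M):
            g = subset_grads[m](z)                 # approximate full gradient
            x_new = np.maximum(z - g / D, 0.0)     # SQS step + nonnegativity
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum update
            x, t = x_new, t_new
    return x
```

    Each subset step costs roughly 1/M of a full gradient evaluation, which is the source of the OS speedup; the momentum extrapolation then improves the convergence rate on top of it.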