681 research outputs found

    Improved Penalized Likelihood Reconstruction of Anatomically Correlated Emission Data

    Full text link
    This paper presents a method for incorporating anatomical NMR boundary side information into penalized maximum likelihood (PML) emission image reconstructions. The NMR boundary is parameterized as a periodic spline curve of fixed order and number of knots that is known a priori. Maximum likelihood (ML) estimation of the spline coefficients yields an “extracted” boundary, which is used to define a set of Gibbs weights on the emission image space. These weights, when coupled with a quadratic penalty function, create an edge-preserving penalty that incorporates our prior knowledge effectively. Qualitative analysis demonstrates that our method results in smooth images that do not suffer loss of edge contrast, while quantitative estimates of bias and variance for various values of the smoothing parameter show an improvement over standard quadratically penalized maximum likelihood.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/86028/1/Fessler138.pd
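
    A minimal sketch of the penalty construction described above, under toy assumptions of my own: a binary mask standing in for the extracted NMR boundary, illustrative neighbor weights (the near-zero value 1e-3 is my choice, not the paper's), and a first-order quadratic Gibbs penalty. This illustrates the idea of switching smoothing off across the anatomical edge, not the paper's exact algorithm.

    ```python
    import numpy as np

    def gibbs_weights(boundary_mask):
        """Neighbor weights derived from an extracted anatomical boundary:
        near zero where a horizontal/vertical pixel pair straddles the
        boundary, 1 elsewhere (both values illustrative)."""
        inside = boundary_mask.astype(float)
        w_h = np.where(inside[:, 1:] == inside[:, :-1], 1.0, 1e-3)  # left-right pairs
        w_v = np.where(inside[1:, :] == inside[:-1, :], 1.0, 1e-3)  # up-down pairs
        return w_h, w_v

    def quadratic_penalty(x, w_h, w_v):
        """Edge-preserving quadratic Gibbs penalty:
        R(x) = sum over neighbor pairs of w * (x_j - x_k)^2."""
        return ((w_h * (x[:, 1:] - x[:, :-1]) ** 2).sum()
                + (w_v * (x[1:, :] - x[:-1, :]) ** 2).sum())

    # Toy usage: the PML objective would be -loglik(y | Ax) + beta * R(x).
    mask = np.zeros((8, 8), dtype=bool)
    mask[2:6, 2:6] = True                 # toy "organ" region
    x = np.random.rand(8, 8)
    w_h, w_v = gibbs_weights(mask)
    print(quadratic_penalty(x, w_h, w_v))
    ```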

    Minimax Emission Computed Tomography using High-Resolution Anatomical Side Information and B-Spline Models

    Full text link
    In this paper a minimax methodology is presented for combining information from two imaging modalities having different intrinsic spatial resolutions. The focus application is emission computed tomography (ECT), a low-resolution modality for reconstruction of radionuclide tracer density, when supplemented by high-resolution anatomical boundary information extracted from a magnetic resonance image (MRI) of the same imaging volume. The MRI boundary within the two-dimensional (2-D) slice of interest is parameterized by a closed planar curve. The Cramér-Rao (CR) lower bound is used to analyze estimation errors for different boundary shapes. Under a spatially inhomogeneous Gibbs field model for the tracer density, a representation for the minimax MRI-enhanced tracer density estimator is obtained. It is shown that the estimator is asymptotically equivalent to a penalized maximum likelihood (PML) estimator with a resolution-selective Gibbs penalty. Quantitative comparisons are presented using the iterative space alternating generalized expectation maximization (SAGE-FM) algorithm to implement the PML estimator with and without minimax weight averaging.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/85822/1/Fessler86.pd
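
    For background, the SAGE-type algorithm mentioned above refines the classic ML-EM iteration for Poisson emission data. Below is the standard unpenalized ML-EM update (a textbook baseline, not the paper's SAGE-FM implementation), with A the system matrix and y the measured counts.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50, x0=None):
        """Classic ML-EM for Poisson emission data y ~ Poisson(Ax).
        Shown only as the unpenalized baseline; SAGE-type algorithms
        (as used in the paper) add the Gibbs penalty and faster,
        grouped-coordinate updates."""
        x = np.ones(A.shape[1]) if x0 is None else x0.astype(float).copy()
        sens = A.sum(axis=0)                      # A^T 1: per-pixel sensitivity
        for _ in range(n_iter):
            ybar = A @ x + 1e-12                  # expected counts, guarded
            x *= (A.T @ (y / ybar)) / sens        # multiplicative EM update
        return x

    # Toy usage with a 3-ray, 2-pixel system:
    A = np.array([[0.7, 0.3], [0.2, 0.8], [0.5, 0.5]])
    y = np.array([10.0, 12.0, 11.0])
    print(mlem(A, y))
    ```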

    Regularization for Uniform Spatial Resolution Properties in Penalized-Likelihood Image Reconstruction

    Full text link
    Traditional space-invariant regularization methods in tomographic image reconstruction using penalized-likelihood estimators produce images with nonuniform spatial resolution properties. The local point spread functions that quantify the smoothing properties of such estimators are space-variant, asymmetric, and object-dependent even for space-invariant imaging systems. The authors propose a new quadratic regularization scheme for tomographic imaging systems that yields increased spatial uniformity and is motivated by the least-squares fitting of a parameterized local impulse response to a desired global response. The authors have developed computationally efficient methods for PET systems with shift-invariant geometric responses. They demonstrate the increased spatial uniformity of this new method versus conventional quadratic regularization schemes in simulated PET thorax scans.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/85867/1/Fessler79.pd
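
    One common realization of this idea scales the penalty on each neighbor pair (j, k) by a data-dependent factor kappa_j * kappa_k, where kappa_j aggregates the statistical weight of the rays passing through pixel j. The sketch below uses a formula that is my assumption in the spirit of this approach, not quoted from the paper.

    ```python
    import numpy as np

    def certainty_weights(A, ybar):
        """Sketch of data-dependent penalty weights for more uniform
        spatial resolution (the exact formula is an assumption):
            kappa_j = sqrt( sum_i a_ij^2 / ybar_i  /  sum_i a_ij^2 )
        Weighting neighbor pair (j, k) by kappa_j * kappa_k counteracts
        the object-dependent smoothing of a plain quadratic penalty."""
        fisher_share = (A ** 2 / ybar[:, None]).sum(axis=0)  # information per pixel
        geom = (A ** 2).sum(axis=0) + 1e-12                  # geometric normalization
        return np.sqrt(fisher_share / geom)

    # Toy usage: penalty weight for the neighbor pair (0, 1).
    A = np.abs(np.random.rand(20, 4))
    ybar = A @ np.ones(4) + 1.0
    kappa = certainty_weights(A, ybar)
    w_01 = kappa[0] * kappa[1]
    ```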

    Recent Progress in Image Deblurring

    Full text link
    This paper comprehensively reviews recent developments in image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how to handle ill-posedness, a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite considerable progress, image deblurring, especially in the blind case, is limited by complex application conditions that make the blur kernel spatially variant and hard to estimate. This review provides a holistic understanding of and deep insight into image deblurring. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented.
    Comment: 53 pages, 17 figures
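
    As a concrete instance of the simplest case the review covers (non-blind, spatially invariant blur), here is textbook Wiener deconvolution in the Fourier domain; it is a classical baseline, not a method proposed by this paper, and the noise-to-signal ratio nsr is an assumed tuning parameter that regularizes the ill-posed inversion.

    ```python
    import numpy as np

    def wiener_deblur(blurry, psf, nsr=1e-2):
        """Textbook non-blind Wiener deconvolution for spatially
        invariant blur. `psf` must have the same shape as the image
        with its peak at the center; `nsr` is the assumed
        noise-to-signal power ratio regularizing the inverse filter."""
        H = np.fft.fft2(np.fft.ifftshift(psf))        # move PSF center to (0, 0)
        G = np.conj(H) / (np.abs(H) ** 2 + nsr)       # regularized inverse filter
        return np.real(np.fft.ifft2(G * np.fft.fft2(blurry)))
    ```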

    Learning and inverse problems: from theory to solar physics applications

    Get PDF
    The problem of approximating a function from a set of discrete measurements has been extensively studied since the seventies. Our theoretical analysis proposes a formalization of the function approximation problem which allows dealing with inverse problems and supervised kernel learning as two sides of the same coin. The proposed formalization accommodates arbitrary noisy data (deterministically or statistically defined), arbitrary loss functions (possibly seen as a log-likelihood), and both direct and indirect measurements. The core idea of this part relies on the analogy between statistical learning and inverse problems. One of the main pieces of evidence for the connection between these two areas is that regularization methods, usually developed for ill-posed inverse problems, can be used for solving learning problems. Furthermore, the spectral regularization convergence rate analyses provided in these two areas share the same source conditions but are carried out with either an increasing number of samples in learning theory or a decreasing noise level in inverse problems. More generally, regularization via sparsity-enhancing methods is widely used in both areas, and it is possible to apply well-known ℓ1-penalized methods for solving both learning and inverse problems. In the first part of the Thesis, we analyze this connection at three levels: (1) at an infinite-dimensional level, we define an abstract function approximation problem from which the two problems can be derived; (2) at a discrete level, we provide a unified formulation according to a suitable definition of sampling; and (3) at a convergence rates level, we provide a comparison between the convergence rates given in the two areas, by quantifying the relation between the noise level and the number of samples. In the second part of the Thesis, we focus on a specific class of problems where measurements are distributed according to a Poisson law. We provide a data-driven, asymptotically unbiased, and globally quadratic approximation of the Kullback-Leibler divergence, and we propose Lasso-type methods for solving sparse Poisson regression problems: PRiL, for Poisson Reweighted Lasso, and an adaptive version of this method, APRiL, for Adaptive Poisson Reweighted Lasso, proving consistency properties in estimation and variable selection, respectively. Finally, we consider two problems in solar physics: 1) the problem of forecasting solar flares (a learning application) and 2) the desaturation problem of solar flare images (an inverse problem application). The first application concerns the prediction of solar storms using images of the magnetic field on the Sun, in particular physics-based features extracted from active regions in data provided by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). The second application concerns the reconstruction problem of Extreme Ultra-Violet (EUV) solar flare images recorded by a second instrument on board SDO, the Atmospheric Imaging Assembly (AIA). We propose a novel sparsity-enhancing method, SE-DESAT, to reconstruct images affected by saturation and diffraction without using any a priori estimate of the background solar activity.
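
    A toy sketch in the spirit of the reweighted-ℓ1 idea behind PRiL/APRiL; it is not the thesis's algorithm (the actual methods build on a quadratic approximation of the Kullback-Leibler divergence), and the log link, step size, and reweighting rule here are my assumptions.

    ```python
    import numpy as np

    def soft(v, t):
        """Soft-thresholding: the proximity operator of the (weighted) l1 norm."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def reweighted_poisson_lasso(X, y, lam=0.1, outer=5, inner=300, lr=0.05):
        """Minimize (1/n) sum_i [exp(x_i.b) - y_i x_i.b] + lam * sum_j w_j |b_j|
        by proximal gradient, re-deriving w_j = 1/(|b_j| + eps) each round
        (adaptive reweighting in the spirit of APRiL)."""
        n, p = X.shape
        b, w = np.zeros(p), np.ones(p)
        for _ in range(outer):
            for _ in range(inner):
                grad = X.T @ (np.exp(X @ b) - y) / n   # gradient of Poisson NLL
                b = soft(b - lr * grad, lr * lam * w)  # prox of weighted l1
            w = 1.0 / (np.abs(b) + 1e-3)               # adaptive reweighting
        return b

    # Toy usage: sparse ground truth, Poisson responses with a log link.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10)) * 0.3
    b_true = np.zeros(10); b_true[:2] = 1.0
    y = rng.poisson(np.exp(X @ b_true)).astype(float)
    print(np.round(reweighted_poisson_lasso(X, y), 2))
    ```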

    Sparsity Promoting Regularization for Effective Noise Suppression in SPECT Image Reconstruction

    Get PDF
    The purpose of this research is to develop an advanced reconstruction method for low-count, hence high-noise, Single-Photon Emission Computed Tomography (SPECT) image reconstruction. It consists of a novel reconstruction model to suppress noise during reconstruction and an efficient algorithm to solve the model. A novel regularizer is introduced as a nonconvex denoising term based on the approximate sparsity of the image in a geometric tight frame transform domain. The deblurring term is based on the negative log-likelihood of the SPECT data model. To solve the resulting nonconvex optimization problem, a Preconditioned Fixed-point Proximity Algorithm (PFPA) is introduced. We prove that, under appropriate assumptions, PFPA converges to a local solution of the optimization problem at a global O(1/k) convergence rate. Substantial numerical results for simulated data are presented to demonstrate the superiority of the proposed method in denoising, artifact suppression, and reconstruction accuracy. We simulate noisy 2D SPECT data from two phantoms, hot Gaussian spheres on a random lumpy warm background and an anthropomorphic brain phantom, at high and low noise levels (64k and 90k counts, respectively), and reconstruct them using PFPA. We also perform limited comparative studies with selected competing state-of-the-art total variation (TV) and higher-order TV (HOTV) transform-based methods, as well as the widely used post-filtered maximum-likelihood expectation-maximization. We investigate the imaging performance of these methods using Contrast-to-Noise Ratio (CNR), Ensemble Variance Images (EVI), Background Ensemble Noise (BEN), Normalized Mean-Square Error (NMSE), and Channelized Hotelling Observer (CHO) detectability. Each of the competing methods is independently optimized for each metric. We establish that the proposed method outperforms the other approaches in all image quality metrics except NMSE, where it is matched by HOTV. The superiority of the proposed method is especially evident in the CHO detectability test results. We also qualitatively evaluate the images for the presence and severity of artifacts: the proposed method suppresses staircase artifacts better than the TV-based methods, although edge artifacts in high-contrast regions persist. We conclude that the proposed method may offer a powerful tool for detection tasks in high-noise SPECT imaging.
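
    A simplified stand-in for the optimization problem described above (not PFPA itself): proximal-gradient minimization of the Poisson negative log-likelihood plus a convex ℓ1 frame-sparsity term, assuming an orthonormal transform W so that soft-thresholding in the transform domain is the exact proximity operator. The paper's method instead handles a nonconvex penalty with preconditioning.

    ```python
    import numpy as np

    def prox_grad_spect(A, y, W, lam=0.05, step=1e-3, n_iter=200):
        """Sketch for  min_x  sum_i [(Ax)_i - y_i log (Ax)_i] + lam * ||Wx||_1,
        i.e. Poisson NLL (the "deblurring" term) plus an l1 sparsity term
        in the transform domain. Assumes W.T @ W = I (orthonormal rows),
        so the l1 prox is exact soft-thresholding of the coefficients."""
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            ybar = A @ x + 1e-9
            grad = A.T @ (1.0 - y / ybar)            # gradient of Poisson NLL
            z = W @ (x - step * grad)                # gradient step, to coeffs
            z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
            x = np.maximum(W.T @ z, 0.0)             # back-transform + nonnegativity
        return x

    # W can be any orthonormal transform, e.g. W = np.eye(n) reduces this
    # to pixel-domain sparsity (the tight-frame choice is the paper's).
    ```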