117 research outputs found

    Image Restoration for Long-Wavelength Imaging Systems


    Image Restoration

    This book presents a sample of recent contributions from researchers around the world in the field of image restoration. It consists of 15 chapters organized into three main sections (Theory, Applications, Interdisciplinarity). The topics cover different aspects of the theory of image restoration, but the book is also an occasion to highlight new research topics tied to the emergence of original imaging devices. These devices give rise to challenging image reconstruction/restoration problems that open the way to new fundamental scientific questions, closely related to the world we interact with.

    Conjugate-Gradient Preconditioning Methods for Shift-Variant PET Image Reconstruction

    Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e., for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as that arising from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85979/1/Fessler85.pd
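    As a minimal illustration of the preconditioning idea (a generic sketch on a toy quadratic problem, not the paper's PET-specific preconditioners), here is preconditioned conjugate gradients with a diagonal (Jacobi) preconditioner:

    ```python
    import numpy as np

    def preconditioned_cg(A, b, M_inv, n_iter=50, tol=1e-8):
        """Preconditioned CG for A x = b, A symmetric positive definite.

        M_inv : callable applying the inverse preconditioner M^{-1} to a vector.
        """
        x = np.zeros_like(b)
        r = b - A @ x                      # residual
        z = M_inv(r)                       # preconditioned residual
        p = z.copy()
        rz = r @ z
        for _ in range(n_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)          # exact line search for quadratics
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p      # conjugate direction update
            rz = rz_new
        return x

    # Jacobi preconditioning: scale the residual by the inverse of diag(A).
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = preconditioned_cg(A, b, lambda r: r / np.diag(A))
    ```

    A better preconditioner replaces `M_inv` with a cheaper approximation of the full Hessian inverse, which is exactly where the paper's shift-variant constructions come in.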

    Restoration of Poissonian Images Using Alternating Direction Optimization

    Much research has been devoted to the problem of restoring Poissonian images, namely for medical and astronomical applications. However, the restoration of these images using state-of-the-art regularizers (such as those based on multiscale representations or total variation) is still an active research area, since the associated optimization problems are quite challenging. In this paper, we propose an approach to deconvolving Poissonian images, which is based on an alternating direction optimization method. The standard regularization (or maximum a posteriori) restoration criterion, which combines the Poisson log-likelihood with a (non-smooth) convex regularizer (log-prior), leads to hard optimization problems: the log-likelihood is non-quadratic and non-separable, the regularizer is non-smooth, and there is a non-negativity constraint. Using standard convex analysis tools, we present sufficient conditions for existence and uniqueness of solutions of these optimization problems, for several types of regularizers: total-variation, frame-based analysis, and frame-based synthesis. We attack these problems with an instance of the alternating direction method of multipliers (ADMM), which belongs to the family of augmented Lagrangian algorithms. We study sufficient conditions for convergence and show that these are satisfied, either under total-variation or frame-based (analysis and synthesis) regularization. The resulting algorithms are shown to outperform alternative state-of-the-art methods, both in terms of speed and restoration accuracy.
    Comment: 12 pages, 12 figures, 2 tables. Submitted to the IEEE Transactions on Image Processing.
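    The splitting behind this approach can be sketched on a toy 1-D problem. The example below is an illustrative assumption, not the paper's algorithm: it substitutes a quadratic smoothness penalty for the non-smooth regularizers the paper analyzes, but keeps the key ingredient, the closed-form proximal step of the Poisson negative log-likelihood, inside an ADMM loop:

    ```python
    import numpy as np

    def poisson_prox(v, y, tau):
        """Prox of tau*(z - y*log z): the positive root of
        z**2 + (tau - v)*z - tau*y = 0 (closed form, keeps z >= 0)."""
        return 0.5 * ((v - tau) + np.sqrt((v - tau) ** 2 + 4 * tau * y))

    def admm_poisson_denoise(y, lam=5.0, rho=1.0, n_iter=100):
        """ADMM for min_x sum(x - y*log x) + (lam/2)*||D x||^2, 1-D signal."""
        n = len(y)
        D = np.diff(np.eye(n), axis=0)              # first-difference operator
        L = lam * D.T @ D + rho * np.eye(n)
        z = y.astype(float).clip(min=1e-3)
        u = np.zeros(n)
        for _ in range(n_iter):
            x = poisson_prox(z - u, y, 1.0 / rho)   # likelihood step
            z = np.linalg.solve(L, rho * (x + u))   # regularizer step
            u += x - z                              # scaled dual update
        return x

    rng = np.random.default_rng(0)
    truth = 10 + 5 * np.sin(np.linspace(0, 2 * np.pi, 64))
    y = rng.poisson(truth).astype(float)
    x_hat = admm_poisson_denoise(y)
    ```

    Swapping the quadratic `z`-update for a TV or frame-based prox recovers the structure of the paper's algorithms; the Poisson prox step is unchanged.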

    Bioimage informatics in STED super-resolution microscopy

    Optical microscopy is experiencing a renaissance. The diffraction limit, although still physically true, plays a minor role in the achievable resolution in far-field fluorescence microscopy: super-resolution techniques enable fluorescence microscopy at nearly molecular resolution. Modern (super-resolution) microscopy methods rely strongly on software. Software tools are needed all the way from data acquisition and data storage, through image reconstruction, restoration and alignment, to quantitative image analysis and image visualization. These tools play a key role in all aspects of microscopy today, and their importance will only increase in the coming years as microscopy gradually transitions from single cells to more complex and even living model systems. In this thesis, a series of bioimage informatics software tools are introduced for STED super-resolution microscopy. Tomographic reconstruction software, coupled with a novel image acquisition method, STED<, is shown to enable axial (3D) super-resolution imaging in a standard 2D-STED microscope. Software tools are introduced for STED super-resolution correlative imaging with transmission electron microscopes or atomic force microscopes. A novel method for automatically ranking image quality within microscope image datasets is introduced and used, for example, to select the best images in a STED microscope image dataset.
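    The ranking idea can be illustrated with a deliberately simple, hypothetical quality metric (the thesis's actual ranking method is not reproduced here): score each image by its normalised gradient energy and sort the dataset by score:

    ```python
    import numpy as np

    def quality_score(img):
        """Toy sharpness score: mean squared gradient magnitude, normalised
        by mean intensity so bright and dim images are comparable."""
        gy, gx = np.gradient(img.astype(float))
        return float(np.mean(gx ** 2 + gy ** 2) / (np.mean(img) ** 2 + 1e-12))

    def rank_images(stack):
        """Indices of images in `stack`, highest-scoring first."""
        return sorted(range(len(stack)), key=lambda i: -quality_score(stack[i]))

    # A flat field scores 0; an image containing an edge scores higher.
    flat = np.full((16, 16), 0.5)
    edge = np.zeros((16, 16))
    edge[:, 8:] = 1.0
    order = rank_images([flat, edge])
    ```

    Real microscope datasets would of course need a metric robust to noise and background, but the rank-then-select pipeline is the same.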

    Non-uniform resolution and partial volume recovery in tomographic image reconstruction methods

    Acquired data in tomographic imaging systems are subject to physical and detector-based image-degrading effects, which need to be considered and modeled in order to optimize resolution recovery. However, even accurate modeling of the physics of the data and acquisition processes still leads to an ill-posed reconstruction problem, because real data are incomplete and noisy. Real images are always a compromise between resolution and noise; therefore, noise processes also need to be fully considered for an optimum bias-variance trade-off. Image-degrading effects and noise are generally modeled in the reconstruction methods; statistical iterative methods can model these effects, together with the noise processes, better than analytical methods can. Regularization is used to condition the problem, and explicit regularization methods are considered better suited to modeling various noise processes, with extended control over the reconstructed image quality. Emission physics, through object distribution properties, is modeled in the form of a prior function. Smoothing and edge-preserving priors have been investigated in detail, and it has been shown that smoothing priors over-smooth images in high-count areas and result in a spatially non-uniform and nonlinear resolution response. A uniform resolution response is desirable for image comparison and other image processing tasks, such as segmentation and registration. This work proposes methods, based on median root priors (MRPs) in maximum a posteriori (MAP) estimators, to obtain images with almost uniform and linear resolution characteristics, using the nonlinearity of MRPs as a correction tool. Results indicate that MRPs perform better than quadratic priors (QPs) and total-variation (TV) priors in terms of response linearity, spatial uniformity and parameter sensitivity.
    Hybrid priors, comprising MRPs and QPs, have been developed and analyzed for their activity recovery performance in two popular partial volume correction (PVC) methods and for an analysis of list-mode data reconstruction methods, showing that MRPs perform better than QPs in different situations.
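    A common concrete form of an MRP in a MAP estimator is a one-step-late EM update. The sketch below is a 1-D toy version; the system matrix, neighbourhood size, and β value are illustrative assumptions, not the settings used in this work:

    ```python
    import numpy as np

    def local_median(x, w=1):
        """Median over each sample's (2*w+1)-point neighbourhood, edges clamped."""
        pad = np.pad(x, w, mode='edge')
        return np.median(np.lib.stride_tricks.sliding_window_view(pad, 2 * w + 1),
                         axis=1)

    def mrp_em(y, A, beta=0.3, n_iter=100):
        """One-step-late EM with a median root prior (MRP), 1-D sketch.

        Update: x_i <- x_i / (s_i + beta*(x_i - M_i)/M_i) * [A^T (y / (A x))]_i,
        where M_i is the local median of the current estimate, s_i = sum_j A_ji.
        The prior term vanishes wherever x_i equals its local median, which is
        what makes the MRP favour locally monotone (piecewise-smooth) images.
        """
        x = np.full(A.shape[1], y.mean())
        s = A.sum(axis=0)
        for _ in range(n_iter):
            M = np.maximum(local_median(x), 1e-12)
            ratio = A.T @ (y / np.maximum(A @ x, 1e-12))
            x = x / (s + beta * (x - M) / M) * ratio
        return x

    # Toy setup: identity system matrix, piecewise-constant activity, Poisson data.
    rng = np.random.default_rng(1)
    truth = np.repeat([10.0, 30.0], 32)
    y = rng.poisson(truth).astype(float)
    x_hat = mrp_em(y, np.eye(64))
    ```

    Because the prior penalizes deviation from the local median rather than from the local mean, the edge between the two activity levels survives while noise on the flat regions is suppressed.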

    AmbientFlow: Invertible generative models from incomplete, noisy measurements

    Generative models have gained popularity for their potential applications in imaging science, such as image reconstruction, posterior sampling and data sharing. Flow-based generative models are particularly attractive due to their ability to tractably provide exact density estimates along with fast, inexpensive and diverse samples. Training such models, however, requires a large, high-quality dataset of objects. In applications such as computed imaging, it is often difficult to acquire such data due to requirements such as long acquisition times or high radiation dose, while acquiring noisy or partially observed measurements of these objects is more feasible. In this work, we propose AmbientFlow, a framework for learning flow-based generative models directly from noisy and incomplete data, established using variational Bayesian methods. Extensive numerical studies demonstrate the effectiveness of AmbientFlow in correctly learning the object distribution, and its utility is demonstrated in a downstream inference task of image reconstruction.
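    As background for the "tractable exact density" property, a one-parameter-pair affine flow makes the change-of-variables likelihood concrete. This is a generic normalizing-flow sketch fit to clean samples, not AmbientFlow itself (which learns from noisy, incomplete measurements):

    ```python
    import numpy as np

    # Minimal 1-D affine flow: z = (x - mu) * exp(-log_sigma), base density N(0, 1).
    # Change of variables gives the exact log-density:
    #   log p(x) = -0.5 * (z**2 + log(2*pi)) - log_sigma
    rng = np.random.default_rng(0)
    x = rng.normal(3.0, 2.0, size=5000)          # stand-in "object" samples

    mu, log_sigma, lr = 0.0, 0.0, 0.1
    for _ in range(1000):                        # gradient ascent on mean log p(x)
        z = (x - mu) * np.exp(-log_sigma)
        mu += lr * np.mean(z * np.exp(-log_sigma))   # d/d(mu) of mean log p
        log_sigma += lr * np.mean(z ** 2 - 1.0)      # d/d(log_sigma) of mean log p
    ```

    After training, `mu` and `exp(log_sigma)` recover the data's location and spread; deeper flows stack many such invertible maps while keeping the log-density exact.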