
    Recent Progress in Image Deblurring

    Full text link
    This paper comprehensively reviews recent developments in image deblurring, covering non-blind/blind and spatially invariant/variant techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how to handle the ill-posedness that is a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite this progress, image deblurring, especially the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and spatially variant. This review provides a holistic understanding of and deep insight into image deblurring. An analysis of the empirical evidence for representative methods and practical issues, as well as a discussion of promising future directions, is also presented. Comment: 53 pages, 17 figures
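
    The standard non-blind, spatially invariant setting the survey describes models the observation as y = k * x + n, with a known blur kernel k. Below is a minimal Python sketch (not from the paper) of Richardson-Lucy deconvolution for that model; the toy image and kernel are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(y, k, n_iter=30, eps=1e-12):
    """Non-blind Richardson-Lucy deconvolution for y ~ Poisson(k * x)."""
    x = np.full_like(y, y.mean())      # flat, positive initial estimate
    k_flip = k[::-1, ::-1]             # adjoint of convolution = correlation
    for _ in range(n_iter):
        blurred = convolve2d(x, k, mode="same", boundary="symm")
        ratio = y / (blurred + eps)    # observed data / current prediction
        x *= convolve2d(ratio, k_flip, mode="same", boundary="symm")
    return x

# Toy usage: blur a synthetic square with a 5x5 box kernel, then deblur.
x_true = np.zeros((64, 64)); x_true[24:40, 24:40] = 1.0
kernel = np.ones((5, 5)) / 25.0
y = convolve2d(x_true, kernel, mode="same", boundary="symm")
x_hat = richardson_lucy(y, kernel)
```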

    Image Restoration for Remote Sensing: Overview and Toolbox

    Full text link
    Remote sensing provides valuable information about objects or areas from a distance in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to develop along different paths according to sensor type. This review paper brings together the advances of image restoration techniques, with particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We therefore provide a comprehensive, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to investigate the vibrant topic of data restoration, supplying sufficient detail and references. Additionally, this review paper is accompanied by a toolbox that provides a platform to encourage interested students and researchers in the field to further explore restoration techniques and accelerate progress in the community. The toolboxes are provided at https://github.com/ImageRestorationToolbox. Comment: This paper is under review in GRS
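
    The SAR branch of this literature typically starts from a multiplicative speckle model, y = x * s with unit-mean Gamma-distributed s, and the most basic restoration is multilook (local) averaging. The sketch below is a generic illustration and is not taken from the accompanying toolbox; the scene and look count are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)

# Multiplicative speckle model for L-look SAR intensity: y = x * s,
# with s ~ Gamma(shape=L, scale=1/L), so s has mean 1 and variance 1/L.
L = 1                                  # single-look: strongest speckle
x_true = np.ones((128, 128))
x_true[32:96, 32:96] = 4.0             # a bright square target
s = rng.gamma(shape=L, scale=1.0 / L, size=x_true.shape)
y = x_true * s

# Simplest restoration: boxcar multilooking (a local mean), trading
# spatial resolution for speckle reduction.
y_restored = uniform_filter(y, size=7)
```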

    Structure tensor total variation

    Get PDF
    We introduce a novel generic energy functional that we employ to solve inverse imaging problems within a variational framework. The proposed regularization family, termed structure tensor total variation (STV), penalizes the eigenvalues of the structure tensor and is suitable for both grayscale and vector-valued images. It generalizes several existing variational penalties, including the total variation seminorm and vectorial extensions of it. Meanwhile, thanks to the structure tensor's ability to capture first-order information around a local neighborhood, the STV functionals can provide more robust measures of image variation. Further, we prove that the STV regularizers are convex and satisfy several invariance properties w.r.t. image transformations. These properties qualify them as ideal candidates for imaging applications. In addition, for the discrete version of the STV functionals we derive an equivalent definition based on the patch-based Jacobian operator, a novel linear operator that extends the Jacobian matrix. This alternative definition allows us to derive a dual problem formulation. The duality of the problem paves the way for employing robust tools from convex optimization and enables us to design an efficient and parallelizable optimization algorithm. Finally, we present extensive experiments on various inverse imaging problems, where we compare our regularizers with other competing regularization approaches. Our results are shown to be systematically superior, both quantitatively and visually.
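
    As a rough numerical sketch of the quantity being penalized, the snippet below evaluates an STV-style energy in the Schatten-1 case, summing the square roots of the eigenvalues of a Gaussian-smoothed structure tensor at every pixel. This is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stv_energy(u, sigma=1.0):
    """STV-style energy (Schatten-1 case), a rough sketch.

    Penalizes the square roots of the eigenvalues of the Gaussian-smoothed
    structure tensor J = G_sigma * (grad u grad u^T) at every pixel.
    """
    ux, uy = np.gradient(u)
    # Smoothed structure tensor components
    jxx = gaussian_filter(ux * ux, sigma)
    jxy = gaussian_filter(ux * uy, sigma)
    jyy = gaussian_filter(uy * uy, sigma)
    # Eigenvalues of the symmetric 2x2 tensor at each pixel
    tr = jxx + jyy
    disc = np.sqrt(np.maximum((jxx - jyy) ** 2 + 4 * jxy ** 2, 0.0))
    lam1 = 0.5 * (tr + disc)
    lam2 = 0.5 * (tr - disc)
    # Schatten-1 norm of the "root" eigenvalues, summed over pixels
    return np.sum(np.sqrt(np.maximum(lam1, 0)) + np.sqrt(np.maximum(lam2, 0)))

# Toy usage on a ramp image: noise raises the energy.
rng = np.random.default_rng(0)
u = np.linspace(0, 1, 64)[None, :] * np.ones((64, 1))
print(stv_energy(u), stv_energy(u + 0.05 * rng.standard_normal(u.shape)))
```

    Note that without smoothing the structure tensor is rank one with nonzero eigenvalue |grad u|^2, so this penalty collapses to the usual total variation, consistent with STV generalizing TV.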

    First order algorithms in variational image processing

    Get PDF
    Variational methods in imaging are nowadays developing into a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term, depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth, convex functionals like the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived interest in techniques like operator splittings and augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications. Comment: 60 pages, 33 figures
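
    To make the splitting idea concrete, here is a minimal proximal-gradient (ISTA) sketch for the special case $\mathcal{D}(Ku) = \tfrac{1}{2}\|Ku - f\|^2$ and $\mathcal{R}(u) = \|u\|_1$: a gradient step on the smooth data term alternates with the $\ell_1$ proximal map (soft-thresholding). The operator and data are toy stand-ins, not from the paper.

```python
import numpy as np

def ista(K, f, alpha, step, n_iter=200):
    """Proximal gradient (ISTA) for  min_u 0.5*||K u - f||^2 + alpha*||u||_1.

    Each iteration splits the objective: gradient descent on the smooth
    data term, then the l1 prox (soft-thresholding) on the nonsmooth term.
    """
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)            # gradient of the data term
        v = u - step * grad                 # forward (explicit) step
        u = np.sign(v) * np.maximum(np.abs(v) - step * alpha, 0.0)  # prox
    return u

# Toy usage with a random forward operator; step <= 1/||K||^2 converges.
rng = np.random.default_rng(0)
K = rng.standard_normal((40, 100))
u_true = np.zeros(100); u_true[rng.choice(100, 5, replace=False)] = 1.0
f = K @ u_true + 0.01 * rng.standard_normal(40)
step = 1.0 / np.linalg.norm(K, 2) ** 2
u_hat = ista(K, f, alpha=0.1, step=step)
```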

    Segmentation-Driven Tomographic Reconstruction.

    Get PDF

    Computational Imaging with Limited Photon Budget

    Get PDF
    The capability of retrieving the image/signal of interest from extremely low photon flux is attractive in scientific, industrial, and medical imaging applications. Conventional imaging modalities and reconstruction algorithms rely on hundreds to thousands of photons per pixel (or per measurement) to ensure a sufficient signal-to-noise ratio (SNR) for extracting the image/signal of interest. Unfortunately, the potential for radiation or photon damage prohibits high-SNR measurements in dose-sensitive diagnosis scenarios. In addition, imaging systems that use inherently weak signals as the contrast mechanism, such as X-ray scattering-based tomography or attosecond pulse retrieval from the streaking trace, entail prolonged integration times to acquire hundreds of photons, rendering high-SNR measurement impractical. This dissertation addresses the problem of imaging from a limited photon budget when high-SNR measurements are either prohibitive or impractical. A statistical image reconstruction framework based on knowledge of the image-formation process and the noise model of the measurement system has been constructed and successfully demonstrated on two imaging platforms: photon-counting X-ray imaging, and attosecond pulse retrieval. For photon-counting X-ray imaging, the statistical image reconstruction framework achieves high-fidelity X-ray projection and tomographic image reconstruction from as few as 16 photons per pixel on average. The capability of our framework to model the reconstruction error opens the opportunity to design optimal strategies for distributing a fixed photon budget in region-of-interest (ROI) reconstruction, paving the way for radiation dose management in imaging-specific tasks. For attosecond pulse retrieval, a learning-based framework has been incorporated into the statistical image reconstruction to retrieve attosecond pulses from noisy streaking traces. A quantitative study of the signal-to-noise ratio required for satisfactory pulse retrieval, enabled by our framework, provides a guideline for future attosecond streaking experiments. In addition, resolving the ambiguities in the streaking process due to the carrier-envelope phase has also been demonstrated with our statistical reconstruction framework.
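
    A generic sketch of the kind of statistical reconstruction such photon-starved settings call for: maximum-likelihood expectation-maximization (MLEM) for a Poisson measurement model y ~ Poisson(Ax). This is a standard textbook update, not the dissertation's actual framework; the system matrix and the ~16-count budget are toy assumptions.

```python
import numpy as np

def mlem(A, y, n_iter=100, eps=1e-12):
    """Poisson maximum-likelihood EM (MLEM) reconstruction, a rough sketch.

    Model: y ~ Poisson(A x). The multiplicative update preserves
    nonnegativity, which matters at counts of ~tens per measurement.
    """
    x = np.ones(A.shape[1])              # positive initial estimate
    sens = A.T @ np.ones(A.shape[0])     # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)        # measured / predicted counts
        x *= (A.T @ ratio) / (sens + eps)
    return x

# Toy usage: a random "system matrix" and ~16 counts per measurement.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(60, 30))
x_true = rng.uniform(0.5, 1.5, size=30)
scale = 16.0 / (A @ x_true).mean()       # set the mean photon budget
y = rng.poisson(scale * (A @ x_true))
x_hat = mlem(scale * A, y.astype(float))
```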

    Large Scale Inverse Problems

    Get PDF
    This book is the second volume of a three-volume series recording the "Radon Special Semester 2011 on Multiscale Simulation & Analysis in Energy and the Environment" that took place in Linz, Austria, October 3-7, 2011. This volume addresses the common ground in the mathematical and computational procedures required for large-scale inverse problems and data assimilation in forefront applications. The solution of inverse problems is fundamental to a wide variety of applications such as weather forecasting, medical tomography, and oil exploration. Regularisation techniques are needed to ensure that solutions are of sufficient quality to be useful and are soundly theoretically based. This book addresses the common techniques required for all these applications and is thus truly interdisciplinary. The collection of survey articles focuses on the large inverse problems commonly arising in simulation and forecasting in the earth sciences.
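
    As a one-screen illustration of why regularisation is needed, the sketch below contrasts a naive least-squares solve against a Tikhonov-regularised one on an artificially ill-conditioned operator; every name and value is a toy assumption, not drawn from the book.

```python
import numpy as np

# An ill-conditioned K makes the naive least-squares solution blow up
# under tiny noise; the Tikhonov-regularised solve
#     min_u ||K u - f||^2 + alpha ||u||^2
# stays stable at the cost of some bias.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
K = U @ np.diag(np.logspace(0, -8, n)) @ U.T   # condition number ~1e8
u_true = rng.standard_normal(n)
f = K @ u_true + 1e-6 * rng.standard_normal(n)

u_naive = np.linalg.solve(K.T @ K, K.T @ f)                    # unstable
alpha = 1e-6
u_tik = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ f)  # stable
```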