56 research outputs found
The Secrets of Non-Blind Poisson Deconvolution
Non-blind image deconvolution has been studied for several decades but most
of the existing work focuses on blur instead of noise. In photon-limited
conditions, however, the excessive amount of shot noise makes traditional
deconvolution algorithms fail. To understand why, we present a systematic
analysis of the Poisson non-blind deconvolution
algorithms reported in the literature, covering both classical and deep
learning methods. We compile a list of five "secrets" highlighting the do's and
don'ts when designing algorithms. Based on this analysis, we build a
proof-of-concept method by combining the five secrets. We find that the new
method performs on par with some of the latest methods while outperforming some
older ones. Comment: Under submission at Transactions on Computational Imaging
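Among the classical algorithms such an analysis typically covers is the Richardson-Lucy iteration, the standard maximum-likelihood deconvolution under a Poisson noise model. A minimal 1-D sketch (for illustration only; this is not the paper's proof-of-concept method):

```python
import numpy as np

def richardson_lucy(y, k, n_iters=50, eps=1e-12):
    """Richardson-Lucy iteration for y = k * x under Poisson noise (1-D)."""
    k = k / k.sum()                  # normalized blur kernel
    k_flip = k[::-1]                 # adjoint (correlation) kernel
    x = np.full_like(y, y.mean())    # flat, positive initialization
    for _ in range(n_iters):
        ratio = y / (np.convolve(x, k, mode="same") + eps)
        x = x * np.convolve(ratio, k_flip, mode="same")
    return x
```

On noiseless blurred data this iteration progressively re-sharpens the signal; without regularization, however, it amplifies shot noise at high iteration counts, which is exactly the photon-limited failure mode the abstract describes.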
Block Coordinate Plug-and-Play Methods for Blind Inverse Problems
Plug-and-play (PnP) prior is a well-known class of methods for solving
imaging inverse problems by computing fixed-points of operators combining
physical measurement models and learned image denoisers. While PnP methods have
been extensively used for image recovery with known measurement operators,
there is little work on PnP for solving blind inverse problems. We address this
gap by presenting a new block-coordinate PnP (BC-PnP) method that efficiently
solves this joint estimation problem by introducing learned denoisers as priors
on both the unknown image and the unknown measurement operator. We present a
new convergence theory for BC-PnP compatible with blind inverse problems by
considering nonconvex data-fidelity terms and expansive denoisers. Our theory
analyzes the convergence of BC-PnP to a stationary point of an implicit
function associated with an approximate minimum mean-squared error (MMSE)
denoiser. We numerically validate our method on two blind inverse problems:
automatic coil sensitivity estimation in magnetic resonance imaging (MRI) and
blind image deblurring. Our results show that BC-PnP provides an efficient and
principled framework for using denoisers as PnP priors for jointly estimating
measurement operators and images.
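The alternating block structure described above can be sketched generically. Everything here is an illustrative stand-in: the gradient functions, the denoisers D_x and D_theta (the paper uses learned, approximately-MMSE denoisers), and the toy bilinear problem in the test are assumptions, not the paper's setup:

```python
import numpy as np

def bc_pnp(y, grad_x, grad_theta, D_x, D_theta, x0, theta0,
           gamma=0.05, n_iters=1000):
    """Block-coordinate PnP sketch: alternate proximal-gradient-style steps
    on the image x and the measurement-operator parameters theta, with the
    denoisers D_x and D_theta acting as priors on each block."""
    x, theta = x0.copy(), theta0.copy()
    for _ in range(n_iters):
        x = D_x(x - gamma * grad_x(x, theta, y))                  # image block
        theta = D_theta(theta - gamma * grad_theta(x, theta, y))  # operator block
    return x, theta
```

Even on a toy bilinear model y = theta * x (elementwise), where the operator is unknown, the alternation jointly fits both blocks once each carries a (here trivially simple) prior.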
Deep Unfolding with Normalizing Flow Priors for Inverse Problems
Many application domains, spanning from computational photography to medical
imaging, require recovery of high-fidelity images from noisy, incomplete, or
compressed measurements. State-of-the-art methods for solving these
inverse problems combine deep learning with iterative model-based solvers, a
concept known as deep algorithm unfolding. By combining a-priori knowledge of
the forward measurement model with learned (proximal) mappings based on deep
networks, these methods yield solutions that are both physically feasible
(data-consistent) and perceptually plausible. However, current proximal
mappings only implicitly learn such image priors. In this paper, we propose to
make these image priors fully explicit by embedding deep generative models in
the form of normalizing flows within the unfolded proximal gradient algorithm.
We demonstrate that the proposed method outperforms competitive baselines on
various image recovery tasks, spanning from image denoising to inpainting and
deblurring.
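The unrolled proximal gradient scheme can be sketched as follows. For illustration the learned, flow-based proximal mapping is replaced here by analytic soft-thresholding (i.e., a sparsity prior); the actual method learns this mapping per layer:

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (the stand-in prior)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def unfolded_pgd(y, A, n_layers=10, tau=0.05):
    """Unrolled proximal gradient: each loop iteration is one network layer.
    The gradient step enforces data consistency with the forward model A;
    the proximal step applies the prior."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x - eta * A.T @ (A @ x - y), eta * tau)
    return x
```

This is why unfolded solutions are data-consistent by construction: the measurement model A appears explicitly inside every layer, while the proximal mapping carries the image prior.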
Explaining Image Enhancement Black-Box Methods through a Path Planning Based Algorithm
Nowadays, image-to-image translation methods are the state of the art for
the enhancement of natural images. Although they usually show high performance
in terms of accuracy, they often suffer from limitations such as the
generation of artifacts and poor scalability to high resolutions. Moreover,
their main drawback is their completely black-box nature, which gives the
final user no insight into the enhancement processes applied. In this paper
we present a path planning algorithm which provides a
step-by-step explanation of the output produced by state of the art enhancement
methods, overcoming this black-box limitation. This algorithm, called eXIE, uses a
variant of the A* algorithm to emulate the enhancement process of another
method through the application of an equivalent sequence of enhancing
operators. We applied eXIE to explain the output of several state-of-the-art
models trained on the Five-K dataset, obtaining sequences of enhancing
operators that produce very similar results while overcoming the poor
interpretability of the best-performing algorithms.
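An A*-style search over sequences of enhancing operators can be sketched roughly as below. The operator set, unit step cost, and L2-gap heuristic are simplified assumptions for illustration, not eXIE's actual design:

```python
import heapq
import itertools
import numpy as np

def exie_like_search(src, target, ops, max_len=6, budget=500):
    """Search for a short sequence of enhancement operators mapping `src`
    close to `target`. `ops` maps operator names to image->image functions;
    cost g = number of operators applied, heuristic h = L2 gap to target."""
    counter = itertools.count()            # tie-breaker for the heap
    def h(img):
        return float(np.linalg.norm(img - target))
    heap = [(h(src), next(counter), 0, [], src)]
    best_gap, best_seq = h(src), []
    expanded = 0
    while heap and expanded < budget:
        f, _, g, seq, img = heapq.heappop(heap)
        expanded += 1
        gap = h(img)
        if gap < best_gap:
            best_gap, best_seq = gap, seq
        if gap < 1e-3 or len(seq) >= max_len:
            continue
        for name, op in ops.items():
            nxt = np.clip(op(img), 0.0, 1.0)   # keep pixel values in [0, 1]
            heapq.heappush(heap, (g + 1 + h(nxt), next(counter),
                                  g + 1, seq + [name], nxt))
    return best_seq, best_gap
```

The returned operator sequence is the explanation: each named step is a human-readable enhancement that, applied in order, reproduces the black-box output.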
Unrolling of Graph Total Variation for Image Denoising
While deep learning has enabled effective solutions in image denoising, in general its implementations rely heavily on training data and require tuning of a large parameter set. In this thesis, a hybrid design that combines graph signal filtering with feature learning is proposed. It utilizes interpretable analytical low-pass graph filters and employs 80% fewer parameters than a state-of-the-art DL denoising scheme called DnCNN. Specifically, to construct a graph for graph spectral filtering, a CNN is used to learn features per pixel, and feature distances are then computed to establish edge weights. Given a constructed graph, a convex optimization problem for denoising with a graph total variation prior is formulated. Its solution is interpreted as an iterative procedure applying a graph low-pass filter with an analytical frequency response. For fast implementation, this response is realized by Lanczos approximation. This method outperformed DnCNN by up to 3dB in PSNR in the statistical mismatch case.
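The graph-construction and filtering pipeline above can be sketched as follows. For brevity the learned CNN features are replaced by raw pixel intensities, the graph is fully connected, and the filter is applied by a direct solve rather than the Lanczos approximation:

```python
import numpy as np

def gaussian_edge_weights(feats, sigma=1.0):
    """Edge weights w_ij = exp(-||f_i - f_j||^2 / sigma) from per-pixel
    feature vectors (shape n x d); the thesis learns feats with a CNN."""
    d2 = np.sum((feats[:, None, :] - feats[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / sigma)
    np.fill_diagonal(W, 0.0)        # no self-loops
    return W

def graph_lowpass(x, W, mu=0.5):
    """One analytical low-pass step x_out = (I + mu L)^{-1} x, with graph
    Laplacian L = D - W; a stand-in for the unrolled GTV solver."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.solve(np.eye(len(x)) + mu * L, x)
```

Because edges connect pixels with similar features, the filter averages within self-similar regions while leaving sharp transitions between dissimilar regions largely intact.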
A Multi-scale Generalized Shrinkage Threshold Network for Image Blind Deblurring in Remote Sensing
Remote sensing images are essential for many earth science applications, but
their quality can be degraded due to limitations in sensor technology and
complex imaging environments. To address this, various remote sensing image
deblurring methods have been developed to restore sharp, high-quality images
from degraded observational data. However, most traditional model-based
deblurring methods require predefined hand-crafted prior assumptions,
which are difficult to handle in complex applications, and most deep
learning-based deblurring methods are designed as a black box, lacking
transparency and interpretability. In this work, we propose a novel blind
deblurring learning framework based on alternating iterations of shrinkage
thresholds, alternately updating the blur kernel and the image, which provides
a theoretical foundation for the network design. Additionally, we propose a learnable
blur kernel proximal mapping module to improve the blur kernel evaluation in
the kernel domain. Then, we propose a deep proximal mapping module in the
image domain, which combines a generalized shrinkage threshold operator and a
multi-scale prior feature extraction block. This module also introduces an
attention mechanism to adaptively adjust the prior importance, thus avoiding
the drawbacks of hand-crafted image prior terms. Thus, a novel multi-scale
generalized shrinkage threshold network (MGSTNet) is designed to specifically
focus on learning deep geometric prior features to enhance image restoration.
Experiments demonstrate the superiority of our MGSTNet framework on remote
sensing image datasets compared to existing deblurring methods. Comment: 12 pages
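The abstract does not specify the generalized shrinkage threshold operator. One common generalization is p-shrinkage, which reduces to ordinary soft-thresholding at p = 1; the sketch below is under that assumption, and MGSTNet's learned operator may differ:

```python
import numpy as np

def p_shrink(z, tau, p=0.5):
    """p-generalized shrinkage: sign(z) * max(|z| - tau^{2-p} |z|^{p-1}, 0).
    At p = 1 this is exactly the soft-threshold sign(z) * max(|z| - tau, 0);
    for p < 1 large coefficients are shrunk less, preserving strong edges."""
    az = np.maximum(np.abs(z), 1e-12)   # guard the |z|^{p-1} term at zero
    mag = np.maximum(az - tau ** (2 - p) * az ** (p - 1), 0.0)
    return np.sign(z) * mag
```

In a shrinkage-threshold network, such an operator plays the role of the proximal step that sparsifies transform-domain coefficients between data-consistency updates.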