Discriminative Transfer Learning for General Image Restoration
Recently, several discriminative learning approaches have been proposed for
effective image restoration, achieving a convincing trade-off between image
quality and computational efficiency. However, these methods require separate
training for each restoration task (e.g., denoising, deblurring, demosaicing)
and problem condition (e.g., noise level of input images). This makes it
time-consuming and difficult to encompass all tasks and conditions during
training. In this paper, we propose a discriminative transfer learning method
that incorporates formal proximal optimization and discriminative learning for
general image restoration. The method requires a single-pass training and
allows for reuse across various problems and conditions while achieving an
efficiency comparable to previous discriminative approaches. Furthermore, after
being trained, our model can be easily transferred to new likelihood terms to
solve untrained tasks, or be combined with existing priors to further improve
image restoration quality.
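The alternation described above, a formal proximal step on the likelihood term interleaved with a learned denoising step acting as the prior, can be sketched in a few lines. This is a hedged illustration, not the paper's trained model: the `box_denoise` moving average stands in for the discriminatively learned prior, and the quadratic data term for one task-specific likelihood. Swapping `grad_data` changes the task while the denoiser is reused unchanged, which mirrors the transfer idea.

```python
import numpy as np

def prox_grad_restore(y, grad_data, denoise, step=0.5, n_iter=50):
    """Proximal-gradient restoration sketch: alternate a gradient step
    on the data (likelihood) term with a denoiser acting as the prox
    of the prior. Changing grad_data changes the restoration task;
    the denoiser is reused across tasks."""
    x = y.copy()
    for _ in range(n_iter):
        x = x - step * grad_data(x)   # descend the likelihood term
        x = denoise(x)                # prox of the (learned) prior
    return x

# Toy stand-ins (assumptions, not the paper's learned components):
def box_denoise(x):
    # 3-tap moving average playing the role of the learned prior
    return np.convolve(x, np.ones(3) / 3, mode="same")

clean = np.sin(np.linspace(0, 4 * np.pi, 200))
rng = np.random.default_rng(0)
noisy = clean + 0.3 * rng.standard_normal(200)
# Denoising task: gradient of 0.5*||x - noisy||^2 is (x - noisy)
restored = prox_grad_restore(noisy, lambda x: x - noisy, box_denoise)
```

The same `box_denoise` could be reused with a deblurring data term by replacing the lambda with the gradient of a blur-fit objective.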
Post-Reconstruction Deconvolution of PET Images by Total Generalized Variation Regularization
Improving the quality of positron emission tomography (PET) images, affected
by low resolution and high level of noise, is a challenging task in nuclear
medicine and radiotherapy. This work proposes a restoration method, achieved
after tomographic reconstruction of the images and targeting clinical
situations where raw data are often not accessible. Based on inverse problem
methods, our contribution introduces the recently developed total generalized
variation (TGV) norm to regularize PET image deconvolution. Moreover, we
stabilize this procedure with additional image constraints such as positivity
and photometry invariance. A criterion for automatically updating and
adjusting the regularization parameter in the case of Poisson noise is also
presented.
Experiments are conducted on both synthetic data and real patient images.
Comment: First published in the Proceedings of the 23rd European Signal
Processing Conference (EUSIPCO-2015) in 2015, published by EURASIP.
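A minimal sketch of the variational scheme the abstract describes, with plain first-order total variation standing in for the TGV norm (an intentional simplification) and projected-gradient steps enforcing the two stabilizing constraints named above: positivity and photometry invariance. The PSF, the 1-D "lesion" signal, and all parameter values are illustrative assumptions, not the paper's clinical setup.

```python
import numpy as np

def tv_deconv(y, psf, lam=0.05, step=0.2, n_iter=200):
    """Projected-gradient deconvolution sketch: least-squares data
    term plus a smoothed 1-D total-variation penalty (plain TV in
    place of TGV), with positivity and photometry (flux) invariance
    enforced by projection after each step."""
    def conv(x):      # circular blur via FFT
        return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf, len(x))))
    def conv_T(x):    # adjoint of circular convolution
        return np.real(np.fft.ifft(np.fft.fft(x)
                                   * np.conj(np.fft.fft(psf, len(x)))))
    flux = y.sum()
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = d / np.sqrt(d * d + 1e-8)   # gradient of smoothed |.|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g
        tv_grad[1:] += g
        x = x - step * (conv_T(conv(x) - y) + lam * tv_grad)
        x = np.maximum(x, 0.0)               # positivity
        x *= flux / max(x.sum(), 1e-12)      # photometry invariance
    return x

psf = np.array([0.25, 0.5, 0.25])            # assumed blur kernel
truth = np.zeros(64)
truth[20:30] = 1.0                           # a bright "lesion"
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf, 64)))
recovered = tv_deconv(blurred, psf)
```

Because the PSF sums to one, circular convolution preserves total flux, so the photometry projection is consistent with the data.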
Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling
Learning to denoise has emerged as a prominent paradigm to design
state-of-the-art deep generative models for natural images. How to use it to
model the distributions of both continuous real-valued data and categorical
data has been well studied in recently proposed diffusion models. However,
this paper finds that it has limited ability in modeling some other types of
data, such as count and non-negative continuous data, which are often highly
sparse, skewed, heavy-tailed, and/or overdispersed. To this end, we propose
learning to jump as a general recipe for generative modeling of various types
of data. A forward count thinning process constructs the learning objectives
used to train a deep neural network, while a reverse count thickening process
iteratively refines the generation through that network.
We demonstrate when learning to jump is expected to perform comparably to
learning to denoise, and when it is expected to perform better. For example,
learning to jump is recommended when the training data is non-negative and
exhibits strong sparsity, skewness, heavy-tailedness, and/or heterogeneity.
Comment: ICML 202
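The forward corruption this paper builds on, binomial count thinning, is easy to simulate: each unit of count survives independently with probability alpha, so the expected count shrinks linearly with the thinning level. The sketch below shows the forward process only; the learned reverse thickening network is not modeled, and the negative-binomial data and thinning schedule are assumed values chosen to match the sparse, overdispersed regime the abstract targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def thin(x0, alpha, rng):
    """Forward 'jump' corruption: binomial thinning keeps each unit of
    count independently with probability alpha, so E[x_t] = alpha * x0.
    The reverse 'thickening' step would add counts back via a network;
    it is not modeled in this sketch."""
    return rng.binomial(x0, alpha)

# Sparse, overdispersed count data (negative binomial, mean 9, var 90)
x0 = rng.negative_binomial(n=1, p=0.1, size=100_000)

alphas = [1.0, 0.5, 0.1]   # assumed thinning schedule
means = [thin(x0, a, rng).mean() for a in alphas]
```

Unlike Gaussian noising, thinning keeps every corrupted sample a valid non-negative integer count, which is the point of the "jump" recipe for this data type.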
QuaSI: Quantile Sparse Image Prior for Spatio-Temporal Denoising of Retinal OCT Data
Optical coherence tomography (OCT) enables high-resolution and non-invasive
3D imaging of the human retina but is inherently impaired by speckle noise.
This paper introduces a spatio-temporal denoising algorithm for OCT data on a
B-scan level using a novel quantile sparse image (QuaSI) prior. To remove
speckle noise while preserving image structures of diagnostic relevance, we
implement our QuaSI prior via median filter regularization coupled with a Huber
data fidelity model in a variational approach. For efficient energy
minimization, we develop an alternating direction method of multipliers (ADMM)
scheme using a linearization of median filtering. Our spatio-temporal method
can handle both the denoising of single B-scans and of temporally consecutive
B-scans to obtain volumetric OCT data with an enhanced signal-to-noise ratio.
Using only 4 B-scans, our algorithm achieved performance comparable to
averaging 13 B-scans and outperformed other current denoising methods.
Comment: submitted to MICCAI'1
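A simplified, hedged sketch of the idea above: a Huber data-fidelity term combined with a penalty pulling the image toward its own median filter (the role played by the QuaSI prior), minimized here by plain gradient descent with the median treated as a fixed operator at each step, rather than the paper's ADMM scheme with linearized median filtering. The 1-D piecewise-constant signal and all parameters are illustrative assumptions.

```python
import numpy as np

def median3(x):
    """3-tap sliding median (edge-replicated): the structure-preserving
    smoother behind the median-filter regularizer in this sketch."""
    p = np.pad(x, 1, mode="edge")
    return np.median(np.stack([p[:-2], p[1:-1], p[2:]]), axis=0)

def huber_grad(r, delta=0.1):
    """Gradient of the Huber data term: quadratic near zero,
    linear (outlier-robust) beyond delta."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def median_reg_denoise(y, lam=1.0, step=0.4, n_iter=100):
    """Gradient-descent sketch of median-filter-regularized denoising
    with a Huber data term; the median is linearized (held fixed)
    within each gradient step."""
    x = y.copy()
    for _ in range(n_iter):
        x = x - step * (huber_grad(x - y) + lam * (x - median3(x)))
    return x

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, 0.2, 0.8], 50)  # piecewise-constant line
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = median_reg_denoise(noisy)
```

The median-based penalty suppresses speckle-like outliers while leaving step edges (the diagnostically relevant structure) largely untouched, which a quadratic smoothness prior would blur.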
Sparse image reconstruction on the sphere: analysis and synthesis
We develop techniques to solve ill-posed inverse problems on the sphere by
sparse regularisation, exploiting sparsity in both axisymmetric and directional
scale-discretised wavelet space. Denoising, inpainting, and deconvolution
problems, and combinations thereof, are considered as examples. Inverse
problems are solved in both the analysis and synthesis settings, with a number
of different sampling schemes. The most effective approach is that with the
most restricted solution-space, which depends on the interplay between the
adopted sampling scheme, the selection of the analysis/synthesis problem, and
any weighting of the l1 norm appearing in the regularisation problem. More
efficient sampling schemes on the sphere improve reconstruction fidelity by
restricting the solution-space and also by improving sparsity in wavelet space.
We apply the technique to denoise Planck 353 GHz observations, improving the
ability to extract the structure of Galactic dust emission, which is important
for studying Galactic magnetism.
Comment: 11 pages, 6 Figures
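When the analysis operator is orthogonal, the l1-regularized denoising problem has a closed-form solution: soft-thresholding of the transform coefficients. The sketch below uses an orthonormal DCT on a 1-D signal in place of the axisymmetric or directional scale-discretised wavelets on the sphere, purely so the example is small and runnable; the thresholding step is the same ingredient the weighted-l1 regularization above relies on.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, standing in for the scale-discretised
    wavelet transform used on the sphere in the paper."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    D = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    D[0] *= np.sqrt(1.0 / n)
    D[1:] *= np.sqrt(2.0 / n)
    return D

def sparse_denoise(y, lam):
    """argmin_x 0.5*||x - y||^2 + lam*||Dx||_1 reduces to
    soft-thresholding of the coefficients when D is orthogonal."""
    D = dct_matrix(len(y))
    c = D @ y
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft-threshold
    return D.T @ c

rng = np.random.default_rng(2)
clean = np.cos(2 * np.pi * np.arange(128) * 3 / 128)  # sparse in DCT
noisy = clean + 0.2 * rng.standard_normal(128)
denoised = sparse_denoise(noisy, lam=0.5)
```

For non-orthogonal analysis operators, or the inpainting and deconvolution settings above, this single thresholding step becomes one iteration of a proximal splitting algorithm instead of a closed-form solution.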
Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision
Denoising diffusion models are a powerful class of generative models used to
capture complex distributions of real-world signals. However, their
applicability is limited to scenarios where training samples are readily
available, which is not always the case in real-world applications. For
example, in inverse graphics, the goal is to generate samples from a
distribution of 3D scenes that align with a given image, but ground-truth 3D
scenes are unavailable and only 2D images are accessible. To address this
limitation, we propose a novel class of denoising diffusion probabilistic
models that learn to sample from distributions of signals that are never
directly observed. Instead, these signals are measured indirectly through a
known differentiable forward model, which produces partial observations of the
unknown signal. Our approach involves integrating the forward model directly
into the denoising process. This integration effectively connects the
generative modeling of observations with the generative modeling of the
underlying signals, allowing for end-to-end training of a conditional
generative model over signals. During inference, our approach enables sampling
from the distribution of underlying signals that are consistent with a given
partial observation. We demonstrate the effectiveness of our method on three
challenging computer vision tasks. For instance, in the context of inverse
graphics, our model enables direct sampling from the distribution of 3D scenes
that align with a single 2D input image.
Comment: Project page: https://diffusion-with-forward-models.github.io
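The core idea, supervising the denoiser's output only after pushing it through the known differentiable forward model, can be demonstrated on a linear toy problem where the gradient is available in closed form (no autograd needed). Everything here is an illustrative assumption rather than the paper's architecture: 2-D signals, a 1x2 linear forward model `A`, a single diffusion timestep, and a linear "denoiser" trained by hand-derived gradient descent. The signals `x0` are generated only to simulate observations and are never used in the loss.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ground-truth signals x0 in R^2 are never observed directly; we only
# see y = x0 @ A.T through a known linear forward model A.
A = np.array([[1.0, 1.0]])                       # assumed forward model
x0 = rng.standard_normal((4096, 2)) @ np.array([[1.0, 0.5], [0.5, 1.0]])
y = x0 @ A.T                                     # partial observations

# One diffusion timestep: x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps
abar = 0.5
xt = np.sqrt(abar) * x0 + np.sqrt(1 - abar) * rng.standard_normal(x0.shape)

# Train a linear "denoiser" x0_hat = xt @ W.T by matching the forward
# model's output to the observation -- never touching x0 itself.
W = np.zeros((2, 2))
lr = 0.05

def loss(W):
    return np.mean(np.sum((xt @ W.T @ A.T - y) ** 2, axis=1))

loss0 = loss(W)
for _ in range(300):
    r = xt @ W.T @ A.T - y                   # residual in obs. space
    grad = 2 * (A.T @ r.T @ xt) / len(xt)    # closed-form d loss / d W
    W -= lr * grad
loss1 = loss(W)
```

The training signal flows entirely through `A`, which is the mechanism that lets the model learn a distribution over signals it has never seen directly.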