Optimising Spatial and Tonal Data for PDE-based Inpainting
Some recent methods for lossy signal and image compression store only a few
selected pixels and fill in the missing structures by inpainting with a partial
differential equation (PDE). Suitable operators include the Laplacian, the
biharmonic operator, and edge-enhancing anisotropic diffusion (EED). The
quality of such approaches depends substantially on the selection of the data
that is kept. Optimising this data in the domain and codomain gives rise to
challenging mathematical problems that shall be addressed in our work.
In the 1D case, we prove results that provide insights into the difficulty of
this problem, and we give evidence that splitting into spatial and tonal
(i.e. function value) optimisation hardly deteriorates the results. In the
2D setting, we present generic algorithms that achieve a high reconstruction
quality even if the specified data is very sparse. To optimise the spatial
data, we use a probabilistic sparsification, followed by a nonlocal pixel
exchange that avoids getting trapped in bad local optima. After this spatial
optimisation we perform a tonal optimisation that modifies the function values
in order to reduce the global reconstruction error. For homogeneous diffusion
inpainting, this comes down to a least squares problem for which we prove that
it has a unique solution. We demonstrate that it can be found efficiently with
a gradient descent approach that is accelerated with fast explicit diffusion
(FED) cycles. Our framework allows the desired density of the inpainting mask
to be specified a priori. Moreover, it is more generic than other data
optimisation approaches for the sparse inpainting problem, since it can also be
extended to nonlinear inpainting operators such as EED. This is exploited to
achieve reconstructions with state-of-the-art quality.
We also give an extensive literature survey on PDE-based image compression
methods.
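The homogeneous diffusion inpainting step described in the abstract can be illustrated with a minimal sketch: known pixels are held fixed while the unknowns are relaxed towards the discrete Laplace equation. The function name, Jacobi-style iteration, and iteration count below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def diffusion_inpaint(image, mask, iterations=2000):
    """Homogeneous diffusion inpainting sketch: keep pixels where `mask`
    is True, fill the rest by Jacobi relaxation of the Laplace equation."""
    u = np.where(mask, image, image[mask].mean())  # init unknowns with mean
    for _ in range(iterations):
        # average of the 4 neighbours (edge padding = reflecting boundary)
        p = np.pad(u, 1, mode='edge')
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1]
                      + p[1:-1, :-2] + p[1:-1, 2:])
        u = np.where(mask, image, avg)             # re-impose the known data
    return u
```

For a horizontal ramp with only the left and right columns kept, this relaxation recovers the ramp exactly, since linear functions are harmonic.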
Neural View-Interpolation for Sparse Light Field Video
We suggest representing light field (LF) videos as "one-off" neural networks (NN), i.e., a learned mapping from view-plus-time coordinates to high-resolution color values, trained on sparse views. Initially, this sounds like a bad idea for three main reasons: First, a NN LF will likely have lower quality than a same-sized pixel basis representation. Second, only little training data is available for sparse LF videos, e.g., nine exemplars per frame. Third, there is no generalization across LFs, but across view and time instead. Consequently, a network needs to be trained for each LF video. Surprisingly, these problems can turn into substantial advantages: Unlike the linear pixel basis, a NN has to come up with a compact, non-linear, i.e., more intelligent, explanation of color, conditioned on the sparse view and time coordinates. As observed for many NNs, however, this representation is now interpolatable: if the image output for sparse view coordinates is plausible, it is for all intermediate, continuous coordinates as well. Our specific network architecture involves a differentiable occlusion-aware warping step, which leads to a compact set of trainable parameters and consequently fast learning and fast execution.
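The core idea of a coordinate-to-colour network trained on sparse views can be sketched in a toy setting: a tiny MLP is fit to a handful of view coordinates and then queried at an intermediate, never-observed coordinate. Everything below (one scalar view coordinate, the architecture, learning rate, and step count) is our own stand-in, far simpler than the paper's occlusion-aware warping network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "views": three view coordinates with associated colour values.
us = np.array([[0.0], [0.5], [1.0]])   # view coordinates
cs = np.sin(np.pi * us)                # toy "ground-truth" colours

# One-hidden-layer MLP trained by full-batch gradient descent.
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(10000):
    h = np.tanh(us @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                 # predicted colours
    err = pred - cs                    # d(loss)/d(pred) for 0.5*SSE loss
    gW2 = h.T @ err; gb2 = err.sum(0)
    gh = (err @ W2.T) * (1.0 - h**2)   # backprop through tanh
    gW1 = us.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Query an intermediate, continuous view coordinate never seen in training:
u_new = np.array([[0.25]])
c_new = np.tanh(u_new @ W1 + b1) @ W2 + b2
```

The point of the sketch is the last two lines: once the network fits the sparse coordinates, it can be evaluated at any continuous coordinate in between, which is the interpolation property the abstract relies on.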
Atmospheric PSF Interpolation for Weak Lensing in Short Exposure Imaging Data
A main science goal for the Large Synoptic Survey Telescope (LSST) is to
measure the cosmic shear signal from weak lensing to extreme accuracy. One
difficulty, however, is that with the short exposure time (15 seconds)
proposed, the spatial variation of the Point Spread Function (PSF) shapes may
be dominated by the atmosphere, in addition to optics errors. While optics
errors mainly cause the PSF to vary on angular scales similar to or larger than a
single CCD sensor, the atmosphere generates stochastic structures on a wide
range of angular scales. It thus becomes a challenge to infer the multi-scale,
complex atmospheric PSF patterns by interpolating the sparsely sampled stars in
the field. In this paper we present a new method, PSFent, for interpolating the
PSF shape parameters, based on reconstructing underlying shape parameter maps
with a multi-scale maximum entropy algorithm. We demonstrate, using images from
the LSST Photon Simulator, the performance of our approach relative to a
5th-order polynomial fit (representing the current standard) and a simple
boxcar smoothing technique. Quantitatively, PSFent predicts more accurate PSF
models in all scenarios and the residual PSF errors are spatially less
correlated. This improvement in PSF interpolation leads to a factor of 3.5
lower systematic errors in the shear power spectrum on scales smaller than
, compared to polynomial fitting. We estimate that with PSFent and for
stellar densities greater than , the spurious shear
correlation from PSF interpolation, after combining a complete 10-year dataset
from LSST, is lower than the corresponding statistical uncertainties on the
cosmic shear power spectrum, even under a conservative scenario.
Comment: 18 pages, 12 figures, accepted by MNRAS
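The 5th-order polynomial baseline the paper compares against can be sketched directly: fit a 2-D polynomial to a PSF shape parameter measured at star positions, then evaluate the fitted surface anywhere in the field. The function names and the least-squares formulation below are our own minimal illustration, not the paper's PSFent method.

```python
import numpy as np

def poly_design(x, y, order=5):
    """2-D polynomial design matrix with all terms x^i * y^j, i + j <= order."""
    cols = [x**i * y**j
            for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.stack(cols, axis=1)

def fit_psf_map(star_x, star_y, shape_param, order=5):
    """Least-squares polynomial fit of one PSF shape parameter
    (e.g. an ellipticity component) sampled at star positions."""
    A = poly_design(star_x, star_y, order)
    coeffs, *_ = np.linalg.lstsq(A, shape_param, rcond=None)
    return lambda x, y: poly_design(x, y, order) @ coeffs
```

Normalising field coordinates to [0, 1] before fitting keeps the design matrix well conditioned; such a smooth global fit is exactly what cannot capture the small-scale stochastic atmospheric structure the paper targets.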
Single image example-based super-resolution using cross-scale patch matching and Markov random field modelling
Example-based super-resolution has become increasingly popular over the last few years for its ability to overcome the limitations of the classical multi-frame approach. In this paper we present a new example-based method that uses the input low-resolution image itself as a search space for high-resolution patches by exploiting self-similarity across different resolution scales. Found examples are combined into a high-resolution image by means of Markov Random Field modelling that enforces their global agreement. Additionally, we apply back-projection and steering kernel regression as post-processing techniques. In this way, we are able to produce sharp and artefact-free results that are comparable to or better than standard interpolation and state-of-the-art super-resolution techniques.
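The cross-scale self-similarity search at the heart of this approach can be sketched as a brute-force patch match: for a patch in the input image, scan a coarser version of the same image for the most similar patch. The decimation downscale, SSD metric, and function name below are our own simplifications; the paper additionally combines matches via MRF modelling.

```python
import numpy as np

def best_cross_scale_match(img, y, x, psize=5, scale=2):
    """For the patch at (y, x) in `img`, exhaustively search a coarser
    (decimated) version of the same image for the most similar patch
    under the sum-of-squared-differences (SSD) metric."""
    small = img[::scale, ::scale]            # crude downscale by decimation
    patch = img[y:y + psize, x:x + psize]
    best_ssd, best_pos = np.inf, (0, 0)
    H, W = small.shape
    for i in range(H - psize + 1):
        for j in range(W - psize + 1):
            cand = small[i:i + psize, j:j + psize]
            ssd = float(np.sum((patch - cand) ** 2))
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (i, j)
    return best_pos, best_ssd
```

In a real pipeline the downscale would use proper anti-aliased resampling, and the matched coarse patch's high-resolution "parent" region would supply the missing detail.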