Interpolating point spread function anisotropy
Planned wide-field weak lensing surveys are expected to reduce the
statistical errors on the shear field to unprecedented levels. In contrast,
systematic errors like those induced by the convolution with the point spread
function (PSF) will not benefit from that scaling effect and will require very
accurate modeling and correction. While numerous methods have been devised to
carry out the PSF correction itself, modeling of the PSF shape and its spatial
variations across the instrument field of view has, so far, attracted much less
attention. This step is nevertheless crucial because the PSF is only known at
star positions while the correction has to be performed at any position on the
sky. A reliable interpolation scheme is therefore mandatory and a popular
approach has been to use low-order bivariate polynomials. In the present paper,
we evaluate four other classical spatial interpolation methods based on splines
(B-splines), inverse distance weighting (IDW), radial basis functions (RBF) and
ordinary Kriging (OK). These methods are tested on the Star-challenge part of
the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and
are compared with the classical polynomial fitting (Polyfit). We also test all
our interpolation methods independently of the way the PSF is modeled, by
interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are
known exactly at star positions). We find in that case RBF to be the clear
winner, closely followed by the other local methods, IDW and OK. The global
methods, Polyfit and B-splines, are largely behind, especially in fields with
(ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all
interpolators reach a variance on PSF systematics better than
the upper bound expected by future space-based surveys, with
the local interpolators performing better than the global ones.
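Since the abstract singles out RBF interpolation of PSF parameters at star positions as the best performer, a minimal sketch of that idea (using SciPy's `RBFInterpolator`; the star positions and the ellipticity pattern below are invented stand-ins, not GREAT10 data) might look like:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical star positions across the field of view (arbitrary units)
stars = rng.uniform(0.0, 1.0, size=(50, 2))

# Hypothetical PSF parameter (e.g. one ellipticity component) measured
# at the star positions; a smooth spatial pattern stands in for real data
e1 = 0.05 * np.sin(2.0 * np.pi * stars[:, 0]) * stars[:, 1]

# Fit an RBF interpolant; with the default smoothing of 0 it reproduces
# the PSF parameter exactly at the star positions
rbf = RBFInterpolator(stars, e1, kernel="thin_plate_spline")

# Evaluate the interpolated PSF model at arbitrary (galaxy) positions
galaxies = rng.uniform(0.0, 1.0, size=(200, 2))
e1_model = rbf(galaxies)
```

The key property motivating this scheme is that the PSF is only sampled at stars, while shear measurement needs it everywhere; the RBF interpolant supplies values at any position in the field.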
Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging
Many graphics and vision problems can be expressed as non-linear least
squares optimizations of objective functions over visual data, such as images
and meshes. The mathematical descriptions of these functions are extremely
concise, but their implementation in real code is tedious, especially when
optimized for real-time performance on modern GPUs in interactive applications.
In this work, we propose a new language, Opt (available at
http://optlang.org), for writing these objective functions over image- or
graph-structured unknowns concisely and at a high level. Our compiler
automatically transforms these specifications into state-of-the-art GPU solvers
based on Gauss-Newton or Levenberg-Marquardt methods. Opt can generate
different variations of the solver, so users can easily explore tradeoffs in
numerical precision, matrix-free methods, and solver approaches. In our
results, we implement a variety of real-world graphics and vision applications.
Their energy functions are expressible in tens of lines of code and compile to
highly optimized GPU solver implementations. These solvers have performance
competitive with the best published hand-tuned, application-specific GPU
solvers, and are orders of magnitude faster than a general-purpose
auto-generated solver.
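The abstract names Gauss-Newton as one of the methods Opt's compiler targets. A minimal NumPy sketch of the Gauss-Newton iteration on a toy exponential fit (an illustration of the algorithm only, not Opt's generated GPU code; the model and data are invented):

```python
import numpy as np

# Synthetic noiseless data from the model y = a * exp(-b * x)
x = np.linspace(0.0, 2.0, 40)
a_true, b_true = 2.0, 1.5
y = a_true * np.exp(-b_true * x)

def residuals(p):
    a, b = p
    return a * np.exp(-b * x) - y

def jacobian(p):
    # Analytic Jacobian of the residuals w.r.t. (a, b)
    a, b = p
    e = np.exp(-b * x)
    return np.stack([e, -a * x * e], axis=1)

p = np.array([1.0, 1.0])            # initial guess
for _ in range(20):
    r = residuals(p)
    J = jacobian(p)
    # Gauss-Newton step: solve the normal equations J^T J dp = -J^T r
    dp = np.linalg.solve(J.T @ J, -J.T @ r)
    p = p + dp
```

Levenberg-Marquardt, the other method the abstract mentions, differs only in damping the normal equations (`J.T @ J + lam * I`), which is what makes the step robust far from the optimum.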
DCTM: Discrete-Continuous Transformation Matching for Semantic Flow
Techniques for dense semantic correspondence have provided limited ability to
deal with the geometric variations that commonly exist between semantically
similar images. While variations due to scale and rotation have been examined,
practical solutions are lacking for more complex deformations such as affine
transformations because of the tremendous size of the associated solution
space. To address this problem, we present a discrete-continuous transformation
matching (DCTM) framework where dense affine transformation fields are inferred
through a discrete label optimization in which the labels are iteratively
updated via continuous regularization. In this way, our approach draws
solutions from the continuous space of affine transformations in a manner that
can be computed efficiently through constant-time edge-aware filtering and a
proposed affine-varying CNN-based descriptor. Experimental results show that
this model outperforms the state-of-the-art methods for dense semantic
correspondence on various benchmarks.
Locally adaptive image denoising by a statistical multiresolution criterion
We demonstrate how one can choose the smoothing parameter in image denoising
by a statistical multiresolution criterion, both globally and locally. Using
inhomogeneous diffusion and total variation regularization as examples for
localized regularization schemes, we present an efficient method for locally
adaptive image denoising. As expected, the smoothing parameter serves as an
edge detector in this framework. Numerical examples illustrate the usefulness
of our approach. We also present an application in confocal microscopy.
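As a rough illustration of the kind of localized regularization the abstract refers to, here is a minimal NumPy sketch of inhomogeneous (Perona-Malik-style) diffusion denoising. This is not the authors' statistical multiresolution criterion; the edge-stopping function, test image, and parameter values are all illustrative assumptions:

```python
import numpy as np

def inhomogeneous_diffusion(img, n_iter=20, kappa=0.3, dt=0.2):
    """Perona-Malik-style inhomogeneous diffusion denoising.

    The edge-stopping weight g(d) = 1 / (1 + (d / kappa)^2) acts as a
    locally adaptive smoothing parameter: close to 1 in flat (noisy)
    regions, small across strong edges, so edges are preserved.
    """
    u = img.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(n_iter):
        up = np.pad(u, 1, mode="edge")       # Neumann boundary condition
        dN = up[:-2, 1:-1] - u               # differences to 4 neighbors
        dS = up[2:, 1:-1] - u
        dE = up[1:-1, 2:] - u
        dW = up[1:-1, :-2] - u
        # Explicit diffusion step (dt < 0.25 keeps the scheme stable)
        u = u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u

# Synthetic test image: a step edge corrupted by Gaussian noise
rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = inhomogeneous_diffusion(noisy)
```

In the abstract's framework the smoothing parameter itself varies over the image and is chosen by the multiresolution criterion; here the fixed `kappa` plays that role only schematically.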