Learning how to be robust: Deep polynomial regression
Polynomial regression is a recurrent problem with a large number of
applications. In computer vision it often appears in motion analysis. Whatever
the application, standard methods for regression of polynomial models tend to
deliver biased results when the input data is heavily contaminated by outliers.
Moreover, the problem is even harder when outliers have strong structure.
Departing from problem-tailored heuristics for robust estimation of parametric
models, we explore deep convolutional neural networks. Our work aims to find a
generic approach for training deep regression models without the explicit need for supervised annotation. We bypass the need for a tailored loss function on
the regression parameters by attaching to our model a differentiable hard-wired
decoder corresponding to the polynomial operation at hand. We demonstrate the
value of our findings by comparing with standard robust regression methods.
Furthermore, we demonstrate how to use such models for a real computer vision
problem, i.e., video stabilization. The qualitative and quantitative
experiments show that neural networks are able to learn robustness for general polynomial regression, with results that clearly surpass those of traditional robust estimation methods.
Comment: 18 pages, conference
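The decoder idea admits a compact sketch. Below is a minimal PyTorch illustration, assuming a toy fully connected encoder, a degree-2 polynomial, and a Huber loss; the abstract does not specify the architecture or training loss, so all names and sizes here are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

class PolyRegressor(nn.Module):
    """Maps a contaminated point set (x, y) to polynomial coefficients."""
    def __init__(self, n_points: int, degree: int = 2):
        super().__init__()
        self.degree = degree
        self.net = nn.Sequential(
            nn.Linear(2 * n_points, 128), nn.ReLU(),
            nn.Linear(128, degree + 1),
        )

    def forward(self, x, y):
        coeffs = self.net(torch.cat([x, y], dim=-1))
        # Hard-wired differentiable decoder: evaluate the predicted
        # polynomial at the input abscissae (no learnable parameters).
        powers = torch.stack([x ** k for k in range(self.degree + 1)], dim=-1)
        y_hat = (powers * coeffs.unsqueeze(-2)).sum(dim=-1)
        return coeffs, y_hat

# The loss lives in data space, so no coefficient annotations are needed;
# a robust loss such as Huber is one plausible choice for outlier tolerance.
model = PolyRegressor(n_points=64)
x = torch.rand(32, 64)
y = 0.5 * x ** 2 - x + 0.1 * torch.randn_like(x)  # toy quadratic data
_, y_hat = model(x, y)
loss = nn.functional.huber_loss(y_hat, y)
loss.backward()
```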
Parametric high resolution techniques for radio astronomical imaging
The increased sensitivity of future radio telescopes will result in
requirements for higher dynamic range within the image as well as better
resolution and immunity to interference. In this paper we propose a new matrix formulation of the imaging equation for the cases of non-coplanar arrays and
polarimetric measurements. Then we improve our parametric imaging techniques in
terms of resolution and estimation accuracy. This is done both by enhancing the MVDR parametric imaging with alternative dirty images, and by introducing better power estimates based on least squares with positive semi-definite constraints. We also discuss the use of robust Capon beamforming
and semi-definite programming for solving the self-calibration problem.
Additionally, we provide a statistical analysis of the bias of the MVDR beamformer for the case of a moving array, which serves as a first step in analyzing iterative approaches such as CLEAN and the techniques proposed in this paper.
Finally we demonstrate a full deconvolution process based on the parametric
imaging techniques and show its improved resolution and sensitivity compared to
the CLEAN method.
Comment: To appear in IEEE Journal of Selected Topics in Signal Processing, special issue on Signal Processing for Astronomy and Space Research. 30 pages
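For a concrete reference point, the classical MVDR dirty image assigns each pixel the power estimate 1 / (a^H R^{-1} a), where a is the steering vector and R the array covariance. The sketch below computes this over a pixel grid; the steering-vector construction, the alternative dirty images, and the constrained least-squares estimates from the paper are not reproduced, so this is only the baseline the paper builds on.

```python
import numpy as np

def mvdr_dirty_image(R: np.ndarray, steering: np.ndarray) -> np.ndarray:
    """MVDR power per pixel: 1 / (a^H R^{-1} a).

    R: (n_ant, n_ant) sample covariance of the array output.
    steering: (n_pix, n_ant) steering vectors, one row per image pixel.
    """
    R_inv = np.linalg.pinv(R)  # pseudo-inverse guards against ill-conditioning
    quad = np.einsum('pi,ij,pj->p', steering.conj(), R_inv, steering)
    return 1.0 / np.real(quad)
```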
Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity
A general framework for solving image inverse problems is introduced in this
paper. The approach is based on Gaussian mixture models, estimated via a
computationally efficient MAP-EM algorithm. A dual mathematical interpretation of the proposed framework in terms of structured sparse estimation is described, which
shows that the resulting piecewise linear estimate stabilizes the estimation
when compared to traditional sparse inverse problem techniques. This
interpretation also suggests an effective dictionary motivated initialization
for the MAP-EM algorithm. We demonstrate that in a number of image inverse problems, including inpainting, zooming, and deblurring, the same algorithm produces results that match, often significantly exceed, or fall only marginally short of the best published ones, at a lower computational cost.
Comment: 30 pages
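To make the piecewise linear character of the estimator concrete, here is a hedged sketch of the per-patch MAP step, assuming a degradation model y = Ax + n with known Gaussian noise: each mixture component (mu_k, Sigma_k) induces a linear Wiener-type estimate, and the component minimizing the negative log-posterior is selected. The EM updates, patch extraction, and dictionary initialization are omitted, and all names are illustrative.

```python
import numpy as np

def map_estimate(y, A, mus, Sigmas, sigma_noise):
    """Select the best Gaussian model and return its linear MAP estimate."""
    best_x, best_obj = None, np.inf
    for mu, Sig in zip(mus, Sigmas):
        # Wiener filter for this component:
        # x = mu + Sig A^T (A Sig A^T + s^2 I)^{-1} (y - A mu)
        S = A @ Sig @ A.T + sigma_noise ** 2 * np.eye(len(y))
        x = mu + Sig @ A.T @ np.linalg.solve(S, y - A @ mu)
        # Negative log-posterior (up to constants) used for model selection.
        r = y - A @ x
        obj = (r @ r) / sigma_noise ** 2 \
            + (x - mu) @ np.linalg.solve(Sig, x - mu) \
            + np.linalg.slogdet(Sig)[1]
        if obj < best_obj:
            best_x, best_obj = x, obj
    return best_x
```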
Stable Feature Selection from Brain sMRI
Neuroimage analysis usually involves learning thousands or even millions of
variables using only a limited number of samples. In this regard, sparse
models, e.g. the lasso, are applied to select the optimal features and achieve
high diagnosis accuracy. The lasso, however, treats features independently and usually yields unstable selections. Stability, a manifestation of the reproducibility of statistical results under reasonable perturbations of the data and the model, is an important focus in statistics, especially in the analysis of high-dimensional
data. In this paper, we explore a nonnegative generalized fused lasso model for
stable feature selection in the diagnosis of Alzheimer's disease. In addition
to sparsity, our model incorporates two important pathological priors: the
spatial cohesion of lesion voxels and the positive correlation between the
features and the disease labels. To optimize the model, we propose an efficient
algorithm by proving a novel link between total variation and fast network flow
algorithms via conic duality. Experiments show that, compared with other state-of-the-art methods, the proposed nonnegative model performs much better at exploring the intrinsic structure of the data by selecting stable features.
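The model combines a data-fit term, an l1 sparsity penalty, and a total-variation penalty over the voxel adjacency graph, all under a nonnegativity constraint. The sketch below spells out this objective together with a crude projected subgradient step; the paper's actual solver exploits the total-variation/network-flow link via conic duality, which is not reproduced here, so treat this only as a statement of the optimization problem.

```python
import numpy as np

def objective(w, X, y, edges, lam1, lam2):
    """Nonnegative generalized fused lasso objective (w >= 0 assumed)."""
    fit = 0.5 * np.sum((X @ w - y) ** 2)
    sparsity = lam1 * np.sum(np.abs(w))
    tv = lam2 * sum(abs(w[i] - w[j]) for i, j in edges)  # spatial cohesion
    return fit + sparsity + tv

def projected_subgradient_step(w, X, y, edges, lam1, lam2, lr=1e-3):
    """One illustrative step: subgradient descent + projection onto w >= 0."""
    g = X.T @ (X @ w - y) + lam1 * np.sign(w)
    for i, j in edges:
        s = np.sign(w[i] - w[j])
        g[i] += lam2 * s
        g[j] -= lam2 * s
    return np.maximum(w - lr * g, 0.0)
```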
Adaptive stochastic Galerkin FEM for lognormal coefficients in hierarchical tensor representations
Stochastic Galerkin methods for non-affine coefficient representations are
known to cause major difficulties from theoretical and numerical points of
view. In this work, an adaptive Galerkin FE method for linear parametric PDEs
with lognormal coefficients discretized in Hermite chaos polynomials is
derived. It employs problem-adapted function spaces to ensure solvability of
the variational formulation. The inherently high computational complexity of
the parametric operator is made tractable by using hierarchical tensor
representations. For this, a new tensor train format of the lognormal
coefficient is derived and verified numerically. The central novelty is the
derivation of a reliable residual-based a posteriori error estimator. This can
be regarded as a unique feature of stochastic Galerkin methods. It allows for
an adaptive algorithm to steer the refinements of the physical mesh and the
anisotropic Wiener chaos polynomial degrees. For the evaluation of the error
estimator to become feasible, a numerically efficient tensor format
discretization is developed. Benchmark examples with unbounded lognormal
coefficient fields illustrate the performance of the proposed Galerkin
discretization and the fully adaptive algorithm.
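For reference, the tensor train (TT) format stores a high-order tensor as a chain of 3-way cores of shape (r_{k-1}, n_k, r_k), and a single entry is recovered by a left-to-right sweep of small matrix products. The sketch below shows this standard contraction; the paper's lognormal-specific core construction is not reproduced here.

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate one entry of a TT tensor; cores[k] has shape (r_prev, n_k, r_next)."""
    v = np.ones((1,))
    for core, i in zip(cores, idx):
        v = v @ core[:, i, :]  # sweep left to right along the train
    return v.item()            # boundary ranks are 1, so a scalar remains

# Example: a random rank-2 TT for a 3-way tensor with mode sizes (4, 5, 6).
rng = np.random.default_rng(0)
ranks, modes = [1, 2, 2, 1], [4, 5, 6]
cores = [rng.standard_normal((ranks[k], modes[k], ranks[k + 1])) for k in range(3)]
print(tt_entry(cores, (1, 3, 2)))
```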
On boosting kernel regression
In this paper we propose a simple multistep regression smoother which is constructed in an iterative manner, by learning the Nadaraya-Watson estimator with L2 boosting. We find, in both theoretical analysis and simulation experiments, that the bias converges exponentially fast and the variance diverges exponentially slowly. The first boosting step is analysed in more detail, giving asymptotic expressions as functions of the smoothing parameter, and relationships with previous work are explored. Practical performance is illustrated by both simulated and real data.
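A minimal sketch of the procedure, assuming a Gaussian kernel: each boosting step fits the Nadaraya-Watson smoother to the current residuals and adds the fit to the running estimate, which is exactly L2 boosting of a linear smoother. The bandwidth and step count below are illustrative choices, not the paper's.

```python
import numpy as np

def nw_smooth(x_train, r, x_eval, h):
    """Nadaraya-Watson estimate of residuals r at x_eval, bandwidth h."""
    K = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (K @ r) / K.sum(axis=1)

def l2_boost_nw(x, y, h=0.2, n_steps=10):
    """L2 boosting: repeatedly fit the smoother to residuals and accumulate."""
    fit = np.zeros_like(y, dtype=float)
    for _ in range(n_steps):
        fit += nw_smooth(x, y - fit, x, h)
    return fit
```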