Approximative Covariance Interpolation
When methods of moments are used for identification of power spectral densities, a model is matched to estimated second-order statistics such as covariance estimates. If the estimates are good, there is an infinite family of power spectra consistent with them, and in applications such as identification we want to single out the most representative spectrum. We choose a prior spectral density to represent the a priori information, and the spectrum closest to it in a given quasi-distance is determined. However, if the estimates are based on few data, or if the model class is not consistent with the process considered, it may be necessary to use an approximative covariance interpolation. Two different types of regularization are considered in this paper; they can be applied to many covariance-interpolation-based estimation methods.
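As a rough illustration of the idea (not the paper's algorithm), the sketch below matches a handful of covariance-lag estimates only approximately, trading off a divergence to a prior spectrum against the covariance mismatch on a frequency grid; the function names, the grid discretisation, and the weight `lam` are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

def approx_cov_interpolation(r, prior, omegas, lam=10.0):
    """Fit a spectrum close to `prior` whose first covariance lags roughly match `r`."""
    dw = omegas[1] - omegas[0]
    # Gamma maps a gridded spectrum to covariance lags: r_k ~ (1/2pi) * sum Phi(w) cos(k w) dw
    Gamma = np.array([np.cos(k * omegas) * dw / (2 * np.pi) for k in range(len(r))])

    def objective(phi):
        kl = np.sum(phi * np.log(phi / prior) - phi + prior) * dw   # divergence to the prior
        mismatch = np.sum((Gamma @ phi - r) ** 2)                   # soft covariance fit
        return kl + lam * mismatch

    def gradient(phi):
        return np.log(phi / prior) * dw + 2.0 * lam * Gamma.T @ (Gamma @ phi - r)

    res = minimize(objective, x0=prior.copy(), jac=gradient, method="L-BFGS-B",
                   bounds=[(1e-8, None)] * len(omegas))
    return res.x

# Toy usage: flat prior, three (possibly noisy) covariance-lag estimates.
omegas = np.linspace(-np.pi, np.pi, 201)
prior = np.ones_like(omegas)
r = np.array([1.0, 0.5, 0.25])
phi_hat = approx_cov_interpolation(r, prior, omegas)
```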
Modeling and interpolation of the ambient magnetic field by Gaussian processes
Anomalies in the ambient magnetic field can be used as features in indoor
positioning and navigation. By using Maxwell's equations, we derive and present
a Bayesian non-parametric probabilistic modeling approach for interpolation and
extrapolation of the magnetic field. We model the magnetic field components
jointly by imposing a Gaussian process (GP) prior on the latent scalar
potential of the magnetic field. By rewriting the GP model in terms of a
Hilbert space representation, we circumvent the computational pitfalls
associated with GP modeling and provide a computationally efficient and
physically justified modeling tool for the ambient magnetic field. The model
allows for sequential updating of the estimate and time-dependent changes in
the magnetic field. The model is shown to work well in practice in different
applications: we demonstrate mapping of the magnetic field both with an
inexpensive Raspberry Pi powered robot and on foot using a standard smartphone.
Comment: 17 pages, 12 figures, to appear in IEEE Transactions on Robotics
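For readers unfamiliar with the Hilbert-space (reduced-rank) GP representation the abstract refers to, the minimal 1D sketch below shows the core trick: project onto Laplacian eigenfunctions on a bounded domain and use the kernel's spectral density as the prior weight covariance, so inference scales with the number of basis functions rather than cubically with the number of data points. The paper's actual model places the GP on the 3D scalar potential and differentiates it; that part, and all hyperparameter values used here, are simplifications and assumptions.

```python
import numpy as np

def basis(x, j, L):
    # Laplacian eigenfunctions on [-L, L] with Dirichlet boundaries
    return np.sqrt(1.0 / L) * np.sin(np.pi * j * (x + L) / (2 * L))

def se_spectral_density(w, sigma2=1.0, ell=0.2):
    # Spectral density of the 1D squared-exponential kernel (assumed hyperparameters)
    return sigma2 * np.sqrt(2 * np.pi) * ell * np.exp(-0.5 * (ell * w) ** 2)

def fit_reduced_rank_gp(x_train, y_train, L=2.0, m=64, noise=0.05):
    js = np.arange(1, m + 1)
    sqrt_eig = np.pi * js / (2 * L)                   # square roots of the Laplacian eigenvalues
    Phi = basis(x_train[:, None], js[None, :], L)     # n x m feature matrix
    Lam = np.diag(se_spectral_density(sqrt_eig))      # prior covariance of the weights
    # Bayesian linear regression in the basis: cost grows with m, not with n
    A = Phi.T @ Phi / noise**2 + np.linalg.inv(Lam)
    w_cov = np.linalg.inv(A)
    w_mean = w_cov @ (Phi.T @ y_train / noise**2)
    return js, w_mean, w_cov

def predict(x_test, js, w_mean, w_cov, L=2.0):
    Phi = basis(x_test[:, None], js[None, :], L)
    return Phi @ w_mean, np.sqrt(np.sum((Phi @ w_cov) * Phi, axis=1))

# Toy usage on a synthetic 1D "anomaly" curve
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(100)
js, w_mean, w_cov = fit_reduced_rank_gp(x, y)
mean, std = predict(np.linspace(-1, 1, 200), js, w_mean, w_cov)
```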
Improving self-calibration
Response calibration is the process of inferring how much the measured data
depend on the signal one is interested in. It is essential for any quantitative
signal estimation on the basis of the data. Here, we investigate
self-calibration methods for linear signal measurements and linear dependence
of the response on the calibration parameters. The common practice is to
augment an external calibration solution, obtained using a known reference signal,
with an internal calibration on the unknown measurement signal itself. Contemporary
self-calibration schemes try to find a self-consistent solution for signal and
calibration by exploiting redundancies in the measurements. This can be
understood in terms of maximizing the joint probability of signal and
calibration. However, the full uncertainty structure of this joint probability
around its maximum is thereby not taken into account by these schemes.
Therefore, better schemes -- in the sense of minimal squared error -- can be designed
by accounting for asymmetries in the uncertainty of signal and calibration. We
argue that at least a systematic correction of the common self-calibration
scheme should be applied in many measurement situations in order to properly
treat uncertainties of the signal on which one calibrates. Otherwise the
calibration solutions suffer from a systematic bias, which consequently
distorts the signal reconstruction. Furthermore, we argue that non-parametric,
signal-to-noise filtered calibration should provide more accurate
reconstructions than the common bin averages, and we provide a new, improved
self-calibration scheme. We illustrate our findings with a simplistic numerical
example.
Comment: 17 pages, 3 figures, revised version, title change
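To make the baseline being criticised concrete, here is a heavily simplified sketch of the "common" alternating self-calibration loop for data d = gain * R s + n: the signal is estimated with the gain frozen, and the gain is estimated with the signal frozen, which is the joint-maximisation strategy whose neglected uncertainties the abstract discusses. This is not the improved scheme proposed in the paper, and every name and prior value below is an assumption for the example.

```python
import numpy as np

def selfcal(d, R, S, N, n_iter=20):
    """Alternating maximisation for d = gain * (R s) + n; baseline scheme only."""
    Ninv = np.linalg.inv(N)
    Sinv = np.linalg.inv(S)
    gain = 1.0
    for _ in range(n_iter):
        # Signal step: Wiener filter with the current gain frozen
        Rg = gain * R
        D = np.linalg.inv(Sinv + Rg.T @ Ninv @ Rg)
        s = D @ (Rg.T @ Ninv @ d)
        # Calibration step: least-squares gain with the current signal frozen
        t = R @ s
        gain = float(t @ Ninv @ d) / float(t @ Ninv @ t)
    return s, gain

# Toy usage: smooth Gaussian signal, identity response, unknown gain of 1.3
rng = np.random.default_rng(1)
n = 50
idx = np.arange(n)
S = np.exp(-0.5 * (np.subtract.outer(idx, idx) / 5.0) ** 2) + 1e-6 * np.eye(n)
R = np.eye(n)
s_true = rng.multivariate_normal(np.zeros(n), S)
d = 1.3 * (R @ s_true) + 0.1 * rng.standard_normal(n)
s_hat, gain_hat = selfcal(d, R, S, N=0.01 * np.eye(n))
```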
Learning Rank Reduced Interpolation with Principal Component Analysis
In computer vision, most iterative optimization algorithms, both sparse and
dense, rely on a coarse and reliable dense initialization to bootstrap their
optimization procedure. For example, dense optical flow algorithms profit
massively in speed and robustness if they are initialized well in the basin of
convergence of the loss function used. The same holds true for methods such as
sparse feature tracking, where initial flow or depth information is needed for
new features at arbitrary positions. This makes it extremely important to have
techniques at hand that allow one to obtain, from only very few available
measurements, a dense but still approximate sketch of a desired 2D structure
(e.g., depth maps, optical flow, or disparity maps). The 2D map is regarded
as a sample from a 2D random process. The method presented here exploits the
complete information given by the principal component analysis (PCA) of that
process, the principal basis and its prior distribution. The method is able to
determine a dense reconstruction from sparse measurements. When facing
situations with only very sparse measurements, typically the number of
principal components is further reduced which results in a loss of
expressiveness of the basis. We overcome this problem and inject prior
knowledge in a maximum a posteriori (MAP) approach. We test our approach on the
KITTI and the virtual KITTI datasets and focus on the interpolation of depth
maps for driving scenes. The evaluation shows good agreement with the ground
truth; the results are clearly better than those of interpolation by the
nearest-neighbor method, which disregards statistical information.
Comment: Accepted at Intelligent Vehicles Symposium (IV), Los Angeles, USA, June 201
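The core reconstruction step the abstract describes, a MAP estimate of PCA coefficients from sparse observations, can be sketched in a few lines; the training data, basis size, and noise level below are invented for illustration, and this is not the authors' exact pipeline.

```python
import numpy as np

def learn_pca_prior(training_maps, k=20):
    """training_maps: (num_maps, num_pixels) flattened dense maps."""
    mean = training_maps.mean(axis=0)
    X = training_maps - mean
    U, svals, _ = np.linalg.svd(X.T, full_matrices=False)   # columns of U: principal basis
    coeff_var = (svals[:k] ** 2) / len(training_maps)        # prior variance of coefficients
    return mean, U[:, :k], coeff_var

def map_reconstruct(y, obs_idx, mean, U, coeff_var, noise=1e-2):
    """MAP estimate of the dense map from sparse measurements y at pixels obs_idx."""
    A = U[obs_idx, :]                                        # measurement matrix (selection @ basis)
    H = A.T @ A / noise**2 + np.diag(1.0 / coeff_var)        # posterior precision of coefficients
    z = np.linalg.solve(H, A.T @ (y - mean[obs_idx]) / noise**2)
    return mean + U @ z                                      # dense reconstruction

# Toy usage with synthetic smooth "depth maps" on a 32x32 grid
rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 32)
train = np.array([np.outer(np.sin(a * grid), np.cos(b * grid)).ravel()
                  for a, b in rng.uniform(1, 6, size=(200, 2))])
mean, U, coeff_var = learn_pca_prior(train)
test_map = train[0]
obs_idx = rng.choice(test_map.size, size=30, replace=False)  # very sparse measurements
dense_hat = map_reconstruct(test_map[obs_idx], obs_idx, mean, U, coeff_var)
```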
Information field theory
Non-linear image reconstruction and signal analysis deal with complex inverse
problems. To tackle such problems in a systematic way, I present information
field theory (IFT) as a means of Bayesian, data-based inference on spatially
distributed signal fields. IFT is a statistical field theory, which permits the
construction of optimal signal recovery algorithms even for non-linear and
non-Gaussian signal inference problems. IFT algorithms exploit spatial
correlations of the signal fields and benefit from techniques developed to
investigate quantum and statistical field theories, such as Feynman diagrams,
re-normalisation calculations, and thermodynamic potentials. The theory can be
used in many areas, and applications in cosmology and numerics are presented.
Comment: 8 pages, in-a-nutshell introduction to information field theory (see
http://www.mpa-garching.mpg.de/ift), accepted for the proceedings of MaxEnt
2012, the 32nd International Workshop on Bayesian Inference and Maximum
Entropy Methods in Science and Engineering
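The simplest concrete output of IFT, quoted here only as a hedged illustration, is the generalised Wiener filter to which the free (Gaussian, linear) theory reduces: m = D j with D = (S^-1 + R^T N^-1 R)^-1 and j = R^T N^-1 d. The sketch below implements that special case on a toy grid; the non-linear, non-Gaussian machinery (Feynman diagrams, renormalisation) is not shown, and all toy parameters are assumptions.

```python
import numpy as np

def wiener_filter(d, R, S, N):
    """Posterior mean and covariance for a linear, Gaussian measurement (free IFT)."""
    Ninv = np.linalg.inv(N)
    D = np.linalg.inv(np.linalg.inv(S) + R.T @ Ninv @ R)   # information propagator
    j = R.T @ Ninv @ d                                      # information source
    return D @ j, D

# Toy usage: a smooth 1D field observed at every second pixel
rng = np.random.default_rng(3)
n = 64
idx = np.arange(n)
S = np.exp(-0.5 * (np.subtract.outer(idx, idx) / 8.0) ** 2) + 1e-6 * np.eye(n)
R = np.eye(n)[::2]                                          # response: keep every 2nd pixel
s_true = rng.multivariate_normal(np.zeros(n), S)
d = R @ s_true + 0.1 * rng.standard_normal(R.shape[0])
m, D = wiener_filter(d, R, S, N=0.01 * np.eye(R.shape[0]))
```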
Extended Libor Market Models with Affine and Quadratic Volatility
The market model of interest rates specifies simple forward or Libor rates as lognormally distributed; their stochastic dynamics has a linear volatility function. In this paper, the model is extended to quadratic volatility functions, which are the product of a quadratic polynomial and a level-independent covariance matrix. The extended Libor market models allow for closed-form cap pricing formulae; the implied volatilities of the new formulae are smiles and frowns. We give examples of the possible shapes of implied volatilities. Furthermore, we derive a new approximative swaption pricing formula and discuss its properties. The model is calibrated to market prices, and it turns out that no extended model specification outperforms the others. The criteria for model choice should thus be theoretical properties and computational efficiency.
Keywords: forward Libor rates, Libor market model, affine volatility, quadratic volatility, derivatives pricing, closed form solutions, LMM, BGM
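As a numerical companion to the model description (not the paper's closed-form formulae), the sketch below prices a single caplet by Monte Carlo under a quadratic-volatility Libor dynamic dL = (a + b*L + c*L^2) * lam * dW in the forward measure of the caplet's payment date, where the forward Libor rate is a martingale; setting a = c = 0 recovers the lognormal market-model dynamics. All parameter values are made up for illustration.

```python
import numpy as np

def caplet_mc(L0, strike, T, delta, discount, a, b, c, lam,
              n_paths=200_000, n_steps=200, seed=4):
    """Euler Monte Carlo for a caplet on a forward Libor rate with quadratic volatility."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    L = np.full(n_paths, L0)
    for _ in range(n_steps):
        vol = (a + b * L + c * L**2) * lam            # quadratic volatility function
        L = L + vol * np.sqrt(dt) * rng.standard_normal(n_paths)
        L = np.maximum(L, 0.0)                        # crude floor, for the sketch only
    payoff = np.maximum(L - strike, 0.0)
    return discount * delta * payoff.mean()

# Toy usage: 1y caplet on a 6m forward rate; b=1, a=c=0 gives the lognormal baseline
price_quadratic = caplet_mc(L0=0.03, strike=0.03, T=1.0, delta=0.5, discount=0.97,
                            a=0.0, b=1.0, c=5.0, lam=0.2)
price_lognormal = caplet_mc(L0=0.03, strike=0.03, T=1.0, delta=0.5, discount=0.97,
                            a=0.0, b=1.0, c=0.0, lam=0.2)
```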