An improved cosmological parameter inference scheme motivated by deep learning
Dark matter cannot be observed directly, but its weak gravitational lensing
slightly distorts the apparent shapes of background galaxies, making weak
lensing one of the most promising probes of cosmology. Several observational
studies have measured the effect, and ongoing and planned surveys aim to
provide even larger, higher-resolution weak lensing maps. Due to
nonlinearities on small scales, the traditional analysis with two-point
statistics does not fully capture all the underlying information. Multiple
inference methods have been proposed to extract more detail based on higher-order
statistics, peak statistics, Minkowski functionals and, recently, convolutional
neural networks (CNNs). Here we present an improved convolutional neural network
that gives significantly better estimates of the Ω_m and σ_8
cosmological parameters from simulated convergence maps than the state-of-the-art
methods and is also free of systematic bias. We show that the network exploits
information in the gradients around peaks, and with this insight, we construct
a new, easy-to-understand, and robust peak counting algorithm based on the
'steepness' of peaks, instead of their heights. The proposed scheme is even
more accurate than the neural network on high-resolution noiseless maps. With
shape noise and lower resolution, its relative advantage deteriorates, but it
remains more accurate than peak counting.
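The steepness-based peak statistic described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact algorithm: the peak definition (strict 8-neighbour maximum) and the steepness measure (mean gradient magnitude over the surrounding 3×3 patch) are assumptions.

```python
import numpy as np

def steepness_peak_counts(kappa, bins):
    """Toy sketch: find local maxima of a convergence map and histogram
    them by 'steepness' (mean gradient magnitude around the peak) rather
    than by peak height. Illustrative only; the steepness definition and
    binning here are assumptions, not the paper's exact scheme."""
    gy, gx = np.gradient(kappa)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    steep = []
    for i in range(1, kappa.shape[0] - 1):
        for j in range(1, kappa.shape[1] - 1):
            patch = kappa[i - 1:i + 2, j - 1:j + 2]
            # strict local maximum: all 8 neighbours below the centre
            if kappa[i, j] == patch.max() and (patch < kappa[i, j]).sum() == 8:
                # characterise the peak by surrounding gradient, not height
                steep.append(grad_mag[i - 1:i + 2, j - 1:j + 2].mean())
    counts, _ = np.histogram(steep, bins=bins)
    return counts
```

The resulting histogram plays the role of the usual peak-height histogram in a likelihood analysis.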
A variational Bayesian method for inverse problems with impulsive noise
We propose a novel numerical method for solving inverse problems subject to
impulsive noises which possibly contain a large number of outliers. The
approach is of Bayesian type, and it exploits a heavy-tailed t distribution for
data noise to achieve robustness with respect to outliers. A hierarchical model
with all hyper-parameters automatically determined from the given data is
described. An algorithm of variational type, which minimizes the Kullback-Leibler
divergence between the true posterior distribution and a separable
approximation is developed. The numerical method is illustrated on several one-
and two-dimensional linear and nonlinear inverse problems arising from heat
conduction, including estimating boundary temperature, heat flux and heat
transfer coefficient. The results show its robustness to outliers and the fast
and steady convergence of the algorithm.
Comment: 20 pages, to appear in J. Comput. Phys.
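A minimal sketch of the mechanism at work: under a Student-t noise model, an EM-style iteratively reweighted least-squares loop assigns small weights to large residuals, so outliers barely influence the solution. This is a simplification of the paper's method (fixed hyper-parameters, no variational hierarchy); the function name and defaults are illustrative.

```python
import numpy as np

def t_robust_solve(A, y, nu=4.0, sigma2=1.0, iters=30):
    """EM-style iteratively reweighted least squares for y = A x + e with
    Student-t noise (nu degrees of freedom). Large residuals get small
    weights, so outliers barely influence the estimate. Hyper-parameters
    are fixed here, unlike the paper's hierarchical variational scheme."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]      # ordinary LS start
    for _ in range(iters):
        r = y - A @ x
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)   # E-step: expected noise precisions
        x = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))  # M-step: weighted LS
    return x
```

With Gaussian noise all weights would be 1 and this reduces to ordinary least squares; the heavy t tail is what buys robustness.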
Robust Singular Smoothers For Tracking Using Low-Fidelity Data
Tracking underwater autonomous platforms is often difficult because of noisy,
biased, and discretized input data. Classic filters and smoothers based on
standard assumptions of Gaussian white noise break down when presented with any
of these challenges. Robust models (such as the Huber loss) and constraints
(e.g. maximum velocity) are used to attenuate these issues. Here, we consider
robust smoothing with singular covariance, which covers bias and correlated
noise, as well as many specific model types, such as those used in navigation.
In particular, we show how to combine singular covariance models with robust
losses and state-space constraints in a unified framework that can handle very
low-fidelity data. A noisy, biased, and discretized navigation dataset from a
submerged, low-cost inertial measurement unit (IMU) package, with ultra short
baseline (USBL) data for ground truth, provides an opportunity to stress-test
the proposed framework with promising results. We show how robust modeling
elements improve our ability to analyze the data, and present batch processing
results for 10 minutes of data with three different frequencies of available
USBL position fixes (gaps of 30 seconds, 1 minute, and 2 minutes). The results
suggest that the framework can be extended to real-time tracking using robust
windowed estimation.
Comment: 9 pages, 9 figures, to be included in Robotics: Science and Systems 201
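One of the robust modeling elements mentioned above, a Huber measurement loss, can be sketched on a 1-D track. This is a toy stand-in for the paper's singular-covariance smoother (the objective and parameter values are my own illustration), but it shows how the loss caps the influence of gross outliers.

```python
import numpy as np

def huber_smooth(y, lam=10.0, delta=1.0, iters=50):
    """Robust 1-D smoother: minimise sum_k huber(y_k - x_k; delta)
    + lam * sum_k (x_{k+1} - x_k)^2 by iteratively reweighted least
    squares. Residuals beyond delta get weight delta/|r|, so a gross
    outlier barely pulls on the smoothed track."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)   # first-difference (smoothness) operator
    x = y.copy()
    for _ in range(iters):
        r = y - x
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        x = np.linalg.solve(np.diag(w) + 2.0 * lam * D.T @ D, w * y)
    return x
```

A quadratic (Kalman-style) smoother would spread an outlier over its neighbours; the Huber weights instead shrink its contribution toward zero.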
Lipschitz Optimisation for Lipschitz Interpolation
Techniques known as Nonlinear Set Membership prediction, Kinky Inference or
Lipschitz Interpolation are fast and numerically robust approaches to
nonparametric machine learning that have been proposed to be utilised in the
context of system identification and learning-based control. They utilise
presupposed Lipschitz properties in order to compute inferences over unobserved
function values. Unfortunately, most of these approaches rely on exact
knowledge about the input space metric as well as about the Lipschitz constant.
Furthermore, existing techniques to estimate the Lipschitz constants from the
data are either not robust to noise or are ad hoc, and they are typically decoupled
from the ultimate learning and prediction task. To overcome these limitations,
we propose an approach for optimising parameters of the presupposed metrics by
minimising validation set prediction errors. To avoid poor performance due to
local minima, we propose to utilise Lipschitz properties of the optimisation
objective to ensure global optimisation success. The resulting approach is a
new flexible method for nonparametric black-box learning. We provide
experimental evidence of the competitiveness of our approach on artificial as
well as on real data.
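The underlying Lipschitz Interpolation predictor has a compact closed form: the prediction is the midpoint of the tightest upper and lower envelopes consistent with an assumed Lipschitz constant L. A minimal sketch, using a plain Euclidean metric (in the paper, the metric parameters are exactly what gets optimised):

```python
import numpy as np

def kinky_inference(X, y, L, x_query):
    """Lipschitz interpolation ('kinky inference'): given samples (X, y) of
    a function assumed L-Lipschitz, predict at x_query as the midpoint of
    the tightest upper and lower Lipschitz envelopes."""
    d = np.linalg.norm(X - x_query, axis=1)
    upper = np.min(y + L * d)   # lowest ceiling compatible with the Lipschitz bound
    lower = np.max(y - L * d)   # highest floor compatible with the Lipschitz bound
    return 0.5 * (upper + lower)
```

The prediction interpolates the data exactly at the sample points, and the gap between the two envelopes gives a worst-case error bound, which is why the method is attractive for learning-based control.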
Statistical Mechanics of High-Dimensional Inference
To model modern large-scale datasets, we need efficient algorithms to infer a
set of unknown model parameters from noisy measurements. What are the
fundamental limits on the accuracy of parameter inference, given finite
signal-to-noise ratios, limited measurements, prior information, and
computational tractability requirements? How can we combine prior information
with measurements to achieve these limits? Classical statistics gives incisive
answers to these questions as the measurement density α → ∞. However, these classical results are not
relevant to modern high-dimensional inference problems, which instead occur at
finite α. We formulate and analyze high-dimensional inference as a
problem in the statistical physics of quenched disorder. Our analysis uncovers
fundamental limits on the accuracy of inference in high dimensions, and reveals
that widely cherished inference algorithms like maximum likelihood (ML) and
maximum a posteriori (MAP) inference cannot achieve these limits. We further
find optimal, computationally tractable algorithms that can achieve these
limits. Intriguingly, in high dimensions, these optimal algorithms become
computationally simpler than MAP and ML, while still outperforming them. For
example, such optimal algorithms can lead to as much as a 20% reduction in the
amount of data to achieve the same performance relative to MAP. Moreover, our
analysis reveals simple relations between optimal high dimensional inference
and low dimensional scalar Bayesian inference, insights into the nature of
generalization and predictive power in high dimensions, information theoretic
limits on compressed sensing, phase transitions in quadratic inference, and
connections to central mathematical objects in convex optimization theory and
random matrix theory.
Comment: See http://ganguli-gang.stanford.edu/pdf/HighDimInf.Supp.pdf for
supplementary material
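The claim that MAP can be beaten by another tractable estimator is visible even in one dimension. The sketch below (my own illustration, not the paper's analysis) compares MAP denoising under a Laplace prior, which reduces to soft-thresholding, with the Bayes-optimal posterior mean computed by numerical integration; the posterior mean attains a lower mean-squared error.

```python
import numpy as np

def map_estimate(y, b, sigma2):
    """MAP under a Laplace(0, b) prior with Gaussian noise:
    soft-thresholding of the observation at sigma2 / b."""
    t = sigma2 / b
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def mmse_estimate(y, b, sigma2, grid=None):
    """Bayes-optimal posterior mean, computed by numerical integration
    over a grid of candidate signal values."""
    if grid is None:
        grid = np.linspace(-12.0, 12.0, 1201)
    logp = -np.abs(grid) / b - (y[:, None] - grid) ** 2 / (2.0 * sigma2)
    p = np.exp(logp - logp.max(axis=1, keepdims=True))  # stable unnormalised posterior
    return (p * grid).sum(axis=1) / p.sum(axis=1)

rng = np.random.default_rng(0)
b, sigma2, n = 1.0, 1.0, 5000
x = rng.laplace(0.0, b, size=n)                   # true signal
y = x + rng.normal(0.0, np.sqrt(sigma2), size=n)  # noisy measurements
mse_map = np.mean((map_estimate(y, b, sigma2) - x) ** 2)
mse_mmse = np.mean((mmse_estimate(y, b, sigma2) - x) ** 2)
```

In this scalar setting both estimators are cheap; the paper's contribution concerns the high-dimensional regime, where the analogous optimal smoothed estimators remain tractable while MAP and ML fall short of the accuracy limits.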
Optimization viewpoint on Kalman smoothing, with applications to robust and sparse estimation
In this paper, we present the optimization formulation of the Kalman
filtering and smoothing problems, and use this perspective to develop a variety
of extensions and applications. We first formulate classic Kalman smoothing as
a least squares problem, highlight special structure, and show that the classic
filtering and smoothing algorithms are equivalent to a particular algorithm for
solving this problem. Once this equivalence is established, we present
extensions of Kalman smoothing to systems with nonlinear process and
measurement models, systems with linear and nonlinear inequality constraints,
systems with outliers in the measurements or sudden changes in the state, and
systems where the sparsity of the state sequence must be accounted for. All
extensions preserve the computational efficiency of the classic algorithms, and
most of the extensions are illustrated with numerical examples, which are part
of an open source Kalman smoothing Matlab/Octave package.
Comment: 46 pages, 11 figures
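The least-squares formulation described in this abstract can be illustrated on the simplest model, a directly observed scalar random walk. This is a toy sketch (the function name and model choice are mine), but it shows the structure: the smoothed state sequence is the solution of one sparse normal-equation system.

```python
import numpy as np

def ls_smooth(y, q, r):
    """Kalman smoothing for a scalar random-walk model, written as the
    least-squares problem min_x sum_k (y_k - x_k)^2 / r
    + sum_k (x_{k+1} - x_k)^2 / q and solved as one tridiagonal
    normal-equation system (built densely here for clarity)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)   # encodes the process model x_{k+1} ~ x_k
    H = np.eye(n) / r + D.T @ D / q  # normal-equation matrix
    return np.linalg.solve(H, y / r)
```

For this model the solution coincides with the classic forward-backward smoother; the point of the optimization view is that replacing the quadratic terms with robust losses, or adding inequality constraints, changes only the objective while preserving the sparse problem structure.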