
    A variational Bayesian method for inverse problems with impulsive noise

    We propose a novel numerical method for solving inverse problems subject to impulsive noise that may contain a large number of outliers. The approach is Bayesian, and it exploits a heavy-tailed t distribution for the data noise to achieve robustness with respect to outliers. A hierarchical model is described in which all hyper-parameters are determined automatically from the given data. A variational algorithm is developed that minimizes the Kullback-Leibler divergence between the true posterior distribution and a separable approximation. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising in heat conduction, including estimating the boundary temperature, heat flux, and heat transfer coefficient. The results show its robustness to outliers and the fast, steady convergence of the algorithm.
    Comment: 20 pages, to appear in J. Comput. Phys.
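
    As a concrete illustration of why the heavy-tailed t model confers robustness: written as a Gaussian scale mixture, it yields EM/variational updates that downweight observations with large residuals. Below is a minimal sketch of that reweighting loop for a toy linear inverse problem; the hyper-parameters nu, sigma2, and lam are assumptions for illustration, not values from the paper.

```python
# Minimal sketch: iteratively reweighted least squares arising from a
# Student-t noise model (a stand-in for the paper's variational scheme).
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 20
A = rng.normal(size=(n, m))
x_true = rng.normal(size=m)
y = A @ x_true + 0.05 * rng.normal(size=n)
outliers = rng.choice(n, size=10, replace=False)
y[outliers] += rng.normal(scale=5.0, size=10)   # impulsive outliers

nu, sigma2, lam = 4.0, 0.05**2, 1e-2            # assumed hyper-parameters
x = np.linalg.lstsq(A, y, rcond=None)[0]
for _ in range(30):
    r2 = (y - A @ x) ** 2
    w = (nu + 1.0) / (nu + r2 / sigma2)         # expected precision under t model
    Aw = A * w[:, None]                         # row-weighted design matrix
    x = np.linalg.solve(Aw.T @ A + lam * np.eye(m), Aw.T @ y)

print("estimation error:", np.linalg.norm(x - x_true))
```

    Observations hit by outliers receive weights w close to zero, so they barely influence the weighted solve, while clean observations keep weights near one.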

    Robust Singular Smoothers For Tracking Using Low-Fidelity Data

    Tracking underwater autonomous platforms is often difficult because of noisy, biased, and discretized input data. Classic filters and smoothers based on standard assumptions of Gaussian white noise break down when presented with any of these challenges. Robust models (such as the Huber loss) and constraints (e.g., maximum velocity) are used to attenuate these issues. Here, we consider robust smoothing with singular covariance, which covers bias and correlated noise, as well as many specific model types, such as those used in navigation. In particular, we show how to combine singular covariance models with robust losses and state-space constraints in a unified framework that can handle very low-fidelity data. A noisy, biased, and discretized navigation dataset from a submerged, low-cost inertial measurement unit (IMU) package, with ultra-short baseline (USBL) data for ground truth, provides an opportunity to stress-test the proposed framework, with promising results. We show how robust modeling elements improve our ability to analyze the data, and present batch processing results for 10 minutes of data with three different frequencies of available USBL position fixes (gaps of 30 seconds, 1 minute, and 2 minutes). The results suggest that the framework can be extended to real-time tracking using robust windowed estimation.
    Comment: 9 pages, 9 figures, to be included in Robotics: Science and Systems 2019
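
    To make the combination of a robust loss with state constraints concrete, here is a hedged sketch of a batch smoother: a Huber data term, a constant-velocity process model, and a box constraint on velocity, solved jointly. The model matrices and parameters (F, q, delta, vmax) are illustrative assumptions, and the singular-covariance machinery of the paper is not reproduced here.

```python
# Sketch: batch robust smoothing = Huber data loss + process model + constraints.
import numpy as np
from scipy.optimize import minimize
from scipy.special import huber

rng = np.random.default_rng(1)
T, dt = 50, 0.5
p_true = np.cumsum(0.8 * dt * np.ones(T))          # constant-velocity track
y = p_true + 0.3 * rng.normal(size=T)
y[::10] += 5.0                                     # sparse biased outliers

F = np.array([[1.0, dt], [0.0, 1.0]])              # constant-velocity dynamics
q, delta, vmax = 0.1, 1.0, 2.0                     # assumed tuning parameters

def cost(z):
    x = z.reshape(T, 2)                            # states are [position, velocity]
    data = huber(delta, (y - x[:, 0]) / 0.3).sum() # robust measurement term
    dx = x[1:] - x[:-1] @ F.T                      # process-model residuals
    return data + (dx ** 2).sum() / q

z0 = np.column_stack([y, np.zeros(T)]).ravel()
bounds = [(None, None), (-vmax, vmax)] * T         # enforce |velocity| <= vmax
sol = minimize(cost, z0, method="L-BFGS-B", bounds=bounds)
print("final position error:", abs(sol.x.reshape(T, 2)[-1, 0] - p_true[-1]))
```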

    Smart Power Grid Synchronization With Fault Tolerant Nonlinear Estimation

    Effective real-time state estimation is essential for smart grid synchronization as electricity demand continues to grow and renewable energy resources increase their penetration into the grid. In order to provide a more reliable state estimation technique that addresses the problem of bad data in PMU-based power synchronization, this paper presents a novel nonlinear estimation framework to dynamically track frequency, voltage magnitudes, and phase angles. Instead of analyzing directly in the abc coordinate frame, a symmetrical component transformation is employed to separate the positive, negative, and zero sequence networks. Then, Clarke's transformation is used to transform the sequence networks into the αβ stationary coordinate frame, which leads to the system model formulation. A novel fault tolerant extended Kalman filter based real-time estimation framework is proposed for smart grid synchronization with noisy, bad-data measurements. Computer simulation studies have demonstrated that the proposed fault tolerant extended Kalman filter (FTEKF) provides more accurate voltage synchronization results than the extended Kalman filter (EKF). The proposed approach has been implemented on dSPACE DS1103 and National Instruments CompactRIO hardware platforms. Computer simulation and hardware instrumentation results have shown the potential applications of FTEKF in smart grid synchronization.
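
    The Clarke (abc → αβ) step the abstract describes is easy to show in isolation. The sketch below applies it to a balanced three-phase set and reads off phase and frequency naively; it is illustrative only and is not the FTEKF, whose fault tolerant update is the paper's contribution.

```python
# Sketch: Clarke transform of a balanced three-phase signal, then a naive
# phase/frequency readout (the EKF would replace the readout step).
import numpy as np

fs, f0 = 5000.0, 60.0
t = np.arange(0, 0.1, 1 / fs)
abc = np.stack([np.cos(2 * np.pi * f0 * t - k * 2 * np.pi / 3)
                for k in range(3)])              # balanced abc phases

clarke = (2.0 / 3.0) * np.array([[1.0, -0.5, -0.5],
                                 [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
alpha, beta = clarke @ abc                       # stationary alpha-beta frame
phase = np.unwrap(np.arctan2(beta, alpha))
freq = np.diff(phase) * fs / (2 * np.pi)         # instantaneous frequency
print("mean estimated frequency (Hz):", freq.mean())
```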

    Robust Stereo Visual Odometry through a Probabilistic Combination of Points and Line Segments

    Most approaches to stereo visual odometry reconstruct the motion based on the tracking of point features along a sequence of images. However, in low-textured scenes it is often difficult to find a large set of point features, or they may be poorly distributed over the image, so that the behavior of these algorithms deteriorates. This paper proposes a probabilistic approach to stereo visual odometry based on the combination of both points and line segments, which works robustly in a wide variety of scenarios. The camera motion is recovered through non-linear minimization of the projection errors of both point and line segment features. In order to combine the two types of features effectively, their associated errors are weighted according to their covariance matrices, computed from the propagation of Gaussian distribution errors in the sensor measurements. The method is, of course, computationally more expensive than using only one type of feature, but it still runs in real-time on a standard computer and provides interesting advantages, including a straightforward integration into any probabilistic framework commonly employed in mobile robotics.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Project "PROMOVE: Advances in mobile robotics for promoting independent life of elders", funded by the Spanish Government and the "European Regional Development Fund ERDF" under contract DPI2014-55826-R
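
    The key mechanism, combining heterogeneous residuals by weighting each with its inverse covariance, can be sketched in a few lines: whiten each residual by its noise scale before a joint nonlinear solve. The toy below estimates a 2D pose from point correspondences plus line-distance observations; the geometry and noise levels are assumptions, not the paper's stereo formulation.

```python
# Sketch: covariance-weighted fusion of point and line residuals in one solve.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
theta_true, t_true = 0.3, np.array([1.0, -0.5])

def transform(theta, t, p):
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return p @ R.T + t

pts = rng.uniform(-2, 2, size=(15, 2))
obs_pts = transform(theta_true, t_true, pts) + 0.02 * rng.normal(size=(15, 2))

lines_n = rng.normal(size=(10, 2))
lines_n /= np.linalg.norm(lines_n, axis=1, keepdims=True)   # line normals
anchors = rng.uniform(-2, 2, size=(10, 2))
d_obs = np.einsum("ij,ij->i", lines_n, transform(theta_true, t_true, anchors)) \
        + 0.1 * rng.normal(size=10)                         # noisy line distances

sig_pt, sig_ln = 0.02, 0.1       # per-feature-type noise scales (assumed)

def residuals(x):
    theta, t = x[0], x[1:]
    r_pt = (transform(theta, t, pts) - obs_pts).ravel() / sig_pt
    r_ln = (np.einsum("ij,ij->i", lines_n,
                      transform(theta, t, anchors)) - d_obs) / sig_ln
    return np.concatenate([r_pt, r_ln])   # whitened, so both types weigh fairly

sol = least_squares(residuals, x0=np.zeros(3))
print("estimated [theta, tx, ty]:", sol.x)
```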

    Variational Downscaling, Fusion and Assimilation of Hydrometeorological States via Regularized Estimation

    Improved estimation of hydrometeorological states from down-sampled observations and background model forecasts in a noisy environment has been a subject of growing research in the past decades. Here, we introduce a unified framework that ties together the problems of downscaling, data fusion, and data assimilation as ill-posed inverse problems. This framework seeks solutions beyond the classic least squares estimation paradigms by imposing proper regularization, i.e., constraints consistent with the degree of smoothness and the probabilistic structure of the underlying state. We review relevant regularization methods in derivative space and extend classic formulations of the aforementioned problems with particular emphasis on hydrologic and atmospheric applications. Informed by the statistical characteristics of the state variable of interest, the central results of the paper suggest that proper regularization can lead to a more accurate and stable recovery of the true state and hence more skillful forecasts. In particular, using Tikhonov and Huber regularization in the derivative space, the promise of the proposed framework is demonstrated in static downscaling and fusion of synthetic multi-sensor precipitation data, while a data assimilation numerical experiment is presented using the heat equation in a variational setting.
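
    A hedged sketch of the core formulation: downscaling as a regularized inverse problem, with a Huber penalty on first differences standing in for the paper's derivative-space regularization. The averaging observation operator, penalty weight, and Huber threshold below are assumptions chosen for a toy piecewise-smooth signal.

```python
# Sketch: downscaling as min ||Hx - y||^2 + lam * Huber(Dx), D = first difference.
import numpy as np
from scipy.optimize import minimize
from scipy.special import huber

rng = np.random.default_rng(3)
n, k = 120, 4                                    # fine grid size, coarsening factor
x_true = np.zeros(n)
x_true[30:70], x_true[70:] = 1.0, 0.4            # piecewise-flat "precipitation"
H = np.kron(np.eye(n // k), np.full((1, k), 1.0 / k))   # block-averaging operator
y = H @ x_true + 0.02 * rng.normal(size=n // k)  # coarse, noisy observation

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]         # first-difference operator
lam, delta = 0.5, 0.05                           # assumed regularization settings

def cost(x):
    return 0.5 * np.sum((H @ x - y) ** 2) + lam * huber(delta, D @ x).sum()

sol = minimize(cost, H.T @ y, method="L-BFGS-B")
print("relative recovery error:",
      np.linalg.norm(sol.x - x_true) / np.linalg.norm(x_true))
```

    The Huber penalty behaves quadratically on small derivatives (smoothing) and linearly on large ones, so sharp fronts in the state survive better than under pure Tikhonov smoothing.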

    Cramér-Rao bounds for synchronization of rotations

    Synchronization of rotations is the problem of estimating a set of rotations R_i in SO(n), i = 1, ..., N, based on noisy measurements of relative rotations R_i R_j^T. This fundamental problem has found many recent applications, most importantly in structural biology. We provide a framework to study synchronization as estimation on Riemannian manifolds for arbitrary n under a large family of noise models. The noise models we address encompass zero-mean isotropic noise, and we develop tools for Gaussian-like as well as heavy-tailed types of noise in particular. As a main contribution, we derive the Cramér-Rao bounds of synchronization, that is, lower bounds on the variance of unbiased estimators. We find that these bounds are structured by the pseudoinverse of the measurement graph Laplacian, where edge weights are proportional to measurement quality. We leverage this to provide interpretations in terms of random walks and visualization tools for these bounds in both the anchored and anchor-free scenarios. Similar bounds previously established were limited to rotations in the plane and Gaussian-like noise.
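
    The Laplacian structure of the bound is easy to probe numerically: per-rotation variance lower bounds scale with the diagonal of the pseudoinverse of the weighted measurement-graph Laplacian. The toy graph and the unstated proportionality constant below are assumptions for illustration.

```python
# Sketch: variance_i >= c * (L^+)_{ii} for a toy measurement graph, where edge
# weights encode measurement quality (heavier edge = better measurement).
import numpy as np

edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 0.5), (3, 0, 1.0), (0, 2, 1.5)]
N = 4
L = np.zeros((N, N))
for i, j, w in edges:                 # assemble the weighted graph Laplacian
    L[i, i] += w; L[j, j] += w
    L[i, j] -= w; L[j, i] -= w

L_pinv = np.linalg.pinv(L)            # pseudoinverse (anchor-free scenario)
print("per-node CRB profile (up to a constant):", np.diag(L_pinv))
```

    Nodes reached only through low-weight (poor-quality) edges typically show larger diagonal entries and hence weaker estimation guarantees.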

    An improved cosmological parameter inference scheme motivated by deep learning

    Dark matter cannot be observed directly, but its weak gravitational lensing slightly distorts the apparent shapes of background galaxies, making weak lensing one of the most promising probes of cosmology. Several observational studies have measured the effect, and there are currently running and planned efforts to provide even larger and higher resolution weak lensing maps. Due to nonlinearities on small scales, the traditional analysis with two-point statistics does not fully capture all the underlying information. Multiple inference methods have been proposed to extract more details based on higher order statistics, peak statistics, Minkowski functionals, and, recently, convolutional neural networks (CNNs). Here we present an improved convolutional neural network that gives significantly better estimates of the Ω_m and σ_8 cosmological parameters from simulated convergence maps than state-of-the-art methods and is also free of systematic bias. We show that the network exploits information in the gradients around peaks, and with this insight we construct a new, easy-to-understand, and robust peak counting algorithm based on the 'steepness' of peaks, instead of their heights. The proposed scheme is even more accurate than the neural network on high-resolution noiseless maps. With shape noise and lower resolution its relative advantage deteriorates, but it remains more accurate than peak counting.
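
    As a rough illustration of a steepness-based peak statistic (the precise definition is the paper's; this is a guess at its spirit): locate local maxima in a convergence map and bin them by local gradient magnitude rather than by height. The mock Gaussian map and all scales below are assumptions.

```python
# Sketch: count peaks of a mock convergence map binned by local "steepness"
# (smoothed gradient magnitude) instead of peak height.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(4)
kappa = gaussian_filter(rng.normal(size=(256, 256)), sigma=2.0)  # mock map

gy, gx = np.gradient(kappa)
steepness = gaussian_filter(np.hypot(gx, gy), sigma=1.0)  # smoothed |grad|
peaks = kappa == maximum_filter(kappa, size=5)            # local-maxima mask

counts, bin_edges = np.histogram(steepness[peaks], bins=10)
print("peak counts per steepness bin:", counts)
```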

    Precision matrix expansion - efficient use of numerical simulations in estimating errors on cosmological parameters

    Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multi-probe) analyses of the large scale structure of the universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best fit values. For future surveys, reducing both effects to an acceptable level would require an unfeasibly large number of simulations. In this paper we describe a way to expand the true precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A + B, where A is well understood analytically and can be turned off in simulations (e.g., shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require more than 10^5 simulations to reach a similar precision. We extend our analysis to a DES multi-probe case, finding a similar performance.
    Comment: 14 pages, submitted to MNRAS
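
    At leading order, the expansion being described is a Neumann-type series for the precision matrix around a model covariance M: C^{-1} ≈ M^{-1} - M^{-1}(C - M)M^{-1}. The sketch below checks that truncation numerically on a synthetic C = A + B; for clarity the correction term uses the exact C - M, whereas the paper estimates it from simulations.

```python
# Sketch: first-order expansion of the precision matrix around a model M.
import numpy as np

rng = np.random.default_rng(5)
p = 40
A = np.diag(np.linspace(1.0, 2.0, p))        # analytic part (e.g. shape noise)
Q = rng.normal(size=(p, p)) / p
B = Q @ Q.T                                  # hard-to-model part, small vs A
C = A + B                                    # true covariance

M = A + (np.trace(B) / p) * np.eye(p)        # crude covariance model
M_inv = np.linalg.inv(M)
precision_1 = M_inv - M_inv @ (C - M) @ M_inv   # truncated expansion

rel_err = (np.linalg.norm(precision_1 - np.linalg.inv(C))
           / np.linalg.norm(np.linalg.inv(C)))
print("relative error of first-order precision:", rel_err)
```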