
    Wiener filter reloaded: fast signal reconstruction without preconditioning

    We present a high-performance solution to the Wiener filtering problem via a formulation that is dual to the recently developed messenger technique. This new dual messenger algorithm, like its predecessor, efficiently calculates the Wiener filter solution of large and complex data sets without preconditioning and can account for inhomogeneous noise distributions and arbitrary mask geometries. We demonstrate the capabilities of this scheme in signal reconstruction by applying it to a simulated cosmic microwave background (CMB) temperature data set. The performance of this new method is compared to that of the standard messenger algorithm and the preconditioned conjugate gradient (PCG) approach, using a series of well-known convergence diagnostics and their processing times, for the particular problem under consideration. This variant of the messenger algorithm matches the performance of the PCG method in terms of the effectiveness of reconstruction of the input angular power spectrum and converges smoothly to the final solution. The dual messenger algorithm outperforms the standard messenger and PCG methods in terms of execution time, as it runs to completion around 2 and 3-4 times faster than the respective methods, for the specific problem considered. Comment: 13 pages, 10 figures. Accepted for publication in MNRAS main journal.
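    For reference, the Wiener filter that this abstract (and several below) solves for is the standard covariance-weighted estimate; with signal covariance S, noise covariance N, and data d = s + n, it reads (a standard formula, not quoted from the paper itself):

```latex
s_{\mathrm{WF}} = S\,(S + N)^{-1}\, d ,
\qquad \text{equivalently} \qquad
\left( S^{-1} + N^{-1} \right) s_{\mathrm{WF}} = N^{-1} d .
```

    The second form is the large linear system that PCG and the messenger schemes attack: S is simple (sparse or diagonal) in harmonic space while N is simple in pixel space, so no single basis diagonalizes the full operator, which is why a naive iterative solver needs a carefully chosen preconditioner.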

    Stochastic approximation of score functions for Gaussian processes

    We discuss the statistical properties of a recently introduced unbiased stochastic approximation to the score equations for maximum likelihood calculation for Gaussian processes. Under certain conditions, including bounded condition number of the covariance matrix, the approach achieves O(n) storage and nearly O(n) computational effort per optimization step, where n is the number of data sites. Here, we prove that if the condition number of the covariance matrix is bounded, then the approximate score equations are nearly optimal in a well-defined sense. Therefore, not only is the approximation efficient to compute, but it also has statistical properties comparable to those of the exact maximum likelihood estimates. We discuss a modification of the stochastic approximation in which design elements of the stochastic terms mimic patterns from a 2^n factorial design. We prove these designs are always at least as good as the unstructured design, and we demonstrate through simulation that they can produce a substantial improvement over random designs. Our findings are validated by numerical experiments on simulated data sets of up to 1 million observations. We apply the approach to fit a space-time model to over 80,000 observations of total column ozone contained in the latitude band 40°-50°N during April 2012. Comment: Published at http://dx.doi.org/10.1214/13-AOAS627 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
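    The stochastic approximation in question replaces the trace term of the Gaussian score equation, tr(K⁻¹ ∂K/∂θ), with an average of quadratic forms over random sign (Rademacher) probe vectors; averaged over the full 2^n sign ensemble the estimator is exact, which is the intuition behind the factorial designs mentioned above. A minimal dense-matrix sketch follows (function and variable names are illustrative; the scalable version in the paper uses iterative solves instead of `np.linalg.solve`):

```python
import numpy as np

def stochastic_score_trace(K, dK, probes):
    """Estimate tr(K^{-1} dK) as the mean of u_j^T K^{-1} dK u_j over probe vectors u_j.

    probes: array of shape (m, n), one probe vector per row.
    """
    sols = np.linalg.solve(K, dK @ probes.T)     # columns are K^{-1} dK u_j
    quad = np.einsum('jn,nj->j', probes, sols)   # u_j^T K^{-1} dK u_j
    return quad.mean()

# Toy covariance and its derivative with respect to a length-scale-like parameter.
rng = np.random.default_rng(0)
n = 50
x = np.sort(rng.random(n))
D2 = (x[:, None] - x[None, :]) ** 2
K = np.exp(-D2) + 1e-3 * np.eye(n)               # SPD covariance matrix (with jitter)
dK = D2 * np.exp(-D2)                            # derivative of the squared-exp kernel

probes = rng.choice([-1.0, 1.0], size=(200, n))  # unstructured Rademacher probes
approx = stochastic_score_trace(K, dK, probes)
exact = np.trace(np.linalg.solve(K, dK))
```

    With 200 random probes the estimate is unbiased but noisy; structured (factorial-design) probes reduce this variance, which is the paper's second contribution.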

    Efficient Wiener filtering without preconditioning

    We present a new approach to calculate the Wiener filter solution of general data sets. It is trivial to implement, flexible, numerically stable, and guaranteed to converge. Most importantly, it does not require an ingenious choice of preconditioner to work well. The method is capable of taking into account inhomogeneous noise distributions and arbitrary mask geometries. It iteratively builds up the signal reconstruction by means of a messenger field, introduced to mediate between the different preferred bases in which signal and noise properties can be specified most conveniently. Using cosmic microwave background (CMB) radiation data as a showcase, we demonstrate the capabilities of our scheme by computing Wiener filtered WMAP7 temperature and polarization maps at full resolution for the first time. We show how the algorithm can be modified to synthesize fluctuation maps, which, combined with the Wiener filter solution, result in unbiased constrained signal realizations, consistent with the observations. The algorithm performs well even on simulated CMB maps with Planck resolution and dynamic range. Comment: 5 pages, 2 figures. Submitted to Astronomy and Astrophysics. Replaced to match published version.
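    The messenger iteration described above can be sketched in a toy setting where everything is diagonal in a single basis, so the exact Wiener solution is available for comparison. The two-step update with an auxiliary field of covariance T = τI is the standard messenger scheme; all variable names and the specific choice of τ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
S = 1.0 / (1.0 + np.arange(n))                 # toy diagonal signal covariance
N = 0.5 + rng.random(n)                        # inhomogeneous diagonal noise covariance
d = rng.normal(0.0, np.sqrt(S)) + rng.normal(0.0, np.sqrt(N))  # simulated data

tau = 0.99 * N.min()                           # messenger covariance T = tau * I
Nbar = N - tau                                 # residual noise covariance

s = np.zeros(n)
for _ in range(500):
    # step 1 (pixel space): the messenger field t mediates between data and signal
    t = (d / Nbar + s / tau) / (1.0 / Nbar + 1.0 / tau)
    # step 2 (signal space): apply the signal prior to the messenger field
    s = (t / tau) / (1.0 / S + 1.0 / tau)

s_wf = S / (S + N) * d                         # exact diagonal Wiener filter solution
```

    In this diagonal toy the iteration converges geometrically to `s_wf`; the point of the full algorithm is that each step only ever requires the basis in which S or the residual noise is simple (e.g. harmonic versus pixel space), so no preconditioner is needed.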

    Bayesian reconstruction of the cosmological large-scale structure: methodology, inverse algorithms and numerical optimization

    We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood, and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper aims to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy, and inverse regularization techniques. The inverse techniques considered here are asymptotic regularization, the Jacobi, Steepest Descent, Newton-Raphson, and Landweber-Fridman methods, and both linear and non-linear Krylov methods based on Fletcher-Reeves, Polak-Ribiere, and Hestenes-Stiefel Conjugate Gradients. The structures of the current highest-performing algorithms are presented, based on an operator scheme, which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel ARGO software package, the different numerical schemes are benchmarked with 1-, 2-, and 3-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods will ultimately enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark-matter density field, the peculiar velocity field, and the power spectrum can be jointly investigated by a Gibbs-sampling process. Such a method can be applied to correct for the redshift distortions of the observed galaxies and for time-reversal reconstructions of the initial density field. Comment: 40 pages, 11 figures.
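    The unification mentioned above is the usual MAP view of linear inversion: with response operator R, noise covariance N, and signal prior covariance S, the reconstruction minimizes a data term plus a prior term, and the listed methods correspond to different choices of the prior and different iterative solvers for the resulting normal equations (standard formulas, not quoted from the paper):

```latex
\hat{s}
= \arg\min_{s}
\left[ (d - R s)^{\dagger} N^{-1} (d - R s) + s^{\dagger} S^{-1} s \right]
= \left( R^{\dagger} N^{-1} R + S^{-1} \right)^{-1} R^{\dagger} N^{-1} d .
```

    Tikhonov regularization and ridge regression are recovered for S^{-1} = λI, and the generalized Wiener filter for S equal to the signal covariance.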

    An error estimate of Gaussian Recursive Filter in 3Dvar problem

    The computational kernel of the three-dimensional variational data assimilation (3D-Var) problem is a linear system, generally solved by means of an iterative method. The most costly part of each iterative step is a matrix-vector product with a very large covariance matrix having Gaussian correlation structure. This operation may be interpreted as a Gaussian convolution, which is a very expensive numerical kernel. Recursive Filters (RFs) are a well-known way to approximate the Gaussian convolution and are intensively applied in meteorology, oceanography, and forecast models. In this paper, we deal with an oceanographic 3D-Var data assimilation scheme, named OceanVar, where the linear system is solved using the Conjugate Gradient (CG) method, replacing, at each step, the Gaussian convolution with RFs. Here we give theoretical results on the discrete convolution approximation with first-order (1st-RF) and third-order (3rd-RF) recursive filters. Numerical experiments confirm the given error bounds and show the benefits, in terms of accuracy and performance, of the 3rd-RF. Comment: 9 pages.
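    The idea behind a first-order recursive filter is that one forward and one backward pass of a one-pole smoother approximates convolution with a Gaussian at O(n) cost per pass, independent of the Gaussian's width. A minimal 1-D sketch follows (the coefficient choice and names are illustrative; operational RFs such as those in OceanVar use carefully derived coefficients and boundary treatments):

```python
import numpy as np

def recursive_gaussian_1d(x, alpha):
    """Approximate Gaussian smoothing by a forward + backward first-order recursion.

    alpha in (0, 1] controls the smoothing width (smaller alpha = wider kernel).
    """
    y = np.empty_like(x)
    # forward pass: y[i] = alpha * x[i] + (1 - alpha) * y[i-1]
    acc = 0.0
    for i in range(len(x)):
        acc = alpha * x[i] + (1.0 - alpha) * acc
        y[i] = acc
    # backward pass: symmetrizes the one-sided impulse response
    acc = 0.0
    for i in range(len(x) - 1, -1, -1):
        acc = alpha * y[i] + (1.0 - alpha) * acc
        y[i] = acc
    return y

impulse = np.zeros(201)
impulse[100] = 1.0
kernel = recursive_gaussian_1d(impulse, alpha=0.3)  # bell-shaped, unit-mass response
```

    A single first-order pass pair gives only a crude (double-exponential) approximation to a Gaussian; the abstract's third-order recursion (3rd-RF) is markedly closer, which is exactly the accuracy/performance trade-off the paper quantifies.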

    Preconditioning Kernel Matrices

    The computational and storage complexity of kernel machines presents the primary barrier to their scaling to large, modern datasets. A common way to tackle the scalability issue is to use the conjugate gradient algorithm, which relieves the constraints on both storage (the kernel matrix need not be stored) and computation (both stochastic gradients and parallelization can be used). Even so, conjugate gradient is not without its own issues: the conditioning of kernel matrices is often such that conjugate gradients will have poor convergence in practice. Preconditioning is a common approach to alleviating this issue. Here we propose preconditioned conjugate gradients for kernel machines, and develop a broad range of preconditioners particularly useful for kernel matrices. We describe a scalable approach to both solving kernel machines and learning their hyperparameters. We show this approach is exact in the limit of iterations and outperforms state-of-the-art approximations for a given computational budget.
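    A minimal sketch of preconditioned conjugate gradients for a regularized kernel system (K + σ²I)x = y, using a simple Jacobi (diagonal) preconditioner for illustration; the paper's preconditioners are more sophisticated, and all names below are illustrative:

```python
import numpy as np

def pcg(matvec, b, precond, tol=1e-10, maxiter=1000):
    """Preconditioned conjugate gradients for SPD systems; only a matvec is needed."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    z = precond(r)             # preconditioned residual
    p = z.copy()               # initial search direction
    rz = r @ z
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy RBF kernel system: (K + sigma^2 I) x = b.
rng = np.random.default_rng(0)
X = rng.random((100, 2))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
A = np.exp(-D2 / 0.1) + 0.1 * np.eye(100)        # kernel matrix plus noise term
b = rng.normal(size=100)

diag = np.diag(A)
x = pcg(lambda v: A @ v, b, lambda r: r / diag)  # Jacobi-preconditioned solve
```

    Note that the solver only ever touches A through the matvec closure, which is what makes the matrix-free, stochastic, and parallel variants mentioned in the abstract possible.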