21,470 research outputs found

    Fixed point algorithms for estimating power means of positive definite matrices

    Estimating means of data points lying on the Riemannian manifold of symmetric positive-definite (SPD) matrices has proved of great utility in applications requiring interpolation, extrapolation, smoothing, signal detection, and classification. The power means of SPD matrices with exponent p in the interval [-1, 1] interpolate between the Harmonic mean (p = -1) and the Arithmetic mean (p = 1), while the Geometric (Cartan/Karcher) mean, which is the one currently employed in most applications, corresponds to their limit evaluated at p = 0. In this paper, we treat the problem of estimating power means along the continuum p ∈ (-1, 1) given noisy observed measurements. We provide a general fixed point algorithm (MPM) and show that its convergence rate for p = ±0.5 deteriorates very little with the number and dimension of the points given as input. Along the whole continuum, MPM is also robust with respect to the dispersion of the points on the manifold (noise), much more so than the gradient descent algorithm usually employed to estimate the geometric mean. Thus, MPM is an efficient algorithm for the whole family of power means, including the geometric mean, which MPM can approximate to a desired precision by interpolating two solutions obtained with a small ±p value. We also present an approximated version of the MPM algorithm with very low computational complexity for the special case p = ±0.5. Finally, we show the appeal of power means through the classification of brain-computer interface event-related potentials data.
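
    For orientation, a minimal sketch of the quantity being estimated, following the Lim and Palfia characterization: for p in (0, 1], the weighted power mean P of SPD matrices A_1, ..., A_K with weights w_k is the unique SPD solution of P = sum_k w_k (P #_p A_k), where #_p denotes the Riemannian geodesic; negative exponents are obtained by applying the same definition to the inverted matrices and inverting the result, and p → 0 recovers the geometric mean. The Python sketch below simply iterates this defining equation; it illustrates the object involved and is not the paper's MPM algorithm, and all function and parameter names are our own.

```python
# Naive fixed-point iteration for the weighted power mean of SPD matrices,
# based on the defining equation P = sum_k w_k * (P #_p A_k).
# Illustrative sketch only; this is not the MPM algorithm from the paper.
import numpy as np

def spd_power(A, t):
    """A**t for a symmetric positive-definite matrix A, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    return (vecs * vals**t) @ vecs.T

def geodesic(A, B, t):
    """Riemannian geodesic A #_t B = A^(1/2) (A^(-1/2) B A^(-1/2))^t A^(1/2)."""
    Ah, Aih = spd_power(A, 0.5), spd_power(A, -0.5)
    return Ah @ spd_power(Aih @ B @ Aih, t) @ Ah

def power_mean(mats, w, p, tol=1e-9, max_iter=200):
    """Weighted power mean for nonzero p in [-1, 1]; p < 0 handled by duality."""
    if p < 0:
        inv = [np.linalg.inv(A) for A in mats]
        return np.linalg.inv(power_mean(inv, w, -p, tol, max_iter))
    X = sum(wk * A for wk, A in zip(w, mats))   # arithmetic mean as initial guess
    for _ in range(max_iter):
        X_new = sum(wk * geodesic(X, A, p) for wk, A in zip(w, mats))
        if np.linalg.norm(X_new - X) < tol * np.linalg.norm(X):
            return X_new
        X = X_new
    return X

# Example usage: equally weighted mean of five random 4x4 SPD matrices
# A = [M @ M.T + np.eye(4) for M in np.random.randn(5, 4, 4)]
# P_half = power_mean(A, [0.2] * 5, 0.5)
```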

    Stochastic approximation of score functions for Gaussian processes

    We discuss the statistical properties of a recently introduced unbiased stochastic approximation to the score equations for maximum likelihood calculation for Gaussian processes. Under certain conditions, including bounded condition number of the covariance matrix, the approach achieves $O(n)$ storage and nearly $O(n)$ computational effort per optimization step, where $n$ is the number of data sites. Here, we prove that if the condition number of the covariance matrix is bounded, then the approximate score equations are nearly optimal in a well-defined sense. Therefore, not only is the approximation efficient to compute, but it also has statistical properties comparable to those of the exact maximum likelihood estimates. We discuss a modification of the stochastic approximation in which the design elements of the stochastic terms mimic patterns from a $2^n$ factorial design. We prove that these designs are always at least as good as the unstructured design, and we demonstrate through simulation that they can produce a substantial improvement over random designs. Our findings are validated by numerical experiments on simulated data sets of up to 1 million observations. We apply the approach to fit a space-time model to over 80,000 observations of total column ozone contained in the latitude band $40^{\circ}$-$50^{\circ}$N during April 2012. Comment: Published at http://dx.doi.org/10.1214/13-AOAS627 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
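
    For context, a minimal sketch of the score equations and of the Hutchinson-type randomization involved (generic notation, not taken from the paper): for a zero-mean Gaussian process with covariance $K = K(\theta)$ and data $y$,

    $$ \frac{\partial \ell(\theta)}{\partial \theta_i} = \tfrac{1}{2}\, y^{\top} K^{-1} \frac{\partial K}{\partial \theta_i} K^{-1} y - \tfrac{1}{2}\operatorname{tr}\!\Big(K^{-1}\frac{\partial K}{\partial \theta_i}\Big), $$

    and an unbiased stochastic approximation replaces the trace term by

    $$ \frac{1}{N}\sum_{j=1}^{N} u_j^{\top} K^{-1} \frac{\partial K}{\partial \theta_i}\, u_j, \qquad u_j \in \{-1, +1\}^{n}, \quad \mathbb{E}\big[u_j u_j^{\top}\big] = I, $$

    so each optimization step needs only linear solves with $K$ (for example by iterative methods) rather than a full factorization; choosing the $\pm 1$ probe vectors to mimic a $2^n$ factorial design is the structured variant discussed in the abstract.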

    Diagonality Measures of Hermitian Positive-Definite Matrices with Application to the Approximate Joint Diagonalization Problem

    In this paper, we introduce properly invariant diagonality measures of Hermitian positive-definite matrices. These diagonality measures are defined as distances or divergences between a given positive-definite matrix and its diagonal part. We then give closed-form expressions for these diagonality measures and discuss their invariance properties. The diagonality measure based on the log-determinant $\alpha$-divergence is general enough to include, as a special case, a diagonality criterion used by the signal processing community. These diagonality measures are then used to formulate minimization problems for finding the approximate joint diagonalizer of a given set of Hermitian positive-definite matrices. Numerical computations based on a modified Newton method are presented and discussed.
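
    As a concrete instance (a sketch in our own notation, not necessarily the paper's normalization), the Kullback-Leibler divergence between zero-mean Gaussian densities with covariances $C$ and $\operatorname{diag}(C)$ yields the diagonality measure

    $$ D\big(C \,\|\, \operatorname{diag}(C)\big) = \tfrac{1}{2}\Big(\operatorname{tr}\big(\operatorname{diag}(C)^{-1} C\big) - n + \log\tfrac{\det \operatorname{diag}(C)}{\det C}\Big) = \tfrac{1}{2}\log\tfrac{\det \operatorname{diag}(C)}{\det C}, $$

    since $\operatorname{diag}(C)^{-1} C$ has unit diagonal; summing this quantity over the congruence-transformed matrices $B C_k B^{\mathsf H}$ gives the classical joint-diagonalization criterion used in signal processing.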

    A globally convergent matricial algorithm for multivariate spectral estimation

    In this paper, we first describe a matricial Newton-type algorithm designed to solve the multivariable spectrum approximation problem. We then prove its global convergence. Finally, we apply this approximation procedure to multivariate spectral estimation and test its effectiveness through simulation. The simulations show that, in the case of short observation records, this method may provide a valid alternative to standard multivariable identification techniques such as MATLAB's PEM and N4SID.
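
    For readers unfamiliar with the setting, the generic form of the multivariable spectrum approximation problem in this line of work (stated here only as a sketch; the specific divergence used in the paper is not reproduced) is

    $$ \min_{\Phi \succ 0} \; d(\Phi, \Psi) \quad \text{subject to} \quad \int_{-\pi}^{\pi} G(e^{j\vartheta})\, \Phi(e^{j\vartheta})\, G(e^{j\vartheta})^{*} \, \frac{d\vartheta}{2\pi} = \Sigma, $$

    where $\Psi$ is a prior spectral density, $G$ is a bank of filters fixed by the chosen covariance-extension setup, and $\Sigma$ is a state covariance estimated from data.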

    Including parameter dependence in the data and covariance for cosmological inference

    The final step of most large-scale structure analyses involves the comparison of power spectra or correlation functions to theoretical models. The theoretical models clearly depend on parameters, but frequently the measurements and the covariance matrix depend on some of the parameters as well. We show that a very simple interpolation scheme over an unstructured mesh provides an efficient way to include this parameter dependence self-consistently in the analysis at modest computational expense. We describe two schemes for covariance matrices. The scheme that uses the geometric structure of such matrices performs roughly twice as well as the simplest scheme, though both perform very well. Comment: 17 pages, 4 figures, matches version published in JCAP.
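
    A minimal sketch of this kind of scheme is given below. The function names, the choice of scipy's Delaunay-based interpolator, and the log-matrix variant are our own illustrative assumptions, not the paper's implementation.

```python
# Sketch: interpolate a parameter-dependent covariance matrix over an unstructured
# set of parameter points theta_nodes (shape (m, d), d >= 2), given precomputed
# covariances cov_nodes (shape (m, n, n)). use_log=True interpolates matrix
# logarithms instead of entries, an illustrative "geometry-aware" variant.
# The interpolator returns NaN outside the convex hull of the nodes.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def _sym_log(C):
    vals, vecs = np.linalg.eigh(C)
    return (vecs * np.log(vals)) @ vecs.T

def _sym_exp(M):
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.exp(vals)) @ vecs.T

def build_cov_interpolator(theta_nodes, cov_nodes, use_log=True):
    mats = np.array([_sym_log(C) for C in cov_nodes]) if use_log else np.asarray(cov_nodes)
    m, n, _ = mats.shape
    interp = LinearNDInterpolator(theta_nodes, mats.reshape(m, n * n))
    def cov_at(theta):
        M = interp(np.atleast_2d(theta))[0].reshape(n, n)
        return _sym_exp(M) if use_log else M
    return cov_at
```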

    Differential fast fixed-point algorithms for underdetermined instantaneous and convolutive partial blind source separation

    This paper concerns underdetermined linear instantaneous and convolutive blind source separation (BSS), i.e., the case when the number of observed mixed signals is lower than the number of sources. We propose partial BSS methods, which separate supposedly nonstationary sources of interest while keeping residual components for the other, supposedly stationary, "noise" sources. These methods are based on the general differential BSS concept that we introduced previously. In the instantaneous case, the approach proposed in this paper consists of a differential extension of the FastICA method (which does not apply to underdetermined mixtures). In the convolutive case, we extend our recent time-domain fast fixed-point C-FICA algorithm to underdetermined mixtures. Both proposed approaches thus keep the attractive features of the FastICA and C-FICA methods. Our approaches are based on differential sphering processes, followed by the optimization of the differential nonnormalized kurtosis that we introduce in this paper. Experimental tests show that these differential algorithms are much more robust to noise sources than the standard FastICA and C-FICA algorithms. Comment: This paper describes our differential FastICA-like algorithms for linear instantaneous and convolutive underdetermined mixtures.
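
    A one-line sketch of the differential idea (standard mixing-model notation, assumed here rather than taken from the paper): with observations $x(t) = A s(t)$ and correlation matrices estimated over two time windows $T_1$ and $T_2$,

    $$ R_x(T_2) - R_x(T_1) = A\,\big(R_s(T_2) - R_s(T_1)\big)\,A^{\top}, $$

    so the stationary "noise" sources, whose statistics are identical in the two windows, cancel in the difference and only the nonstationary sources of interest remain; differential sphering and the differential nonnormalized kurtosis apply this differencing to the second- and fourth-order statistics that FastICA and C-FICA normally use.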

    Moments of spectral functions: Monte Carlo evaluation and verification

    The subject of the present study is the Monte Carlo path-integral evaluation of the moments of spectral functions. Such moments can be computed by formal differentiation of certain estimating functionals that are infinitely differentiable with respect to time whenever the potential function is arbitrarily smooth. Here, I demonstrate that the numerical differentiation of the estimating functionals can be implemented more successfully by means of pseudospectral methods (e.g., exact differentiation of a Chebyshev polynomial interpolant), which utilize information from the entire interval $(-\beta\hbar/2, \beta\hbar/2)$. The algorithmic detail that leads to robust numerical approximations is that the path-integral action, and not the actual estimating functional, is interpolated. Although the resulting approximation to the estimating functional is nonlinear, the derivatives can be computed from it in a fast and stable way by contour integration in the complex plane, with the help of the Cauchy integral formula (e.g., by Lyness' method). An interesting aspect of the present development is that Hamburger's conditions for a finite sequence of numbers to be a moment sequence provide the necessary and sufficient criteria for the computed data to be compatible with the existence of an inversion algorithm. Finally, the issue of the appearance of the sign problem in the computation of moments, albeit in a milder form than for other quantities, is addressed. Comment: 13 pages, 2 figures.
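
    As an illustration of the last computational ingredient, here is a small sketch of derivative evaluation by the Cauchy integral formula (a Lyness-type trapezoidal contour rule); the function and parameter names are ours and the snippet is generic, not the paper's code.

```python
# Derivatives of an analytic function via the Cauchy integral formula:
#   f^(n)(x0) = n!/(2*pi*i) * contour_integral f(z) / (z - x0)^(n+1) dz,
# approximated with the trapezoidal rule on a circle of radius r around x0
# (Lyness-type rule). Assumes f is analytic on and inside the circle and n_max < m.
import numpy as np
from math import factorial

def cauchy_derivatives(f, x0, n_max, r=0.1, m=64):
    """Approximate f^(0)(x0), ..., f^(n_max)(x0) for a real-analytic f."""
    k = np.arange(m)
    z = x0 + r * np.exp(2j * np.pi * k / m)   # equally spaced nodes on the circle
    fz = np.array([f(zk) for zk in z])
    c = np.fft.fft(fz) / m                    # c[n] ~ f^(n)(x0) * r**n / n!
    return np.array([factorial(n) * c[n].real / r**n for n in range(n_max + 1)])

# Example: all derivatives of exp at 0 equal 1
print(cauchy_derivatives(np.exp, 0.0, 4))     # ~ [1. 1. 1. 1. 1.]
```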