    On estimation of the diagonal elements of a sparse precision matrix

    In this paper, we present several estimators of the diagonal elements of the inverse of the covariance matrix, called the precision matrix, of a sample of iid random vectors. The focus is on high-dimensional vectors having a sparse precision matrix. It is now well understood that when the underlying distribution is Gaussian, the columns of the precision matrix can be estimated independently from one another by solving linear regression problems under sparsity constraints. This approach leads to a computationally efficient strategy for estimating the precision matrix: it starts by estimating the regression vectors, then estimates the diagonal entries of the precision matrix and, in a final step, combines these estimates to obtain estimators of the off-diagonal entries. While the step of estimating the regression vectors has been intensively studied over the past decade, the problem of deriving statistically accurate estimators of the diagonal entries has received much less attention. The goal of the present paper is to fill this gap by presenting four estimators of the diagonal entries of the precision matrix, which seem the most natural ones, and then performing a comprehensive empirical evaluation of them. The estimators under consideration are the residual variance, the relaxed maximum likelihood, the symmetry-enforced maximum likelihood and the penalized maximum likelihood. We show, both theoretically and empirically, that when the aforementioned regression vectors are estimated without error, the symmetry-enforced maximum likelihood estimator has the smallest estimation error. However, in a more realistic setting, when the regression vector is estimated by a sparsity-favoring, computationally efficient method, the qualities of the estimators become relatively comparable, with a slight advantage for the residual variance estimator.Comment: Companion R package at http://cran.r-project.org/web/packages/DESP/index.htm
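
    As a concrete illustration of the residual variance estimator mentioned in this abstract, the sketch below runs one nodewise Lasso regression per column and inverts the empirical residual variance. The function name, the penalty level alpha and the toy tridiagonal precision matrix are illustrative assumptions, not taken from the paper or from its DESP package.

```python
# Sketch of the residual variance estimator of the precision-matrix diagonal
# via nodewise Lasso regressions (illustrative; not the paper's DESP code).
# Gaussian identity used: X_j = X_{-j} beta_j + eps_j with Var(eps_j) = 1 / Omega_{jj}.
import numpy as np
from sklearn.linear_model import Lasso

def precision_diagonal_residual_variance(X, alpha=0.1):
    """Estimate Omega_{jj} = 1 / Var(eps_j) column by column.

    X     : (n, p) centered data matrix
    alpha : Lasso penalty (illustrative value; would be tuned in practice)
    """
    n, p = X.shape
    omega_diag = np.empty(p)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        lasso = Lasso(alpha=alpha, fit_intercept=False)
        lasso.fit(X[:, others], X[:, j])
        residuals = X[:, j] - X[:, others] @ lasso.coef_
        omega_diag[j] = n / np.sum(residuals ** 2)   # inverse residual variance
    return omega_diag

# Toy check with a tridiagonal (hence sparse) precision matrix whose diagonal is 1.
rng = np.random.default_rng(0)
p = 20
Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Omega), size=500)
print(precision_diagonal_residual_variance(X)[:5])
```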

    Decorrelating the Power Spectrum of Galaxies

    It is shown how to decorrelate the (prewhitened) power spectrum measured from a galaxy survey into a set of high resolution uncorrelated band-powers. The treatment includes nonlinearity, but not redshift distortions. Amongst the infinitely many possible decorrelation matrices, the square root of the Fisher matrix, or a scaled version thereof, offers a particularly good choice, in the sense that the band-power windows are narrow, approximately symmetric, and well-behaved in the presence of noise. We use this method to compute band-power windows for, and the information content of, the Sloan Digital Sky Survey, the Las Campanas Redshift Survey, and the IRAS 1.2 Jy Survey.Comment: 11 pages, including 8 embedded PostScript figures. Minor changes to agree with published version
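
    A minimal sketch of the decorrelation step described above, assuming a Fisher matrix F for the prewhitened band powers and the vector of correlated estimates p_hat are already in hand. The row normalization (each window summing to one) is one common convention; the function and variable names are illustrative, not the authors' pipeline.

```python
# Decorrelate band powers with the (scaled) square root of the Fisher matrix.
import numpy as np

def decorrelate_band_powers(F, p_hat):
    """Return decorrelated band powers, their covariance and the window matrix W."""
    # Symmetric square root F^{1/2} via eigendecomposition (F symmetric positive definite).
    evals, evecs = np.linalg.eigh(F)
    F_sqrt = evecs @ np.diag(np.sqrt(evals)) @ evecs.T

    # Scale each row so the band-power windows are normalized to unit sum.
    W = F_sqrt / F_sqrt.sum(axis=1)[:, None]

    q = W @ p_hat                       # decorrelated band powers
    cov_q = W @ np.linalg.inv(F) @ W.T  # diagonal up to round-off
    return q, cov_q, W

# Toy check with a small correlated Fisher matrix.
F = np.array([[4.0, 1.0], [1.0, 3.0]])
q, cov_q, W = decorrelate_band_powers(F, np.array([1.0, 2.0]))
print(np.round(cov_q, 6))               # off-diagonal entries vanish
```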

    A "Square-root" Method for the Density Matrix and its Applications to Lindblad Operators

    The evolution of open systems, subject to both Hamiltonian and dissipative forces, is studied by writing the $nm$ element of the time ($t$) dependent density matrix in the form $\rho_{nm}(t) = \frac{1}{A} \sum_{\alpha=1}^{A} \gamma^{\alpha}_{n}(t)\,\gamma^{\alpha *}_{m}(t)$. The so-called "square root factors", the $\gamma(t)$'s, are non-square matrices and are averaged over the $A$ systems ($\alpha$) of the ensemble. This square-root description is exact. Evolution equations are then postulated for the $\gamma(t)$ factors, chosen so as to reduce to the Lindblad-type evolution equations for the diagonal terms of the density matrix; for the off-diagonal terms they differ from the Lindblad equations. The square root factors $\gamma(t)$ are not unique, and the equations for the $\gamma(t)$'s depend on the specific representation chosen. Two criteria can be suggested for fixing the choice of the $\gamma(t)$'s: one is simplicity of the resulting equations, and the other has to do with the reduction of the difference between the $\gamma(t)$ formalism and the Lindblad equations.Comment: 36 pages, 7 figures
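
    The reconstruction formula above is easy to check numerically. The sketch below builds a density matrix from randomly drawn square root factors (standing in for $\gamma(t)$'s evolved by the postulated equations, which are not reproduced here) and verifies Hermiticity, unit trace and positivity; the normalization of the factors is an assumption made for this check.

```python
# rho_{nm} = (1/A) * sum_alpha gamma^alpha_n * conj(gamma^alpha_m), with random factors.
import numpy as np

rng = np.random.default_rng(1)
A, N = 500, 4                                  # ensemble size, Hilbert-space dimension

# One square-root factor per ensemble member, normalized so that Tr(rho) = 1.
gamma = rng.normal(size=(A, N)) + 1j * rng.normal(size=(A, N))
gamma /= np.linalg.norm(gamma, axis=1, keepdims=True)

rho = np.einsum('an,am->nm', gamma, gamma.conj()) / A

print(np.allclose(rho, rho.conj().T))              # Hermitian by construction
print(np.trace(rho).real)                          # 1.0 with the chosen normalization
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12))   # positive semidefinite
```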

    A parallel Kalman filter via the square root Kalman filtering

    A parallel algorithm for Kalman filtering with contaminated observations is developed. The parallel implementation is based on the square root version of the Kalman filter (see [3]). This represents a great improvement over serial implementations, drastically reducing the computational cost of each state update.
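
    The square-root mechanics underlying such a filter can be sketched as follows. This is a standard serial QR-based array formulation that propagates Cholesky factors of the covariances; it is not the parallel, contamination-robust algorithm of the paper or of its reference [3].

```python
# Serial square-root Kalman filter step via QR triangularization of pre-arrays.
import numpy as np

def sqrt_predict(x, S, F, Sq):
    """Time update. S and Sq are Cholesky factors: P = S S^T, Q = Sq Sq^T."""
    x_pred = F @ x
    # Triangularize [F S, Sq]; its Gram matrix is P_pred = F P F^T + Q.
    _, R = np.linalg.qr(np.hstack([F @ S, Sq]).T)
    return x_pred, R.T                              # lower-triangular factor of P_pred

def sqrt_update(x, S, z, H, Sr):
    """Measurement update. Sr is a Cholesky factor of the measurement noise covariance."""
    m, n = H.shape[0], S.shape[0]
    # Triangularizing this pre-array yields the innovation factor, gain and posterior factor.
    pre = np.block([[Sr, H @ S],
                    [np.zeros((n, m)), S]])
    _, R = np.linalg.qr(pre.T)
    post = R.T                                      # lower triangular
    Se = post[:m, :m]                               # factor of the innovation covariance
    Kbar = post[m:, :m]                             # Kalman gain times Se
    S_new = post[m:, m:]                            # factor of the posterior covariance
    x_new = x + Kbar @ np.linalg.solve(Se, z - H @ x)
    return x_new, S_new
```

    Propagating the factors S instead of the full covariance matrices keeps the reconstructed covariance symmetric and positive semidefinite in finite precision, which is the usual motivation for square-root filtering.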

    Salem numbers and arithmetic hyperbolic groups

    In this paper we prove that there is a direct relationship between Salem numbers and translation lengths of hyperbolic elements of arithmetic hyperbolic groups that are determined by a quadratic form over a totally real number field. As an application we determine a sharp lower bound for the length of a closed geodesic in a noncompact arithmetic hyperbolic n-orbifold for each dimension n. We also discuss a "short geodesic conjecture", and prove its equivalence with "Lehmer's conjecture" for Salem numbers.Comment: The exposition in version 3 is more compact; this shortens the paper: 26 pages now instead of 37. A discussion of Lehmer's problem has been added in Section 1.2. Final version, to appear in Trans. AMS
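
    For orientation, the standard definitions behind the terminology of this abstract can be stated as follows; these are the usual textbook statements, not results quoted from the paper.

```latex
% A Salem number is a real algebraic integer $\lambda > 1$ whose remaining Galois
% conjugates all lie in the closed unit disk, with at least one on the unit circle.
% Lehmer's conjecture, in the form relevant for Salem numbers, asserts a uniform gap above 1:
\[
  \exists\, \varepsilon > 0 \quad\text{such that every Salem number } \lambda \text{ satisfies } \lambda \ge 1 + \varepsilon .
\]
% The smallest known Salem number is the largest real root $\lambda_L \approx 1.17628$ of
% Lehmer's polynomial
\[
  L(x) = x^{10} + x^{9} - x^{7} - x^{6} - x^{5} - x^{4} - x^{3} + x + 1 .
\]
```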