On estimation of the diagonal elements of a sparse precision matrix
In this paper, we present several estimators of the diagonal elements of the
inverse of the covariance matrix, called precision matrix, of a sample of iid
random vectors. The focus is on high dimensional vectors having a sparse
precision matrix. It is now well understood that when the underlying
distribution is Gaussian, the columns of the precision matrix can be estimated
independently from one another by solving linear regression problems under
sparsity constraints. This approach leads to a computationally efficient
strategy for estimating the precision matrix that starts by estimating the
regression vectors, then estimates the diagonal entries of the precision matrix
and, in a final step, combines these estimators to obtain estimators of the
off-diagonal entries. While the step of estimating the regression vector has
been intensively studied over the past decade, the problem of deriving
statistically accurate estimators of the diagonal entries has received much
less attention. The goal of the present paper is to fill this gap by presenting
four estimators---that seem the most natural ones---of the diagonal entries of
the precision matrix and then performing a comprehensive empirical evaluation
of these estimators. The estimators under consideration are the residual
variance, the relaxed maximum likelihood, the symmetry-enforced maximum
likelihood and the penalized maximum likelihood. We show, both theoretically
and empirically, that when the aforementioned regression vectors are estimated
without error, the symmetry-enforced maximum likelihood estimator has the
smallest estimation error. However, in a more realistic setting when the
regression vector is estimated by a sparsity-favoring computationally efficient
method, the qualities of the estimators become relatively comparable with a
slight advantage for the residual variance estimator.
Comment: Companion R package at http://cran.r-project.org/web/packages/DESP/index.htm
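The residual variance estimator discussed above can be sketched in a few lines of NumPy. This is a toy illustration only, not code from the paper or its DESP package: ordinary least squares stands in for the sparsity-favoring regression step (which the paper would use in high dimensions), and all names and sizes are invented for the example. Each diagonal entry of the precision matrix is estimated as the inverse of the residual variance from regressing one coordinate on the others.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth sparse precision matrix (tridiagonal, positive definite)
Omega = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 1.0],
                  [0.0, 1.0, 2.0]])
Sigma = np.linalg.inv(Omega)              # covariance of the sample
n = 5000
X = rng.multivariate_normal(np.zeros(3), Sigma, size=n)

def residual_variance_diag(X):
    """Estimate Omega_jj as the inverse residual variance of
    regressing column j on the remaining columns."""
    n, p = X.shape
    diag = np.empty(p)
    for j in range(p):
        others = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        diag[j] = n / np.sum(resid ** 2)  # 1 / residual variance
    return diag

est = residual_variance_diag(X)
```

With a Gaussian sample, `1 / Var(X_j | X_{-j})` equals `Omega_jj`, so `est` should land close to the diagonal `(2, 2, 2)` for this sample size.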
Decorrelating the Power Spectrum of Galaxies
It is shown how to decorrelate the (prewhitened) power spectrum measured from
a galaxy survey into a set of high resolution uncorrelated band-powers. The
treatment includes nonlinearity, but not redshift distortions. Amongst the
infinitely many possible decorrelation matrices, the square root of the Fisher
matrix, or a scaled version thereof, offers a particularly good choice, in the
sense that the band-power windows are narrow, approximately symmetric, and
well-behaved in the presence of noise. We use this method to compute band-power
windows for, and the information content of, the Sloan Digital Sky Survey, the
Las Campanas Redshift Survey, and the IRAS 1.2 Jy Survey.Comment: 11 pages, including 8 embedded PostScript figures. Minor changes to
agree with published versio
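The decorrelation step described above can be illustrated numerically. The sketch below (invented toy Fisher matrix, not survey data) takes the symmetric square root of a Fisher matrix F, rescales each row so the band-power window sums to unity, and checks that the resulting band-powers have a diagonal covariance, since M F^{-1} M^T collapses to a diagonal matrix when M is a scaled square root of F.

```python
import numpy as np

# Toy Fisher matrix for 4 correlated band-powers (illustrative numbers)
F = np.array([[4.0, 1.0, 0.2, 0.0],
              [1.0, 3.0, 0.8, 0.1],
              [0.2, 0.8, 2.5, 0.5],
              [0.0, 0.1, 0.5, 2.0]])

# Symmetric square root of F via eigendecomposition
w, V = np.linalg.eigh(F)
M = V @ np.diag(np.sqrt(w)) @ V.T

# Scale each row so the band-power window sums to unity
M /= M.sum(axis=1, keepdims=True)

# Decorrelated band-powers q = M p: their covariance M F^{-1} M^T
# is diagonal, because sqrt(F) F^{-1} sqrt(F) = identity.
cov_q = M @ np.linalg.inv(F) @ M.T
```

The off-diagonal entries of `cov_q` vanish up to round-off, which is the defining property of the decorrelated band-powers.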
A "Square-root" Method for the Density Matrix and its Applications to Lindblad Operators
The evolution of open systems, subject to both Hamiltonian and dissipative
forces, is studied by writing the element of the time ($t$) dependent
density matrix in the form
$$\rho_{nm}(t) = \frac{1}{A} \sum_{\alpha=1}^{A} \gamma^{\alpha}_n(t)\,\gamma^{\alpha *}_m(t).$$
The so-called "square root factors", the $\gamma$'s, are non-square matrices
and are averaged over the systems ($\alpha = 1, \ldots, A$) of the ensemble.
This square-root description is exact. Evolution equations are then postulated
for the $\gamma$ factors, such as to reduce to the Lindblad-type evolution
equations for the diagonal terms in the density matrix. For the off-diagonal
terms they differ from the Lindblad equations. The "square root factors"
$\gamma$ are not unique, and the equations for the $\gamma$'s depend on the
specific representation chosen. Two criteria can be suggested for fixing the
choice of the $\gamma$'s: one is the simplicity of the resulting equations,
and the other has to do with reducing the difference between the formalism
and the Lindblad equations.
Comment: 36 pages, 7 figures
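The square-root representation of the density matrix can be checked numerically. The sketch below (random complex factors, illustrative sizes only) builds $\rho_{nm} = \frac{1}{A}\sum_\alpha \gamma^\alpha_n \gamma^{\alpha *}_m$ and verifies the two properties this construction guarantees by design: $\rho$ is Hermitian and positive semidefinite, and it can be rescaled to unit trace.

```python
import numpy as np

rng = np.random.default_rng(1)
N, A = 4, 6   # N basis states, A ensemble members (illustrative sizes)

# One "square-root factor" vector gamma^alpha per ensemble member
gamma = rng.normal(size=(A, N)) + 1j * rng.normal(size=(A, N))

# rho_nm = (1/A) sum_alpha gamma_n^alpha * conj(gamma_m^alpha)
rho = (gamma[:, :, None] * gamma[:, None, :].conj()).mean(axis=0)

# Normalize to a unit-trace density matrix
rho /= np.trace(rho).real
```

Because $\rho$ is an average of outer products $\gamma\gamma^\dagger$, Hermiticity and positive semidefiniteness hold automatically, regardless of which (non-unique) factors are chosen.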
A parallel Kalman filter via the square root Kalman filtering
A parallel algorithm for Kalman filtering with contaminated observations is developed. The parallel implementation is based on the square root version of the Kalman filter (see [3]). This represents a great improvement over serial implementations, drastically reducing computational costs for each state update.
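The square-root form of the Kalman filter propagates a Cholesky factor of the state covariance instead of the covariance itself, which improves numerical robustness. A minimal sketch of the time-update step, assuming a standard QR-based formulation (the function name and test matrices are illustrative, not from the paper):

```python
import numpy as np

def sqrt_time_update(F, S, Sq):
    """Propagate the factor S of the state covariance P = S S^T through
    x' = F x + w, Cov(w) = Sq Sq^T, without ever forming P explicitly.

    Stacking S^T F^T over Sq^T and taking a QR decomposition gives a
    triangular R with R^T R = F P F^T + Q, so R^T is the updated factor."""
    A = np.vstack([S.T @ F.T, Sq.T])
    _, R = np.linalg.qr(A)
    return R.T

# Illustrative 2-state example
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
P = np.array([[1.0, 0.2],
              [0.2, 0.5]])
Q = 0.01 * np.eye(2)

S_new = sqrt_time_update(F, np.linalg.cholesky(P), np.linalg.cholesky(Q))
```

Reconstructing `S_new @ S_new.T` recovers the usual covariance update `F P F^T + Q`; in a parallel implementation, the orthogonal transformations underlying this QR step are what get distributed across processors.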