Rank Test Based On Matrix Perturbation Theory
In this paper, we propose methods for determining the rank of a matrix. We consider a rank test for an unobserved matrix for which an estimate exists that is asymptotically normal at rate N^{1/2}, where N is the sample size. The test statistic is based on the smallest estimated singular values. Using matrix perturbation theory, we show that the smallest singular values of the random matrix converge to zero at rate O(N^{-1}) and that the corresponding left and right singular vectors converge at rate O(N^{-1/2}). Moreover, the asymptotic distribution of the test statistic is shown to be chi-squared. The test has the advantage over standard tests of being easier to compute. Two approaches are considered: a sequential testing strategy and an information theoretic criterion. We establish strong consistency of the rank determination under both approaches. Some economic applications are discussed and simulation evidence is given for the test; its performance is compared to that of the LDU rank tests of Gill and Lewbel (1992) and Cragg and Donald (1996). Keywords: Rank Testing; Matrix Perturbation Theory; Rank Estimation; Singular Value Decomposition; Sequential Testing Procedure; Information Theoretic Criterion.
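The thresholding idea behind such a test can be illustrated with a minimal sketch: since the "noise" singular values shrink at rate N^{-1/2} while the structural ones stay bounded away from zero, counting singular values above a vanishing threshold recovers the rank. The threshold constant below is an illustrative assumption, not the paper's calibrated chi-squared critical value.

```python
import numpy as np

def estimate_rank(A_hat, N, c=10.0):
    """Estimate the rank of an unobserved matrix from a noisy
    estimate A_hat whose error is O(N^{-1/2}).

    Structural singular values stay bounded away from zero, while
    the remaining ones are of order N^{-1/2}, so a threshold
    c * N^{-1/2} separates them for large N. (c is an illustrative
    tuning constant, not the paper's critical value.)"""
    s = np.linalg.svd(A_hat, compute_uv=False)
    return int(np.sum(s > c / np.sqrt(N)))

# Usage: a rank-2 matrix observed with O(N^{-1/2}) noise.
rng = np.random.default_rng(0)
N = 10_000
A = np.array([[1.0, 0.0, 1.0],
              [4.0, 1.0, 2.0],
              [5.0, 1.0, 3.0]])  # row3 = row1 + row2, so rank 2
A_hat = A + rng.normal(scale=1.0 / np.sqrt(N), size=A.shape)
print(estimate_rank(A_hat, N))  # 2
```

In practice the constant would be replaced by a critical value from the test statistic's asymptotic chi-squared distribution, which is what makes the sequential testing strategy consistent.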
On the optimality and sharpness of Laguerre's lower bound on the smallest eigenvalue of a symmetric positive definite matrix
Lower bounds on the smallest eigenvalue of a symmetric positive definite matrix play an important role in condition number estimation and in iterative methods for singular value computation. In particular, bounds of this kind have attracted attention recently because they can be computed cheaply when the matrix is tridiagonal. In this paper, we focus on these bounds and investigate their properties in detail. First, we consider the problem of finding the optimal bound computable from this information, and show that the so-called Laguerre lower bound is the optimal one in terms of sharpness. Next, we study the gap between the Laguerre bound and the smallest eigenvalue. We characterize the situation in which the gap becomes largest in terms of the eigenvalue distribution of the matrix, and characterize when the gap becomes smallest. These results will be useful, for example, in designing efficient shift strategies for singular value computation algorithms.
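The abstract's inline formulas were lost in extraction; one standard statement of the Laguerre-type lower bound uses only the traces of A^{-1} and A^{-2}, and that concrete form (an assumption here, not taken from the abstract) can be sketched as follows:

```python
import numpy as np

def laguerre_lower_bound(A):
    """Lower bound on the smallest eigenvalue of a symmetric
    positive definite matrix A, computed from tr(A^{-1}) and
    tr(A^{-2}) only. (Formula assumed from the standard Laguerre
    bound literature; not reproduced from the abstract.)"""
    n = A.shape[0]
    Ainv = np.linalg.inv(A)
    t1 = np.trace(Ainv)
    t2 = np.trace(Ainv @ Ainv)
    # n*t2 - t1^2 >= 0 by Cauchy-Schwarz, so the sqrt is real.
    return n / (t1 + np.sqrt((n - 1) * (n * t2 - t1 ** 2)))

# Usage: the bound never exceeds the true smallest eigenvalue,
# and is exact when all eigenvalues coincide.
A = np.diag([1.0, 2.0, 3.0])
print(laguerre_lower_bound(A))          # ~0.9884, true value 1.0
print(laguerre_lower_bound(np.eye(3)))  # exactly 1.0
```

For a tridiagonal matrix the two traces can be obtained without explicitly forming A^{-1}, which is presumably what makes the bound cheap to evaluate in that setting.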
Numerical investigations of linear least squares methods for derivative estimation
The results of a numerical investigation into the errors of least squares estimates of function gradients are presented. The underlying algorithm is obtained by constructing a least squares problem from a truncated Taylor expansion. An error bound associated with this method contains in its numerator terms related to the Taylor series remainder, while its denominator contains the smallest singular value of the least squares matrix. Perhaps for this reason, the error bounds are often found to be pessimistic by several orders of magnitude. The circumstances under which these poor estimates arise are elucidated, and an empirical correction of the theoretical error bounds is conjectured and investigated numerically. This is followed by an indication of how the conjecture is supported by a rigorous argument.
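The general construction can be sketched in a few lines: the first-order truncated Taylor expansion f(x_i) - f(x_0) ≈ (x_i - x_0)·g is solved for g in the least squares sense. This is an illustrative sketch of the approach only; the paper's exact stencil, weighting, and error analysis are not reproduced.

```python
import numpy as np

def ls_gradient(x0, X, fvals, f0):
    """Estimate grad f at x0 by least squares on the truncated
    first-order Taylor expansion f(x_i) - f(x0) ~ (x_i - x0) . g.
    The theoretical error bound divides Taylor-remainder terms by
    the smallest singular value of the displacement matrix D."""
    D = X - x0                         # least squares matrix
    g, *_ = np.linalg.lstsq(D, fvals - f0, rcond=None)
    return g

# Usage: recover the gradient of f(x, y) = 3x + 2y^2 at (1, 1),
# where the true gradient is (3, 4).
rng = np.random.default_rng(1)
x0 = np.array([1.0, 1.0])
X = x0 + 0.01 * rng.normal(size=(20, 2))   # 20 nearby sample points
fvals = 3 * X[:, 0] + 2 * X[:, 1] ** 2
g = ls_gradient(x0, X, fvals, 3 * x0[0] + 2 * x0[1] ** 2)
print(g)  # close to (3, 4)
```

The small residual here comes entirely from the second-order Taylor remainder of the quadratic term, which is the quantity appearing in the numerator of the error bound discussed in the abstract.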
Learning Mixtures of Gaussians in High Dimensions
Efficiently learning mixtures of Gaussians is a fundamental problem in statistics and learning theory. Given samples drawn from one of k Gaussian distributions in R^n chosen at random, the learning problem asks to estimate the means and the covariance matrices of these Gaussians. This learning problem arises in many areas ranging from the natural sciences to the social sciences, and has also found many machine learning applications. Unfortunately, learning mixtures of Gaussians is an information-theoretically hard problem: in order to learn the parameters up to a reasonable accuracy, the number of samples required is exponential in the number of Gaussian components in the worst case. In this work, we show that provided we are in high enough dimensions, the class of Gaussian mixtures is learnable in its most general form under a smoothed analysis framework, where the parameters are randomly perturbed from an adversarial starting point. In particular, given samples from a mixture of Gaussians with randomly perturbed parameters, when n > Ω(k^2), we give an algorithm that learns the parameters in polynomial running time using a polynomial number of samples. The central algorithmic ideas consist of new ways to decompose the moment tensor of the Gaussian mixture by exploiting its structural properties. The symmetries of this tensor are derived from the combinatorial structure of higher order moments of Gaussian distributions (sometimes referred to as Isserlis' theorem or Wick's theorem). We also develop new tools for bounding smallest singular values of structured random matrices, which could be useful in other smoothed analysis settings.
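The role of the random perturbation can be illustrated numerically: an adversarial configuration of nearly collinear component means has a tiny smallest singular value, and even a small random perturbation lifts it well away from zero. This is a toy illustration only; the paper bounds singular values of structured moment matrices, not of the raw matrix of means.

```python
import numpy as np

# Smoothed analysis in miniature: adversarial parameters versus
# their randomly perturbed versions.
rng = np.random.default_rng(0)
n, k = 16, 4                       # dimension and number of components

# Adversarial means: all k component means nearly collinear in R^n,
# so the k x n matrix of means is numerically rank-deficient.
v = rng.normal(size=n)
M_adv = np.outer(np.arange(1, k + 1), v) + 1e-9 * rng.normal(size=(k, n))

# Smoothed means: the same matrix after a small random perturbation.
M_smooth = M_adv + 0.01 * rng.normal(size=(k, n))

s_adv = np.linalg.svd(M_adv, compute_uv=False)[-1]
s_smooth = np.linalg.svd(M_smooth, compute_uv=False)[-1]
print(s_adv, s_smooth)  # the perturbed matrix is far better conditioned
```

This is exactly the phenomenon smoothed analysis exploits: an adversary can choose degenerate parameters, but degeneracy is fragile under random perturbation, so algorithms whose guarantees depend on smallest singular values succeed with high probability over the perturbation.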