
    The Linear Least Squares Problem of Bundle Adjustment

    A method is described for finding the least squares solution of the overdetermined linear system that arises in the photogrammetric problem of bundle adjustment of aerial photographs. Because of the sparse, blocked structure of the coefficient matrix of the linear system, the proposed method is based on sparse QR factorization using Givens rotations. A reordering of the rows and columns of the matrix greatly reduces the fill-in during the factorization. Rules which predict the fill-in for this ordering are proven based upon the block structure of the matrix. These rules eliminate the need for the usual symbolic factorization in most cases. A subroutine library that implements the proposed method is listed. Timings and populations of a range of test problems are given.
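As a toy illustration of the Givens-rotation approach this abstract describes (not the paper's sparse, reordered implementation), the sketch below factors a small dense matrix but applies a rotation only where the target entry is structurally nonzero, which is the property that makes rotation-based QR attractive for sparse, blocked least-squares systems:

```python
import numpy as np

def givens(a, b):
    """Return (c, s) such that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

def qr_givens(A):
    """QR by Givens rotations, skipping entries that are already zero."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    for j in range(n):                       # eliminate column by column
        for i in range(m - 1, j, -1):        # zero entries below the diagonal
            if A[i, j] != 0.0:
                c, s = givens(A[i - 1, j], A[i, j])
                G = np.array([[c, s], [-s, c]])
                A[[i - 1, i], :] = G @ A[[i - 1, i], :]
                Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, A                              # A has been reduced to R

A = np.array([[3.0, 1.0],
              [4.0, 2.0],
              [0.0, 5.0]])
Q, R = qr_givens(A)
```

For the least squares problem min ||Ax - b||, one would then solve the triangular system given by the top rows of R against the corresponding rows of Q.T @ b.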

    On large-scale diagonalization techniques for the Anderson model of localization

    We propose efficient preconditioning algorithms for an eigenvalue problem arising in quantum physics, namely the computation of a few interior eigenvalues and their associated eigenvectors for large-scale sparse real and symmetric indefinite matrices of the Anderson model of localization. We compare the Lanczos algorithm in the 1987 implementation by Cullum and Willoughby with the shift-and-invert techniques in the implicitly restarted Lanczos method and in the Jacobi–Davidson method. Our preconditioning approaches for the shift-and-invert symmetric indefinite linear system are based on maximum weighted matchings and algebraic multilevel incomplete LDLT factorizations. These techniques can be seen as a complement to the alternative idea of using more complete pivoting techniques for the highly ill-conditioned symmetric indefinite Anderson matrices. We demonstrate the effectiveness and the numerical accuracy of these algorithms. Our numerical examples reveal that recent algebraic multilevel preconditioning solvers can accelerate the computation of a large-scale eigenvalue problem corresponding to the Anderson model of localization by several orders of magnitude.
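The shift-and-invert idea can be sketched with SciPy's `eigsh`, which factorizes (H - sigma*I) internally and runs Lanczos on its inverse so that convergence targets the interior eigenvalues nearest the shift. The matrix below is a 1D tight-binding toy with random diagonal disorder, standing in for the much larger 3D Anderson matrices; it illustrates the technique, not the paper's multilevel preconditioned solver:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
n = 400
diag = rng.uniform(-2.0, 2.0, size=n)   # random on-site disorder potential
H = sp.diags([np.ones(n - 1), diag, np.ones(n - 1)],
             [-1, 0, 1], format="csc")  # sparse, symmetric, indefinite

# Shift-and-invert around sigma = 0: returns the 5 eigenpairs whose
# eigenvalues lie closest to the center of the spectrum.
vals, vecs = eigsh(H, k=5, sigma=0.0, which="LM")
```

The direct factorization hidden inside `eigsh` is exactly the step the paper replaces with matching-based, incomplete multilevel LDLT preconditioning to reach much larger problem sizes.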

    Predicting the structure of sparse orthogonal factors

    The problem of correctly predicting the structures of the orthogonal factors Q and R from the structure of a matrix A with full column rank is considered. Recently, Hare, Johnson, Olesky, and van den Driessche described a method to predict these structures, and they have shown that corresponding to any specified nonzero element in the predicted structures of Q or R, there exists a matrix with the given structure whose factor has a nonzero in that position. In this paper this method is shown to satisfy a stronger property: there exist matrices with the structure of A whose factors have exactly the predicted structures. These results use matching theory, the Dulmage-Mendelsohn decomposition of bipartite graphs, and techniques from algebra. The proof technique shows that if values are assigned randomly to the nonzeros in A, then with high probability the elements predicted to be nonzero in the factors have nonzero values. It is shown that this stronger requirement cannot be satisfied for orthogonal factorization with column pivoting. In addition, efficient algorithms for computing the structures of the factors are designed, and the relationship between the structure of Q and the Householder array is described.
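The probabilistic argument in the abstract, that random values assigned to the nonzeros of A yield the same factor structure with high probability, can be illustrated numerically. The pattern below is an arbitrary example, and the experiment uses plain dense QR rather than the paper's structure-prediction algorithms:

```python
import numpy as np

# Fixed sparsity pattern for A (1 = structurally nonzero).
pattern = np.array([[1, 0, 1],
                    [1, 1, 0],
                    [0, 1, 1],
                    [0, 0, 1]])

rng = np.random.default_rng(1)

def random_instance(p):
    """Assign random nonzero values to the pattern's nonzeros."""
    return p * rng.uniform(0.5, 1.5, size=p.shape)

# Draw several random matrices with this structure and record the
# nonzero pattern of each R factor; generically they all coincide.
r_patterns = set()
for _ in range(5):
    _, R = np.linalg.qr(random_instance(pattern))
    r_patterns.add(tuple((np.abs(R) > 1e-12).astype(int).ravel()))
```

Exact cancellations that would shrink the pattern occur only on a measure-zero set of values, which is the intuition behind the paper's "exactly the predicted structures" result.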

    Stability of matrix factorization for collaborative filtering

    We study the stability, vis-à-vis adversarial noise, of matrix factorization algorithms for matrix completion. In particular, our results include: (I) we bound the gap between the solution matrix of the factorization method and the ground truth in terms of root mean square error; (II) we treat matrix factorization as a subspace-fitting problem and analyze the difference between the solution subspace and the ground truth; (III) we analyze the prediction error of individual users based on the subspace stability. We apply these results to the problem of collaborative filtering under manipulator attack, which leads to useful insights and guidelines for collaborative filtering system design. Comment: ICML201

    A dual framework for low-rank tensor completion

    One of the popular approaches to low-rank tensor completion is to use latent trace norm regularization. However, most existing works in this direction learn a sparse combination of tensors. In this work, we fill this gap by proposing a variant of the latent trace norm that helps in learning a non-sparse combination of tensors. We develop a dual framework for solving the low-rank tensor completion problem. We first show a novel characterization of the dual solution space with an interesting factorization of the optimal solution. Overall, the optimal solution is shown to lie on a Cartesian product of Riemannian manifolds. Furthermore, we exploit the versatile Riemannian optimization framework to propose a computationally efficient trust-region algorithm. The experiments illustrate the efficacy of the proposed algorithm on several real-world datasets across applications. Comment: Accepted to appear in Advances in Neural Information Processing Systems (NIPS), 2018. A shorter version appeared in the NIPS workshop on Synergies in Geometric Data Analysis 201
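Trace norm regularizers for tensors are built from mode-k unfoldings. The sketch below computes the overlapped trace norm (the sum of nuclear norms of all unfoldings) for a random tensor; the latent trace norm instead takes an infimum over additive splits T = X1 + X2 + X3, penalizing each Xk only through its mode-k unfolding, and the paper's variant steers that split toward a non-sparse combination of the Xk. This is background notation, not the paper's dual Riemannian algorithm:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: bring axis `mode` to the front, then flatten
    the remaining axes into columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 5, 6))

# Overlapped trace norm: every unfolding of the *same* tensor is
# penalized; the latent norm minimizes over splits instead.
overlapped = sum(np.linalg.norm(unfold(T, k), "nuc")
                 for k in range(T.ndim))
```

Low-rankness along mode k shows up as a small nuclear norm of `unfold(T, k)`, which is why each latent component Xk is charged only for its own mode.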