Computing the Conditioning of the Components of a Linear Least Squares Solution
In this paper, we address the accuracy of the results for the overdetermined
full rank linear least squares problem. We recall theoretical results obtained
in Arioli, Baboulin and Gratton, SIMAX 29(2):413--433, 2007, on conditioning of
the least squares solution and the components of the solution when the matrix
perturbations are measured in Frobenius or spectral norms. Then we define
computable estimates for these condition numbers and we interpret them in terms
of statistical quantities. In particular, we show that, in the classical linear
statistical model, the ratio of the variance of one component of the solution
to the variance of the right-hand side is exactly the condition number of this
solution component when perturbations on the right-hand side are considered. We
also provide code fragments using LAPACK routines to compute the
variance-covariance matrix and the least squares conditioning, and we give the
corresponding computational cost. Finally, we present a small historical
numerical example that Laplace used in Théorie Analytique des Probabilités,
1820, for computing the mass of Jupiter, as well as experiments from the
space industry with real physical data.
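The statistical interpretation above can be sketched in a few lines. This is a hedged illustration, not the paper's LAPACK code fragments: it estimates the variance-covariance matrix C = sigma^2 (A^T A)^{-1} under the classical linear statistical model and reads the componentwise conditioning off its diagonal. The matrix sizes and random data are arbitrary choices for the sketch.

```python
import numpy as np

# Illustrative data (not from the paper): a small full-rank overdetermined system.
rng = np.random.default_rng(0)
m, n = 20, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ x
sigma2 = (r @ r) / (m - n)               # unbiased estimate of the noise variance
C = sigma2 * np.linalg.inv(A.T @ A)      # variance-covariance matrix of the solution

# Per the abstract, var(x_i) / var(b) = [(A^T A)^{-1}]_ii is (the square of)
# the condition number of component i under right-hand-side perturbations.
kappa_i = np.sqrt(np.diag(C) / sigma2)
```

In practice one would form C from the R factor of a QR factorization rather than inverting A^T A explicitly; the inverse is used here only to keep the sketch short.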
Second-order Shape Optimization for Geometric Inverse Problems in Vision
We develop a method for optimization in shape spaces, i.e., sets of surfaces
modulo re-parametrization. Unlike previously proposed gradient flows, we
achieve superlinear convergence rates through a subtle approximation of the
shape Hessian, which is generally hard to compute and suffers from a series of
degeneracies. Our analysis highlights the role of mean curvature motion in
comparison with first-order schemes: instead of surface area, our approach
penalizes deformation, either by its Dirichlet energy or total variation.
The latter regularizer motivates the development of an alternating direction
method of multipliers on triangular meshes. Therein, a conjugate-gradients solver enables
us to bypass formation of the Gaussian normal equations appearing in the course
of the overall optimization. We combine all of the aforementioned ideas in a
versatile geometric variation-regularized Levenberg-Marquardt-type method
applicable to a variety of shape functionals, depending on intrinsic properties
of the surface such as normal field and curvature as well as its embedding into
space. Promising experimental results are reported.
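The idea of using conjugate gradients to avoid forming the normal equations can be sketched with the classical CGLS iteration. This is a generic illustration under that assumption, not the paper's mesh-based solver: CGLS solves min ||Ax - b|| using only products with A and A^T, so A^T A is never assembled.

```python
import numpy as np

def cgls(A, b, iters=50, tol=1e-10):
    """Conjugate gradients on the normal equations A^T A x = A^T b,
    applied matrix-free so that A^T A is never formed explicitly."""
    x = np.zeros(A.shape[1])
    r = b - A @ x          # residual of the least squares system
    s = A.T @ r            # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

Avoiding the explicit product A^T A matters because it squares the condition number of the problem and, on meshes, destroys sparsity.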
How Ordinary Elimination Became Gaussian Elimination
Newton, in notes that he would rather not have seen published, described a
process for solving simultaneous equations that later authors applied
specifically to linear equations. This method, which Euler did not recommend,
which Legendre called "ordinary," and which Gauss called "common," is now named
after Gauss: "Gaussian" elimination. Gauss's name became associated with
elimination through the adoption, by professional computers, of a specialized
notation that Gauss devised for his own least squares calculations. The
notation allowed elimination to be viewed as a sequence of arithmetic
operations that were repeatedly optimized for hand computing and eventually
were described by matrices.
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or
implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k))
floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data
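The modular framework described above can be sketched in a few lines of numpy. This is a minimal, hedged rendering of the basic randomized range finder plus SVD (sample the range with a Gaussian test matrix, compress, factor the small matrix deterministically); the parameter names k and p are the usual target rank and oversampling choices, not code from the paper.

```python
import numpy as np

def randomized_svd(A, k, p=10, rng=None):
    """Rank-k SVD approximation via a randomized range finder
    with oversampling parameter p."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random Gaussian test matrix
    Y = A @ Omega                             # sample the range of A
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis, m x (k+p)
    B = Q.T @ A                               # compress A to the subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                                # lift back to the original space
    return U[:, :k], s[:k], Vt[:k, :]
```

The two matrix products A @ Omega and Q.T @ A dominate the cost and are exactly the steps that parallelize well and that can be reorganized into a small number of passes over out-of-core data.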
A computational study of using black-box QR solvers for large-scale sparse-dense linear least squares problems
Large-scale overdetermined linear least squares problems arise in many practical applications. One popular solution method is based on the backward stable QR factorization of the system matrix A. This article focuses on sparse-dense least squares problems in which A is sparse except for a small number of rows that are considered dense. For large-scale problems, the direct application of a QR solver either fails because of insufficient memory or is unacceptably slow. We study several solution approaches based on using a sparse QR solver without modification, focusing on the case where the sparse part of A is rank deficient. We discuss partial matrix stretching and regularization, and we propose extending the augmented system formulation with iterative refinement for sparse problems to sparse-dense problems, optionally incorporating multi-precision arithmetic. In summary, our computational study shows that, before applying a black-box QR factorization, a check should be made for rows that are classified as dense; if such rows are identified, then A should be split into sparse and dense blocks. A number of ways to use a black-box QR factorization to exploit this splitting are possible, with no single method found to be the best in all cases.
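The dense-row check and splitting recommended above can be sketched as follows. This is a hedged illustration, not the article's code: rows are classified as dense when their nonzero count exceeds a threshold fraction of the row length (the threshold value is an arbitrary assumption here), and A is then split into a sparse block for the QR solver and a small dense block handled separately.

```python
import numpy as np
from scipy import sparse

def split_sparse_dense(A, density_threshold=0.5):
    """Split A into a sparse block A_s and a dense block A_d,
    classifying a row as dense when its nnz exceeds the threshold
    fraction of the number of columns."""
    A = sparse.csr_matrix(A)
    nnz_per_row = np.diff(A.indptr)           # nonzeros in each row (CSR layout)
    dense_rows = nnz_per_row > density_threshold * A.shape[1]
    A_d = A[dense_rows].toarray()             # few dense rows, stored densely
    A_s = A[~dense_rows]                      # sparse block for the QR solver
    return A_s, A_d, dense_rows
```

Keeping even a handful of dense rows inside the sparse factorization causes severe fill-in, which is why the article recommends making this check before calling a black-box sparse QR solver.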
A test method for analog circuits : using sensitivity analysis and the singular value decomposition