On the computation of Gaussian quadrature rules for Chebyshev sets of linearly independent functions
We consider the computation of quadrature rules that are exact for a
Chebyshev set of linearly independent functions on an interval [a, b]. A
general theory of Chebyshev sets guarantees the existence of rules with a
Gaussian property, in the sense that 2n basis functions can be integrated
exactly with just n points and weights. Moreover, all weights are positive
and the points lie inside the interval [a, b]. However, the points are not the
roots of an orthogonal polynomial or any other known special function as in the
case of regular Gaussian quadrature. The rules are characterized by a nonlinear
system of equations, and earlier numerical methods have mostly focused on
finding suitable starting values for a Newton iteration to solve this system.
In this paper we describe an alternative scheme that is robust and generally
applicable for so-called complete Chebyshev sets. These are ordered Chebyshev
sets where the first k elements also form a Chebyshev set for each k. The
points of the quadrature rule are computed one by one, increasing the
exactness of the rule at each step. Each step reduces to finding the unique
root of a
univariate and monotonic function. As such, the scheme of this paper is
guaranteed to succeed. The quadrature rules are of interest for integrals with
non-smooth integrands that are not well approximated by polynomials.
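The crucial per-step computation above, finding the unique root of a univariate and monotonic function, is precisely the setting in which bisection is guaranteed to converge, which is what makes such a scheme robust. The sketch below is a minimal Python illustration of one such step; the name monotone_root and the example residual are illustrative stand-ins, since the paper's actual per-step function depends on the Chebyshev set and the points placed so far.

```python
import math

def monotone_root(f, a, b, tol=1e-14, max_iter=200):
    """Unique root of a continuous, strictly monotonic f on [a, b] by bisection."""
    fa, fb = f(a), f(b)
    if fa == 0.0:
        return a
    if fb == 0.0:
        return b
    assert fa * fb < 0.0, "root must be bracketed: f(a), f(b) must differ in sign"
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or 0.5 * (b - a) < tol:
            return m
        if fa * fm < 0.0:  # root lies in [a, m]
            b = m
        else:              # root lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

# Illustrative monotone residual with a single root in (0, 2):
root = monotone_root(lambda x: math.exp(x) - 3.0, 0.0, 2.0)
print(root, math.log(3.0))  # both approximately 1.0986
```

Because a bracketed root of a monotonic function can never be lost, each halving step makes progress unconditionally; this mirrors the guarantee of success claimed for the scheme.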
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
Low-rank matrix approximations, such as the truncated singular value
decomposition and the rank-revealing QR decomposition, play a central role in
data analysis and scientific computing. This work surveys and extends recent
research which demonstrates that randomization offers a powerful tool for
performing low-rank matrix approximation. These techniques exploit modern
computational architectures more fully than classical methods and open the
possibility of dealing with truly massive data sets.
This paper presents a modular framework for constructing randomized
algorithms that compute partial matrix decompositions. These methods use random
sampling to identify a subspace that captures most of the action of a matrix.
The input matrix is then compressed, either explicitly or implicitly, to this
subspace, and the reduced matrix is manipulated deterministically to obtain the
desired low-rank factorization. In many cases, this approach beats its
classical competitors in terms of accuracy, robustness, and/or speed. These
claims are supported by extensive numerical experiments and a detailed error
analysis.
The specific benefits of randomized techniques depend on the computational
environment. Consider the model problem of finding the k dominant components of
the singular value decomposition of an m × n matrix.
(i) For a dense input matrix, randomized algorithms require O(mn log(k))
floating-point operations (flops), in contrast to O(mnk) for classical
algorithms.
(ii) For a sparse input matrix, the flop count matches classical Krylov
subspace methods, but the randomized approach is more robust and can easily be
reorganized to exploit multiprocessor architectures.
(iii) For a matrix that is too large to fit in fast memory, the randomized
techniques require only a constant number of passes over the data, as opposed
to O(k) passes for classical algorithms. In fact, it is sometimes possible to
perform matrix approximation with a single pass over the data.
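The modular framework described in this abstract separates a randomized stage (find a subspace capturing most of the action of the matrix) from a deterministic stage (factor the compressed matrix). A minimal NumPy sketch of that two-stage idea follows; the function name randomized_svd, the Gaussian test matrix, and the oversampling parameter p are conventional illustrative choices, not notation fixed by the paper.

```python
import numpy as np

def randomized_svd(A, k, p=10, seed=None):
    """Approximate rank-k SVD of A via random sampling (two-stage sketch).

    Stage A: sketch A with k + p Gaussian test vectors and orthonormalize,
    giving a basis Q whose range captures most of the action of A.
    Stage B: compress A to that subspace and factor it deterministically.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Y = A @ Omega                             # sample the range of A
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis, m x (k + p)
    B = Q.T @ A                               # small compressed matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub                                # lift back to the original space
    return U[:, :k], s[:k], Vt[:k, :]

# Usage: approximate a numerically low-rank matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 80)) @ rng.standard_normal((80, 300))
U, s, Vt = randomized_svd(A, k=80, seed=1)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # ~ machine precision
```

Note that the sketch touches A only through the products A @ Omega and Q.T @ A, which is what enables the pass-efficient and easily parallelized variants mentioned in points (ii) and (iii).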
- …