Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or
implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k))
floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
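To make the two-stage structure concrete, here is a minimal NumPy sketch of a randomized SVD in the spirit of the framework described above; the Gaussian test matrix, the oversampling parameter p, and the function names are illustrative choices rather than the paper's exact prescription, and slowly decaying spectra would normally call for the power-iteration variants the paper also covers.

```python
import numpy as np

def randomized_svd(A, k, p=10, rng=None):
    """Approximate the k dominant singular triplets of A.

    Stage A: random sampling identifies a subspace that captures most of
    the action of A.  Stage B: A is compressed to that subspace and the
    small compressed matrix is factored deterministically.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[1]

    # Stage A: sketch the range of A with a Gaussian test matrix and
    # orthonormalize, giving a basis Q with k + p columns (p = oversampling).
    Omega = rng.standard_normal((n, k + p))
    Q, _ = np.linalg.qr(A @ Omega)

    # Stage B: compress A to the subspace, factor the small matrix, and
    # lift the left factor back, so that A ~ U @ diag(s) @ Vt.
    B = Q.T @ A
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# Usage: rank-10 approximation of a 2000 x 500 matrix with decaying spectrum.
gen = np.random.default_rng(0)
A = (gen.standard_normal((2000, 50)) * np.logspace(0, -6, 50)) @ \
    gen.standard_normal((50, 500))
U, s, Vt = randomized_svd(A, k=10, rng=1)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
print(f"relative error of the rank-10 approximation: {err:.2e}")
```

The small oversampling (a handful of extra columns beyond k) is the usual safeguard that makes the captured subspace reliable with high probability.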
A direct solver with O(N) complexity for variable coefficient elliptic PDEs discretized via a high-order composite spectral collocation method
A numerical method for solving elliptic PDEs with variable coefficients on
two-dimensional domains is presented. The method is based on high-order
composite spectral approximations and is designed for problems with smooth
solutions. The resulting system of linear equations is solved using a direct
(as opposed to iterative) solver that has optimal O(N) complexity for all
stages of the computation when applied to problems with non-oscillatory
solutions such as the Laplace and the Stokes equations. Numerical examples
demonstrate that the scheme is capable of computing solutions with relative
accuracy of or better, even for challenging problems such as highly
oscillatory Helmholtz problems and convection-dominated convection-diffusion
equations. In terms of speed, it is demonstrated that a problem with a
non-oscillatory solution that was discretized using nodes was solved
in 115 minutes on a personal work-station with two quad-core 3.3GHz CPUs. Since
the solver is direct, and the "solution operator" fits in RAM, any solves
beyond the first are very fast. In the example with unknowns, solves
require only 30 seconds.
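The speed of the repeated solves comes from the solver being direct: once the solution operator has been built and held in memory, each new load requires only an application of precomputed factors. The sketch below illustrates that factor-once, solve-many pattern on a single-domain 1D model problem -u'' + c(x)u = f with Chebyshev spectral collocation; it is not the composite, O(N) multidomain scheme of the paper, and the node count, coefficient, and loads are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def cheb(N):
    """Chebyshev differentiation matrix and nodes x_j = cos(pi*j/N), j = 0..N
    (standard construction, cf. Trefethen, Spectral Methods in MATLAB)."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Collocate the illustrative model problem -u'' + c(x) u = f on [-1, 1]
# with homogeneous Dirichlet conditions (an assumed toy setup).
N = 200
D, x = cheb(N)
c = 1.0 + x ** 2                      # a smooth variable coefficient
A = -D @ D + np.diag(c)
Ai = A[1:-1, 1:-1]                    # restrict to interior nodes: u(+-1) = 0

# Build the "solution operator" once: an LU factorization held in memory.
lu, piv = lu_factor(Ai)

# Every solve beyond the first is just forward/back substitution.
for k in (1, 2, 3):
    f = np.sin(k * np.pi * x[1:-1])   # a family of loads
    u = np.zeros(N + 1)
    u[1:-1] = lu_solve((lu, piv), f)
    print(f"load k={k}: max |u| = {np.abs(u).max():.3e}")
```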
Ostracism and the Provision of a Public Good: Experimental Evidence
We analyze the effects of ostracism on cooperation in a linear public good experiment. Our results show that introducing ostracism increases contributions. Despite reductions in group size due to ostracism, the net effect on earnings is positive and significant.
Keywords: Experiment, Public Good, Ostracism
A high-order Nyström discretization scheme for boundary integral equations defined on rotationally symmetric surfaces
A scheme for rapidly and accurately computing solutions to boundary integral
equations (BIEs) on rotationally symmetric surfaces in R^3 is presented. The
scheme uses the Fourier transform to reduce the original BIE defined on a
surface to a sequence of BIEs defined on a generating curve for the surface. It
can handle loads that are not necessarily rotationally symmetric. Nyström
discretization is used to discretize the BIEs on the generating curve. The
quadrature is a high-order Gaussian rule that is modified near the diagonal to
retain high-order accuracy for singular kernels. The reduction in
dimensionality, along with the use of high-order accurate quadratures, leads to
small linear systems that can be inverted directly via, e.g., Gaussian
elimination. This makes the scheme particularly fast in environments involving
multiple right hand sides. It is demonstrated that for BIEs associated with the
Laplace and Helmholtz equations, the kernel in the reduced equations can be
evaluated very rapidly by exploiting recursion relations for Legendre
functions. Numerical examples illustrate the performance of the scheme; in
particular, it is demonstrated that for a BIE associated with Laplace's
equation on a surface discretized using 320,800 points, the set-up phase of the
algorithm takes 1 minute on a standard laptop, and then solves can be executed
in 0.5 seconds.
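The "small linear systems that can be inverted directly" and the fast repeated solves can be illustrated on a toy second-kind integral equation. The sketch below applies a plain Gauss-Legendre Nyström discretization with a smooth, artificial kernel and reuses one LU factorization for several right-hand sides; the Fourier reduction over azimuthal modes, the modified quadrature for singular kernels, and the Legendre-function recursions of the paper are not reproduced here.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Toy second-kind integral equation  u(x) - \int_0^1 K(x, y) u(y) dy = f(x)
# with an artificial smooth kernel (the actual BIE kernels, and the modified
# quadrature they require near the diagonal, are outside this sketch).
def K(x, y):
    return np.exp(-(x - y) ** 2)

n = 40                                          # quadrature / collocation nodes
t, w = np.polynomial.legendre.leggauss(n)       # Gauss-Legendre rule on [-1, 1]
x = 0.5 * (t + 1.0)                             # map nodes to [0, 1]
w = 0.5 * w                                     # map weights to [0, 1]

# Nystrom system (I - K W) u = f: small, dense, and cheap to factor directly.
A = np.eye(n) - K(x[:, None], x[None, :]) * w[None, :]
lu, piv = lu_factor(A)

# Reusing the factorization makes additional right-hand sides almost free,
# mirroring the "multiple right hand sides" setting in the abstract.
for f in (np.ones_like(x), np.cos(np.pi * x), x ** 2):
    u = lu_solve((lu, piv), f)
    print(f"||u||_inf = {np.abs(u).max():.6f}")
```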
