Far-Field Compression for Fast Kernel Summation Methods in High Dimensions
We consider fast kernel summations in high dimensions: given a large set of
points in $d$ dimensions (with $d \gg 3$) and a pair-potential function (the
{\em kernel} function), we compute a weighted sum of all pairwise kernel
interactions for each point in the set. Direct summation is equivalent to a
(dense) matrix-vector multiplication and scales quadratically with the number
of points. Fast kernel summation algorithms reduce this cost to log-linear or
linear complexity.
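As a minimal illustration (not from the paper), the direct summation that the abstract equates with a dense matrix-vector product can be sketched in NumPy; the function name and Gaussian kernel choice here are illustrative:

```python
import numpy as np

def direct_summation(X, w, kernel):
    """Direct kernel summation: u_i = sum_j kernel(||x_i - x_j||^2) * w_j.
    Forming K requires O(N^2) kernel evaluations -- the quadratic cost
    that treecodes and FMMs are designed to avoid."""
    # Pairwise squared distances, shape (N, N).
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = kernel(D2)          # dense interaction matrix
    return K @ w            # weighted sum for every point at once

# Example with a Gaussian kernel on N = 50 points in 3 dimensions.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
w = rng.standard_normal(50)
u = direct_summation(X, w, lambda d2: np.exp(-d2))
```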
Treecodes and Fast Multipole Methods (FMMs) deliver tremendous speedups by
constructing approximate representations of interactions of points that are far
from each other. In algebraic terms, these representations correspond to
low-rank approximations of blocks of the overall interaction matrix. Existing
approaches require an excessive number of kernel evaluations with increasing
$d$ and number of points in the dataset.
To address this issue, we use a randomized algebraic approach in which we
first sample the rows of a block and then construct its approximate, low-rank
interpolative decomposition. We examine the feasibility of this approach
theoretically and experimentally. We provide a new theoretical result showing a
tighter bound on the reconstruction error from uniformly sampling rows than the
existing state-of-the-art. We demonstrate that our sampling approach is
competitive with existing (but prohibitively expensive) methods from the
literature. We also construct kernel matrices for the Laplacian, Gaussian, and
polynomial kernels -- all commonly used in physics and data analysis. We
explore the numerical properties of blocks of these matrices, and show that
they are amenable to our approach. Depending on the data set, our randomized
algorithm can successfully compute low-rank approximations in high dimensions.
We report results for data sets with ambient dimensions from four to 1,000.
Comment: 43 pages, 21 figures
A SVD accelerated kernel-independent fast multipole method and its application to BEM
The kernel-independent fast multipole method (KIFMM) proposed in [1] is of
almost linear complexity. In the original KIFMM the time-consuming M2L
translations are accelerated by FFT. However, when more equivalent points are
used to achieve higher accuracy, the efficiency of the FFT approach tends to be
lower because more auxiliary volume grid points have to be added. In this
paper, all the translations of the KIFMM are accelerated by using the singular
value decomposition (SVD) based on the low-rank property of the translating
matrices. The acceleration of M2L is realized by first transforming the
associated translating matrices into a more compact form, and then using low-rank
approximations. By using the transform matrices for M2L, the orders of the
translating matrices in upward and downward passes are also reduced. The
improved KIFMM is then applied to accelerate BEM. The performance of the
proposed algorithms is demonstrated by three examples. Numerical results show
that, compared with the original KIFMM, the present method reduces the
iteration time by about 40% and the memory requirement by about 25%.
Comment: 19 pages, 4 figures
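The low-rank compression of a translation matrix can be illustrated with a plain truncated SVD; this is a generic sketch of the idea rather than the paper's specific M2L scheme, and the tolerance-based truncation rule is my own assumption:

```python
import numpy as np

def compress_m2l(M, tol=1e-6):
    """Truncated-SVD compression of a translation matrix M.
    Keeps singular values above tol * s_max, exploiting the
    low-rank property of the translating matrices."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = int((s >= tol * s[0]).sum())  # numerical rank at this tolerance
    return U[:, :r], s[:r], Vt[:r, :]

def apply_m2l(U, s, Vt, x):
    """Apply M @ x through the compressed factors:
    O(r (m + n)) work instead of O(m n) for the dense product."""
    return U @ (s * (Vt @ x))
```

When the numerical rank r is much smaller than the matrix dimensions, both the storage and the per-application cost of the translation drop accordingly, which is the source of the reported time and memory savings.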