
    Multibody Multipole Methods

    A three-body potential function can account for interactions among triples of particles that are not captured by pairwise interaction functions such as Coulombic or Lennard-Jones potentials. Likewise, a multibody potential of order $n$ can account for interactions among $n$-tuples of particles not captured by interaction functions of lower orders. To date, the computation of multibody potential functions for a large number of particles has not been possible due to its $O(N^n)$ scaling cost. In this paper we describe a fast tree-code for efficiently approximating multibody potentials that can be factorized as products of functions of pairwise distances. For the first time, we show how to derive a Barnes-Hut type algorithm for handling interactions among more than two particles. Our algorithm uses two approximation schemes: 1) a deterministic series expansion-based method; 2) a Monte Carlo-based approximation based on the central limit theorem. Our approach guarantees a user-specified bound on the absolute or relative error in the computed potential with an asymptotic probability guarantee. We provide speedup results on a three-body dispersion potential, the Axilrod-Teller potential. Comment: To appear in Journal of Computational Physics
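    As a reference point for what the tree-code accelerates, the Axilrod-Teller potential can be summed directly over all triples in $O(N^3)$ time. A minimal sketch (the function name and the $C_9$ normalization are our own illustrative choices, not the paper's):

```python
import numpy as np

def axilrod_teller(points, c9=1.0):
    """Direct O(N^3) sum of the Axilrod-Teller three-body dispersion
    potential over all triples of points.  Each triple contributes
    c9 * (1 + 3 cos(g_i) cos(g_j) cos(g_k)) / (r_ij r_jk r_ki)^3,
    where g_* are the interior angles of the triangle (i, j, k)."""
    n = len(points)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                rij = np.linalg.norm(points[i] - points[j])
                rjk = np.linalg.norm(points[j] - points[k])
                rki = np.linalg.norm(points[k] - points[i])
                # Law of cosines for each interior angle of the triangle.
                ci = (rij**2 + rki**2 - rjk**2) / (2 * rij * rki)
                cj = (rij**2 + rjk**2 - rki**2) / (2 * rij * rjk)
                ck = (rjk**2 + rki**2 - rij**2) / (2 * rjk * rki)
                total += c9 * (1 + 3 * ci * cj * ck) / (rij * rjk * rki) ** 3
    return total
```

    The triple loop is exactly the cost the paper's Barnes-Hut-style pruning avoids; for an equilateral triangle of unit side the sum is $(1 + 3/8) = 1.375\,c_9$, a convenient sanity check.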

    Far-Field Compression for Fast Kernel Summation Methods in High Dimensions

    We consider fast kernel summations in high dimensions: given a large set of points in $d$ dimensions (with $d \gg 3$) and a pair-potential function (the {\em kernel} function), we compute a weighted sum of all pairwise kernel interactions for each point in the set. Direct summation is equivalent to a (dense) matrix-vector multiplication and scales quadratically with the number of points. Fast kernel summation algorithms reduce this cost to log-linear or linear complexity. Treecodes and Fast Multipole Methods (FMMs) deliver tremendous speedups by constructing approximate representations of interactions of points that are far from each other. In algebraic terms, these representations correspond to low-rank approximations of blocks of the overall interaction matrix. Existing approaches require an excessive number of kernel evaluations with increasing $d$ and number of points in the dataset. To address this issue, we use a randomized algebraic approach in which we first sample the rows of a block and then construct its approximate, low-rank interpolative decomposition. We examine the feasibility of this approach theoretically and experimentally. We provide a new theoretical result showing a tighter bound on the reconstruction error from uniformly sampling rows than the existing state-of-the-art. We demonstrate that our sampling approach is competitive with existing (but prohibitively expensive) methods from the literature. We also construct kernel matrices for the Laplacian, Gaussian, and polynomial kernels -- all commonly used in physics and data analysis. We explore the numerical properties of blocks of these matrices, and show that they are amenable to our approach. Depending on the data set, our randomized algorithm can successfully compute low-rank approximations in high dimensions. We report results for data sets with ambient dimensions from four to 1,000. Comment: 43 pages, 21 figures
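    The row-sampling idea can be illustrated on a single far-field block: for a smooth kernel coupling two separated clusters, projecting the block onto the row space of a small uniform row sample already reproduces it accurately. A sketch under illustrative assumptions (cluster geometry, bandwidth, and sample size are our choices, and the pseudoinverse projection is a cheap stand-in for the interpolative decomposition the paper builds):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two separated point clusters in d = 20 dimensions; the block of the
# Gaussian kernel matrix coupling them is numerically low rank.
d, n, h = 20, 300, 10.0
X = rng.normal(size=(n, d))
Y = rng.normal(size=(n, d))
Y[:, 0] += 10.0                              # shift the second cluster away
sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * h * h))                # n-by-n far-field block

# Uniformly sample rows and project K onto their row space.
r = 80
rows = rng.choice(n, size=r, replace=False)
KS = K[rows]                                 # r sampled rows
K_approx = K @ np.linalg.pinv(KS) @ KS       # projection onto the row span

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

    Only the sampled rows require kernel evaluations when forming the decomposition itself, which is the property that keeps the cost manageable as $d$ grows.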

    ASKIT: Approximate Skeletonization Kernel-Independent Treecode in High Dimensions

    We present a fast algorithm for kernel summation problems in high dimensions. These problems appear in computational physics, numerical approximation, non-parametric statistics, and machine learning. In our context, the sums depend on a kernel function that is a pair potential defined on a dataset of points in a high-dimensional Euclidean space. A direct evaluation of the sum scales quadratically with the number of points. Fast kernel summation methods can reduce this cost to linear complexity, but the constants involved do not scale well with the dimensionality of the dataset. The main algorithmic components of fast kernel summation algorithms are the separation of the kernel sum between near and far field (which is the basis for pruning) and the efficient and accurate approximation of the far field. We introduce novel methods for pruning and approximating the far field. Our far field approximation requires only kernel evaluations and does not use analytic expansions. Pruning is not done using bounding boxes but rather combinatorially, using a sparsified nearest-neighbor graph of the input. The time complexity of our algorithm depends linearly on the ambient dimension. The error in the algorithm depends on the low-rank approximability of the far field, which in turn depends on the kernel function and on the intrinsic dimensionality of the distribution of the points. The error of the far field approximation does not depend on the ambient dimension. We present the new algorithm along with experimental results that demonstrate its performance. We report results for Gaussian kernel sums for 100 million points in 64 dimensions, for one million points in 1000 dimensions, and for problems in which the Gaussian kernel has a variable bandwidth. To the best of our knowledge, all of these experiments are impossible or prohibitively expensive with existing fast kernel summation methods. Comment: 22 pages, 6 figures
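    The near/far split can be sketched in its crudest form: sum the closest sources exactly and collapse everything else into a single kernel evaluation. This toy uses brute-force neighbor search in place of ASKIT's sparsified nearest-neighbor graph, and a rank-one centroid surrogate in place of its skeletonized far field, so it only illustrates the decomposition, not the method's accuracy:

```python
import numpy as np

rng = np.random.default_rng(1)
n, h, k_near = 500, 5.0, 16
points = rng.uniform(size=(n, 3))            # sources in the unit cube
weights = rng.uniform(0.5, 1.5, size=n)      # positive source weights

def split_kernel_sum(x):
    """Gaussian-kernel sum at target x via a toy near/far split: the
    k_near closest sources are summed exactly, and the far field is
    collapsed to one kernel evaluation at the far sources' weighted
    centroid (a rank-one surrogate for a skeleton approximation)."""
    d2 = ((points - x) ** 2).sum(axis=1)
    near = np.argsort(d2)[:k_near]
    mask = np.ones(n, dtype=bool)
    mask[near] = False
    exact = np.dot(weights[near], np.exp(-d2[near] / (2 * h * h)))
    wfar = weights[mask].sum()
    centroid = (weights[mask, None] * points[mask]).sum(axis=0) / wfar
    far = wfar * np.exp(-((centroid - x) ** 2).sum() / (2 * h * h))
    return exact + far

def direct_sum(x):
    """Exact O(n) reference sum for one target."""
    d2 = ((points - x) ** 2).sum(axis=1)
    return np.dot(weights, np.exp(-d2 / (2 * h * h)))
```

    With a bandwidth large relative to the source spread, the far field is smooth and even this one-point surrogate lands within a few percent of the exact sum; ASKIT's skeletons make the same split with controlled error.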

    RascalC: A Jackknife Approach to Estimating Single and Multi-Tracer Galaxy Covariance Matrices

    To make use of clustering statistics from large cosmological surveys, accurate and precise covariance matrices are needed. We present a new code to estimate large scale galaxy two-point correlation function (2PCF) covariances in arbitrary survey geometries that, due to new sampling techniques, runs $\sim 10^4$ times faster than previous codes, computing finely-binned covariance matrices with negligible noise in less than 100 CPU-hours. As in previous works, non-Gaussianity is approximated via a small rescaling of shot-noise in the theoretical model, calibrated by comparing jackknife survey covariances to an associated jackknife model. The flexible code, RascalC, has been publicly released, and automatically takes care of all necessary pre- and post-processing, requiring only a single input dataset (without a prior 2PCF model). Deviations between large scale model covariances from a mock survey and those from a large suite of mocks are found to be indistinguishable from noise. In addition, the choice of input mock is shown to be irrelevant for desired noise levels below $\sim 10^5$ mocks. Coupled with its generalization to multi-tracer data-sets, this shows the algorithm to be an excellent tool for analysis, reducing the need to compute large numbers of mock simulations. Comment: 29 pages, 8 figures. Accepted by MNRAS. Code is available at http://github.com/oliverphilcox/RascalC with documentation at http://rascalc.readthedocs.io
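    The jackknife covariance against which the model is calibrated has the standard delete-one closed form. A minimal sketch, assuming the binned 2PCF has already been measured with each jackknife region removed in turn (the array layout is our own convention, not RascalC's interface):

```python
import numpy as np

def jackknife_covariance(measurements):
    """Delete-one jackknife covariance estimate.

    measurements: (n_jk, n_bins) array; row i holds the statistic
    (e.g. the binned 2PCF) measured with jackknife region i removed.
    Returns the (n_bins, n_bins) covariance with the standard
    (n - 1)/n jackknife prefactor.
    """
    n = measurements.shape[0]
    diff = measurements - measurements.mean(axis=0)
    return (n - 1) / n * diff.T @ diff
```

    This data-side estimate is what gets compared to the jackknife-model covariance to fix the shot-noise rescaling described above.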

    POKER: Estimating the power spectrum of diffuse emission with complex masks and at high angular resolution

    We describe the implementation of an angular power spectrum estimator in the flat sky approximation. POKER (P. Of k EstimatoR) is based on the MASTER algorithm developed by Hivon and collaborators in the context of CMB anisotropy. It works entirely in discrete space and can be applied to arbitrarily high angular resolution maps. It is therefore particularly suitable for current and future infrared to sub-mm observations of diffuse emission, whether Galactic or cosmological. Comment: Astronomy and Astrophysics, in press
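    The starting quantity for such an estimator is the binned pseudo power spectrum of the masked map. A flat-sky sketch (the Fourier conventions and the crude mean-mask correction are our own simplifications; the mode-coupling deconvolution that MASTER/POKER actually performs is omitted):

```python
import numpy as np

def pseudo_spectrum(map2d, mask, pix, nbins=8):
    """Binned flat-sky pseudo power spectrum of a masked map.

    This is only the first step of a MASTER-style estimator: the mask
    biases the result through mode coupling, and only a crude division
    by the mean mask is applied here in place of the full deconvolution.
    """
    n = map2d.shape[0]
    fmap = np.fft.fft2(map2d * mask) * pix**2         # continuous-FT convention
    power = np.abs(fmap) ** 2 / (n * pix) ** 2 / mask.mean()
    k = 2 * np.pi * np.fft.fftfreq(n, d=pix)
    kmag = np.hypot(*np.meshgrid(k, k, indexing="ij"))
    edges = np.linspace(0.0, kmag.max() + 1e-12, nbins + 1)
    which = np.digitize(kmag.ravel(), edges) - 1      # annular |k| bins
    pk = np.array([power.ravel()[which == b].mean() for b in range(nbins)])
    return 0.5 * (edges[:-1] + edges[1:]), pk
```

    For unit-variance white noise on unmasked unit pixels the spectrum is flat at the pixel area, which makes a convenient normalization check; with a realistic mask the binned result must still be multiplied by the inverse of the mask's mode-coupling matrix, which is the step POKER implements in discrete space.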

    Numerical methods for computing Casimir interactions

    We review several different approaches for computing Casimir forces and related fluctuation-induced interactions between bodies of arbitrary shapes and materials. The relationships between this problem and well known computational techniques from classical electromagnetism are emphasized. We also review the basic principles of standard computational methods, categorizing them according to three criteria (choice of problem, basis, and solution technique) that can be used to classify proposals for the Casimir problem as well. In this way, mature classical methods can be exploited to model Casimir physics, with a few important modifications. Comment: 46 pages, 142 references, 5 figures. To appear in upcoming Lecture Notes in Physics book on Casimir Physics