A Nonuniform Fast Fourier Transform Based on Low Rank Approximation
By viewing the nonuniform discrete Fourier transform (NUDFT) as a perturbed version of the uniform discrete Fourier transform (DFT), we propose a fast and quasi-optimal algorithm for computing the NUDFT based on the fast Fourier transform (FFT). Our key observation is that the entrywise quotient of an NUDFT matrix and a DFT matrix is often well approximated by a low-rank matrix, allowing us to express an NUDFT matrix as a sum of diagonally scaled DFT matrices. Our algorithm is simple to implement, automatically adapts to any working precision, and is competitive with state-of-the-art algorithms. In the fully uniform case, our algorithm is essentially the FFT. We also describe quasi-optimal algorithms for the inverse NUDFT and for two-dimensional NUDFTs.

The first author's work was supported by Ministerio de Economía y Competitividad (reference BES-2013-064743). The second author's work was supported by National Science Foundation grant 164544
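The "sum of diagonally scaled DFT matrices" idea can be sketched in a few lines. The sketch below is my own simplified variant: it uses a plain Taylor expansion of the perturbation factor (the paper itself builds the low-rank approximation more carefully, e.g. via Chebyshev-type expansions), and the grid size, perturbation scale, and rank `R` are illustrative choices, not the paper's.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 64
j = np.arange(n)
e = rng.uniform(-0.1, 0.1, n) / n        # small perturbations off the uniform grid j/n
x = j / n + e                            # nonuniform sample points
f = rng.standard_normal(n)

# Direct (quadratic-cost) NUDFT: F_j = sum_k f_k exp(-2*pi*i * x_j * k)
F_direct = np.exp(-2j * np.pi * np.outer(x, j)) @ f

# Low-rank sketch: with x_j = j/n + e_j, split the exponential as
#   exp(-2*pi*i*x_j*k) = exp(-2*pi*i*j*k/n) * exp(-2*pi*i*e_j*k)
# and Taylor-expand the second factor in (e_j * k).  Each expansion term is a
# diagonal scaling times a uniform DFT, so the NUDFT becomes a short sum of
# diagonally scaled FFTs.
R = 12                                   # number of expansion terms (the "rank")
F_fast = np.zeros(n, dtype=complex)
for r in range(R):
    diag = (-2j * np.pi * e) ** r / math.factorial(r)
    F_fast += diag * np.fft.fft(j.astype(float) ** r * f)

err = np.max(np.abs(F_fast - F_direct))
```

For perturbations this small the truncation error decays factorially in `R`, and when all `e_j = 0` only the `r = 0` term survives, recovering a single FFT, which mirrors the abstract's remark that the fully uniform case reduces to the FFT.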
Fast Computation of Fourier Integral Operators
We introduce a general purpose algorithm for rapidly computing certain types
of oscillatory integrals which frequently arise in problems connected to wave
propagation and general hyperbolic equations. The problem is to evaluate
numerically a so-called Fourier integral operator (FIO) of the form
$(\mathcal{L}f)(x) = \int a(x,\xi)\, e^{2\pi i \Phi(x,\xi)}\, \hat f(\xi)\, d\xi$
at points given on a Cartesian grid. Here, $\xi$ is a frequency variable, $\hat f(\xi)$ is the
Fourier transform of the input $f$, $a(x,\xi)$ is an amplitude, and $\Phi(x,\xi)$
is a phase function, which is typically as large as $|x||\xi|$;
hence the integral is highly oscillatory at high frequencies. Because an FIO is
a dense matrix, a naive matrix-vector product with an input given on a
Cartesian grid of size $N$ by $N$ would require $O(N^4)$ operations.
This paper develops a new numerical algorithm which requires $O(N^{2.5}\log N)$ operations, and as low as $O(N^{1.5})$ in storage space. It operates by
localizing the integral over polar wedges with small angular aperture in the
frequency plane. On each wedge, the algorithm factorizes the kernel into two components: 1) a diffeomorphism which is
handled by means of a nonuniform FFT and 2) a residual factor which is handled
by numerical separation of the spatial and frequency variables. The key to the
complexity and accuracy estimates is that the separation rank of the residual
kernel is \emph{provably independent of the problem size}. Several numerical
examples demonstrate the efficiency and accuracy of the proposed methodology.
We also discuss the potential of our ideas for various applications such as
reflection seismology.

Comment: 31 pages, 3 figures
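The claim that the residual kernel has small separation rank can be illustrated numerically. The toy below is my own construction, not the paper's decomposition: it compares the numerical rank of a kernel whose phase stays $O(1)$ (as the residual phase does on each wedge) against a kernel whose phase grows with the problem size.

```python
import numpy as np

def numerical_rank(A, tol=1e-8):
    """Number of singular values above tol relative to the largest."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

n = 256
x = np.linspace(0.0, 1.0, n)             # spatial variable
xi = np.linspace(0.0, 1.0, n)            # normalized frequency variable (one wedge)

# Residual-type kernel: the phase is bounded by 1 independently of n, so a
# Taylor expansion of the exponential gives a short sum of separable terms
# and hence a small numerical rank.
phi_res = np.outer(np.cos(np.pi * x), np.sqrt(xi))    # |phi_res| <= 1
K_res = np.exp(2j * np.pi * phi_res)

# Full oscillatory kernel: the phase grows like n * x * xi, so the matrix is
# essentially full rank and no short separated expansion exists.
K_full = np.exp(2j * np.pi * n * np.outer(x, xi))

r_res, r_full = numerical_rank(K_res), numerical_rank(K_full)
```

Doubling `n` leaves `r_res` essentially unchanged while `K_full` stays near full rank, which is the behavior the complexity estimate relies on.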
A Fast and Accurate Algorithm for Spherical Harmonic Analysis on HEALPix Grids with Applications to the Cosmic Microwave Background Radiation
The Hierarchical Equal Area isoLatitude Pixelation (HEALPix) scheme is used
extensively in astrophysics for data collection and analysis on the sphere. The
scheme was originally designed for studying the Cosmic Microwave Background
(CMB) radiation, which represents the first light to travel during the early
stages of the universe's development and gives the strongest evidence for the
Big Bang theory to date. Refined analysis of the CMB angular power spectrum can
lead to revolutionary developments in understanding the nature of dark matter
and dark energy. In this paper, we present a new method for performing
spherical harmonic analysis for HEALPix data, which is a central component to
computing and analyzing the angular power spectrum of the massive CMB data
sets. The method uses a novel combination of a non-uniform fast Fourier
transform, the double Fourier sphere method, and Slevinsky's fast spherical
harmonic transform (Slevinsky, 2019). For a HEALPix grid with $N$ pixels
(points), the computational complexity of the method is $O(N \log^2 N)$, with an initial set-up cost of $O(N^{3/2} \log N)$. This compares
favorably with the $O(N^{3/2})$ runtime complexity of the current methods
available in the HEALPix software when multiple maps need to be analyzed at the
same time. Using numerical experiments, we demonstrate that the new method also
appears to provide better accuracy over the entire angular power spectrum of
synthetic data when compared to the current methods, with a convergence rate at
least two times higher.
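Once the spherical harmonic coefficients $a_{\ell m}$ of a map are available, the angular power spectrum follows from the standard estimator $C_\ell = \frac{1}{2\ell+1} \sum_m |a_{\ell m}|^2$. A minimal sketch of that last step (the function name and coefficient layout are mine; computing the $a_{\ell m}$ themselves is the transform the paper accelerates):

```python
import numpy as np

def angular_power_spectrum(alm):
    """Standard estimator C_l = (1/(2l+1)) * sum_m |a_lm|^2.

    alm: list where alm[l] is an array of the 2l+1 coefficients a_{l,m},
    m = -l, ..., l, for degree l.
    """
    return np.array([np.sum(np.abs(a) ** 2) / (2 * l + 1)
                     for l, a in enumerate(alm)])

# Toy check: coefficients of unit modulus give C_l = 1 at every degree.
lmax = 4
alm = [np.ones(2 * l + 1, dtype=complex) for l in range(lmax + 1)]
C = angular_power_spectrum(alm)
```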
Revisiting the Nyström Method for Improved Large-Scale Machine Learning
We reconsider randomized algorithms for the low-rank approximation of
symmetric positive semi-definite (SPSD) matrices such as Laplacian and kernel
matrices that arise in data analysis and machine learning applications. Our
main results consist of an empirical evaluation of the performance quality and
running time of sampling and projection methods on a diverse suite of SPSD
matrices. Our results highlight complementary aspects of sampling versus
projection methods; they characterize the effects of common data preprocessing
steps on the performance of these algorithms; and they point to important
differences between uniform sampling and nonuniform sampling methods based on
leverage scores. In addition, our empirical results illustrate that existing
theory is so weak that it does not provide even a qualitative guide to
practice. Thus, we complement our empirical results with a suite of worst-case
theoretical bounds for both random sampling and random projection methods.
These bounds are qualitatively superior to existing bounds---e.g. improved
additive-error bounds for spectral and Frobenius norm error and relative-error
bounds for trace norm error---and they point to future directions to make these
algorithms useful in even larger-scale machine learning applications.

Comment: 60 pages, 15 color figures; updated proof of Frobenius norm bounds,
added comparison to projection-based low-rank approximations, and an analysis
of the power method applied to SPSD sketches
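The basic Nyström construction studied in the paper is short: sample columns of an SPSD matrix $K$, form $C = K(:, S)$ and $W = K(S, S)$, and approximate $K \approx C W^{+} C^{\top}$. A minimal sketch with uniform column sampling (the sizes, kernel, and `rcond` cutoff are illustrative choices; the paper's comparison of uniform versus leverage-score sampling is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 40                                   # matrix size, sampled columns
pts = rng.uniform(0.0, 1.0, n)

# SPSD kernel matrix: Gaussian (RBF) kernel on 1-D points.
K = np.exp(-(pts[:, None] - pts[None, :]) ** 2)

# Nystrom approximation with uniform column sampling: K ~= C W^+ C^T,
# where C holds the sampled columns and W the corresponding submatrix.
idx = rng.choice(n, size=m, replace=False)
C = K[:, idx]
W = K[np.ix_(idx, idx)]
# Truncate tiny singular values of W in the pseudoinverse for stability.
K_nys = C @ np.linalg.pinv(W, rcond=1e-10) @ C.T

rel_err = np.linalg.norm(K - K_nys, "fro") / np.linalg.norm(K, "fro")
```

For a smooth kernel like this one the spectrum decays quickly, so even uniform sampling of a modest number of columns gives a small relative error; matrices with high-leverage rows are where nonuniform (leverage-score) sampling pays off, per the abstract.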