Fast Ewald summation for electrostatic potentials with arbitrary periodicity
A unified treatment for fast and spectrally accurate evaluation of
electrostatic potentials subject to periodic boundary conditions in any or none
of the three space dimensions is presented. Ewald decomposition is used to
split the problem into a real space and a Fourier space part, and the FFT based
Spectral Ewald (SE) method is used to accelerate the computation of the latter.
A key component in the unified treatment is an FFT based solution technique for
the free-space Poisson problem in three, two or one dimensions, depending on
the number of non-periodic directions. The cost of calculations is furthermore
reduced by employing an adaptive FFT for the doubly and singly periodic cases,
allowing for different local upsampling rates. The SE method will always be
most efficient for the triply periodic case as the cost for computing FFTs will
be the smallest, whereas the computational cost for the rest of the algorithm
is essentially independent of the periodicity. We show that the cost of
removing periodic boundary conditions from one or two directions out of three
will only marginally increase the total run time. Our comparisons also show
that the computational cost of the SE method in the free-space case is
typically about four times that of the triply periodic case. The Gaussian
window function previously used in the SE method is here compared to a
recently introduced approximation of the Kaiser-Bessel window function. With a
carefully tuned shape parameter, selected based on an error estimate for this
new window function, runtimes for the SE method can be further reduced.
Keywords: Fast Ewald summation, Fast Fourier transform, Arbitrary periodicity,
Coulomb potentials, Adaptive FFT, Fourier integral, Spectral accuracy.
Comment: 21 pages, 11 figures
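To make the Ewald decomposition concrete, the following is a minimal sketch of classical triply periodic Ewald summation for point charges in a cubic box. It is illustrative only and does not reproduce the paper's FFT-accelerated Spectral Ewald method or its free-space solver; all function and parameter names are our own.

```python
import numpy as np
from math import erfc, pi

def ewald_energy(pos, q, L, alpha, n_images=1, kmax=8):
    """Classical Ewald energy of point charges in a cubic box of side L.

    Illustrative sketch (not the Spectral Ewald method): the 1/r sum is
    split into a short-range erfc-screened real-space part, a smooth
    reciprocal-space part summed over k-vectors, and a self-term.
    """
    n = len(q)
    # Real-space part: erfc-screened interactions over nearby periodic images.
    e_real = 0.0
    shifts = range(-n_images, n_images + 1)
    for i in range(n):
        for j in range(n):
            for sx in shifts:
                for sy in shifts:
                    for sz in shifts:
                        if i == j and sx == sy == sz == 0:
                            continue
                        d = pos[i] - pos[j] + L * np.array([sx, sy, sz])
                        r = np.linalg.norm(d)
                        e_real += 0.5 * q[i] * q[j] * erfc(alpha * r) / r
    # Reciprocal-space (Fourier) part: smooth Gaussian charge clouds.
    e_recip = 0.0
    V = L ** 3
    for kx in range(-kmax, kmax + 1):
        for ky in range(-kmax, kmax + 1):
            for kz in range(-kmax, kmax + 1):
                if kx == ky == kz == 0:
                    continue
                k = 2 * pi / L * np.array([kx, ky, kz])
                k2 = k @ k
                S = np.sum(q * np.exp(1j * (pos @ k)))  # structure factor
                e_recip += (2 * pi / V) * np.exp(-k2 / (4 * alpha ** 2)) / k2 * abs(S) ** 2
    # Self-interaction correction.
    e_self = -alpha / np.sqrt(pi) * np.sum(q ** 2)
    return e_real + e_recip + e_self
```

A useful sanity check is that the total energy is independent of the splitting parameter `alpha` once both sums are converged; the SE method accelerates the reciprocal-space part above with FFTs.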
Learning Sparse High Dimensional Filters: Image Filtering, Dense CRFs and Bilateral Neural Networks
Bilateral filters have widespread use due to their edge-preserving
properties. The common use case is to manually choose a parametric filter type,
usually a Gaussian filter. In this paper, we will generalize the
parametrization and in particular derive a gradient descent algorithm so the
filter parameters can be learned from data. This derivation makes it possible
to learn high-dimensional linear filters that operate in sparsely populated feature
spaces. We build on the permutohedral lattice construction for efficient
filtering. The ability to learn more general forms of high-dimensional filters
can be used in several diverse applications. First, we demonstrate the use in
applications where single filter applications are desired for runtime reasons.
Further, we show how this algorithm can be used to learn the pairwise
potentials in densely connected conditional random fields and apply these to
different image segmentation tasks. Finally, we introduce layers of bilateral
filters in CNNs and propose bilateral neural networks for use on
high-dimensional sparse data. This view provides new ways to encode model
structure into network architectures. A diverse set of experiments empirically
validates the use of general filter forms.
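As background for what is being generalized, the following is a brute-force sketch of the fixed Gaussian bilateral filter on a 1-D signal. The paper replaces this hand-chosen Gaussian with learned filters over a permutohedral lattice; this sketch does not reproduce that construction, and the function name and parameters are our own.

```python
import numpy as np

def bilateral_filter_1d(signal, sigma_s, sigma_r):
    """Brute-force bilateral filter on a 1-D signal.

    Each output sample is a weighted average where weights combine a
    spatial Gaussian (nearby samples count more) with a range Gaussian
    (similar intensities count more), which preserves edges.
    """
    n = len(signal)
    idx = np.arange(n)
    out = np.empty(n)
    for i in range(n):
        # Spatial weight: Gaussian in sample position.
        w_s = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))
        # Range weight: Gaussian in intensity difference.
        w_r = np.exp(-((signal - signal[i]) ** 2) / (2 * sigma_r ** 2))
        w = w_s * w_r
        out[i] = np.sum(w * signal) / np.sum(w)
    return out
```

With a small range bandwidth `sigma_r`, a sharp step edge passes through almost untouched while flat regions are smoothed, which is the edge-preserving behavior the abstract refers to.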
Frame Theory for Signal Processing in Psychoacoustics
This review chapter aims to strengthen the link between frame theory and
signal processing tasks in psychoacoustics. On the one side, the basic concepts
of frame theory are presented and some proofs are provided to explain those
concepts in some detail. The goal is to reveal to hearing scientists how this
mathematical theory could be relevant for their research. In particular, we
focus on frame theory in a filter bank approach, which is probably the most
relevant viewpoint for audio signal processing. On the other side, basic
psychoacoustic concepts are presented to stimulate mathematicians to apply
their knowledge in this field.
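One basic frame-theory fact underlying such chapters is that for a finite family of vectors, the optimal frame bounds are the extreme eigenvalues of the frame operator. The following is a small numerical sketch of that fact (the function name is our own; the chapter itself is not code-based).

```python
import numpy as np

def frame_bounds(vectors):
    """Optimal frame bounds A, B of a finite frame in R^d.

    For a family {f_k} stored as the rows of F, the bounds in
    A*||x||^2 <= sum_k |<x, f_k>|^2 <= B*||x||^2 are the smallest
    and largest eigenvalues of the frame operator S = F^T F.
    """
    F = np.asarray(vectors, dtype=float)  # rows are the frame vectors
    S = F.T @ F                           # frame operator (d x d)
    eigvals = np.linalg.eigvalsh(S)       # ascending order
    return eigvals[0], eigvals[-1]        # A = min, B = max
```

For the three unit vectors at 120-degree angles in the plane (the "Mercedes-Benz" frame), this yields A = B = 3/2, i.e. a tight frame, the finite-dimensional analogue of the perfect-reconstruction filter banks discussed in the chapter.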
Compressive PCA for Low-Rank Matrices on Graphs
We introduce a novel framework for approximate recovery of data matrices
which are low-rank on graphs, from sampled measurements. The rows and columns
of such matrices belong to the span of the first few eigenvectors of the graphs
constructed between their rows and columns. We leverage this property to
recover the non-linear low-rank structures efficiently from sampled data
measurements, with a low cost (linear in n). First, a Restricted Isometry
Property (RIP) condition is introduced for efficient uniform sampling of the
rows and columns of such matrices based on the cumulative coherence of graph
eigenvectors. Second, a state-of-the-art fast low-rank recovery method is
suggested for the sampled data. Finally, several efficient, parallel and
parameter-free decoders are presented along with their theoretical analysis for
decoding the low-rank and cluster indicators for the full data matrix. Thus, we
overcome the computational limitations of the standard linear low-rank recovery
methods for big datasets. Our method can also be seen as a major step towards
efficient recovery of non-linear low-rank structures. For a matrix of size n x p
on a single-core machine, our method gains a speed-up over Robust Principal
Component Analysis (RPCA), where k << p is the subspace dimension.
Numerically, we can recover a low-rank matrix of size 10304 x 1000 about 100
times faster than Robust PCA.
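To illustrate the core idea of recovering a low-rank matrix from sampled rows and columns, the following is a generic CUR-style sketch. It is not the paper's graph-eigenvector-based sampling or its parallel decoders; the function name and sampling scheme are our own.

```python
import numpy as np

def cur_recover(X, row_idx, col_idx):
    """Recover a low-rank matrix from sampled rows and columns.

    Generic CUR-style sketch (not the paper's graph-based decoders):
    if X has rank k and the sampled rows/columns span its row and
    column spaces, then C @ pinv(W) @ R reproduces X exactly.
    """
    C = X[:, col_idx]                   # sampled columns
    R = X[row_idx, :]                   # sampled rows
    W = X[np.ix_(row_idx, col_idx)]     # intersection block
    return C @ np.linalg.pinv(W) @ R
```

For a rank-k matrix, sampling somewhat more than k rows and columns suffices for exact recovery with high probability, which is the regime in which the paper's method avoids running a full low-rank solver on the entire matrix.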