Far-Field Compression for Fast Kernel Summation Methods in High Dimensions
We consider fast kernel summations in high dimensions: given a large set of $N$ points in $d$ dimensions (with $d \gg 3$) and a pair-potential function (the kernel function), we compute a weighted sum of all pairwise kernel
interactions for each point in the set. Direct summation is equivalent to a
(dense) matrix-vector multiplication and scales quadratically with the number
of points. Fast kernel summation algorithms reduce this cost to log-linear or
linear complexity.
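In symbols (a standard reconstruction of the sum, since the abstract's inline math did not survive extraction), the task is to evaluate, for each target point $x_i$,
$$u(x_i) = \sum_{j=1}^{N} w_j\, \mathcal{K}(x_i, x_j), \qquad i = 1, \dots, N,$$
where the $w_j$ are given weights; direct evaluation costs $O(N^2)$ kernel calls.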
Treecodes and Fast Multipole Methods (FMMs) deliver tremendous speedups by
constructing approximate representations of interactions of points that are far
from each other. In algebraic terms, these representations correspond to
low-rank approximations of blocks of the overall interaction matrix. Existing approaches, however, require an excessive number of kernel evaluations as the dimension $d$ and the number of points in the dataset increase.
To address this issue, we use a randomized algebraic approach in which we
first sample the rows of a block and then construct its approximate, low-rank
interpolative decomposition. We examine the feasibility of this approach
theoretically and experimentally. We provide a new theoretical result showing a
tighter bound on the reconstruction error from uniformly sampling rows than the
existing state-of-the-art. We demonstrate that our sampling approach is
competitive with existing (but prohibitively expensive) methods from the
literature. We also construct kernel matrices for the Laplacian, Gaussian, and
polynomial kernels -- all commonly used in physics and data analysis. We
explore the numerical properties of blocks of these matrices, and show that
they are amenable to our approach. Depending on the data set, our randomized algorithm can successfully compute low-rank approximations in high dimensions. We report results for data sets with ambient dimensions from four to 1,000.
Comment: 43 pages, 21 figures
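As a rough sketch of the row-sampling approach described above (the variable names, the Gaussian kernel choice, the cluster separation, and the fixed rank are illustrative assumptions, not the paper's implementation), one can sample rows of a far-field block uniformly and pick skeleton columns with a column-pivoted QR:

import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
d, n_src, n_trg = 50, 400, 300              # ambient dimension, block sizes
X = rng.standard_normal((n_trg, d))          # target points
Y = rng.standard_normal((n_src, d)) + 1.0    # separated source points

def gaussian_kernel(X, Y, h):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h))

K = gaussian_kernel(X, Y, h=np.sqrt(d))      # far-field block, numerically low rank

# Step 1: uniformly sample s rows (only s * n_src kernel evaluations needed).
s = 60
Ks = K[rng.choice(n_trg, size=s, replace=False), :]

# Step 2: a column-pivoted QR on the sample selects k skeleton columns.
k = 20
_, _, piv = qr(Ks, pivoting=True, mode='economic')
skel = piv[:k]

# Step 3: least-squares interpolation matrix T with K ~= K[:, skel] @ T.
T, *_ = np.linalg.lstsq(Ks[:, skel], Ks, rcond=None)
err = np.linalg.norm(K - K[:, skel] @ T) / np.linalg.norm(K)
print(f"relative reconstruction error: {err:.2e}")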
On the Complexity and Approximation of Binary Evidence in Lifted Inference
Lifted inference algorithms exploit symmetries in probabilistic models to
speed up inference. They show impressive performance when calculating
unconditional probabilities in relational models, but often resort to
non-lifted inference when computing conditional probabilities. The reason is
that conditioning on evidence breaks many of the model's symmetries, which can
preempt standard lifting techniques. Recent theoretical results show, for
example, that conditioning on evidence which corresponds to binary relations is
#P-hard, suggesting that no lifting is to be expected in the worst case. In
this paper, we balance this negative result by identifying the Boolean rank of
the evidence as a key parameter for characterizing the complexity of
conditioning in lifted inference. In particular, we show that conditioning on
binary evidence with bounded Boolean rank is efficient. This opens up the
possibility of approximating evidence by a low-rank Boolean matrix
factorization, which we investigate both theoretically and empirically.
Comment: To appear in Advances in Neural Information Processing Systems 26 (NIPS), Lake Tahoe, USA, December 2013
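To make the notion concrete, here is a minimal sketch (toy matrices assumed, not the paper's code) of the Boolean product underlying low-rank Boolean matrix factorization, where $(B \circ C)_{ij} = \bigvee_k (B_{ik} \wedge C_{kj})$:

import numpy as np

def bool_product(B, C):
    # Boolean matrix product: entrywise OR over AND terms.
    return (B[:, :, None] & C[None, :, :]).any(axis=1).astype(int)

# A binary evidence matrix with Boolean rank at most 2:
B = np.array([[1, 1],
              [1, 0],
              [0, 1],
              [1, 1]])
C = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]])
E = bool_product(B, C)
print(E)
# Per the result above, conditioning on such evidence is efficient when the
# Boolean rank (here, 2) is bounded, even if E itself is large.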
An extension of Chebfun to two dimensions
An object-oriented MATLAB system is described that extends the capabilities of Chebfun to smooth functions of two variables defined on rectangles. Functions are approximated to essentially machine precision by using iterative Gaussian elimination with complete pivoting to form “chebfun2” objects representing low-rank approximations. Operations such as integration, differentiation, function evaluation, and transforms are particularly efficient. Global optimization, the singular value decomposition, and rootfinding are also extended to chebfun2 objects. Numerical applications are presented.
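A bare-bones sketch of the construction just described, iterative Gaussian elimination with complete pivoting to build a low-rank approximant (here on a fixed uniform grid rather than Chebfun's adaptive Chebyshev sampling, with assumed names and tolerances):

import numpy as np

f = lambda x, y: np.cos(x * y) + x * np.exp(y)     # a smooth test function
x = np.linspace(-1, 1, 200)
F = f(x[:, None], x[None, :])                      # samples on a 200x200 grid

R = F.copy()
cols, rows = [], []
for step in range(30):
    i, j = np.unravel_index(np.abs(R).argmax(), R.shape)   # complete pivot
    if np.abs(R[i, j]) < 1e-14 * np.abs(F).max():
        break                                      # ~machine precision reached
    cols.append(R[:, j] / R[i, j])
    rows.append(R[i, :].copy())
    R = R - np.outer(cols[-1], rows[-1])           # rank-one elimination step

F_k = np.array(cols).T @ np.array(rows)            # the low-rank approximant
print(f"rank {len(rows)}, max error {np.abs(F - F_k).max():.2e}")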
Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics
Quantum computing is powerful because unitary operators describing the
time-evolution of a quantum system have exponential size in terms of the number
of qubits present in the system. We develop a new "Singular value
transformation" algorithm capable of harnessing this exponential advantage,
that can apply polynomial transformations to the singular values of a block of
a unitary, generalizing the optimal Hamiltonian simulation results of Low and
Chuang. The proposed quantum circuits have a very simple structure, often give rise to optimal algorithms, and have appealing constant factors, while usually using only a constant number of ancilla qubits. We show that singular value
transformation leads to novel algorithms. We give an efficient solution to a
certain "non-commutative" measurement problem and propose a new method for
singular value estimation. We also show how to exponentially improve the
complexity of implementing fractional queries to unitaries with a gapped
spectrum. Finally, as a quantum machine learning application we show how to
efficiently implement principal component regression. "Singular value
transformation" is conceptually simple and efficient, and leads to a unified
framework of quantum algorithms incorporating a variety of quantum speed-ups.
We illustrate this by showing how it generalizes a number of prominent quantum
algorithms, including: optimal Hamiltonian simulation, implementing the
Moore-Penrose pseudoinverse with exponential precision, fixed-point amplitude
amplification, robust oblivious amplitude amplification, fast QMA
amplification, fast quantum OR lemma, certain quantum walk results and several
quantum machine learning algorithms. In order to exploit the strengths of the
presented method it is useful to know its limitations too, therefore we also
prove a lower bound on the efficiency of singular value transformation, which
often gives optimal bounds.
Comment: 67 pages, 1 figure
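As a purely classical illustration of what the transformation computes (this is linear algebra in NumPy, not a quantum circuit, and the block-encoding setup is an assumption made for the example), applying a polynomial to the singular values of a block of a unitary looks like:

import numpy as np

rng = np.random.default_rng(1)
n = 8
Q, _ = np.linalg.qr(rng.standard_normal((2 * n, 2 * n)))   # a random unitary
A = Q[:n, :n]                    # its top-left block; singular values lie in [0, 1]

U, s, Vt = np.linalg.svd(A)
p = np.polynomial.Chebyshev([0, 0, 0, 1])   # T_3, an odd degree-3 polynomial
A_p = U @ np.diag(p(s)) @ Vt                # singular values mapped through p

# QSVT realizes this map with a circuit whose cost grows linearly in deg(p);
# here we only verify the underlying linear-algebra identity.
print(np.allclose(np.linalg.svd(A_p)[1], np.sort(np.abs(p(s)))[::-1]))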
Weighted Polynomial Approximations: Limits for Learning and Pseudorandomness
Polynomial approximations to boolean functions have led to many positive
results in computer science. In particular, polynomial approximations to the
sign function underly algorithms for agnostically learning halfspaces, as well
as pseudorandom generators for halfspaces. In this work, we investigate the
limits of these techniques by proving inapproximability results for the sign
function.
Firstly, the polynomial regression algorithm of Kalai et al. (SIAM J. Comput. 2008) shows that halfspaces can be learned with respect to log-concave distributions on $\mathbb{R}^n$ in the challenging agnostic learning model. The
power of this algorithm relies on the fact that under log-concave
distributions, halfspaces can be approximated arbitrarily well by low-degree
polynomials. We ask whether this technique can be extended beyond log-concave
distributions, and establish a negative result. We show that polynomials of any degree cannot approximate the sign function to within arbitrarily low error for a large class of non-log-concave distributions on the real line, including those with densities proportional to $e^{-|x|^{0.99}}$.
Secondly, we investigate the derandomization of Chernoff-type concentration
inequalities. Chernoff-type tail bounds on sums of independent random variables
have pervasive applications in theoretical computer science. Schmidt et al.
(SIAM J. Discrete Math. 1995) showed that these inequalities can be established
for sums of random variables with only $O(\log(1/\delta))$-wise independence, for a tail probability of $\delta$. We show that their results are tight up to
constant factors.
These results rely on techniques from weighted approximation theory, which
studies how well functions on the real line can be approximated by polynomials
under various distributions. We believe that these techniques will have further
applications in other areas of computer science.
Comment: 22 pages
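For intuition about the objects involved (a generic numerical experiment under an assumed Laplace-like weight, not the paper's construction), one can measure how well low-degree polynomials approximate the sign function in weighted L2:

import numpy as np
from numpy.polynomial.chebyshev import chebvander

xs = np.linspace(-5, 5, 4001)
w = np.exp(-np.abs(xs)); w /= w.sum()      # density ~ e^{-|x|} (log-concave)
target = np.sign(xs)

for deg in (1, 5, 15, 45):
    V = chebvander(xs / 5, deg)            # well-conditioned polynomial basis
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(V * sw[:, None], target * sw, rcond=None)
    err = np.sum(w * (V @ coef - target) ** 2)
    print(f"degree {deg:3d}: weighted squared L2 error {err:.4f}")
# Under log-concave weights the error keeps shrinking with the degree; the
# abstract's negative result is that this fails for certain non-log-concave weights.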
Solving polynomial eigenvalue problems by means of the Ehrlich-Aberth method
Given the matrix polynomial $P(x) = \sum_{i=0}^{k} P_i x^i$, we consider the associated polynomial eigenvalue problem. This problem, viewed in terms of computing the roots of the scalar polynomial $\det P(x)$, is treated
in polynomial form rather than in matrix form by means of the Ehrlich-Aberth
iteration. The main computational issues are discussed, namely, the choice of
the starting approximations needed to start the Ehrlich-Aberth iteration, the
computation of the Newton correction, the halting criterion, and the treatment
of eigenvalues at infinity. We arrive at an effective implementation which provides more accurate approximations to the eigenvalues than methods based on the QZ algorithm. The case of polynomials having special
structures, like palindromic, Hamiltonian, symplectic, etc., where the
eigenvalues have special symmetries in the complex plane, is considered. A
general way to adapt the Ehrlich-Aberth iteration to structured matrix polynomials is introduced. Numerical experiments which confirm the effectiveness of this approach are reported.
Comment: Submitted to Linear Algebra Appl.
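To fix ideas, here is a minimal sketch of the Ehrlich-Aberth iteration on a scalar polynomial (the matrix-polynomial version replaces the Newton correction $p/p'$ with one computed from $\det P(x)$ and uses the structured starting points discussed above; both are omitted here, and all names are assumptions):

import numpy as np

def ehrlich_aberth(coeffs, tol=1e-12, max_iter=100):
    # coeffs are given from highest to lowest degree.
    n = len(coeffs) - 1
    z = 1.5 * np.exp(2j * np.pi * np.arange(n) / n) + 0.1  # circle of starting points
    p = np.polynomial.Polynomial(coeffs[::-1])
    dp = p.deriv()
    for _ in range(max_iter):
        newton = p(z) / dp(z)                # Newton correction
        diff = z[:, None] - z[None, :]
        np.fill_diagonal(diff, 1.0)          # placeholder to avoid 0/0
        inv = 1.0 / diff
        np.fill_diagonal(inv, 0.0)           # exclude the self term
        s = inv.sum(axis=1)                  # Aberth "deflation" term
        dz = newton / (1.0 - newton * s)
        z -= dz
        if np.abs(dz).max() < tol:
            break
    return z

# Roots of x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3):
print(np.sort_complex(ehrlich_aberth([1, -6, 11, -6])))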