Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations
We analyze the convergence of compressive sensing based sampling techniques
for the efficient evaluation of functionals of solutions for a class of
high-dimensional, affine-parametric, linear operator equations which depend on
possibly infinitely many parameters. The proposed algorithms are based on
so-called "non-intrusive" sampling of the high-dimensional parameter space,
reminiscent of Monte-Carlo sampling. In contrast to Monte-Carlo, however, a
functional of the parametric solution is then computed via compressive sensing
methods from samples of functionals of the solution. A key ingredient in our
analysis, of independent interest, consists in a generalization of recent results
on the approximate sparsity of generalized polynomial chaos representations
(gpc) of the parametric solution families, in terms of the gpc series with
respect to tensorized Chebyshev polynomials. In particular, we establish
sufficient conditions on the parametric inputs to the parametric operator
equation such that the Chebyshev coefficients of the gpc expansion are
contained in certain weighted $\ell^p$-spaces for suitable exponents $p$. Based on this we
show that reconstructions of the parametric solutions computed from the sampled
problems converge, with high probability, at the convergence rates afforded by
best $n$-term approximations of the parametric solution, up to logarithmic factors.
Comment: revised version, 27 pages.
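The recovery step described above can be sketched in one dimension: sample a functional that is sparse in a Chebyshev basis at randomly drawn parameters, then reconstruct its coefficients with a sparse-recovery solver. This toy uses orthogonal matching pursuit in place of the paper's method, with hypothetical sizes and coefficients:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)

# "True" functional of the parametric solution: sparse in the Chebyshev basis.
true_coef = np.zeros(64)
true_coef[[1, 5, 17]] = [2.0, -1.0, 0.5]
f = lambda y: C.chebval(y, true_coef)

# Non-intrusive sampling: only point evaluations of f at random draws,
# here from the Chebyshev measure (y = cos(pi * uniform)).
m = 50                                # far fewer samples than 64 unknowns
y = np.cos(np.pi * rng.random(m))
A = C.chebvander(y, 63)               # A[i, k] = T_k(y_i)
b = f(y)

def omp(A, b, sparsity):
    """Orthogonal matching pursuit: greedy sparse recovery from samples."""
    residual, support = b.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # Pick the basis function most correlated with the residual ...
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # ... and re-fit on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

x = omp(A, b, 3)
print(np.max(np.abs(x - true_coef)))  # recovery error
```

With noiseless samples and a correctly identified support, the final least-squares step reproduces the coefficients to machine precision; the compressive-sensing analysis in the paper is about when such recovery succeeds with high probability.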
A non-adapted sparse approximation of PDEs with stochastic inputs
We propose a method for the approximation of solutions of PDEs with
stochastic coefficients based on the direct, i.e., non-adapted, sampling of
solutions. This sampling can be done by using any legacy code for the
deterministic problem as a black box. The method converges in probability (with
probabilistic error bounds) as a consequence of sparsity and a concentration of
measure phenomenon on the empirical correlation between samples. We show that
the method is well suited for truly high-dimensional problems (with slow decay
in the spectrum).
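The non-adapted idea can be illustrated as follows: treat a deterministic solver as a black box (here a hypothetical closed-form stand-in), fit a fixed polynomial surrogate to random samples, and estimate the error probabilistically on held-out samples. This is a minimal sketch, not the paper's scheme, which additionally exploits sparsity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a legacy deterministic solver used as a black box
# (hypothetical: a scalar functional of a solution with random input z).
def legacy_solver(z):
    return np.exp(-z[0]) * np.sin(z[1]) + 0.1 * z[2] ** 3

d, m = 3, 200
Z = rng.uniform(-1, 1, (d, m))
u = np.array([legacy_solver(Z[:, i]) for i in range(m)])

# Non-adapted surrogate: a fixed total-degree-2 polynomial basis,
# chosen without looking at the samples.
def features(z):
    z1, z2, z3 = z
    return np.array([1, z1, z2, z3, z1*z2, z1*z3, z2*z3,
                     z1**2, z2**2, z3**2])

A = np.stack([features(Z[:, i]) for i in range(m)])
coef, *_ = np.linalg.lstsq(A, u, rcond=None)

# Probabilistic (Monte Carlo) error estimate on fresh samples.
Zt = rng.uniform(-1, 1, (d, 100))
err = [legacy_solver(Zt[:, i]) - features(Zt[:, i]) @ coef
       for i in range(100)]
rmse = float(np.sqrt(np.mean(np.square(err))))
```

The held-out root-mean-square error plays the role of the probabilistic error bound: it is itself a random quantity, concentrated around the true surrogate error.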
Numerical Methods for the Fractional Laplacian: a Finite Difference-quadrature Approach
The fractional Laplacian $(-\Delta)^{s}$ is a non-local operator which
depends on the parameter $s \in (0,1)$ and recovers the usual Laplacian as
$s \to 1$. A numerical method for the fractional Laplacian is proposed, based on
the singular integral representation for the operator. The method combines
finite difference with numerical quadrature, to obtain a discrete convolution
operator with positive weights. The accuracy of the method in terms of the
grid spacing is quantified, and convergence of the method is proven. The treatment of far
field boundary conditions using an asymptotic approximation to the integral is
used to obtain an accurate method. Numerical experiments on known exact
solutions validate the predicted convergence rates. Computational examples
include exponentially and algebraically decaying solutions with varying
regularity. The generalization to nonlinear equations involving the operator is
discussed: the obstacle problem for the fractional Laplacian is computed.
Comment: 29 pages, 9 figures.
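The discrete-convolution idea can be sketched in one dimension: quadrature weights for the tail of the singular integral give a convolution operator with positive weights. This is a simplified illustration (midpoint rule, singular near-field correction omitted, far field crudely extended by the boundary value rather than the paper's asymptotic treatment):

```python
import numpy as np
from math import gamma, sqrt, pi

def frac_lap_1d(u, h, s, K=2000):
    """Apply a discrete 1-D fractional Laplacian on a uniform grid via
    a convolution with positive quadrature weights (a sketch of the
    finite difference-quadrature idea, not the paper's exact scheme).

    Based on (-Delta)^s u(x) = C(s) P.V. int (u(x) - u(x+y)) |y|^{-1-2s} dy.
    """
    # Normalizing constant of the 1-D singular-integral representation.
    Cs = (4**s * s * gamma(s + 0.5)) / (sqrt(pi) * gamma(1 - s))
    n = len(u)
    out = np.zeros(n)
    ks = np.arange(1, K + 1)
    # Midpoint-rule weights on the tail |y| >= h/2; all positive.
    w = Cs * h / (ks * h) ** (1 + 2 * s)
    for i in range(n):
        # Extend u by its boundary values outside the grid (crude far field).
        up = u[np.minimum(i + ks, n - 1)]
        um = u[np.maximum(i - ks, 0)]
        out[i] = np.sum(w * (2 * u[i] - up - um))
    return out
```

Positivity of the weights gives a discrete maximum principle in miniature: the operator annihilates constants exactly, and is positive at a strict interior maximum of u.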
Elimination for generic sparse polynomial systems
We present a new probabilistic symbolic algorithm that, given a variety
defined in an n-dimensional affine space by a generic sparse system with fixed
supports, computes the Zariski closure of its projection to an l-dimensional
coordinate affine space with l < n. The complexity of the algorithm depends
polynomially on combinatorial invariants associated to the supports.
Comment: 22 pages.
Faster Sparse Matrix Inversion and Rank Computation in Finite Fields
We improve the best known running time for inverting sparse matrices over
finite fields, lowering the expected running time under the current values of
the fast rectangular matrix multiplication exponents. We achieve the same
running time for the computation of the rank and nullspace of a sparse matrix
over a finite field. This improvement relies on two key techniques. First, we
adopt the decomposition of an arbitrary matrix into block Krylov and Hankel
matrices from Eberly et al. (ISSAC 2007). Second, we show how to recover the
explicit inverse of a block Hankel matrix using low displacement rank
techniques for structured matrices and fast rectangular matrix multiplication
algorithms. We generalize our inversion method to block structured matrices
with other displacement operators and strengthen the best known upper bounds
for explicit inversion of block Toeplitz-like and block Hankel-like matrices,
as well as for explicit inversion of block Vandermonde-like matrices with
structured blocks. As a further application, we improve the complexity of
several algorithms in topological data analysis and in finite group theory.
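For reference, the problem being accelerated can be stated concretely: computing the rank of a matrix over a finite field. The sketch below is the naive dense Gaussian-elimination baseline over GF(p), not the paper's block-Krylov/Hankel method, whose point is to beat this cubic cost for sparse inputs:

```python
import numpy as np

def rank_mod_p(M, p):
    """Rank of an integer matrix over GF(p), p prime, by Gaussian
    elimination. Dense O(n^3) baseline for comparison only."""
    A = np.array(M, dtype=np.int64) % p
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        # Find a pivot in column c at or below row r.
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]
        inv = pow(int(A[r, c]), p - 2, p)   # inverse via Fermat (p prime)
        A[r] = (A[r] * inv) % p
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] = (A[i] - A[i, c] * A[r]) % p
        r += 1
        if r == rows:
            break
    return r
```

Note that rank depends on the field: `[[1, 1], [1, -1]]` has rank 1 over GF(2) (its rows coincide) but rank 2 over GF(7).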