    Optimal free parameters in orthonormal approximations

    We consider orthonormal expansions whose basis functions are governed by free parameters. If the basis functions satisfy a certain differential or difference equation, then an expression can be given both for a specific enforced convergence rate criterion and for an upper bound on the quadratic truncation error. This expression is a function of the free parameters and some simple signal measurements. We state the restrictions on the differential or difference equation that make this possible. Minimizing either the upper bound or the enforced convergence criterion over the free parameters yields the same optimal parameters, which take a simple form. The method is applied to several continuous-time and discrete-time orthonormal expansions, all related to classical orthogonal polynomials.
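    To make the setting concrete, the sketch below treats the scale of an orthonormal Hermite-function basis as the free parameter and minimizes the quadratic truncation error of an N-term expansion by brute force. The signal, basis, and grid search are illustrative assumptions of mine; the paper instead obtains the optimal parameters in a simple closed form from signal measurements.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_fn(n, t, s):
    """Orthonormal Hermite function of order n with scale (free) parameter s."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = math.sqrt(s / (2.0**n * math.factorial(n) * math.sqrt(math.pi)))
    return norm * hermval(s * t, c) * np.exp(-(s * t) ** 2 / 2.0)

def truncation_error(f_vals, s, N, t):
    """Quadratic truncation error ||f - sum_{n<N} c_n phi_n||^2 on a grid."""
    dt = t[1] - t[0]
    err = np.dot(f_vals, f_vals) * dt
    for n in range(N):
        phi = hermite_fn(n, t, s)
        c = np.dot(f_vals, phi) * dt   # expansion coefficient <f, phi_n>
        err -= c ** 2                  # orthonormality: subtract captured energy
    return err

t = np.linspace(-20.0, 20.0, 4001)
f_vals = np.exp(-np.abs(t))            # example signal
scales = np.linspace(0.2, 3.0, 57)
errs = [truncation_error(f_vals, s, N=8, t=t) for s in scales]
print("optimal scale ~", scales[int(np.argmin(errs))])
```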

    Beating Randomized Response on Incoherent Matrices

    Computing accurate low rank approximations of large matrices is a fundamental data mining task. In many applications, however, the matrix contains sensitive information about individuals. In such cases we would like to release a low rank approximation that satisfies a strong privacy guarantee such as differential privacy. Unfortunately, to date the best known algorithm for this task that satisfies differential privacy is based on naive input perturbation or randomized response: each entry of the matrix is perturbed independently by a sufficiently large random noise variable, and a low rank approximation is then computed on the resulting matrix. We give (the first) significant improvements in accuracy over randomized response under the natural and necessary assumption that the matrix has low coherence. Our algorithm is also very efficient and finds a constant-rank approximation of an m x n matrix in time O(mn). Note that even generating the noise matrix required for randomized response already requires time O(mn).
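    As a point of reference, the randomized-response baseline described above is easy to state in code. The sketch below is a minimal illustration, assuming a Gaussian mechanism calibrated for (eps, delta)-differential privacy with unit sensitivity; the function name and noise calibration are my own, and this is the baseline, not the paper's improved algorithm.

```python
import numpy as np

def dp_low_rank(A, k, eps, delta):
    """Randomized-response baseline: perturb every entry, then truncate the SVD."""
    # Standard Gaussian-mechanism scale for sensitivity 1 (an assumption here).
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noisy = A + np.random.normal(scale=sigma, size=A.shape)
    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]   # rank-k approximation

A = np.random.rand(100, 50)
A_k = dp_low_rank(A, k=5, eps=1.0, delta=1e-6)
print("approximation error:", np.linalg.norm(A - A_k))
```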

    Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations

    We analyze the convergence of compressive sensing based sampling techniques for the efficient evaluation of functionals of solutions for a class of high-dimensional, affine-parametric, linear operator equations which depend on possibly infinitely many parameters. The proposed algorithms are based on so-called "non-intrusive" sampling of the high-dimensional parameter space, reminiscent of Monte-Carlo sampling. In contrast to Monte-Carlo, however, a functional of the parametric solution is then computed via compressive sensing methods from samples of functionals of the solution. A key ingredient in our analysis, of independent interest, consists in a generalization of recent results on the approximate sparsity of generalized polynomial chaos (gpc) representations of the parametric solution families, in terms of the gpc series with respect to tensorized Chebyshev polynomials. In particular, we establish sufficient conditions on the parametric inputs to the parametric operator equation such that the Chebyshev coefficients of the gpc expansion are contained in certain weighted $\ell_p$-spaces for $0 < p \leq 1$. Based on this, we show that reconstructions of the parametric solutions computed from the sampled problems converge, with high probability, at the $L_2$, resp. $L_\infty$, convergence rates afforded by best $s$-term approximations of the parametric solution, up to logarithmic factors.
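    The non-intrusive idea, sampling the parameter space and recovering a sparse tensorized-Chebyshev expansion by l1 minimization, can be sketched on a toy problem. Everything below (the affine-parametric functional, the finite index set, and the ISTA solver) is an illustrative assumption of mine, not the paper's Petrov-Galerkin scheme.

```python
import numpy as np
from itertools import product

d, deg, m = 4, 3, 80                                   # parameters, max degree, samples
idx = list(product(range(deg + 1), repeat=d))          # tensor index set

def cheb_tensor(y, a):
    """Tensorized Chebyshev polynomial T_a(y) = prod_j cos(a_j * arccos(y_j))."""
    return np.prod(np.cos(np.asarray(a) * np.arccos(y)))

rng = np.random.default_rng(0)
Y = rng.uniform(-1.0, 1.0, size=(m, d))                # Monte-Carlo-like samples
G = 1.0 / (2.0 + Y @ np.arange(1, d + 1) / (2.0 * d))  # toy affine-parametric functional
Phi = np.array([[cheb_tensor(y, a) for a in idx] for y in Y])

# ISTA for the underdetermined l1 problem: min 0.5*||Phi x - G||^2 + lam*||x||_1
lam, L = 1e-3, np.linalg.norm(Phi, 2) ** 2             # L = Lipschitz constant
x = np.zeros(len(idx))
for _ in range(2000):
    z = x - Phi.T @ (Phi @ x - G) / L                  # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
print("recovered nonzero coefficients:", int(np.sum(np.abs(x) > 1e-4)))
```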

    Approximation of Eigenfunctions in Kernel-based Spaces

    Kernel-based methods in Numerical Analysis have the advantage of yielding optimal recovery processes in the "native" Hilbert space $\mathcal{H}$ in which they are reproducing. Continuous kernels on compact domains have an expansion into eigenfunctions that are both $L_2$-orthonormal and orthogonal in $\mathcal{H}$ (Mercer expansion). This paper examines the corresponding eigenspaces and proves that they have optimality properties among all other subspaces of $\mathcal{H}$. These results have strong connections to $n$-widths in Approximation Theory, and they establish that the errors of optimal approximations are closely related to the decay of the eigenvalues. Though the eigenspaces and eigenvalues are not readily available, they can be well approximated using the standard $n$-dimensional subspaces spanned by translates of the kernel with respect to $n$ nodes or centers. We give error bounds for the numerical approximation of the eigensystem via such subspaces. A series of examples shows that our numerical technique, based on a greedy point selection strategy, calculates the eigensystems with good accuracy.
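    A minimal sketch of the approximation step, under the assumption that a Nystrom-style eigendecomposition of the kernel matrix on n centers is used to estimate the Mercer eigensystem. The Gaussian kernel, the uniform centers, and the normalizations below are my own choices for illustration, not the paper's greedy selection.

```python
import numpy as np

def gauss_kernel(x, y, gamma=2.0):
    """Gaussian kernel matrix k(x_i, y_j) on 1-D point sets."""
    return np.exp(-gamma * (x[:, None] - y[None, :]) ** 2)

n = 200
x = np.linspace(0.0, 1.0, n)                 # centers in the domain [0, 1]
K = gauss_kernel(x, x)

# Eigenvalues of K/n estimate the kernel (Mercer) eigenvalues; eigenvectors
# give the nodal values of the eigenfunctions at the centers.
evals, evecs = np.linalg.eigh(K)
evals, evecs = evals[::-1] / n, evecs[:, ::-1]   # sort descending, rescale
print("leading eigenvalue estimates:", evals[:5])

# Nystrom extension of the leading eigenfunction to new evaluation points t:
t = np.linspace(0.0, 1.0, 50)
phi0 = gauss_kernel(t, x) @ evecs[:, 0] / (n * evals[0])
```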

    A black-box rational Arnoldi variant for Cauchy-Stieltjes matrix functions

    Rational Arnoldi is a powerful method for approximating functions of large sparse matrices times a vector. The selection of asymptotically optimal parameters for this method is crucial for its fast convergence. We present and investigate a novel strategy for automated parameter selection when the function to be approximated is of Cauchy-Stieltjes (or Markov) type, such as the matrix square root or the logarithm. The performance of this approach is demonstrated by numerical examples involving symmetric and nonsymmetric matrices. These examples suggest that our black-box method performs at least as well as, and typically better than, the standard rational Arnoldi method with parameters manually optimized for a given matrix.
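    For orientation, a bare-bones rational Krylov approximation of f(A)b for a Markov function such as the square root can be sketched as follows. The fixed, logarithmically spaced poles are a naive assumption for illustration only; the paper's contribution is precisely an automated, black-box pole selection that this sketch does not implement.

```python
import numpy as np
from scipy.linalg import solve, sqrtm, qr

def rational_krylov_f(A, b, poles, f):
    """Approximate f(A)b by projection onto a rational Krylov space."""
    V = b[:, None] / np.linalg.norm(b)
    for xi in poles:                       # one shifted linear solve per pole
        w = solve(A - xi * np.eye(A.shape[0]), V[:, -1])
        w -= V @ (V.T @ w)                 # Gram-Schmidt
        w -= V @ (V.T @ w)                 # reorthogonalize for stability
        V = np.hstack([V, (w / np.linalg.norm(w))[:, None]])
    Am = V.T @ A @ V                       # small projected matrix
    return np.linalg.norm(b) * (V @ f(Am)[:, 0])

rng = np.random.default_rng(1)
Q, _ = qr(rng.standard_normal((200, 200)))
d = np.linspace(0.1, 100.0, 200)
A = Q @ np.diag(d) @ Q.T                   # SPD test matrix
b = rng.standard_normal(200)

poles = -np.logspace(-1, 2, 8)             # naive poles spread over the spectrum
approx = rational_krylov_f(A, b, poles, lambda M: np.real(sqrtm(M)))
exact = Q @ np.diag(np.sqrt(d)) @ Q.T @ b
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```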