
    MATHICSE Technical Report : Analysis of discrete least squares on multivariate polynomial spaces with evaluations at low-discrepancy point sets

    We analyze the stability and accuracy of discrete least squares on multivariate polynomial spaces to approximate a given function depending on a multivariate random variable uniformly distributed on a hypercube. The polynomial approximation is calculated starting from pointwise noise-free evaluations of the target function at low-discrepancy point sets. We prove that the discrete least-squares approximation, in a multivariate anisotropic tensor product polynomial space and with evaluations at low-discrepancy point sets, is stable and accurate under the condition that the number of evaluations is proportional to the square of the dimension of the polynomial space, up to logarithmic factors. This result is analogous to those obtained in [7, 22, 19, 6] for discrete least squares with random point sets; however, it holds with certainty instead of just with high probability. The result is further generalized to arbitrary polynomial spaces associated with downward closed multi-index sets, but with a more demanding (and probably nonoptimal) proportionality between the number of evaluation points and the dimension of the polynomial space.
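
    As an illustration of this setting (a sketch under stated assumptions, not the paper's code): the snippet below fits a discrete least-squares approximation in a small anisotropic tensor-product Legendre space from noise-free evaluations at Sobol' points, with the number of evaluations chosen as roughly the square of the dimension of the polynomial space, as in the paper's stability condition. The dimension, per-dimension degrees, target function, and use of scipy.stats.qmc are all toy choices.

        import numpy as np
        from itertools import product
        from scipy.stats import qmc
        from numpy.polynomial.legendre import Legendre

        d = 2                                     # parameter dimension (toy choice)
        degrees = (4, 2)                          # anisotropic per-dimension degrees (assumed)
        indices = list(product(*(range(p + 1) for p in degrees)))
        n = len(indices)                          # dimension of the polynomial space
        m = 2 ** int(np.ceil(np.log2(n * n)))     # ~ n^2 evaluations, rounded up for Sobol'

        # Low-discrepancy (Sobol') points, mapped from [0,1]^d to the cube [-1,1]^d.
        y = 2.0 * qmc.Sobol(d, scramble=False).random(m) - 1.0

        def target(y):                            # any smooth target; a placeholder
            return np.exp(-np.sum(y ** 2, axis=1))

        # Design matrix of tensorized Legendre polynomials.
        A = np.ones((m, n))
        for j, nu in enumerate(indices):
            for k, nu_k in enumerate(nu):
                A[:, j] *= Legendre.basis(nu_k)(y[:, k])

        coef, *_ = np.linalg.lstsq(A, target(y), rcond=None)
        print("first coefficients:", coef[:5].round(4))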

    Caratheodory-Tchakaloff Subsampling

    We present a brief survey on the compression of discrete measures by Caratheodory-Tchakaloff Subsampling, its implementation by Linear or Quadratic Programming, and the application to multivariate polynomial Least Squares. We also give an algorithm that computes the corresponding Caratheodory-Tchakaloff (CATCH) points and weights for polynomial spaces on compact sets and manifolds in 2D and 3D.
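
    A minimal sketch of the Linear Programming variant, under assumptions not in the survey (uniform starting weights, a 2D total-degree monomial basis, feasibility solved with scipy's HiGHS backend): any vertex of the polytope {u >= 0 : V^T u = V^T w} has at most N = dim(basis) nonzero weights, which is exactly the Caratheodory-Tchakaloff compression.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        m, deg = 400, 4                           # initial support size, polynomial degree
        X = rng.uniform(-1.0, 1.0, size=(m, 2))   # a discrete measure on a 2D compact set
        w = np.full(m, 1.0 / m)                   # uniform starting weights (assumed)

        # Total-degree monomial basis in 2D. For least-squares compression one
        # matches moments at degree 2*deg; degree `deg` is used here for brevity.
        exps = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
        V = np.column_stack([X[:, 0] ** i * X[:, 1] ** j for i, j in exps])  # (m, N)
        moments = V.T @ w                         # moments to preserve

        # Any vertex of {u >= 0 : V^T u = moments} has at most N nonzero entries
        # (Tchakaloff/Caratheodory); the HiGHS simplex solver typically returns a vertex.
        res = linprog(c=np.zeros(m), A_eq=V.T, b_eq=moments,
                      bounds=(0, None), method="highs")
        keep = res.x > 1e-12
        print(f"compressed {m} points to {keep.sum()} (basis dimension N = {len(exps)})")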

    Robust Adaptive Least Squares Polynomial Chaos Expansions in High-Frequency Applications

    We present an algorithm for computing sparse, least squares-based polynomial chaos expansions, incorporating both adaptive polynomial bases and sequential experimental designs. The algorithm is employed to approximate stochastic high-frequency electromagnetic models in a black-box way, that is, given only a dataset of random parameter realizations and the corresponding observations of a quantity of interest, typically a scattering parameter. The construction of the polynomial basis is based on a greedy, adaptive, sensitivity-related method. The experimental design is expanded sequentially using different optimality criteria with respect to the algebraic form of the least squares problem. We investigate how different criteria affect the robustness of the derived surrogate models, that is, how much the approximation accuracy varies over different experimental designs. It is found that relatively optimistic criteria perform better on average than stricter ones, yielding superior approximation accuracies for equal dataset sizes. However, the results of strict criteria are significantly more robust, showing smaller variations in approximation accuracy over a range of experimental designs. Two criteria are proposed for a good accuracy-robustness trade-off.
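
    As a hedged sketch of one ingredient (sequential experimental design under an algebraic optimality criterion), the toy example below greedily grows a 1D Hermite least-squares design by D-optimality, i.e. by maximizing the log-determinant of the information matrix. The basis, candidate pool, and criterion are illustrative stand-ins, not the authors' algorithm.

        import numpy as np
        from numpy.polynomial.hermite_e import hermevander

        rng = np.random.default_rng(1)
        deg = 5                                   # 1D Hermite basis for brevity (assumed)
        pool = rng.standard_normal(500)           # candidate random parameter realizations
        design = list(rng.choice(len(pool), size=deg + 1, replace=False))

        def logdet_info(idx):
            A = hermevander(pool[idx], deg)       # least-squares design matrix
            sign, val = np.linalg.slogdet(A.T @ A)
            return val if sign > 0 else -np.inf

        for _ in range(20):                       # add 20 samples sequentially
            rest = [i for i in range(len(pool)) if i not in design]
            best = max(rest, key=lambda i: logdet_info(design + [i]))
            design.append(best)
        print("final design size:", len(design))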

    MATHICSE Technical Report : Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points

    We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability measure. The convergence estimates are given in the mean-square sense with respect to the sampling measure. The noise may be correlated with the location of the evaluation and may have nonzero mean (offset). We consider both cases of bounded or square-integrable noise/offset. We derive conditions relating the number of sampling points to the dimension of the underlying approximation space that ensure a stable and accurate approximation. Particular focus is on deriving estimates in probability within a given confidence level. We analyze how the best approximation error and the noise terms affect the convergence rate and the overall confidence level achieved by the convergence estimate. The proofs of our convergence estimates in probability use arguments from the theory of large deviations to bound the noise term. Finally we address the particular case of multivariate polynomial approximation spaces with any density in the beta family, including the uniform and Chebyshev densities.
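
    As a toy illustration of this sampling setting (every choice below is an assumption, not taken from the paper): noisy discrete least squares on a 1D Chebyshev basis with points drawn from the Chebyshev (arcsine) density, a member of the beta family, followed by a Monte Carlo estimate of the mean-square error with respect to the sampling measure.

        import numpy as np
        from numpy.polynomial.chebyshev import chebvander

        rng = np.random.default_rng(2)
        n = 10                                    # dimension of the polynomial space
        m = 4 * n                                 # oversampling rate (illustrative)

        y = np.cos(np.pi * rng.random(m))         # points with Chebyshev (arcsine) density
        f = lambda t: np.sin(3 * t)               # placeholder target function
        noise = 0.01 * rng.standard_normal(m)     # zero-offset, bounded-variance noise

        A = chebvander(y, n - 1)                  # (m, n) Chebyshev design matrix
        coef, *_ = np.linalg.lstsq(A, f(y) + noise, rcond=None)

        t = np.cos(np.pi * rng.random(20000))     # fresh Chebyshev-distributed points
        err = np.sqrt(np.mean((chebvander(t, n - 1) @ coef - f(t)) ** 2))
        print(f"mean-square error w.r.t. the sampling measure: {err:.2e}")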

    Some Results on the Complexity of Numerical Integration

    This is a survey (21 pages, 124 references) written for the MCQMC 2014 conference in Leuven, April 2014. We start with the seminal paper of Bakhvalov (1959) and end with new results on the curse of dimension and on the complexity of oscillatory integrals. Some small errors of earlier versions are corrected.

    MATHICSE Technical Report : Discrete least-squares approximations over optimized downward closed polynomial spaces in arbitrary dimension

    We analyze the accuracy of the discrete least-squares approximation of a function $u$ in multivariate polynomial spaces $P_\Lambda := \mathrm{span}\{y \mapsto y^\nu \,:\, \nu \in \Lambda\}$ with $\Lambda \subset \mathbb{N}_0^d$ over the domain $\Gamma := [-1,1]^d$, based on the sampling of this function at points $y^1, \dots, y^m \in \Gamma$. The samples are independently drawn according to a given probability density $\rho$ belonging to the class of multivariate beta densities, which includes the uniform and Chebyshev densities as particular cases. Motivated by recent results on high-dimensional parametric and stochastic PDEs, we restrict our attention to polynomial spaces associated with downward closed sets $\Lambda$ of prescribed cardinality $n$, and we optimize the choice of the space for the given sample. This implies in particular that the selected polynomial space depends on the sample. We are interested in comparing the error of this least-squares approximation, measured in $L^2(\Gamma,\rho)$, with the best achievable polynomial approximation error when using downward closed sets of cardinality $n$. We establish conditions between the dimension $n$ and the size $m$ of the sample under which these two errors are proven to be comparable. Our main finding is that the dimension $d$ enters only moderately in the resulting trade-off between $m$ and $n$, in terms of a logarithmic factor $\ln(d)$, and is even absent when the optimization is restricted to a relevant subclass of downward closed sets, named anchored sets. In principle, this allows one to use these methods in arbitrarily high or even infinite dimension. Our analysis builds upon [3], which considered fixed and non-optimized downward closed multi-index sets. Potential applications of the proposed results are found in the development and analysis of efficient numerical methods for computing the solution of high-dimensional parametric or stochastic PDEs, but are not limited to this area.
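
    As a small illustration of the combinatorial structure involved (a sketch, not the paper's optimization algorithm): a multi-index set $\Lambda \subset \mathbb{N}_0^d$ is downward closed if, together with every $\nu$, it contains all componentwise-smaller $\mu$. The helpers below check this property and list the admissible extensions, i.e. the candidate indices a greedy optimization of the space could add while preserving downward closedness.

        def is_downward_closed(Lambda):
            S = set(Lambda)
            for nu in S:
                for k in range(len(nu)):
                    if nu[k] > 0:
                        mu = nu[:k] + (nu[k] - 1,) + nu[k + 1:]
                        if mu not in S:
                            return False
            return True

        def admissible_extensions(Lambda, d):
            # indices nu outside Lambda whose addition keeps Lambda downward closed
            S = set(Lambda)
            out = set()
            for nu in S:
                for k in range(d):
                    cand = nu[:k] + (nu[k] + 1,) + nu[k + 1:]
                    if cand in S:
                        continue
                    if all(cand[:j] + (cand[j] - 1,) + cand[j + 1:] in S
                           for j in range(d) if cand[j] > 0):
                        out.add(cand)
            return out

        Lambda = [(0, 0), (1, 0), (0, 1), (2, 0)]
        print(is_downward_closed(Lambda))                # True
        print(sorted(admissible_extensions(Lambda, 2)))  # [(0, 2), (1, 1), (3, 0)]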