
    Robust Adaptive Least Squares Polynomial Chaos Expansions in High-Frequency Applications

    We present an algorithm for computing sparse, least squares-based polynomial chaos expansions, incorporating both adaptive polynomial bases and sequential experimental designs. The algorithm approximates stochastic high-frequency electromagnetic models in a black-box way, that is, given only a dataset of random parameter realizations and the corresponding observations of a quantity of interest, typically a scattering parameter. The polynomial basis is constructed with a greedy, adaptive, sensitivity-based method. The experimental design is expanded sequentially using different optimality criteria derived from the algebraic form of the least squares problem. We investigate how these criteria affect the robustness of the derived surrogate models, that is, how much the approximation accuracy varies across different experimental designs. We find that relatively optimistic criteria perform better on average than stricter ones, yielding superior approximation accuracy for equal dataset sizes. However, strict criteria are significantly more robust, in the sense that the variation in approximation accuracy over a range of experimental designs is reduced. Two criteria are proposed as a good accuracy-robustness trade-off.
    Comment: 17 pages, 7 figures, 2 tables
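As a rough illustration of the kind of surrogate described in this abstract, the following sketch fits a least-squares Legendre expansion to a black-box model and then sparsifies it greedily. The model function, degree, sparsity level, and random design are illustrative assumptions, not the paper's actual algorithm or data.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(0)

# Stand-in for an expensive black-box solver mapping a random input
# y in [-1, 1] to a scalar quantity of interest (illustrative only).
def model(y):
    return np.exp(0.5 * y) + 0.1 * y**3

m, p = 200, 8                       # dataset size, maximum polynomial degree
y = rng.uniform(-1.0, 1.0, m)       # experimental design (plain random here)
f = model(y)

A = legvander(y, p)                 # Legendre design matrix, shape (m, p + 1)
coef, *_ = np.linalg.lstsq(A, f, rcond=None)

# Greedy sparsification: keep only the largest coefficients, loosely
# mimicking an adaptive sparse basis selection.
keep = np.argsort(np.abs(coef))[::-1][:4]
sparse = np.zeros_like(coef)
sparse[keep] = coef[keep]

# Validate the sparse surrogate on fresh samples.
y_test = rng.uniform(-1.0, 1.0, 1000)
err = np.sqrt(np.mean((legvander(y_test, p) @ sparse - model(y_test)) ** 2))
print(f"RMS surrogate error: {err:.2e}")
```

The sequential-design aspect of the paper would replace the one-shot random design above by enriching `y` point by point according to an optimality criterion.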

    MATHICSE Technical Report : Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points

    We study the accuracy of the discrete least-squares approximation, on a finite dimensional space, of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability measure. The convergence estimates are given in the mean-square sense with respect to the sampling measure. The noise may be correlated with the location of the evaluation and may have nonzero mean (offset). We consider both the bounded and the square-integrable noise/offset cases. We derive conditions relating the number of sampling points to the dimension of the underlying approximation space that ensure a stable and accurate approximation. Particular focus is on deriving estimates in probability within a given confidence level. We analyze how the best approximation error and the noise terms affect the convergence rate and the overall confidence level achieved by the convergence estimate. The proofs of our convergence estimates in probability use arguments from the theory of large deviations to bound the noise term. Finally, we address the particular case of multivariate polynomial approximation spaces with any density in the beta family, including the uniform and Chebyshev densities.
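A minimal sketch of the setting analyzed above, with a simple one-dimensional target and zero-mean Gaussian noise (both illustrative choices): discrete least squares on random uniform samples, with the mean-square error against the true target estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

def target(y):
    return np.sin(np.pi * y)

n = 6                                  # dimension of the polynomial space
m = 60                                 # number of samples; m well above n
y = rng.uniform(-1.0, 1.0, m)          # random points, uniform sampling measure
noise = 0.05 * rng.standard_normal(m)  # zero-mean, square-integrable noise
f = target(y) + noise

# Discrete least squares on span{1, y, ..., y^(n-1)}.
V = np.vander(y, n, increasing=True)
c, *_ = np.linalg.lstsq(V, f, rcond=None)

# Mean-square error with respect to the sampling measure, via Monte Carlo.
yt = rng.uniform(-1.0, 1.0, 20000)
e = np.sqrt(np.mean((np.vander(yt, n, increasing=True) @ c - target(yt)) ** 2))
print(f"L2 error: {e:.3f}")
```

With `m` well above `n`, the computed error stays close to the sum of the best approximation error and a damped noise contribution, which is the qualitative behavior the estimates above quantify.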

    MATHICSE Technical Report : Discrete least-squares approximations over optimized downward closed polynomial spaces in arbitrary dimension

    We analyze the accuracy of the discrete least-squares approximation of a function u in multivariate polynomial spaces $P_\Lambda := \mathrm{span}\{y \mapsto y^\nu \mid \nu \in \Lambda\}$ with $\Lambda \subset \mathbb{N}_0^d$ over the domain $\Gamma := [-1,1]^d$, based on the sampling of this function at points $y^1, \dots, y^m \in \Gamma$. The samples are independently drawn according to a given probability density $\rho$ belonging to the class of multivariate beta densities, which includes the uniform and Chebyshev densities as particular cases. Motivated by recent results on high-dimensional parametric and stochastic PDEs, we restrict our attention to polynomial spaces associated with downward closed sets $\Lambda$ of prescribed cardinality n, and we optimize the choice of the space for the given sample. This implies in particular that the selected polynomial space depends on the sample. We are interested in comparing the error of this least-squares approximation, measured in $L^2(\Gamma,\rho)$, with the best achievable polynomial approximation error when using downward closed sets of cardinality n. We establish conditions between the dimension n and the size m of the sample under which these two errors are proven to be comparable. Our main finding is that the dimension d enters only moderately in the resulting trade-off between m and n, through a logarithmic factor ln(d), and is even absent when the optimization is restricted to a relevant subclass of downward closed sets, named anchored sets. In principle, this allows one to use these methods in arbitrarily high or even infinite dimension. Our analysis builds upon [3], which considered fixed and non-optimized downward closed multi-index sets. Potential applications of the proposed results are found in the development and analysis of efficient numerical methods for computing the solution of high-dimensional parametric or stochastic PDEs, but are not limited to this area.
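The downward-closedness property that underpins this analysis is easy to state in code. The following check is a generic sketch (the set representation and function name are our own, not from the paper):

```python
from itertools import product

def is_downward_closed(Lambda):
    """Return True if, for every multi-index nu in Lambda, every mu with
    mu <= nu componentwise also belongs to Lambda."""
    S = set(Lambda)
    for nu in S:
        # Enumerate all multi-indices mu <= nu componentwise.
        for mu in product(*(range(v + 1) for v in nu)):
            if mu not in S:
                return False
    return True

print(is_downward_closed({(0, 0), (1, 0), (0, 1)}))  # True
print(is_downward_closed({(0, 0), (2, 0)}))          # False: (1, 0) missing
```

Restricting the space optimization to such sets is what keeps the dimension dependence down to the ln(d) factor mentioned above.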

    Quadrature Strategies for Constructing Polynomial Approximations

    Finding suitable points for multivariate polynomial interpolation and approximation is a challenging task. Yet, despite this challenge, tremendous research has been dedicated to this singular cause. In this paper, we begin by reviewing classical methods for finding suitable quadrature points for polynomial approximation in both the univariate and multivariate settings. Then, we categorize recent advances into those that propose a new sampling approach and those centered on an optimization strategy. The sampling approaches yield a favorable discretization of the domain, while the optimization methods pick a subset of the discretized samples that minimizes certain objectives. While not all strategies follow this two-stage approach, most do. Sampling techniques covered include subsampling quadratures, Christoffel, induced, and Monte Carlo methods. Optimization methods discussed range from linear programming ideas and Newton's method to greedy procedures from numerical linear algebra. Our exposition is aided by examples that implement some of the aforementioned strategies.
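One of the greedy numerical-linear-algebra procedures alluded to above can be sketched as the two-stage scheme the abstract describes: a Monte Carlo discretization followed by greedy row selection, equivalent to QR with column pivoting on the transposed Vandermonde matrix (the degree, candidate-pool size, and basis choice below are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(2)

p = 5                                # polynomial degree
M = 2000                             # candidate pool size
y_cand = rng.uniform(-1.0, 1.0, M)   # stage 1: Monte Carlo discretization
A = legvander(y_cand, p)             # candidate Legendre-Vandermonde matrix

# Stage 2: greedy subset selection. At each step keep the point whose
# basis-evaluation row has the largest norm after projecting out the
# rows already chosen (column-pivoted QR applied to A^T).
k = p + 1
R = A.copy()
pick = []
for _ in range(k):
    j = int(np.argmax(np.sum(R**2, axis=1)))
    pick.append(j)
    v = R[j] / np.linalg.norm(R[j])
    R = R - np.outer(R @ v, v)       # deflate the chosen direction

cond = np.linalg.cond(legvander(y_cand[pick], p))
print(f"condition number of the subsampled design: {cond:.1f}")
```

This is essentially the approximate-Fekete-point construction; the linear programming and Newton-based alternatives mentioned in the abstract instead optimize over the candidate weights.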

    06391 Abstracts Collection -- Algorithms and Complexity for Continuous Problems

    From 24.09.06 to 29.09.06, the Dagstuhl Seminar 06391 ``Algorithms and Complexity for Continuous Problems'' was held at the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar are collected in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    De Casteljau's algorithm in geometric data analysis: Theory and application

    For decades, de Casteljau's algorithm has been used as a fundamental building block in curve and surface design and has found a wide range of applications in fields such as scientific computing and discrete geometry, to name but a few. With increasing interest in nonlinear data science, its constructive approach has been shown to provide a principled way to generalize parametric smooth curves to manifolds. These curves have found remarkable new applications in the analysis of parameter-dependent, geometric data. This article surveys the recent theoretical developments in this area as well as its applications in fields such as geometric morphometrics and longitudinal data analysis in medicine, archaeology, and meteorology.
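The classical algorithm itself is a short recursion of repeated linear interpolations; here is a minimal Euclidean sketch (the manifold generalizations surveyed in the article replace these straight-line interpolations with geodesics):

```python
def de_casteljau(points, t):
    """Evaluate the Bezier curve with the given control points at
    parameter t by repeated linear interpolation."""
    pts = list(points)
    while len(pts) > 1:
        # Replace each adjacent pair by its convex combination at t.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier with control points (0, 0), (1, 2), (2, 0):
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))  # (1.0, 1.0)
```

Because every step is a convex combination of two points, the construction carries over to any space where "the point a fraction t of the way from a to b" makes sense, which is exactly the geodesic generalization the survey builds on.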