Robust Adaptive Least Squares Polynomial Chaos Expansions in High-Frequency Applications
We present an algorithm for computing sparse, least squares-based polynomial
chaos expansions, incorporating both adaptive polynomial bases and sequential
experimental designs. The algorithm is employed to approximate stochastic
high-frequency electromagnetic models in a black-box way, in particular, given
only a dataset of random parameter realizations and the corresponding
observations regarding a quantity of interest, typically a scattering
parameter. The construction of the polynomial basis is based on a greedy,
adaptive, sensitivity-related method. The sequential expansion of the
experimental design employs different optimality criteria, with respect to the
algebraic form of the least squares problem. We investigate how different
conditions affect the robustness of the derived surrogate models, that is, how
much the approximation accuracy varies given different experimental designs. It
is found that relatively optimistic criteria perform on average better than
stricter ones, yielding superior approximation accuracies for equal dataset
sizes. However, the results of strict criteria are significantly more robust,
as reduced variations regarding the approximation accuracy are obtained, over a
range of experimental designs. Two criteria are proposed for a good
accuracy-robustness trade-off.
Comment: 17 pages, 7 figures, 2 tables
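The core building block described above, a least-squares polynomial chaos fit from a dataset of random parameter realizations and observed outputs, can be illustrated in a few lines. This is a minimal one-dimensional sketch with a fixed Legendre basis, not the paper's adaptive algorithm; the toy model and all names are illustrative.

```python
import numpy as np

def legendre_basis(x, degree):
    """Evaluate Legendre polynomials P_0, ..., P_degree at the points x."""
    return np.stack([np.polynomial.legendre.Legendre.basis(k)(x)
                     for k in range(degree + 1)], axis=1)

def fit_pce(x, y, degree):
    """Least-squares polynomial chaos coefficients for a 1-D input on [-1, 1]."""
    A = legendre_basis(x, degree)              # m x (degree+1) design matrix
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def eval_pce(coeffs, x):
    """Evaluate the surrogate model at new parameter values x."""
    return legendre_basis(x, len(coeffs) - 1) @ coeffs

# Toy quantity of interest, observed at random parameter realizations
rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=200)
y_train = np.cos(2.0 * x_train) + 0.5 * x_train
coeffs = fit_pce(x_train, y_train, degree=8)

x_test = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(eval_pce(coeffs, x_test) - (np.cos(2.0 * x_test) + 0.5 * x_test)))
print(f"max surrogate error: {err:.2e}")
```

The adaptive and sequential ingredients of the paper sit on top of this plain fit: the basis would grow greedily instead of being fixed at degree 8, and the training points would be added one at a time according to an optimality criterion rather than drawn in a single batch.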
MATHICSE Technical Report : Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points
We study the accuracy of the discrete least-squares approximation on a finite-dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability measure. The convergence estimates are given in the mean-square sense with respect to the sampling measure. The noise may be correlated with the location of the evaluation and may have nonzero mean (offset). We consider both cases of bounded and square-integrable noise/offset. We prove conditions between the number of sampling points and the dimension of the underlying approximation space that ensure a stable and accurate approximation. Particular focus is on deriving estimates in probability within a given confidence level. We analyze how the best approximation error and the noise terms affect the convergence rate and the overall confidence level achieved by the convergence estimate. The proofs of our convergence estimates in probability use arguments from the theory of large deviations to bound the noise term. Finally, we address the particular case of multivariate polynomial approximation spaces with any density in the beta family, including the uniform and Chebyshev densities.
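The stability condition relating the number of sampling points to the dimension of the approximation space can be made concrete with a small numpy experiment (not taken from the report): for an orthonormal Legendre basis under the uniform sampling measure, the empirical Gram matrix concentrates around the identity once the number of samples m is large relative to the space dimension n.

```python
import numpy as np

rng = np.random.default_rng(1)

def gram_deviation(m, n):
    """Spectral deviation ||G - I|| of the empirical Gram matrix for an
    orthonormal Legendre basis of dimension n, sampled at m uniform
    points on [-1, 1]."""
    x = rng.uniform(-1.0, 1.0, size=m)
    # sqrt(2k+1) * P_k is orthonormal w.r.t. the uniform density on [-1, 1]
    A = np.stack([np.sqrt(2 * k + 1) * np.polynomial.legendre.Legendre.basis(k)(x)
                  for k in range(n)], axis=1)
    G = A.T @ A / m
    return np.linalg.norm(G - np.eye(n), ord=2)

n = 10
few = gram_deviation(5 * n, n)       # too few samples: unstable Gram matrix
many = gram_deviation(500 * n, n)    # oversampled: G close to the identity
print(f"||G - I||: m=5n -> {few:.2f}, m=500n -> {many:.2f}")
```

A small deviation ||G - I|| is exactly what makes the least-squares problem well-conditioned, so that the noise and offset terms enter the error estimates with controlled constants.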
MATHICSE Technical Report : Discrete least-squares approximations over optimized downward closed polynomial spaces in arbitrary dimension
We analyze the accuracy of the discrete least-squares approximation of a function u in multivariate polynomial spaces, based on the sampling of this function at m points. The samples are independently drawn according to a given probability density belonging to the class of multivariate beta densities, which includes the uniform and Chebyshev densities as particular cases. Motivated by recent results on high-dimensional parametric and stochastic PDEs, we restrict our attention to polynomial spaces associated with downward closed sets of prescribed cardinality n, and we optimize the choice of the space for the given sample. This implies in particular that the selected polynomial space depends on the sample. We are interested in comparing the error of this least-squares approximation, measured with respect to the sampling density, with the best achievable polynomial approximation error when using downward closed sets of cardinality n. We establish conditions between the dimension n and the size m of the sample under which these two errors are proven to be comparable. Our main finding is that the dimension d enters only moderately in the resulting trade-off between m and n, in terms of a logarithmic factor ln(d), and is even absent when the optimization is restricted to a relevant subclass of downward closed sets, named anchored sets. In principle, this allows one to use these methods in arbitrarily high or even infinite dimension. Our analysis builds upon [3], which considered fixed and non-optimized downward closed multi-index sets. Potential applications of the proposed results are found in the development and analysis of efficient numerical methods for computing the solution of high-dimensional parametric or stochastic PDEs, but are not limited to this area.
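The combinatorial objects in this abstract, downward closed and anchored multi-index sets, are easy to make concrete. The sketch below (not from the report) checks downward closedness and builds the total-degree set as a standard example; the "anchored" test follows the definition as I recall it from this literature, namely that the active dimensions form an initial segment.

```python
from itertools import product

def is_downward_closed(Lambda):
    """Lambda is downward closed if, with every nu, it contains every mu
    obtained by decreasing one component of nu by 1."""
    S = set(Lambda)
    for nu in S:
        for i in range(len(nu)):
            if nu[i] > 0 and nu[:i] + (nu[i] - 1,) + nu[i + 1:] not in S:
                return False
    return True

def total_degree_set(d, w):
    """All multi-indices in d dimensions with |nu|_1 <= w (downward closed)."""
    return [nu for nu in product(range(w + 1), repeat=d) if sum(nu) <= w]

def is_anchored(Lambda):
    """Anchored (as defined in the related literature): downward closed, and
    the dimensions in which some index is active form an initial segment
    {0, ..., k-1}."""
    S = set(Lambda)
    d = len(next(iter(S)))
    unit = lambda i: tuple(1 if j == i else 0 for j in range(d))
    active = [i for i in range(d) if unit(i) in S]
    return is_downward_closed(Lambda) and active == list(range(len(active)))

TD = total_degree_set(3, 2)   # cardinality C(3+2, 3) = 10
print(len(TD), is_downward_closed(TD), is_anchored(TD))
```

The optimization described in the abstract searches over such sets of fixed cardinality n for the one best adapted to the given sample, which is why the selected polynomial space becomes sample-dependent.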
Quadrature Strategies for Constructing Polynomial Approximations
Finding suitable points for multivariate polynomial interpolation and
approximation is a challenging task. Yet, despite this challenge, there has
been tremendous research dedicated to this singular cause. In this paper, we
begin by reviewing classical methods for finding suitable quadrature points for
polynomial approximation in both the univariate and multivariate setting. Then,
we categorize recent advances into those that propose a new sampling approach
and those centered on an optimization strategy. The sampling approaches yield a
favorable discretization of the domain, while the optimization methods pick a
subset of the discretized samples that minimize certain objectives. While not
all strategies follow this two-stage approach, most do. Sampling techniques
covered include subsampling quadratures, Christoffel, induced and Monte Carlo
methods. Optimization methods discussed range from linear programming ideas and
Newton's method to greedy procedures from numerical linear algebra. Our
exposition is aided by examples that implement some of the aforementioned
strategies.
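The two-stage approach described above, a sampling stage that discretizes the domain followed by an optimization stage that picks a subset, can be sketched with one of the greedy procedures from numerical linear algebra that the paper mentions. This is an illustrative numpy implementation, not code from the paper: it greedily selects candidate points by repeatedly picking the Vandermonde row of largest residual norm and deflating it (equivalent to QR with column pivoting on the transposed Vandermonde matrix).

```python
import numpy as np

def legendre_vandermonde(x, n):
    """m x n matrix of Legendre polynomials P_0, ..., P_{n-1} evaluated at x."""
    return np.stack([np.polynomial.legendre.Legendre.basis(k)(x)
                     for k in range(n)], axis=1)

def subsample_points(x_candidates, n):
    """Greedily pick n of the candidate points: take the row of largest
    norm, orthogonalize the remaining rows against it, repeat."""
    rows = legendre_vandermonde(x_candidates, n).copy()
    chosen = []
    for _ in range(n):
        i = int(np.argmax(np.linalg.norm(rows, axis=1)))
        chosen.append(i)
        q = rows[i] / np.linalg.norm(rows[i])
        rows = rows - np.outer(rows @ q, q)    # deflate the chosen direction
    return np.sort(np.asarray(x_candidates)[chosen])

# Stage 1: dense discretization of [-1, 1]; stage 2: subsample to n points
x_cand = np.linspace(-1.0, 1.0, 201)
n = 9
x_sub = subsample_points(x_cand, n)

# Interpolate the Runge function at the selected points and test elsewhere
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
coeffs = np.linalg.solve(legendre_vandermonde(x_sub, n), f(x_sub))
x_test = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(legendre_vandermonde(x_test, n) @ coeffs - f(x_test)))
print(f"{n} selected points, max interpolation error {err:.3f}")
```

The greedy pivoting tends to spread the selected points toward the endpoints, much like Fekete-type designs, which is what keeps the square interpolation system well-conditioned.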
Multiscale and High-Dimensional Problems
High-dimensional problems appear naturally in various scientific areas. Two primary examples are PDEs describing complex processes in computational chemistry and physics, and stochastic/parameter-dependent PDEs arising in uncertainty quantification and optimal control. Other highly visible examples are big data analysis, including regression and classification, which typically encounter high-dimensional data as input and/or output. High-dimensional problems cannot be solved by traditional numerical techniques because of the so-called curse of dimensionality. Rather, they require the development of novel theoretical and computational approaches to make them tractable and to capture fine resolutions and relevant features. Paradoxically, increasing computational power may even serve to heighten this demand, since the wealth of new computational data itself becomes a major obstruction. Extracting essential information from complex structures and developing rigorous models to quantify the quality of information in a high-dimensional setting constitute challenging tasks from both theoretical and numerical perspectives.
The last decade has seen the emergence of several new computational methodologies which address the obstacles to solving high-dimensional problems. These include adaptive methods based on mesh refinement or sparsity, random forests, model reduction, compressed sensing, sparse grid and hyperbolic wavelet approximations, and various new tensor structures. Their common feature is the nonlinearity of the solution method, which prioritizes variables and separates solution characteristics living on different scales. These methods have already drastically advanced the frontiers of computability for certain problem classes.
This workshop aimed to deepen the understanding of the underlying mathematical concepts that drive this new evolution of computational methods and to promote the exchange of ideas emerging in various disciplines about how to treat multiscale and high-dimensional problems.
06391 Abstracts Collection -- Algorithms and Complexity for Continuous Problems
From 24.09.06 to 29.09.06, the Dagstuhl Seminar 06391 "Algorithms and Complexity for Continuous Problems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
De Casteljau's algorithm in geometric data analysis: Theory and application
For decades, de Casteljau's algorithm has been used as a fundamental building block in curve and surface design and has found a wide range of applications in fields such as scientific computing and discrete geometry, to name but a few. With increasing interest in nonlinear data science, its constructive approach has been shown to provide a principled way to generalize parametric smooth curves to manifolds. These curves have found remarkable new applications in the analysis of parameter-dependent, geometric data. This article provides a survey of the recent theoretical developments in this exciting area as well as its applications in fields such as geometric morphometrics and longitudinal data analysis in medicine, archaeology, and meteorology.
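The constructive approach the abstract refers to is repeated linear interpolation between control points. A minimal Euclidean implementation looks as follows; the example curve and point names are illustrative.

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate the Bezier curve defined by the control points at t in [0, 1]
    by repeated linear interpolation (de Casteljau's algorithm)."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Replace each consecutive pair by its (1-t, t) convex combination
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Quadratic Bezier curve in the plane
P = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
mid = de_casteljau(P, 0.5)
print(mid)   # point on the curve at t = 0.5 -> [1. 1.]
```

The generalization to manifolds discussed in the survey keeps exactly this recursion but replaces each straight-line interpolation step by interpolation along a geodesic, which is what makes the construction applicable to nonlinear geometric data.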