
    MATHICSE Technical Report: Convergence estimates in probability and in expectation for discrete least squares with noisy evaluations at random points

    We study the accuracy of the discrete least-squares approximation on a finite-dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability measure. The convergence estimates are given in the mean-square sense with respect to the sampling measure. The noise may be correlated with the location of the evaluation and may have nonzero mean (offset). We consider both cases of bounded or square-integrable noise/offset. We prove conditions relating the number of sampling points to the dimension of the underlying approximation space that ensure a stable and accurate approximation. Particular focus is on deriving estimates in probability within a given confidence level. We analyze how the best approximation error and the noise terms affect the convergence rate and the overall confidence level achieved by the convergence estimate. The proofs of our convergence estimates in probability use arguments from the theory of large deviations to bound the noise term. Finally, we address the particular case of multivariate polynomial approximation spaces with any density in the beta family, including the uniform and Chebyshev densities.
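
    A minimal numerical sketch of the setting described above, assuming a one-dimensional target function, the uniform sampling measure on [-1, 1], a Legendre polynomial basis, and simple i.i.d. Gaussian noise (the paper also covers location-correlated noise with nonzero offset). The target f, noise level, and sample sizes are hypothetical choices, and the sketch does not reproduce the paper's stability conditions relating the number of samples to the space dimension.

    ```python
    # Sketch only, not the authors' code: discrete least squares on a
    # polynomial space from noisy evaluations at random points.
    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(0)

    def discrete_least_squares(f, n_basis, m_samples, noise_std=0.05):
        """Fit the first n_basis Legendre polynomials to noisy samples of f."""
        y = rng.uniform(-1.0, 1.0, m_samples)                     # random evaluation points
        obs = f(y) + noise_std * rng.standard_normal(m_samples)   # noisy pointwise data
        V = legendre.legvander(y, n_basis - 1)                    # design matrix V[i, j] = L_j(y_i)
        coef, *_ = np.linalg.lstsq(V, obs, rcond=None)            # least-squares fit
        return coef

    def mean_square_error(f, coef, n_mc=20000):
        """Monte Carlo estimate of the L^2 error w.r.t. the sampling measure."""
        y = rng.uniform(-1.0, 1.0, n_mc)
        return np.mean((f(y) - legendre.legval(y, coef)) ** 2)

    f = lambda y: np.exp(y) * np.sin(3 * y)                       # hypothetical target
    coef = discrete_least_squares(f, n_basis=8, m_samples=200)
    print("estimated mean-square error:", mean_square_error(f, coef))
    ```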

    MATHICSE Technical Report: Discrete least-squares approximations over optimized downward closed polynomial spaces in arbitrary dimension

    We analyze the accuracy of the discrete least-squares approximation of a function u in multivariate polynomial spaces $P_\Lambda := \mathrm{span}\{y \mapsto y^\nu \,|\, \nu \in \Lambda\}$ with $\Lambda \subset \mathbb{N}_0^d$ over the domain $\Gamma := [-1,1]^d$, based on the sampling of this function at points $y^1,\dots,y^m \in \Gamma$. The samples are independently drawn according to a given probability density $\rho$ belonging to the class of multivariate beta densities, which includes the uniform and Chebyshev densities as particular cases. Motivated by recent results on high-dimensional parametric and stochastic PDEs, we restrict our attention to polynomial spaces associated with downward closed sets $\Lambda$ of prescribed cardinality n, and we optimize the choice of the space for the given sample. This implies in particular that the selected polynomial space depends on the sample. We are interested in comparing the error of this least-squares approximation, measured in $L^2(\Gamma,\rho)$, with the best achievable polynomial approximation error when using downward closed sets of cardinality n. We establish conditions between the dimension n and the size m of the sample under which these two errors are proven to be comparable. Our main finding is that the dimension d enters only moderately in the resulting trade-off between m and n, in terms of a logarithmic factor $\ln(d)$, and is even absent when the optimization is restricted to a relevant subclass of downward closed sets, named anchored sets. In principle, this allows one to use these methods in arbitrarily high or even infinite dimension. Our analysis builds upon [3], which considered fixed and non-optimized downward closed multi-index sets. Potential applications of the proposed results are found in the development and analysis of efficient numerical methods for computing the solution of high-dimensional parametric or stochastic PDEs, but are not limited to this area.
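
    An illustrative sketch of the objects appearing in this abstract, under simplifying assumptions: a fixed total-degree index set (which is downward closed) instead of the sample-optimized sets studied in the paper, the uniform density on $[-1,1]^d$, and a plain monomial basis $y \mapsto y^\nu$. The target function u and the parameter values d, k, m are hypothetical.

    ```python
    # Sketch: downward closed multi-index sets and the associated
    # multivariate polynomial least-squares problem on [-1, 1]^d.
    import itertools
    import numpy as np

    def is_downward_closed(Lambda):
        """Downward closed: with every nu, the set contains all componentwise
        predecessors of nu (checking nu - e_i for each coordinate suffices)."""
        index_set = set(map(tuple, Lambda))
        for nu in index_set:
            for i, v in enumerate(nu):
                if v > 0:
                    mu = list(nu)
                    mu[i] -= 1
                    if tuple(mu) not in index_set:
                        return False
        return True

    def total_degree_set(d, k):
        """Example downward closed set: all nu in N_0^d with |nu|_1 <= k."""
        return [nu for nu in itertools.product(range(k + 1), repeat=d)
                if sum(nu) <= k]

    def design_matrix(points, Lambda):
        """Monomial design matrix V[i, j] = prod_l points[i, l] ** Lambda[j][l]."""
        return np.stack([np.prod(points ** np.asarray(nu), axis=1) for nu in Lambda],
                        axis=1)

    rng = np.random.default_rng(1)
    d, k, m = 3, 3, 400                                  # hypothetical sizes
    Lambda = total_degree_set(d, k)
    assert is_downward_closed(Lambda)

    u = lambda y: np.exp(-np.sum(y ** 2, axis=1))        # example target function
    pts = rng.uniform(-1.0, 1.0, (m, d))                 # i.i.d. uniform samples
    V = design_matrix(pts, Lambda)
    coef, *_ = np.linalg.lstsq(V, u(pts), rcond=None)    # least-squares coefficients
    print("n =", len(Lambda), "m =", m)
    ```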

    Error analysis of regularized and unregularized least-squares regression on discretized function spaces

    In this thesis, we analyze a variant of the least-squares regression method which operates on subsets of finite-dimensional vector spaces. In the first part, we focus on a regression problem which is constrained to a ball of finite radius in the search space. We derive an upper bound on the overall error by coupling the ball radius to the resolution of the search space. In the second part, the corresponding penalized Lagrangian dual problem is considered to establish probabilistic results on the well-posedness of the underlying minimization problem. Furthermore, we consider the limit case in which the penalty term vanishes, and we improve on the error estimates from the first part for the special case of noiseless function reconstruction. Subsequently, our theoretical foundation is used to obtain novel convergence results for regression algorithms based on sparse grids with linear splines and Fourier polynomial spaces on hyperbolic crosses. We conclude the thesis by giving several numerical examples and comparing the observed error behavior to our theoretical results.
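
    A rough sketch of penalized least-squares regression on a discretized function space, assuming a one-dimensional linear-spline (hat function) basis on a uniform grid and a ridge-type penalty; it is not the thesis implementation and does not include the sparse-grid or hyperbolic-cross constructions mentioned above. The target function, grid resolution, and penalty values are hypothetical.

    ```python
    # Sketch: regularized least-squares regression on a finite-dimensional
    # space of linear splines, including the vanishing-penalty limit.
    import numpy as np

    def hat_basis(x, knots):
        """Evaluate piecewise-linear hat functions centered at the knots."""
        h = knots[1] - knots[0]
        return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)

    def penalized_fit(x, y, knots, lam):
        """Solve min_c ||B c - y||^2 + lam * ||c||^2 via an augmented system."""
        B = hat_basis(x, knots)
        K = len(knots)
        A = np.vstack([B, np.sqrt(lam) * np.eye(K)])
        rhs = np.concatenate([y, np.zeros(K)])
        c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return c

    rng = np.random.default_rng(2)
    f = lambda x: np.sin(2 * np.pi * x)                  # hypothetical target
    x = rng.uniform(0.0, 1.0, 300)
    y = f(x) + 0.1 * rng.standard_normal(x.size)         # noisy observations

    knots = np.linspace(0.0, 1.0, 17)                    # resolution of the search space
    x_test = np.linspace(0.0, 1.0, 1000)
    for lam in (1e-1, 1e-3, 0.0):                        # lam -> 0: unregularized limit
        c = penalized_fit(x, y, knots, lam)
        err = np.mean((hat_basis(x_test, knots) @ c - f(x_test)) ** 2)
        print(f"lambda={lam:g}  mean-square error={err:.4f}")
    ```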