
    Approximation in certain intermediate spaces

    A theorem of Bojanic gives a precise estimate on the rate of convergence of the Fourier series of a function of bounded variation. While the method of K-functionals is not directly applicable to obtain similar estimates for functions in classes intermediate to BV[−1, 1] and C[−1, 1], we obtain such an estimate in the case of a general class of operators. The result is given in terms of an expression which, for continuous functions, is equivalent to the K-functional. As particular cases, we study the expansions in certain (general) orthogonal polynomials, Lagrange interpolation at the zeros of (general) orthogonal polynomials, and Hermite–Fejér interpolation at the zeros of generalized Jacobi polynomials. When applicable, our result (essentially) includes the previously known results, while many corollaries are new.
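    For orientation, the K-functional for the pair (C[−1, 1], BV[−1, 1]) alluded to above can be written in the standard Peetre form; this is the textbook formulation, not a quotation from the paper:

```latex
% Peetre-type K-functional for the pair (C[-1,1], BV[-1,1]); standard form,
% with V_{-1}^{1}(g) denoting the total variation of g on [-1,1].
K(f, t) = \inf_{g \in BV[-1,1]} \left\{ \| f - g \|_{C[-1,1]} + t \, V_{-1}^{1}(g) \right\},
\qquad t > 0.
```

    The abstract's point is that an expression equivalent to K(f, t) for continuous f remains meaningful for functions that are merely of bounded variation.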

    Diffusion polynomial frames on metric measure spaces

    We construct a multiscale tight frame based on an arbitrary orthonormal basis for the L2 space of an arbitrary sigma-finite measure space. The approximation properties of the resulting multiscale frame are studied in the context of Besov approximation spaces, which are characterized both in terms of suitable K-functionals and in terms of the frame transforms. The only major condition required is the uniform boundedness of a summability operator. We give sufficient conditions for this to hold in the context of a very general class of metric measure spaces. The theory is illustrated using the approximation of characteristic functions of caps on a dumbbell manifold, and applied to the problem of recognition of handwritten digits. Our method outperforms comparable methods for semi-supervised learning.
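    To make the construction concrete, here is a minimal finite-dimensional sketch of the Littlewood–Paley-type splitting behind such multiscale frames. The filter and all names below are this sketch's own assumptions, not the paper's notation: coefficients of f in the orthonormal basis are split into dyadic bands according to a "frequency" attached to each basis element.

```python
import numpy as np

def lowpass(t):
    """A smooth cutoff h: h = 1 on [0, 1/2], h = 0 on [1, inf).
    Any such filter works; this particular bump is just one choice."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    out[t <= 0.5] = 1.0
    mid = (t > 0.5) & (t < 1.0)
    s = 2.0 * (t[mid] - 0.5)                 # rescale (1/2, 1) to (0, 1)
    out[mid] = np.exp(-s**2 / (1.0 - s**2))  # smooth decay from 1 to 0
    return out

def multiscale_bands(coeffs, freqs, n_levels):
    """Split basis coefficients <f, phi_k> into dyadic frequency bands.

    coeffs : inner products <f, phi_k> with an orthonormal basis {phi_k}
    freqs  : nonnegative 'frequencies' lambda_k for each basis element
             (e.g., square roots of Laplacian eigenvalues on a manifold)
    Returns a list of banded coefficient vectors; their sum equals the
    low-pass projection at the finest scale, mimicking a Littlewood-Paley
    decomposition.
    """
    bands, prev = [], np.zeros_like(coeffs)
    for n in range(n_levels):
        cur = lowpass(freqs / 2.0**n) * coeffs
        bands.append(cur - prev)
        prev = cur
    return bands
```

    Local smoothness of f is then read off from the decay of the banded terms across scales, which is the mechanism the Besov-space characterization formalizes.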

    Polyharmonic approximation on the sphere

    The purpose of this article is to provide new error estimates for a popular type of SBF approximation on the sphere: approximating by linear combinations of Green's functions of polyharmonic differential operators. We show that the L_p approximation order for this kind of approximation is σ for functions having L_p smoothness σ (for σ up to the order of the underlying differential operator, just as in univariate spline theory). This is an improvement over previous error estimates, which penalized the approximation order when measuring error in L_p, p > 2, and held only in a restrictive setting when measuring error in L_p, p < 2. Comment: 16 pages; revised version; to appear in Constr. Approx.
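    In rough schematic form, the claimed estimate reads as follows; this is a paraphrase with assumed notation (2m for the order of the underlying differential operator, h for the mesh norm of the centers), not the paper's statement verbatim:

```latex
% Schematic: sigma-order L_p error for polyharmonic SBF approximation,
% where V_h is the span of the translates of the Green's function
% at centers with mesh norm h (notation assumed for illustration).
\operatorname{dist}_{L_p(\mathbb{S}^d)}(f, V_h) \le C \, h^{\sigma} \, \| f \|_{W_p^{\sigma}(\mathbb{S}^d)},
\qquad 0 < \sigma \le 2m.
```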

    Asymptotics and zeros of Sobolev orthogonal polynomials on unbounded supports

    In this paper we present a survey of analytic properties of polynomials orthogonal with respect to a weighted Sobolev inner product in which the vector of measures has unbounded support. In particular, we focus on the asymptotic behaviour of such polynomials as well as on the distribution of their zeros. Some open problems and some new directions for future research are formulated. Comment: Changed content; 34 pages, 41 references.
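    A typical inner product of the kind surveyed pairs a vector of measures (μ_0, μ_1), at least one of which has unbounded support; the normalization below is illustrative:

```latex
% A weighted Sobolev inner product with a vector of measures (mu_0, mu_1)
\langle f, g \rangle_S = \int_{\mathbb{R}} f(x) \, g(x) \, d\mu_0(x)
+ \lambda \int_{\mathbb{R}} f'(x) \, g'(x) \, d\mu_1(x), \qquad \lambda > 0.
```

    Because the inner product involves derivatives, multiplication by x is no longer a symmetric operator, which is what makes the asymptotics and zero distribution genuinely different from the classical orthogonal case.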

    Local Approximation Using Hermite Functions

    We develop a wavelet-like representation of functions in Lp(R) based on their Fourier–Hermite coefficients; i.e., we describe an expansion of such functions in which the local behavior of the terms completely characterizes the local smoothness of the target function. In the case of continuous functions, a similar expansion is given based on the values of the functions at arbitrary points on the real line. In the process, we give new proofs for the localization of certain kernels, as well as for some very classical estimates such as the Markov–Bernstein inequality.
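    As a small illustration of the starting point (not the paper's construction itself), the Fourier–Hermite coefficients of a function in L2(R) can be computed stably via the orthonormal Hermite-function recurrence and Gauss–Hermite quadrature; all names here are this sketch's own:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def hermite_functions(n_max, x):
    """Orthonormal Hermite functions h_0, ..., h_{n_max-1} at the points x,
    via the stable three-term recurrence (avoids factorial overflow)."""
    x = np.asarray(x, dtype=float)
    h = np.zeros((n_max, x.size))
    h[0] = np.pi**-0.25 * np.exp(-0.5 * x**2)
    if n_max > 1:
        h[1] = np.sqrt(2.0) * x * h[0]
    for k in range(2, n_max):
        h[k] = np.sqrt(2.0 / k) * x * h[k - 1] - np.sqrt((k - 1) / k) * h[k - 2]
    return h

def fourier_hermite_coeffs(f, n_max, quad_points=200):
    """Coefficients c_k = <f, h_k> in L2(R), via Gauss-Hermite quadrature.
    A sketch: adequate for f of moderate growth and moderate n_max."""
    x, w = hermgauss(quad_points)   # nodes/weights for the weight e^{-x^2}
    h = hermite_functions(n_max, x)
    # int f(x) h_k(x) dx = int e^{-x^2} [f(x) h_k(x) e^{x^2}] dx
    return h @ (w * np.exp(x**2) * f(x))
```

    For example, fourier_hermite_coeffs(lambda x: np.exp(-x**2), 32) returns the first 32 coefficients; a wavelet-like representation then reorganizes such coefficients so that their local size reflects local smoothness.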

    The rate of convergence of expansions in Freud polynomials

    For a function f of bounded variation on compact intervals, satisfying certain growth conditions, we estimate the rate of convergence of its expansion in a series of polynomials orthogonal on the whole real axis with respect to a weight function now known as a Freud weight. The case where f has higher-order derivatives of bounded variation is also studied. The principal techniques include the finite-infinite range inequalities due to the author and Saff, and Freud's theorems on one-sided weighted L1-approximation. Our theorem holds, in particular, when the weight function is exp(−x^m), with m a positive even integer.
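    Schematically, with w_m the Freud weight and {p_k} the corresponding orthonormal polynomials, the expansion in question is (notation illustrative, not the paper's):

```latex
% Freud weight and the orthonormal expansion whose convergence rate is estimated
w_m(x) = e^{-x^m}, \qquad
\int_{\mathbb{R}} p_j \, p_k \, w_m \, dx = \delta_{jk}, \qquad
f \sim \sum_{k=0}^{\infty} a_k p_k, \quad
a_k = \int_{\mathbb{R}} f \, p_k \, w_m \, dx.
```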

    On the tractability of multivariate integration and approximation by neural networks

    Let q ⩾ 1 be an integer, Q a Borel subset of the Euclidean space R^q, μ a probability measure on Q, and F a class of real-valued, μ-integrable functions on Q. The complexity problem of approximating ∫ f dμ using quasi-Monte Carlo methods is to estimate

        E_n(F, μ) := inf_{x_1, …, x_n ∈ Q}  sup_{f ∈ F}  | ∫_Q f dμ − (1/n) Σ_{k=1}^{n} f(x_k) |.

    The problem is said to be tractable if there exist constants c, α, β, independent of q (but possibly dependent on μ and F), such that E_n(F, μ) ⩽ c q^α n^{−β}. We explore different regions (including manifolds), function classes, and measures for which this problem is tractable. Our results include tractability theorems for integration with respect to non-tensor-product measures, and over unbounded and/or non-tensor-product subsets, including the unit spheres of R^q with respect to various norms. We discuss applications to the approximation capabilities of neural and radial basis function networks.
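    The quantity inside the definition is easy to probe numerically. The sketch below (all names and the test function are this example's own) evaluates the equal-weight quadrature error for the uniform measure on the unit sphere of R^q; random points give the Monte Carlo rate n^{−1/2}, whereas the paper's tractability bounds concern well-chosen point sets and worst-case error over F.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_sphere(n, q):
    """n points distributed uniformly on the unit sphere of R^q."""
    x = rng.standard_normal((n, q))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def quadrature_error(f, points, true_integral):
    """| int f dmu - (1/n) sum_k f(x_k) | : the inner quantity in E_n(F, mu)."""
    return abs(true_integral - f(points).mean())

# Example: f(x) = x_1^2 integrates to 1/q against uniform measure on the sphere.
q = 10
f = lambda p: p[:, 0] ** 2
for n in (100, 1_000, 10_000):
    print(n, quadrature_error(f, uniform_sphere(n, q), 1.0 / q))
```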

    Weighted polynomial approximation of entire functions, II

    Necessary and sufficient conditions are given for a function f defined almost everywhere on the real line to have an extension to the complex plane as an entire function of specified order and finite type. These conditions are in terms of the degree of approximation of f by polynomials in weighted Lp norms.

    On a build-up polynomial frame for the detection of singularities


    An analysis of training and generalization errors in shallow and deep networks

    This paper is motivated by an open problem around deep networks, namely, the apparent absence of overfitting despite large over-parametrization, which allows perfect fitting of the training data. We analyze this phenomenon in the case of regression problems, when each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is inappropriate for measuring the generalization error in the approximation of compositional functions if one is to take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of maximum loss, and sometimes as a pointwise error. We give estimates on exactly how many parameters ensure both zero training error and a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and we estimate how much error to expect at given test data. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
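    The following toy experiment (entirely this sketch's own setup, not the paper's construction) shows the two error notions side by side for an over-parametrized shallow network with a periodic activation: the square loss on the training set is driven essentially to zero, while the generalization error is read off in the uniform (sup) norm on a fine grid.

```python
import numpy as np

rng = np.random.default_rng(1)

def net(x, W, b, a):
    """Shallow network with periodic activation: sum_j a_j sin(W_j x + b_j)."""
    return np.sin(np.outer(x, W) + b) @ a

target = lambda x: np.sin(2 * x) + 0.5 * np.cos(5 * x)
x_train = rng.uniform(-np.pi, np.pi, 40)
y_train = target(x_train)

# Over-parametrize: more hidden units than training points; fit the outer
# weights by ridge-regularized least squares (a generic stand-in for the
# regularization problem mentioned in the abstract).
width = 60
W = rng.uniform(-6.0, 6.0, width)
b = rng.uniform(-np.pi, np.pi, width)
F = np.sin(np.outer(x_train, W) + b)
lam = 1e-8
a = np.linalg.solve(F.T @ F + lam * np.eye(width), F.T @ y_train)

# Training error (square loss) vs. generalization error in the sup norm.
x_test = np.linspace(-np.pi, np.pi, 2000)
train_mse = np.mean((net(x_train, W, b, a) - y_train) ** 2)
sup_error = np.max(np.abs(net(x_test, W, b, a) - target(x_test)))
print(f"train MSE = {train_mse:.2e}, sup-norm error = {sup_error:.2e}")
```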