127 research outputs found
Approximation by Rational Functions
Making use of the Hardy-Littlewood maximal function, we give a new proof of the following theorem of Pekarski: if f' is in L log L on a finite interval, then f can be approximated in the uniform norm by rational functions of degree n to an error O(1/n) on that interval.
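In symbols, the cited theorem has the following shape (a sketch in standard rational-approximation notation; the interval [a,b] and the constant C are generic, not taken from the article):

```latex
R_n(f)_{C[a,b]} := \inf_{r \in \mathcal{R}_n} \| f - r \|_{C[a,b]}
\;\le\; \frac{C}{n}\, \| f' \|_{L\log L[a,b]}, \qquad n \ge 1,
```

where \(\mathcal{R}_n\) denotes the rational functions of degree at most n.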
Adaptive Finite Element Methods for Elliptic Problems with Discontinuous Coefficients
Elliptic partial differential equations (PDEs) with discontinuous diffusion
coefficients occur in application domains such as diffusions through porous
media, electro-magnetic field propagation on heterogeneous media, and diffusion
processes on rough surfaces. The standard approach to numerically treating such
problems using finite element methods is to assume that the discontinuities lie
on the boundaries of the cells in the initial triangulation. However, this does
not match applications where discontinuities occur on curves, surfaces, or
manifolds, and could even be unknown beforehand. One of the obstacles to
treating such discontinuity problems is that the usual perturbation theory for
elliptic PDEs assumes bounds for the distortion of the coefficients in the
L∞ norm, and this in turn requires that the discontinuities are matched
exactly when the coefficients are approximated. We present a new approach based
on distortion of the coefficients in an L_q norm with q < ∞, which
therefore does not require the exact matching of the discontinuities. We then
use this new distortion theory to formulate new adaptive finite element methods
(AFEMs) for such discontinuity problems. We show that such AFEMs are optimal in
the sense of distortion versus number of computations, and report insightful
numerical results supporting our analysis. Comment: 24 pages
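The advantage of measuring coefficient distortion in an integral norm can be illustrated with a toy computation (my own sketch, not code from the paper; all names are hypothetical): greedy bisection drives the L_1 error of a piecewise-constant surrogate for a discontinuous coefficient to zero, even though no mesh point ever coincides with the jump.

```python
import numpy as np

def coeff(x):
    # Hypothetical discontinuous diffusion coefficient with a jump at an
    # irrational point, so no bisection mesh ever matches it exactly.
    return np.where(x < np.sqrt(2) / 2, 1.0, 10.0)

def lq_error(a, b, q=1, n=200):
    # L_q distortion on [a, b] between coeff and the constant given by its
    # midpoint value, approximated by a Riemann sum.
    x = np.linspace(a, b, n)
    c = coeff(np.array([(a + b) / 2]))[0]
    return ((b - a) * np.mean(np.abs(coeff(x) - c) ** q)) ** (1 / q)

def adapt(tol=1e-2, max_cells=10**4):
    # Greedy refinement: repeatedly bisect the cell with the largest
    # L_1 distortion until every cell is below tolerance.
    cells = [(0.0, 1.0)]
    while len(cells) < max_cells:
        errs = [lq_error(a, b) for a, b in cells]
        i = int(np.argmax(errs))
        if errs[i] < tol:
            break
        a, b = cells.pop(i)
        m = (a + b) / 2
        cells += [(a, m), (m, b)]
    return cells

mesh = adapt()
print(len(mesh))  # only the cells straddling the jump ever get refined
```

Only the single cell containing the discontinuity carries any L_1 distortion, so the error is driven below tolerance with a handful of refinements; in an L∞ framework the error on that cell would never decrease.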
The Averaging Lemma
Averaging lemmas deduce smoothness of velocity averages, such as f̄(x) := ∫_Ω f(x, v) dv, Ω ⊂ R^d, from properties of f. A canonical example is that f̄ is in the Sobolev space W^{1/2}(L_2(R^d)) whenever f and g(x, v) := v · ∇_x f(x, v) are in L_2(R^d × Ω). The present paper shows how techniques from Harmonic Analysis, such as maximal functions, wavelet decompositions, and interpolation, can be used to prove L_p versions of the averaging lemma. For example, it is shown that f, g ∈ L_p(R^d × Ω) implies that f̄ is in the Besov space B^s_p(L_p(R^d)), s := min(1/p, 1/p'). Examples are constructed using wavelet decompositions to show that these averaging lemmas are sharp. A deeper analysis of the averaging lemma is made near the endpoint p = 1. AMS subject classification: 35L60, 35L65, 35B65, 46B70, 46B45, 42B25. Key Words: averaging lemma, regularity, transport equations, Besov spaces
Degree of Adaptive Approximation
We obtain various estimates for the error in adaptive approximation and also establish a relationship between adaptive approximation and free-knot spline approximation.
Direct and Inverse Results on Bounded Domains for Meshless Methods via Localized Bases on Manifolds
This article develops direct and inverse estimates for certain finite
dimensional spaces arising in kernel approximation. Both the direct and inverse
estimates are based on approximation spaces spanned by local Lagrange functions
which are spatially highly localized. The construction of such functions is
computationally efficient and generalizes the construction given by the authors
computationally efficient and generalizes the construction given by the authors
for restricted surface splines on R^d. The kernels for which the
theory applies include the Sobolev-Matérn kernels for closed, compact,
connected, Riemannian manifolds. Comment: 29 pages. To appear in Festschrift for the 80th Birthday of Ian Sloan
Interpolation of Besov-Spaces
We investigate Besov spaces and their connection with dyadic spline approximation in L_p(Ω), 0 < p ≤ ∞. Our main results are: the determination of the interpolation spaces between a pair of Besov spaces; an atomic decomposition for functions in a Besov space; the characterization of the class of functions which have a certain prescribed degree of approximation by dyadic splines.
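The first of these results has the following well-known shape (recalled here from the standard literature on real interpolation of Besov spaces, not quoted from the article): for 0 < θ < 1, s_0 ≠ s_1, and any q_0, q_1, q,

```latex
\bigl( B^{s_0}_{q_0}(L_p(\Omega)),\; B^{s_1}_{q_1}(L_p(\Omega)) \bigr)_{\theta, q}
= B^{s}_{q}(L_p(\Omega)), \qquad s = (1-\theta)\, s_0 + \theta\, s_1 .
```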
Approximation and learning by greedy algorithms
We consider the problem of approximating a given element from a Hilbert
space by means of greedy algorithms and the application of such
procedures to the regression problem in statistical learning theory. We improve
on the existing theory of convergence rates for both the orthogonal greedy
algorithm and the relaxed greedy algorithm, as well as for the forward stepwise
projection algorithm. For all these algorithms, we prove convergence results
for a variety of function classes and not simply those that are related to the
convex hull of the dictionary. We then show how these bounds for convergence
rates lead to a new theory for the performance of greedy algorithms in
learning. In particular, we build upon the results in [IEEE Trans. Inform.
Theory 42 (1996) 2118--2132] to construct learning algorithms based on greedy
approximations which are universally consistent and provide provable
convergence rates for large classes of functions. The use of greedy algorithms
in the context of learning is very appealing since it greatly reduces the
computational burden when compared with standard model selection using general
dictionaries. Comment: Published at http://dx.doi.org/10.1214/009053607000000631 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
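A minimal sketch of the orthogonal greedy algorithm discussed above (my own illustrative implementation, not code from the paper; the dictionary, target, and helper names are hypothetical):

```python
import numpy as np

def orthogonal_greedy(f, dictionary, steps=10):
    # Orthogonal greedy algorithm (orthogonal matching pursuit): at each
    # step select the dictionary element most correlated with the current
    # residual, then recompute the best approximation to f from the span
    # of all elements selected so far.
    chosen, residual = [], f.copy()
    for _ in range(steps):
        if np.linalg.norm(residual) < 1e-12:
            break  # f is already (numerically) in the selected span
        k = int(np.argmax(np.abs(dictionary.T @ residual)))
        if k not in chosen:
            chosen.append(k)
        G = dictionary[:, chosen]
        coef, *_ = np.linalg.lstsq(G, f, rcond=None)
        residual = f - G @ coef
    return chosen, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 50))
D /= np.linalg.norm(D, axis=0)      # unit-norm dictionary columns
f = D[:, 3] + 0.5 * D[:, 17]        # target with a sparse representation
idx, res = orthogonal_greedy(f, D, steps=5)
print(sorted(idx), np.linalg.norm(res))
```

The re-projection step is what distinguishes the orthogonal variant from the relaxed greedy algorithm, which instead mixes the new element into the running approximation with a scalar weight.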
Error-bounds for Gaussian Quadrature and Weighted-L1 Polynomial Approximation
Error bounds for Gaussian quadrature are given in terms of the number of quadrature points and smoothness properties of the function whose integral is being approximated. An intermediate step involves a weighted-L1 polynomial approximation problem, which is treated in a more general context than that specifically required to bound the Gaussian quadrature error.
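The error behaviour described above can be observed numerically (a generic smooth-integrand experiment of my own, not the bounds from the paper):

```python
import numpy as np

def gauss_legendre(f, n):
    # n-point Gauss-Legendre rule on [-1, 1], exact for degree 2n - 1
    x, w = np.polynomial.legendre.leggauss(n)
    return w @ f(x)

# Integrate a smooth function and watch the error shrink rapidly
# as the number of quadrature points n grows.
f = lambda x: np.exp(x)
exact = np.e - 1 / np.e            # integral of e^x over [-1, 1]
errors = [abs(gauss_legendre(f, n) - exact) for n in (2, 4, 8)]
print(errors)
```

For analytic integrands the decay is much faster than any fixed power of 1/n; the abstract's point is that the bounds degrade gracefully under weaker smoothness assumptions.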