On the constrained mock-Chebyshev least-squares
Algebraic polynomial interpolation on uniformly distributed nodes is affected by the Runge phenomenon, even when the function to be interpolated is analytic. Among the techniques proposed to defeat this phenomenon is mock-Chebyshev interpolation, an interpolation performed on a subset of the given nodes whose elements mimic as well as possible the Chebyshev-Lobatto points. In this work we use simultaneous approximation theory to combine this technique with a polynomial regression in order to increase the accuracy of the approximation of a given analytic function. We give indications on how to select the degree of the simultaneous regression so as to obtain a polynomial approximant that is accurate in the uniform norm, and we provide a sufficient condition under which a simultaneous regression improves, in that norm, the accuracy of the mock-Chebyshev interpolation. Numerical results are provided.
Comment: 17 pages, 9 figures
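The core of the mock-Chebyshev idea is easy to sketch: from a large equispaced grid, keep only the nodes nearest to the Chebyshev-Lobatto points. The following minimal numpy illustration is an assumption-laden sketch, not the authors' exact construction; the nearest-node selection rule and the grid sizes are illustrative choices.

```python
import numpy as np

def mock_chebyshev_subset(n_uniform, m):
    """From n_uniform equispaced nodes on [-1, 1], select the subset that
    best mimics the m + 1 Chebyshev-Lobatto points cos(k*pi/m), k = 0..m,
    by taking, for each Chebyshev-Lobatto point, the nearest uniform node."""
    uniform = np.linspace(-1.0, 1.0, n_uniform)
    cl = np.cos(np.pi * np.arange(m + 1) / m)   # Chebyshev-Lobatto points
    # nearest uniform node to each Chebyshev-Lobatto point (duplicates merged)
    idx = np.unique([np.argmin(np.abs(uniform - p)) for p in cl])
    return uniform[idx]

# a grid fine enough that every Chebyshev-Lobatto point gets its own node
nodes = mock_chebyshev_subset(101, 10)
```

In practice the uniform grid must be fine enough relative to the degree (the literature suggests roughly n proportional to m^2) so that distinct Chebyshev-Lobatto points map to distinct uniform nodes.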
Polynomial approximation of derivatives by the constrained mock-Chebyshev least squares operator
The constrained mock-Chebyshev least squares operator is a linear
approximation operator based on an equispaced grid of points. Like other
polynomial or rational approximation methods, it was recently introduced in
order to defeat the Runge phenomenon that occurs when using polynomial
interpolation on large sets of equally spaced points. The idea is to improve
the mock-Chebyshev subset interpolation, where the considered function is
interpolated only on a proper subset of the uniform grid, formed by nodes that
mimic the behavior of Chebyshev--Lobatto nodes. In the mock-Chebyshev subset
interpolation all remaining nodes are discarded, while in the constrained
mock-Chebyshev least squares interpolation they are used in a simultaneous
regression, with the aim of further improving the accuracy of the approximation
provided by the mock-Chebyshev subset interpolation. The goal of this paper is
two-fold. We discuss some theoretical aspects of the constrained mock-Chebyshev
least squares operator and present new results. In particular, we introduce
explicit representations of the error and its derivatives. Moreover, for a sufficiently smooth function, we present a method for approximating its successive derivatives at a given point, based on the constrained mock-Chebyshev least squares operator, and provide estimates for these approximations. Numerical tests demonstrate the effectiveness of the proposed method.
Comment: 17 pages, 23 figures
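The "interpolate on the mock-Chebyshev subset, regress on the rest" idea can be phrased as an equality-constrained least-squares problem. The sketch below is not the authors' operator: it is a generic illustration that enforces exact interpolation at a chosen index subset and least-squares fitting elsewhere, solved through the standard KKT (Lagrange multiplier) linear system; all names and sizes are illustrative.

```python
import numpy as np

def constrained_ls_poly(x_all, y_all, interp_idx, degree):
    """Degree-`degree` polynomial that interpolates (x, y) exactly at the
    indices in interp_idx and fits the remaining points in least squares.
    Solved via the KKT system  [2 A^T A, B^T; B, 0] [c; lam] = [2 A^T b; z]."""
    mask = np.zeros(len(x_all), dtype=bool)
    mask[interp_idx] = True
    V = np.vander(x_all, degree + 1)        # Vandermonde basis, high power first
    A, b = V[~mask], y_all[~mask]           # least-squares rows
    B, z = V[mask], y_all[mask]             # interpolation constraints
    k = len(z)
    kkt = np.block([[2.0 * A.T @ A, B.T],
                    [B, np.zeros((k, k))]])
    rhs = np.concatenate([2.0 * A.T @ b, z])
    return np.linalg.solve(kkt, rhs)[:degree + 1]   # polynomial coefficients

# sanity check: data sampled from a cubic is reproduced exactly
x = np.linspace(-1.0, 1.0, 21)
y = 1.0 - 2.0 * x + 3.0 * x**3
c = constrained_ls_poly(x, y, [0, 5, 10, 15, 20], 6)
```

The KKT system is nonsingular here because the constraint rows are linearly independent (fewer constraints than coefficients, distinct nodes) and the least-squares block is positive definite.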
Product integration rules by the constrained mock-Chebyshev least squares operator
In this paper we consider the problem of approximating definite integrals on finite intervals for integrand functions exhibiting some kind of "pathological" behavior, e.g. "nearly" singular functions, highly oscillating functions, weakly singular functions, etc. In particular, we introduce and study a product rule based on equally spaced nodes and on the constrained mock-Chebyshev least squares operator. Like other polynomial or rational approximation methods, this operator was recently introduced in order to defeat the Runge phenomenon that occurs when using polynomial interpolation on large sets of equally spaced points. Unlike methods based on piecewise approximating functions, mainly used in the case of equally spaced nodes, our product rule offers high efficiency, with performance only slightly below that of global methods based on orthogonal polynomials in the same spaces of functions. We study the convergence of the product rule and provide error estimates in subspaces of continuous functions. We test the effectiveness of the formula by means of several examples, which confirm the theoretical estimates.
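The mechanism behind such a rule is: replace the integrand by a polynomial approximant built on the equispaced data, then integrate that polynomial exactly. The sketch below is a simplified stand-in that uses a plain least-squares polynomial instead of the constrained mock-Chebyshev operator, and no weight function; the node count and degree are arbitrary illustrative choices.

```python
import numpy as np

def quadrature_from_ls_poly(f, n_nodes=41, degree=12):
    """Approximate the integral of f over [-1, 1] by exactly integrating a
    least-squares polynomial fitted to f on equispaced nodes (a simplified
    stand-in for the constrained mock-Chebyshev least squares operator)."""
    x = np.linspace(-1.0, 1.0, n_nodes)
    # coefficients low-to-high in numpy's polynomial convention
    coeffs = np.polynomial.polynomial.polyfit(x, f(x), degree)
    antideriv = np.polynomial.polynomial.polyint(coeffs)   # exact antiderivative
    P = np.polynomial.polynomial.polyval
    return P(1.0, antideriv) - P(-1.0, antideriv)

approx = quadrature_from_ls_poly(np.exp)
exact = np.e - 1.0 / np.e    # integral of exp over [-1, 1]
```

For a weighted ("product") rule one would instead integrate the approximant against the pathological factor, typically via precomputed modified moments of the basis.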
On the numerical stability of Fourier extensions
An effective means to approximate an analytic, nonperiodic function on a
bounded interval is by using a Fourier series on a larger domain. When
constructed appropriately, this so-called Fourier extension is known to
converge geometrically fast in the truncation parameter. Unfortunately,
computing a Fourier extension requires solving an ill-conditioned linear
system, and hence one might expect such rapid convergence to be destroyed when
carrying out computations in finite precision. The purpose of this paper is to
show that this is not the case. Specifically, we show that Fourier extensions
are actually numerically stable when implemented in finite arithmetic, and
achieve a convergence rate that is at least superalgebraic. Thus, in this
instance, ill-conditioning of the linear system does not prohibit a good
approximation.
In the second part of this paper we consider the issue of computing Fourier
extensions from equispaced data. A result of Platte, Trefethen & Kuijlaars
states that no method for this problem can be both numerically stable and
exponentially convergent. We explain how Fourier extensions relate to this
theoretical barrier, and demonstrate that they are particularly well suited for
this problem: namely, they obtain at least superalgebraic convergence in a
numerically stable manner.
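The computation the paper analyzes can be sketched in a few lines: collocate a Fourier basis with an extended period on samples of the target interval only, and solve the resulting ill-conditioned least-squares system with a truncated SVD. This is an illustrative numpy sketch under assumed parameters (extension factor T = 2, truncation threshold 1e-12), not the paper's exact algorithm.

```python
import numpy as np

def fourier_extension(f, n=20, T=2.0, m_pts=200):
    """Fourier extension: approximate f on [-1, 1] by a Fourier series with
    period 2T > 2, fitting the coefficients by least squares on samples of
    [-1, 1] only. The system is very ill-conditioned; lstsq's rcond applies
    a truncated-SVD regularisation that keeps the computation stable."""
    x = np.linspace(-1.0, 1.0, m_pts)            # samples on the target interval
    k = np.arange(-n, n + 1)
    A = np.exp(1j * np.pi * np.outer(x, k) / T)  # Fourier basis on [-T, T]
    coef, *_ = np.linalg.lstsq(A, f(x).astype(complex), rcond=1e-12)
    def g(t):
        t = np.atleast_1d(np.asarray(t, dtype=float))
        return (np.exp(1j * np.pi * np.outer(t, k) / T) @ coef).real
    return g

g = fourier_extension(np.exp)
t = np.linspace(-1.0, 1.0, 101)
err = np.max(np.abs(g(t) - np.exp(t)))
```

Despite a condition number far beyond machine precision, the truncated-SVD solution reproduces the analytic function to high accuracy on the target interval, which is exactly the stability phenomenon the paper explains.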
Stability inequalities for Lebesgue constants via Markov-like inequalities
We prove that L^infty-norming sets for finite-dimensional multivariate function spaces on compact sets are stable under small perturbations. This implies stability of interpolation operator norms (Lebesgue constants) in spaces of algebraic and trigonometric polynomials.
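The Lebesgue constant discussed here is the uniform norm of the interpolation operator: the maximum over the domain of the sum of absolute values of the Lagrange basis functions. A minimal numpy estimate for univariate polynomial interpolation (the dense sampling grid is an illustrative discretisation, not part of the paper):

```python
import numpy as np

def lebesgue_constant(nodes, n_eval=2000):
    """Numerically estimate the Lebesgue constant of polynomial interpolation
    at `nodes` on [-1, 1]: max over a dense grid of the Lebesgue function,
    i.e. the sum of |l_i(t)| over the Lagrange basis polynomials l_i."""
    t = np.linspace(-1.0, 1.0, n_eval)
    L = np.ones((len(nodes), n_eval))
    for i, xi in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if i != j:
                L[i] *= (t - xj) / (xi - xj)   # build l_i(t) factor by factor
    return np.abs(L).sum(axis=0).max()

n = 10
equi = np.linspace(-1.0, 1.0, n + 1)             # equispaced nodes
cheb = np.cos(np.pi * np.arange(n + 1) / n)      # Chebyshev-Lobatto nodes
lam_equi = lebesgue_constant(equi)
lam_cheb = lebesgue_constant(cheb)
```

The comparison makes the Runge phenomenon quantitative: already at degree 10 the equispaced Lebesgue constant is an order of magnitude larger than the Chebyshev-Lobatto one, which grows only logarithmically.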
Fast hierarchical algorithms for the generation of Gaussian random fields.
Low-rank approximation (LRA) techniques have become crucial tools in scientific computing for reducing the cost of storing matrices and performing common matrix operations. Since standard techniques like the SVD do not scale well with the problem size N, there has recently been growing interest in alternative methods such as randomized LRAs. These methods are usually cheap and easy to implement and optimize, since they involve only very basic operations such as matrix-vector products (MVPs) and orthogonalizations. More precisely, randomization reduces the cubic cost of a standard matrix factorization to the quadratic cost of applying a few MVPs, namely O(r × N^2) operations, where r is the numerical rank of the matrix. First, we present a new efficient algorithm, called the Uniform FMM (ufmm), for performing MVPs in O(N) operations. It is based on a hierarchical (data-sparse) representation of a kernel matrix combined with polynomial interpolation of the kernel on equispaced grids. The latter feature allows for FFT acceleration and consequently reduces both running time and memory footprint, but has implications for accuracy and stability. The ufmm is then used to speed up the MVPs involved in the randomized SVD, reducing its cost to O(r^2 × N) and exhibiting very competitive performance when the distribution of points is large and highly heterogeneous. Finally, we use this algorithm to efficiently generate spatially correlated multivariate Gaussian random variables.
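The randomized SVD that the thesis accelerates reduces a large factorization to a handful of MVPs plus a small dense SVD. A minimal dense-matrix numpy sketch (the hierarchical ufmm operator is replaced here by an explicit matrix product; oversampling of 10 is an illustrative default):

```python
import numpy as np

def randomized_svd(A, r, oversample=10, seed=0):
    """Randomized SVD sketch: find an orthonormal basis Q of the (approximate)
    range of A via products with a Gaussian test matrix, then take the SVD of
    the small matrix Q^T A. The cost is dominated by the O(r * N^2) MVPs
    A @ Omega, which is exactly the step a fast MVP algorithm can accelerate."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], r + oversample))
    Q, _ = np.linalg.qr(A @ Omega)                 # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U_small                                # lift back to full size
    return U[:, :r], s[:r], Vt[:r]

# exact-rank test matrix: rank 5, so the rank-5 factorization reconstructs it
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 200))
U, s, Vt = randomized_svd(M, 5)
```

Swapping the explicit product `A @ Omega` for a fast hierarchical MVP is what brings the overall cost down from O(r × N^2) to O(r^2 × N), as described in the abstract.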