Interpolation by Hankel Translates of a Basis Function: Inversion Formulas and Polynomial Bounds
For μ ≥ −1/2, the authors have developed elsewhere a scheme for interpolation by Hankel translates of a basis function Φ in certain spaces Y_n (n ∈ ℕ) of continuous functions depending on a weight w. The functions Φ and w are connected through the distributional identity t^{4n}(h'_μΦ)(t) = 1/w(t), where h'_μ denotes the generalized Hankel transform of order μ. In this paper, we use the projection operators associated with an appropriate direct sum decomposition of the Zemanian space ℋ_μ to derive explicit representations of the derivatives S_μ^m Φ and their Hankel transforms, the former being valid when m ∈ ℤ_+ is restricted to a suitable interval on which S_μ^m Φ is continuous. Here, S_μ^m denotes the mth iterate of the Bessel differential operator S_μ if m ∈ ℕ, while S_μ^0 is the identity operator. These formulas, which can be regarded as inverses of generalizations of the equation (h'_μΦ)(t) = 1/(t^{4n}w(t)), allow us to obtain polynomial bounds for such derivatives. Corresponding results are obtained for the members of the interpolation space Y_n.
Linear-response theory and lattice dynamics: a muffin-tin orbital approach
A detailed description of a method for calculating static linear-response
functions in the problem of lattice dynamics is presented. The method is based
on density functional theory and it uses linear muffin-tin orbitals as a basis
for representing first-order corrections to the one-electron wave functions. As
an application we calculate phonon dispersions in Si and NbC and find good
agreement with experiments.
Fast multi-dimensional scattered data approximation with Neumann boundary conditions
An important problem in applications is the approximation of a function
from a finite set of randomly scattered data. A common and powerful
approach is to construct a trigonometric least squares approximation based on
a set of exponentials. This leads to fast numerical
algorithms, but suffers from disturbing boundary effects due to the underlying
periodicity assumption on the data, an assumption that is rarely satisfied in
practice. To overcome this drawback we impose Neumann boundary conditions on
the data. This implies the use of cosine polynomials as basis
functions. We show that scattered data approximation using cosine polynomials
leads to a least squares problem involving certain Toeplitz+Hankel matrices. We
derive estimates on the condition number of these matrices. Unlike other
Toeplitz+Hankel matrices, the Toeplitz+Hankel matrices arising in our context
cannot be diagonalized by the discrete cosine transform, but they still allow a
fast matrix-vector multiplication via DCT which gives rise to fast conjugate
gradient type algorithms. We show how the results can be generalized to higher
dimensions. Finally we demonstrate the performance of the proposed method by
applying it to a two-dimensional geophysical scattered data problem.
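The core idea can be sketched in one dimension: fit scattered samples by a least squares combination of cosine polynomials cos(πkx), whose even symmetry builds in the Neumann (zero-slope) boundary behaviour. The test function, node count, and truncation degree below are illustrative choices, not taken from the paper, and the dense `lstsq` solve stands in for the fast DCT-based conjugate gradient iteration the authors actually propose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly scattered nodes in [0, 1] and noisy samples of a test function
# (chosen here to lie in the cosine basis; an illustrative example only).
x = np.sort(rng.random(200))
y = np.cos(3 * np.pi * x) + 0.5 * np.cos(np.pi * x) \
    + 0.01 * rng.standard_normal(x.size)

# Cosine-polynomial basis cos(pi*k*x), k = 0..N-1.
N = 16
A = np.cos(np.pi * np.outer(x, np.arange(N)))

# Least squares fit; the Gram matrix A^T A has Toeplitz+Hankel structure
# because cos(a)cos(b) = (cos(a-b) + cos(a+b)) / 2.
c, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evaluate the approximant on a fine grid and compare with the true function.
xx = np.linspace(0, 1, 500)
approx = np.cos(np.pi * np.outer(xx, np.arange(N))) @ c
exact = np.cos(3 * np.pi * xx) + 0.5 * np.cos(np.pi * xx)
print(np.max(np.abs(approx - exact)))
```

The Toeplitz+Hankel structure of the normal equations is what the paper exploits for the fast DCT-based matrix-vector products.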
Parametric spectral analysis: scale and shift
We introduce the paradigm of dilation and translation for use in the spectral
analysis of complex-valued univariate or multivariate data. The new procedure
stems from a search on how to solve ambiguity problems in this analysis, such
as aliasing because of too coarsely sampled data, or collisions in projected
data, which may be solved by a translation of the sampling locations.
In Section 2 both dilation and translation are first presented for the
classical one-dimensional exponential analysis. In the subsequent Sections 3--7
the paradigm is extended to more functions, among which the trigonometric
functions cosine, sine, the hyperbolic cosine and sine functions, the Chebyshev
and spread polynomials, the sinc, gamma and Gaussian function, and several
multivariate versions of all of the above.
Each of these function classes needs a tailored approach, making optimal use
of the properties of the base function used in the considered sparse
interpolation problem. With each of the extensions a structured linear matrix
pencil is associated, immediately leading to a computational scheme for the
spectral analysis, involving a generalized eigenvalue problem and several
structured linear systems.
In Section 8 we illustrate the new methods in several examples: fixed width
Gaussian distribution fitting, sparse cardinal sine or sinc interpolation, and
lacunary or supersparse Chebyshev polynomial interpolation.
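The starting point of Section 2, classical one-dimensional exponential analysis, can be sketched with a structured matrix pencil: two shifted Hankel matrices of samples whose generalized eigenvalues reveal the hidden frequencies. The variable names and parameter values below are illustrative, not the paper's notation.

```python
import numpy as np

# Recover frequencies phi_i and coefficients a_i from equispaced samples
# f_j = sum_i a_i * exp(1j * phi_i * j * Delta) of a sparse exponential sum.
n = 2                                   # number of exponential terms
phi = np.array([1.3, -0.7])             # hidden frequencies (to recover)
a = np.array([2.0, 0.5])                # hidden coefficients
Delta = 0.1                             # sampling step
j = np.arange(2 * n + 1)
f = (a * np.exp(1j * np.outer(j, phi) * Delta)).sum(axis=1)

# Two shifted Hankel matrices built from the samples.
H0 = np.array([[f[r + c] for c in range(n)] for r in range(n)])
H1 = np.array([[f[r + c + 1] for c in range(n)] for r in range(n)])

# The generalized eigenvalues of the pencil (H1, H0) are
# z_i = exp(1j * phi_i * Delta), from which the frequencies follow.
z = np.linalg.eigvals(np.linalg.solve(H0, H1))
recovered = np.sort(np.angle(z) / Delta)
print(recovered)  # prints approximately [-0.7  1.3]
```

Each extension treated in Sections 3--7 replaces this Hankel pencil by a differently structured pencil adapted to the base function at hand.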
The exponentially convergent trapezoidal rule
It is well known that the trapezoidal rule converges geometrically when applied to analytic functions on periodic intervals or the real line. The mathematics and history of this phenomenon are reviewed, and it is shown that far from being a curiosity, it is linked with computational methods all across scientific computing, including algorithms related to inverse Laplace transforms, special functions, complex analysis, rational approximation, integral equations, and the computation of functions and eigenvalues of matrices and operators.
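The phenomenon is easy to observe numerically. The integrand below, f(t) = 1/(2 + cos t) with exact integral 2π/√3 over [0, 2π], is a standard textbook example of a periodic analytic function, not one taken from the article:

```python
import numpy as np

# Geometric convergence of the trapezoidal rule for a periodic analytic
# integrand: f(t) = 1/(2 + cos t) on [0, 2*pi], exact integral 2*pi/sqrt(3).
exact = 2 * np.pi / np.sqrt(3)
errors = []
for N in (4, 8, 16, 32):
    # For a periodic integrand the trapezoidal rule is just the mean of
    # the N equispaced samples times the interval length.
    t = 2 * np.pi * np.arange(N) / N
    approx = (2 * np.pi / N) * np.sum(1.0 / (2 + np.cos(t)))
    errors.append(abs(approx - exact))
print(errors)  # errors decay geometrically, not just algebraically, in N
```

Doubling N multiplies the error by a fixed exponentially small factor, in contrast with the O(N^-2) rate for generic non-periodic integrands.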