An application of interpolating scaling functions to wave packet propagation
Wave packet propagation in the basis of interpolating scaling functions (ISF)
is studied. The ISF are well known from multiresolution analysis based on
spline biorthogonal wavelets. The ISF form a cardinal basis set corresponding
to an equidistantly spaced grid. They have compact support of the size
determined by the underlying interpolating polynomial that is used to generate
ISF. In this basis the potential energy matrix is diagonal and the kinetic
energy matrix is sparse and, in the 1D case, has a band-diagonal structure. An
important feature of the basis is that matrix elements of a Hamiltonian are
exactly computed by means of simple algebraic transformations efficiently
implemented numerically. Therefore, the number of grid points and the order of
the underlying interpolating polynomial can easily be varied, allowing one to
approach the accuracy of pseudospectral methods in a regular manner, similar to
high order finite difference methods. The results of numerical simulations of
an H+H_2 collinear collision show that the ISF provide one with an accurate and
efficient representation for use in the wave packet propagation method.
Comment: plain LaTeX, 11 pages, 4 figures attached in JPEG format
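As a concrete illustration of the structure this abstract describes (diagonal potential matrix, band-diagonal kinetic matrix, norm-preserving propagation), here is a minimal numerical sketch. It is not the paper's ISF method: a fourth-order finite-difference stencil stands in for the exact ISF kinetic-matrix elements, and a Cayley (Crank-Nicolson) propagator stands in for the paper's propagation scheme.

```python
import numpy as np

# Grid representation: diagonal potential, band-diagonal kinetic matrix.
N, L = 200, 20.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

V = np.diag(0.5 * x**2)  # potential energy matrix: diagonal in this basis

# 5-point stencil for -0.5 d^2/dx^2 -- band-diagonal with bandwidth 2
# (a stand-in for the exact ISF kinetic-matrix elements).
T = np.zeros((N, N))
stencil = np.array([-1 / 12, 4 / 3, -5 / 2, 4 / 3, -1 / 12]) / h**2
for k, c in zip(range(-2, 3), stencil):
    T += np.diag(np.full(N - abs(k), c), k)
T *= -0.5

H = T + V

# Cayley (Crank-Nicolson) propagator: unitary, so the norm is preserved.
dt = 0.01
I = np.eye(N)
psi = np.exp(-(x - 2.0) ** 2).astype(complex)  # Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * h)
for _ in range(100):
    psi = np.linalg.solve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ psi)

norm = np.sum(np.abs(psi) ** 2) * h
print(round(norm, 6))  # → 1.0 (norm conserved)
```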
Asymptotic properties and approximation of Bayesian logspline density estimators for communication-free parallel computing methods
In this article we perform an asymptotic analysis of Bayesian parallel
density estimators which are based on logspline density estimation. The
parallel estimator we introduce is in the spirit of a kernel density estimator
introduced in recent studies. We provide a numerical procedure that produces
the density estimator itself in place of the sampling algorithm. We then derive
an error bound for the mean integrated squared error for the full data
posterior density estimator. We also investigate the parameters that arise from
logspline density estimation and the numerical approximation procedure. Our
investigation identifies specific choices of parameters for logspline density
estimation that result in the error bound scaling appropriately in relation to
these choices.
Comment: 33 pages, 11 figures
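A minimal sketch of the underlying logspline idea (not the paper's parallel estimator): the log-density is expanded in a spline basis and the coefficients are fitted by maximum likelihood. The linear hat-function basis and plain gradient ascent below are simplifications assumed for brevity; logspline estimation normally uses cubic B-splines.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.beta(2, 5, size=500)  # sample supported on [0, 1]

# Hat-function (linear spline) basis -- a toy stand-in for the cubic
# B-spline basis normally used in logspline density estimation.
knots = np.linspace(0, 1, 9)
pad = np.concatenate([[knots[0] - 0.125], knots, [knots[-1] + 0.125]])

def basis(x):
    return np.column_stack([np.interp(x, pad[j:j + 3], [0.0, 1.0, 0.0])
                            for j in range(len(knots))])

# Logspline model: f(x) proportional to exp(B(x) @ theta); fit theta by
# maximum likelihood with gradient ascent (the log-likelihood is concave).
g = np.linspace(0, 1, 401)
dx = g[1] - g[0]
Bg, Bd = basis(g), basis(data)
theta = np.zeros(len(knots))
for _ in range(400):
    w = np.exp(Bg @ theta - (Bg @ theta).max())
    dens = w / (w.sum() * dx)  # normalised density on the grid
    # gradient = empirical basis means minus model expectations
    grad = Bd.mean(axis=0) - (Bg * dens[:, None]).sum(axis=0) * dx
    theta += 0.5 * grad

print(round(dens.sum() * dx, 3))  # → 1.0 (integrates to one)
```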
Quantitative analysis of the reconstruction performance of interpolants
The analysis presented provides a quantitative measure of the reconstruction or interpolation performance of linear, shift-invariant interpolants. The performance criterion is the mean square error of the difference between the sampled and reconstructed functions. The analysis is applicable to reconstruction algorithms used in image processing and to many types of splines used in numerical analysis and computer graphics. When formulated in the frequency domain, the mean square error clearly separates the contribution of the interpolation method from the contribution of the sampled data. The equations provide a rational basis for selecting an optimal interpolant; that is, one which minimizes the mean square error. The analysis has been applied to a selection of frequently used data splines and reconstruction algorithms: parametric cubic and quintic Hermite splines, exponential and nu splines (including the special case of the cubic spline), parametric cubic convolution, Keys' fourth-order cubic, and a cubic with a discontinuous first derivative. The emphasis in this paper is on the image-dependent case in which no a priori knowledge of the frequency spectrum of the sampled function is assumed.
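The frequency-domain analysis itself is not reproduced here, but the performance criterion is easy to probe empirically: sample a function, reconstruct it with two shift-invariant kernels, and compare mean square errors. The kernels below (linear interpolation and Keys' cubic with a = -1/2) are standard; the test function and grids are arbitrary choices for illustration.

```python
import numpy as np

def kernel_linear(s):
    s = np.abs(s)
    return np.where(s < 1, 1 - s, 0.0)

def kernel_keys(s, a=-0.5):
    # Keys' fourth-order cubic convolution kernel (support |s| < 2)
    s = np.abs(s)
    return np.where(s <= 1, (a + 2) * s**3 - (a + 3) * s**2 + 1,
           np.where(s < 2, a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a, 0.0))

def reconstruct(samples, xk, h, x, kernel):
    # shift-invariant reconstruction: sum_k f(x_k) * W((x - x_k) / h)
    return sum(f * kernel((x - xc) / h) for f, xc in zip(samples, xk))

h = 0.25
xk = np.arange(-2, 8 + h, h)   # sample grid, padded past both ends
f = np.sin(xk)
x = np.linspace(0, 6, 2000)    # evaluation grid, away from the edges
true = np.sin(x)

mse_lin = np.mean((reconstruct(f, xk, h, x, kernel_linear) - true) ** 2)
mse_keys = np.mean((reconstruct(f, xk, h, x, kernel_keys) - true) ** 2)
print(mse_keys < mse_lin)  # True: the fourth-order cubic wins here
```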
A class of piecewise cubic interpolatory polynomials
A new class of C1 piecewise-cubic interpolatory polynomials is defined by generalizing the definition of cubic X-splines given recently by Clenshaw and Negus (1978). It is shown that this new class contains a number of interpolatory functions which present practical advantages when compared with the conventional cubic spline.
Monotonicity preserving approximation of multivariate scattered data
This paper describes a new method of monotone interpolation and smoothing of multivariate scattered data. It is based on the assumption that the function to be approximated is Lipschitz continuous. The method provides the optimal approximation in the worst-case scenario and tight error bounds. Smoothing of noisy data subject to monotonicity constraints is converted into a quadratic programming problem. Estimation of the unknown Lipschitz constant from the data by sample splitting and cross-validation is described. Extension of the method to locally Lipschitz functions is presented.
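The multivariate Lipschitz-based method itself is not sketched here; as a simpler stand-in for the constrained-smoothing idea, the 1D special case of the quadratic program (least squares subject to monotonicity) can be solved exactly by the classical pool-adjacent-violators algorithm:

```python
import numpy as np

def pava(y, w=None):
    """Pool Adjacent Violators: solves min sum_i w_i (f_i - y_i)^2
    subject to f_1 <= f_2 <= ... <= f_n, i.e. the 1D special case of
    the quadratic program behind monotone smoothing."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, cnts = [], [], []        # blocks: (value, weight, count)
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnts.append(1)
        # merge blocks while the monotonicity constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2, c2 = vals.pop(), wts.pop(), cnts.pop()
            v1, w1, c1 = vals.pop(), wts.pop(), cnts.pop()
            wt = w1 + w2
            vals.append((w1 * v1 + w2 * v2) / wt)
            wts.append(wt); cnts.append(c1 + c2)
    return np.repeat(vals, cnts)

# noisy samples of a monotone function
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = x**2 + rng.normal(0, 0.05, 50)
f = pava(y)
print(bool(np.all(np.diff(f) >= 0)))  # True: the fit is monotone
```

The weighted version matters in practice because merged blocks must carry their combined weight; with uniform weights the fit also preserves the sample mean.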
An introduction to regular splines and their application for initial value problems of ordinary differential equations
This report describes an application of the general method of integrating initial value problems by means of regular splines for equations with movable singularities. By defining the families of functions that make up the regular splines such that they closely resemble the behaviour of the solutions of the differential equation, it is possible to trace the location of the singularities very precisely.
To demonstrate this we treat Riccati differential equations. These are known to possess solutions with poles, usually of the first order. This type of differential equation or system arises in describing chemical or biological processes or more general control processes.
To make the report self-contained, it starts with an introduction to regular splines and develops the algebraic tools for the manipulation of rational splines. After the description of the integration procedure, the asymptotic behaviour of the systematic error is investigated. An example exhibits the results obtained from the program given in Appendix A. Then Riccati equations are introduced and methods for the determination of the singularities are developed. These methods are tested numerically with several examples. The results are given in Appendix B.
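A minimal sketch of the pole-tracing idea on a Riccati equation (not the report's regular-spline integrator): integrate y' = 1 + y^2 until the solution grows large, then switch to the reciprocal u = 1/y, which is regular at the pole and crosses zero there. The exact solution is tan(t), with a first-order pole at t = pi/2.

```python
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Riccati equation y' = 1 + y^2, y(0) = 0; exact solution tan(t).
f = lambda t, y: 1 + y * y
t, y, h = 0.0, 0.0, 1e-3
while abs(y) < 10:                 # integrate y until it grows large
    y = rk4_step(f, t, y, h)
    t += h

# Near the pole, switch to u = 1/y, which satisfies u' = -(1 + u^2)
# and passes through zero regularly where y has its pole.
g = lambda t, u: -(1 + u * u)
u = 1 / y
while u > 0:
    t_prev, u_prev = t, u
    u = rk4_step(g, t, u, h)
    t += h

# locate the pole by linear interpolation of the zero crossing of u
pole = t_prev + h * u_prev / (u_prev - u)
print(round(pole, 4))  # ≈ 1.5708, i.e. pi/2
```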
Non-equispaced B-spline wavelets
This paper has three main contributions. The first is the construction of
wavelet transforms from B-spline scaling functions defined on a grid of
non-equispaced knots. The new construction extends the equispaced,
biorthogonal, compactly supported Cohen-Daubechies-Feauveau wavelets. The new
construction is based on the factorisation of wavelet transforms into lifting
steps. The second and third contributions are new insights on how to use these
and other wavelets in statistical applications. The second contribution is
related to the bias of a wavelet representation. It is investigated how the
fine scaling coefficients should be derived from the observations. In the
context of equispaced data, it is common practice to simply take the
observations as fine scale coefficients. It is argued in this paper that this
is not acceptable for non-interpolating wavelets on non-equidistant data.
Finally, the third contribution is the study of the variance in a
non-orthogonal wavelet transform in a new framework, replacing the numerical
condition as a measure for non-orthogonality. By controlling the variances of
the reconstruction from the wavelet coefficients, the new framework allows us
to design wavelet transforms on irregular point sets with a focus on their use
for smoothing or other applications in statistics.
Comment: 42 pages, 2 figures
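A minimal sketch of one lifting step on non-equispaced knots (linear prediction rather than the paper's B-spline construction, and with the update step omitted): the prediction weights depend on the irregular knot spacing, and the step is exactly invertible, which is the point of the lifting factorisation.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 17))   # non-equispaced knots
y = np.sin(x)                         # signal samples on those knots

# One lifting step: split into even/odd samples, predict each odd sample
# by linear interpolation between its even neighbours (weights depend on
# the irregular spacing), and keep the residuals as detail coefficients.
xe, xo = x[0::2], x[1::2]
se, so = y[0::2], y[1::2]
# each odd knot xo[i] lies between xe[i] and xe[i + 1]
w = (xo - xe[:-1]) / (xe[1:] - xe[:-1])
predict = (1 - w) * se[:-1] + w * se[1:]
detail = so - predict                 # predict step
coarse = se                           # (update step omitted in this sketch)

# perfect reconstruction: invert the predict step
so_rec = detail + ((1 - w) * coarse[:-1] + w * coarse[1:])
print(bool(np.allclose(so_rec, so)))  # True
```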