
    Polynomial spline-approximation of Clarke's model

    We investigate polynomial spline approximation of stationary random processes on a uniform grid, applied to Clarke's model of the time variations of path amplitudes in multipath fading channels with Doppler scattering. The integral mean square error (MSE) for optimal and interpolation splines is presented as a series of spectral moments. The optimal splines outperform the interpolation splines; however, as the sampling factor increases, optimal and interpolation splines of even order tend to provide the same accuracy. Building such splines requires the approximated process to be known for all time, which is impractical. Local splines, on the other hand, may be used where the process is known only over a finite interval. We first consider local splines with quasi-optimal spline coefficients. We then derive optimal spline coefficients and investigate the error for different sets of samples used for calculating the spline coefficients. In practice, approximation with a low processing delay is of interest; we therefore investigate local spline extrapolation with zero processing delay. Our results show that local spline approximation is attractive for implementation in terms of both low processing delay and small approximation error; the error can be very close to the minimum error provided by optimal splines. Local splines can thus be used effectively for channel estimation in fast multipath fading channels.
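    A minimal sketch of the setting (not the paper's optimal or local spline construction): generate one realization of Clarke's model with a sum-of-sinusoids approximation, sample it on a uniform grid, fit a cubic interpolation spline, and estimate the MSE against a dense reference grid. The Doppler shift, grid sizes, and generator below are all illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
f_d = 100.0                          # maximum Doppler shift in Hz (assumed)
N = 64                               # number of sinusoids in the generator
theta = rng.uniform(0, 2*np.pi, N)   # angles of arrival
phi = rng.uniform(0, 2*np.pi, N)     # initial phases

def fading(t):
    """One realization of a sum-of-sinusoids approximation of Clarke's model."""
    t = np.atleast_1d(t)
    return np.sqrt(2.0 / N) * np.cos(
        2*np.pi*f_d*np.outer(t, np.cos(theta)) + phi).sum(axis=1)

t_dense = np.linspace(0.0, 0.1, 4001)          # dense "ground truth" grid (s)
t_samp = t_dense[::40]                         # uniform sampling at 1 kHz = 10 * f_d
spline = CubicSpline(t_samp, fading(t_samp))   # cubic interpolation spline
mse = np.mean((spline(t_dense) - fading(t_dense))**2)
print(f"estimated integral MSE: {mse:.3e}")
```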

    Toward high-quality gradient estimation on regular lattices

    In this paper, we present two methods for accurate gradient estimation from scalar field data sampled on regular lattices. The first method is based on the multidimensional Taylor series expansion of the convolution sum and allows us to specify design criteria such as compactness and approximation power. The second method is based on a Hilbert space framework and provides a minimum-error solution in the form of an orthogonal projection operating between two approximation spaces. Both methods lead to discrete filters, which can be combined with continuous reconstruction kernels to yield highly accurate estimators compared with the current state of the art. We demonstrate the advantages of our methods in the context of volume rendering of data sampled on Cartesian and body-centered cubic lattices. Our results show significant qualitative and quantitative improvements for both synthetic and real data, while incurring a moderate preprocessing and storage overhead. Index terms: approximation theory, Taylor series expansion, normal reconstruction, orthogonal projection, body-centered cubic lattice, box splines.
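    The paper's optimized filters (including those for the BCC lattice) are not reproduced here; the sketch below only illustrates the underlying Taylor-series idea on a 1-D Cartesian grid, where cancelling more expansion terms yields a more accurate discrete derivative filter. The test field and step size are assumptions.

```python
import numpy as np

h = 0.05
x = np.arange(0, 2*np.pi, h)
f = np.sin(x)                    # synthetic scalar field, exact df/dx = cos(x)

# 2nd-order central difference: weights from a 3-term Taylor expansion
g2 = np.convolve(f, np.array([1, 0, -1]) / (2*h), mode="same")
# 4th-order filter: cancels the next Taylor term for higher accuracy
g4 = np.convolve(f, np.array([-1, 8, 0, -8, 1]) / (12*h), mode="same")

interior = slice(2, -2)          # ignore boundary artifacts of 'same' mode
print("2nd-order max error:", np.max(np.abs(g2 - np.cos(x))[interior]))
print("4th-order max error:", np.max(np.abs(g4 - np.cos(x))[interior]))
```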

    Sampling—50 Years After Shannon

    This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of bandlimited functions. We then extend the standard sampling paradigm to the representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler, and possibly more realistic, interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal lowpass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary, e.g., non-bandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
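    As a toy illustration of approximation in a shift-invariant space, the sketch below projects a non-bandlimited signal onto the span of shifted linear B-splines by discretized least squares; the grid, step size, and test signal are assumptions, and the discrete least-squares fit merely stands in for the continuous orthogonal projection.

```python
import numpy as np

T = 0.5                                   # sampling step (assumed)
k = np.arange(-2, 23)                     # spline knot indices
x = np.linspace(0, 10, 2001)
f = np.exp(-x) * (x > 1)                  # non-bandlimited input (jump at x = 1)

def b1(u):                                # linear B-spline (triangle function)
    return np.maximum(1 - np.abs(u), 0)

Phi = b1(x[:, None] / T - k[None, :])     # basis matrix, columns phi(x/T - k)
c, *_ = np.linalg.lstsq(Phi, f, rcond=None)   # least-squares (projection) coeffs
err = np.sqrt(np.mean((Phi @ c - f) ** 2))
print(f"approximation error at T={T}: {err:.4f}")
```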

    New optimized spline functions for interpolation on the hexagonal lattice


    Generalized Sampling: Stability and Performance Analysis

    Generalized sampling provides a general mechanism for recovering an unknown input function f(x) ∈ H from the samples of the responses of m linear shift-invariant systems sampled at 1/m of the reconstruction rate. The system can be designed to perform a projection of f(x) onto the reconstruction subspace V(φ) = span{φ(x − k)}_{k ∈ Z}; for example, the family of bandlimited signals with φ(x) = sinc(x). This implies that the reconstruction will be perfect when the input signal is included in V(φ): the traditional framework of Papoulis' generalized sampling theory. Otherwise, one recovers a signal approximation f̃(x) ∈ V(φ) that is consistent with f(x) in the sense that it produces the same measurements. To characterize the stability of the algorithm, we prove that the dual synthesis functions that appear in the generalized sampling reconstruction formula constitute a Riesz basis of V(φ), and we use the corresponding Riesz bounds to define the condition number of the system. We then use these results to analyze the stability of various instances of interlaced and derivative sampling. Next, we consider the issue of performance, which becomes pertinent once we have extended the applicability of the method to arbitrary input functions, that is, when H is considerably larger than V(φ) and the reconstruction is no longer exact. By deriving general error bounds for projectors, we are able to show that the generalized sampling solution is essentially equivalent to the optimal minimum-error approximation (orthogonal projection), which is generally not accessible. We then perform a detailed analysis for the case in which the analysis filters are in L2 and determine all relevant bound constants explicitly. Finally, we use an interlaced sampling example to illustrate these various calculations.
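    A finite-dimensional analogue may help fix ideas: with a synthesis basis S and analysis functions A, the consistent reconstruction is the oblique projection x̂ = S(AᵀS)⁻¹Aᵀx, and the conditioning of the cross-Gram matrix AᵀS plays the role of the condition number discussed above. The dimensions and random subspaces below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 16
S = rng.standard_normal((n, m))        # synthesis basis for V (columns)
A = rng.standard_normal((n, m))        # analysis (measurement) functions
x = rng.standard_normal(n)             # arbitrary input, generally not in V

P = S @ np.linalg.solve(A.T @ S, A.T)  # oblique projector onto V
x_hat = P @ x
print("consistency:", np.allclose(A.T @ x_hat, A.T @ x))   # same measurements
print("idempotent :", np.allclose(P @ P, P))               # P is a projector

# Conditioning of the cross-Gram matrix gauges the stability of the scheme
print("cond(A^T S):", np.linalg.cond(A.T @ S))
```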

    Least-Squares Image Resizing Using Finite Differences

    We present an optimal spline-based algorithm for the enlargement or reduction of digital images with arbitrary (noninteger) scaling factors. This projection-based approach can be realized thanks to a new finite-difference method that allows the computation of inner products with analysis functions that are B-splines of any degree n. A noteworthy property of the algorithm is that the computational complexity per pixel does not depend on the scaling factor a. For a given choice of basis functions, the results of our method are consistently better than those of the standard interpolation procedure; the present scheme achieves a reduction of artifacts such as aliasing and blocking and a significant improvement in signal-to-noise ratio. The method can be generalized to other classes of piecewise polynomial functions, expressed as linear combinations of B-splines and their derivatives.
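    The finite-difference machinery for arbitrary degree n and noninteger factors is not reproduced here; the toy sketch below only contrasts plain resampling with a degree-0 (box) projection on a 1-D signal, showing the aliasing suppression that motivates the projection approach. The signal and the integer reduction factor are assumptions made for brevity.

```python
import numpy as np

x = np.linspace(0, 1, 1200, endpoint=False)
# the 103-cycle component aliases onto the 3-cycle band after reduction
f = np.sin(2*np.pi*3*x) + 0.5*np.sin(2*np.pi*103*x)
a = 12                                       # reduction factor

naive = f[::a]                               # plain resampling, no prefilter
proj = f.reshape(-1, a).mean(axis=1)         # inner products with box analysis functions

k = np.arange(len(proj))
ref_naive = np.sin(2*np.pi*3*k/100)          # alias-free reference at the sample points
ref_proj = np.sin(2*np.pi*3*(k + 0.5)/100)   # box outputs live at the bin centers
print("plain resampling RMSE:", np.sqrt(np.mean((naive - ref_naive)**2)))
print("box projection   RMSE:", np.sqrt(np.mean((proj - ref_proj)**2)))
```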

    Linear Interpolation Revitalized

    We present a simple, original method to improve piecewise-linear interpolation with uniform knots: we shift the sampling knots by a fixed amount, while enforcing the interpolation property. We determine the theoretically optimal shift that maximizes the quality of our shifted linear interpolation. Surprisingly, this optimal value is nonzero and close to 1/5. We confirm our theoretical findings with several experiments: a cumulative rotation experiment and a zoom experiment. Both show a significant quality improvement of the shifted method over the standard one. We also observe that the resulting quality is similar to that of the computationally more costly "high-quality" cubic convolution.
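    A minimal sketch of the scheme as described: shift the linear basis by τ ≈ 0.21 (the reported optimum is close to 1/5) and enforce interpolation, which turns the prefilter into the simple causal recursion s[n] = τ·c[n−1] + (1−τ)·c[n]. Boundary handling and the test signal are simplifying assumptions.

```python
import numpy as np

tau = 0.21                           # shift, close to the reported optimum

def shifted_linear_coeffs(s):
    """Prefilter: solve s[n] = tau*c[n-1] + (1-tau)*c[n] by causal recursion."""
    c = np.empty_like(s, dtype=float)
    c[0] = s[0]                      # simple boundary assumption
    for n in range(1, len(s)):
        c[n] = (s[n] - tau * c[n - 1]) / (1 - tau)
    return c

def evaluate(c, x):
    """Reconstruct f(x) = sum_k c[k] * tri(x - k - tau)."""
    k = np.floor(x - tau).astype(int)
    t = (x - tau) - k                # fractional position in [0, 1)
    k = np.clip(k, 0, len(c) - 2)
    return (1 - t) * c[k] + t * c[k + 1]

s = np.sin(np.linspace(0, 2*np.pi, 33))     # samples of sin(2*pi*x/32)
xq = np.linspace(1.0, 30.0, 200)            # query points away from the edges
f = evaluate(shifted_linear_coeffs(s), xq)
print("max deviation from sin:", np.max(np.abs(f - np.sin(xq * 2*np.pi/32))))
```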

    High-Quality Image Resizing Using Oblique Projection Operators

    The standard interpolation approach to image resizing is to fit the original picture with a continuous model and resample the function at the desired rate. However, one can obtain more accurate results by applying a filter prior to sampling, a fact well known from sampling theory. The optimal solution corresponds to an orthogonal projection onto the underlying continuous signal space. Unfortunately, the optimal projection prefilter is difficult to implement when sinc or high-order spline functions are used. In this paper, we propose to resize the image using an oblique rather than an orthogonal projection operator in order to make use of faster, simpler, and more general algorithms. We show that we can achieve almost the same result as with the orthogonal projection provided that we use the same approximation space. The main advantage is that it becomes perfectly feasible to use higher-order models (e.g., splines of degree n ≥ 3). We develop the theoretical background and present a simple and practical implementation procedure using B-splines. Our experiments show that the proposed algorithm consistently outperforms the standard interpolation methods and that it provides essentially the same performance as the optimal procedure (least-squares solution) with considerably fewer computations. The method works for arbitrary scaling factors and is applicable to both image enlargement and reduction.
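    As a 1-D illustration of the idea (not the paper's algorithm), the sketch below reduces a signal using cheap box (degree-0) analysis functions, a linear-spline (degree-1) synthesis space, and a small digital correction filter that enforces consistency. The correction filter [1/8, 3/4, 1/8] is the box/triangle cross-Gram derived for this toy setup; the grid and signal are assumptions.

```python
import numpy as np

a = 8                                         # integer reduction factor, for brevity
xf = (np.arange(800) + 0.5) / a               # fine grid, in coarse-grid units
f = np.sin(0.4*xf) + 0.3*np.sin(2.5*xf)       # test signal, representable on the coarse grid

m = f.reshape(-1, a).mean(axis=1)             # cheap analysis: local averages (box functions)
n = len(m)
nodes = np.arange(n) + 0.5                    # coarse-grid knots at the bin centers

# Digital correction: consistency requires Q c = m, where Q applies the
# box/triangle cross-Gram filter [1/8, 3/4, 1/8] to the coefficients.
Q = 0.75*np.eye(n) + 0.125*np.eye(n, k=1) + 0.125*np.eye(n, k=-1)
c = np.linalg.solve(Q, m)                     # coefficients in the linear-spline space

oblique = np.interp(xf, nodes, c)             # evaluate the linear spline on the fine grid
smoothed = np.interp(xf, nodes, m)            # same synthesis, but without the correction
print("with correction    RMSE:", np.sqrt(np.mean((oblique - f)**2)))
print("without correction RMSE:", np.sqrt(np.mean((smoothed - f)**2)))
```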

    MOMS: Maximal-Order Interpolation of Minimal Support

    We consider the problem of interpolating a signal using a linear combination of shifted versions of a compactly supported basis function φ(x). We first give the expression of the φ's that have minimal support for a given accuracy (also known as "approximation order"). This class of functions, which we call maximal-order-minimal-support functions (MOMS), is made of linear combinations of the B-spline of the same order and of its derivatives. We provide the explicit form of the MOMS that maximize the approximation accuracy when the step size is small enough. We compute the sampling gain obtained by using these optimal basis functions over the splines of the same order. We show that this gain is already substantial for small orders and that it further increases with the approximation order L. When L is large, the sampling gain grows linearly with L; more specifically, its exact asymptotic expression is 2L/(πe). Since the optimal functions are continuous, but not differentiable, for even orders, and only piecewise continuous for odd orders, our result implies that regularity has little to do with approximation performance. These theoretical findings are corroborated by experimental evidence involving compounded rotations of images.
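    For concreteness, the cubic member of this family is commonly quoted as β₃(x) + (1/42)·β₃″(x); the piecewise polynomial below is our own expansion of that formula, so treat the exact coefficients as an illustration rather than a citation.

```python
import numpy as np

def omoms3(x):
    """Cubic MOMS kernel: beta3(x) + (1/42) * beta3''(x), expanded piecewise."""
    ax = np.abs(x)
    out = np.zeros_like(ax, dtype=float)
    near = ax < 1
    far = (ax >= 1) & (ax < 2)
    out[near] = 0.5*ax[near]**3 - ax[near]**2 + (1/14)*ax[near] + 13/21
    out[far] = (-1/6)*ax[far]**3 + ax[far]**2 - (85/42)*ax[far] + 29/21
    return out

# Sanity check: shifted kernels must reproduce constants (partition of unity),
# since the B-spline does and the second-derivative terms sum to zero.
x = np.linspace(0, 1, 5)
print(sum(omoms3(x - k) for k in range(-2, 3)))   # ~1 everywhere
```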

    Interpolation Revisited

    Based on approximation theory, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. We show that, contrary to common belief, those that perform best are not interpolating. In contrast to traditional interpolation, we call their use generalized interpolation; correctly applied, it involves a prefiltering step. We explain why the approximation order inherent in any basis function is important for limiting interpolation artifacts. The decomposition theorem states that any basis function endowed with approximation order can be expressed as the convolution of a B-spline of the same order with another function that has none. This motivates the use of splines and spline-based functions as a tunable way to keep artifacts in check without any significant cost penalty. We discuss implementation and performance issues, and we provide experimental evidence to support our claims.
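    A minimal sketch of generalized interpolation with a cubic B-spline: the basis is not interpolating by itself, so the samples are first prefiltered into expansion coefficients (here with scipy's recursive spline filter); reconstruction then matches the samples exactly, whereas skipping the prefilter does not. Boundary handling relies on scipy's default mirror mode, and the test signal is an assumption.

```python
import numpy as np
from scipy.ndimage import spline_filter1d

def beta3(x):
    """Cubic B-spline."""
    ax = np.abs(x)
    out = np.zeros_like(ax, dtype=float)
    out[ax < 1] = 2/3 - ax[ax < 1]**2 + 0.5*ax[ax < 1]**3
    sel = (ax >= 1) & (ax < 2)
    out[sel] = (2 - ax[sel])**3 / 6
    return out

s = np.cos(np.linspace(0, 3, 21))          # samples on an integer grid
c = spline_filter1d(s, order=3)            # recursive prefilter -> coefficients

def reconstruct(coeffs, x):
    k = np.arange(len(coeffs))
    return (coeffs[None, :] * beta3(x[:, None] - k[None, :])).sum(axis=1)

n = np.arange(3, 18, dtype=float)          # interior sample positions
print("with prefilter   :", np.max(np.abs(reconstruct(c, n) - s[3:18])))  # ~0
print("without prefilter:", np.max(np.abs(reconstruct(s, n) - s[3:18])))  # clearly > 0
```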