Automatic, computer aided geometric design of free-knot, regression splines
A new algorithm for computer-aided geometric design of least-squares (LS) splines with variable knots, named GeDS, is presented. It is based on interpreting functional spline regression as a parametric B-spline curve and on using the shape-preserving property of its control polygon. The GeDS algorithm comprises two major stages. In the first stage, an automatic adaptive knot-location algorithm is developed: by adding knots one at a time, it sequentially "breaks" a straight line segment into pieces in order to construct a linear LS B-spline fit that captures the "shape" of the data. A stopping rule that avoids both over- and under-fitting selects the number of knots for the second stage of GeDS, in which smoother, higher-order (quadratic, cubic, etc.) fits are generated. The knots for the second stage are determined by a new knot-location method, called the averaging method. It approximately preserves the linear precision property of B-spline curves and allows smooth higher-order LS B-spline fits to be attached to a control polygon, so that the shape of the linear polygon of stage one is followed. The GeDS method simultaneously produces linear, quadratic, cubic (and possibly higher-order) spline fits with one and the same number of B-spline regression functions. The GeDS algorithm is very fast, since no deterministic or stochastic knot insertion/deletion and relocation search strategies are involved in either stage. Extensive numerical examples illustrate the performance of GeDS and the quality of the resulting LS spline fits. The GeDS procedure is compared with other existing variable-knot spline methods and smoothing techniques, such as the SARS, HAS, MDL and AGS methods, and is shown to produce models with fewer parameters but similar goodness-of-fit characteristics and visual quality.
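The stage-one idea of adding knots one at a time until a stopping rule fires can be sketched as follows. This is a hypothetical simplification, not the authors' algorithm: each segment is fitted by an independent least-squares line (GeDS fits a continuous linear B-spline), and the worst-fitting segment is split at its midpoint (GeDS chooses a data-driven location).

```python
def linfit(xs, ys):
    """Ordinary least-squares line through (xs, ys); returns (slope, intercept, rss)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx > 0 else 0.0
    a = my - b * mx
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return b, a, rss

def greedy_knots(x, y, max_knots, tol=1e-8):
    """Insert interior knots one at a time into the segment with the worst fit."""
    knots = [min(x), max(x)]                     # start with boundary knots only
    while len(knots) - 2 < max_knots:
        worst, worst_rss = None, tol             # stopping rule: rss below tol everywhere
        for lo, hi in zip(knots, knots[1:]):
            seg = [(xi, yi) for xi, yi in zip(x, y) if lo <= xi <= hi]
            if len(seg) < 3:
                continue
            _, _, rss = linfit([p[0] for p in seg], [p[1] for p in seg])
            if rss > worst_rss:
                worst, worst_rss = (lo, hi), rss
        if worst is None:                        # every segment already fits well
            break
        knots.append((worst[0] + worst[1]) / 2)  # "break" the worst segment
        knots.sort()
    return knots
```

On data with a single kink, the sketch recovers the kink location and then stops, mirroring the "capture the shape, then stop" behaviour described in the abstract.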
Error analysis for local coarsening in univariate spline spaces
In this article we analyze the error produced by the removal of an arbitrary
knot from a spline function. When a knot has multiplicity greater than one,
this implies a reduction of its multiplicity by one unit. In particular, we
deduce a very simple formula to compute the error in terms of some neighboring
knots and a few control points of the considered spline. Furthermore, we show
precisely how this error is related to the jump of a derivative of the spline
at the knot. We then use the developed theory to propose efficient and very
low-cost local error indicators and adaptive coarsening algorithms. Finally, we
present some numerical experiments to illustrate their performance and show
some applications.
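As a hedged illustration of the jump relation mentioned above (my reconstruction, not the article's exact statement): a spline $s$ of degree $p$ with a knot $t_j$ of multiplicity $m$ is $C^{p-m}$ at $t_j$, so the first derivative that may be discontinuous there is the $(p-m+1)$-th, with jump

```latex
\big[s^{(p-m+1)}\big]_{t_j} \;=\; s^{(p-m+1)}(t_j^+) \;-\; s^{(p-m+1)}(t_j^-).
```

Reducing the multiplicity of $t_j$ by one forces this jump to zero; hence the coarsening error vanishes exactly when the jump does, is supported only on the knot spans neighbouring $t_j$, and scales with the size of the jump.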
Reconstruction from non-uniform samples: A direct, variational approach in shift-invariant spaces
We propose a new approach for signal reconstruction from non-uniform samples, without any constraint on their locations. We look for a function that minimizes a classical regularized least-squares criterion, but with the additional constraint that the solution lies in a chosen linear shift-invariant space, typically a spline space. In comparison with a pure variational treatment involving radial basis functions, our approach is resolution-dependent, an important feature for many applications. Moreover, the solution can be computed exactly by a fast non-iterative algorithm that fully exploits the particular structure of the problem.
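A minimal sketch of the idea, under stated assumptions: the shift-invariant space is spanned by linear B-splines (hat functions) on a uniform grid over [0, 1], and the regularizer is a discrete second-difference penalty on the coefficients, standing in for the paper's continuous-domain regularizer. The minimizer is obtained exactly and non-iteratively from the normal equations.

```python
def hat(u):
    """Linear B-spline (hat function): support [-1, 1], peak 1 at 0."""
    return max(0.0, 1.0 - abs(u))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def reconstruct(xs, ys, n_basis, lam=1e-3):
    """Fit f(x) = sum_j c[j] * hat(x*(n_basis-1) - j) on [0, 1] to the
    non-uniform samples (xs, ys) by regularized least squares; returns f."""
    K, n = len(xs), n_basis
    A = [[hat(x * (n - 1) - j) for j in range(n)] for x in xs]
    # normal matrix  A^T A + lam * D^T D, with D the second-difference operator
    G = [[sum(A[k][i] * A[k][j] for k in range(K)) for j in range(n)] for i in range(n)]
    D = [[0.0] * n for _ in range(n - 2)]
    for r in range(n - 2):
        D[r][r], D[r][r + 1], D[r][r + 2] = 1.0, -2.0, 1.0
    for i in range(n):
        for j in range(n):
            G[i][j] += lam * sum(D[r][i] * D[r][j] for r in range(n - 2))
    rhs = [sum(A[k][i] * ys[k] for k in range(K)) for i in range(n)]
    c = solve(G, rhs)
    return lambda x: sum(c[j] * hat(x * (n - 1) - j) for j in range(n))
```

Because the hat basis reproduces linear functions and an arithmetic coefficient progression has zero second differences, sampling a straight line at scattered locations is reconstructed exactly, which makes the sketch easy to sanity-check.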
Curve fitting and modeling with splines using statistical variable selection techniques
The successful application of statistical variable selection techniques to fitting splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor-product basis, together with a theoretical discussion of the difficulties of using such bases.
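The backward elimination loop described above can be sketched generically. This is a hypothetical skeleton, not the FORTRAN programs' logic: `score` is a user-supplied fit criterion for a knot set (e.g. residual sum of squares after refitting, or an F-statistic threshold), and the knot whose removal degrades the criterion least is deleted until every deletion would cost more than `tol`.

```python
def backward_eliminate(knots, score, tol):
    """Repeatedly delete the least harmful knot; score(knots) is lower-is-better."""
    knots = list(knots)
    current = score(knots)
    while len(knots) > 0:
        best_i, best_s = None, None
        for i in range(len(knots)):               # try deleting each knot in turn
            trial = knots[:i] + knots[i + 1:]
            s = score(trial)
            if best_s is None or s < best_s:
                best_i, best_s = i, s
        if best_s - current > tol:                # every deletion hurts too much: stop
            break
        del knots[best_i]
        current = best_s
    return knots
```

With a criterion that only cares about one essential knot, the loop strips all the others and keeps that one, which is the behaviour knot elimination aims for.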
Focused information criterion and model averaging for generalized additive partial linear models
We study model selection and model averaging in generalized additive partial
linear models (GAPLMs). Polynomial spline is used to approximate nonparametric
functions. The corresponding estimators of the linear parameters are shown to
be asymptotically normal. We then develop a focused information criterion (FIC)
and a frequentist model average (FMA) estimator on the basis of the
quasi-likelihood principle and examine theoretical properties of the FIC and
FMA. The major advantages of the proposed procedures over the existing ones are
their computational expediency and theoretical reliability. Simulation
experiments have provided evidence of the superiority of the proposed
procedures. The approach is further applied to a real-world data example.
Published in the Annals of Statistics (http://www.imstat.org/aos/) by the
Institute of Mathematical Statistics, DOI: 10.1214/10-AOS832.
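The model-averaging step can be sketched as follows. This is a hedged illustration, not the paper's exact FMA estimator: each candidate submodel contributes an estimate of the focus parameter, weighted by an exponential "smoothed" transform of its criterion score (as in smoothed-AIC/FIC schemes); the scaling `kappa` is an assumption, not the paper's choice.

```python
import math

def fma_weights(scores, kappa=1.0):
    """Exponential weights from criterion scores (lower score = better model)."""
    m = min(scores)                      # subtract the minimum for numerical stability
    w = [math.exp(-kappa * (s - m) / 2) for s in scores]
    total = sum(w)
    return [wi / total for wi in w]

def fma_estimate(estimates, scores, kappa=1.0):
    """Weighted average of per-model estimates of the focus parameter."""
    w = fma_weights(scores, kappa)
    return sum(wi * e for wi, e in zip(w, estimates))
```

Models with equal scores receive equal weight, and a model with a much larger score is effectively excluded, so the averaged estimator interpolates between selection and full averaging.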