189,167 research outputs found

    Approximating data with weighted smoothing splines

    Abstract not available. Keywords: Approximation, Residuals, Smoothing Splines, Thin Plate Splines

    Enhancing SPH using moving least-squares and radial basis functions

    In this paper we consider two sources of enhancement for the meshfree Lagrangian particle method smoothed particle hydrodynamics (SPH), both of which improve the accuracy of the particle approximation. Namely, we consider shape functions constructed using moving least-squares approximation (MLS) and radial basis functions (RBFs). MLS approximation is appealing because polynomial consistency of the particle approximation can be enforced. RBFs appeal further because they allow one to dispense with the smoothing-length, the parameter in the SPH method which governs the number of particles within the support of the shape function; currently, only ad hoc methods for choosing the smoothing-length exist. We ensure that any enhancement retains the conservative and meshfree nature of SPH, and in doing so we derive a new set of variationally consistent hydrodynamic equations. Finally, we demonstrate the performance of the new equations on the Sod shock tube problem.
    Comment: 10 pages, 3 figures, In Proc. A4A5, Chester UK, Jul. 18-22 200
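    To make the MLS part of this abstract concrete, the sketch below is a minimal, hypothetical 1D moving least-squares approximation over scattered particle positions. It is not the paper's formulation: the Gaussian weight, the support radius h and the degree-1 polynomial basis are illustrative assumptions. What it demonstrates is the polynomial consistency mentioned above: a degree-1 MLS fit reproduces a linear field exactly, which a plain SPH kernel sum does not guarantee.

```python
# Minimal 1D moving least-squares (MLS) sketch -- illustrative only.
# Weight function, support radius h and basis degree are assumptions,
# not the formulation used in the paper.
import numpy as np

def mls_approximate(x_eval, x_particles, f_particles, h=0.15, degree=1):
    """Evaluate a weighted least-squares polynomial fit of the given
    degree, centred at each evaluation point (classic MLS)."""
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    out = np.empty_like(x_eval)
    for k, x in enumerate(x_eval):
        # Gaussian weight centred at the evaluation point (assumed form)
        w = np.exp(-((x_particles - x) / h) ** 2)
        # Shifted polynomial basis [1, (x_i - x), ...] at the particles
        P = np.vander(x_particles - x, degree + 1, increasing=True)
        # Weighted normal equations: (P^T W P) a = P^T W f
        A = P.T @ (w[:, None] * P)
        b = P.T @ (w * f_particles)
        a = np.linalg.solve(A, b)
        out[k] = a[0]  # basis at the centre is [1, 0, ...], so value = a[0]
    return out

# Scattered "particle" positions carrying a linear field: a degree-1 MLS
# fit should reproduce it to machine precision (linear consistency).
rng = np.random.default_rng(0)
xp = np.sort(rng.uniform(0.0, 1.0, 60))
fp = 2.0 * xp + 1.0

xq = np.linspace(0.05, 0.95, 10)
print(np.max(np.abs(mls_approximate(xq, xp, fp) - (2.0 * xq + 1.0))))  # ~1e-15
```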

    Integral approximation by kernel smoothing

    Let $(X_1,\ldots,X_n)$ be an i.i.d. sequence of random variables in $\mathbb{R}^d$, $d\geq 1$. We show that, for any function $\varphi:\mathbb{R}^d\rightarrow\mathbb{R}$, under regularity conditions, $$n^{1/2}\Biggl(n^{-1}\sum_{i=1}^n\frac{\varphi(X_i)}{\widehat{f}(X_i)}-\int\varphi(x)\,dx\Biggr)\stackrel{\mathbb{P}}{\longrightarrow}0,$$ where $\widehat{f}$ is the classical kernel estimator of the density of $X_1$. This result is striking because it improves on the traditional root-$n$ rate derived from the central limit theorem when $\widehat{f}=f$. Although this paper highlights some applications, we mainly address theoretical issues related to the latter result. We derive upper bounds for the rate of convergence in probability. These bounds depend on the regularity of the functions $\varphi$ and $f$, the dimension $d$ and the bandwidth of the kernel estimator $\widehat{f}$. Moreover, they are shown to be accurate since they are used as renormalizing sequences in two central limit theorems, each reflecting a different degree of smoothness of $\varphi$. As an application to regression modelling with random design, we provide the asymptotic normality of the estimation of linear functionals of a regression function. As a consequence of the above result, the asymptotic variance does not depend on the regression function. Finally, we discuss the choice of the bandwidth for integral approximation and we highlight the good behavior of our procedure through simulations.
    Comment: Published at http://dx.doi.org/10.3150/15-BEJ725 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm). arXiv admin note: text overlap with arXiv:1312.449
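    As a rough numerical illustration of the estimator in this abstract (not the paper's experiments), the sketch below compares $n^{-1}\sum_i \varphi(X_i)/\widehat{f}(X_i)$ with the exact value of $\int\varphi(x)\,dx$, using a Gaussian sample, a hand-rolled Gaussian-kernel density estimate and a Silverman-type bandwidth; the choices of $\varphi$, $f$ and bandwidth are assumptions made for the demo.

```python
# Numerical check of the kernel-smoothing integral approximation:
#   (1/n) * sum_i phi(X_i) / f_hat(X_i)  ~  integral of phi(x) dx,
# where f_hat is a classical kernel estimate of the density of X.
# The test density, integrand and bandwidth rule are illustrative choices.
import numpy as np

def kde_gaussian(x_eval, sample, bandwidth):
    """Classical kernel density estimate with a standard Gaussian kernel."""
    u = (x_eval[:, None] - sample[None, :]) / bandwidth
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return k.mean(axis=1) / bandwidth

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(0.0, 1.0, n)              # i.i.d. sample with density f = N(0, 1)
phi = lambda x: np.exp(-x**2)            # integrand; exact integral is sqrt(pi)

h = 1.06 * X.std() * n ** (-1 / 5)       # Silverman-type bandwidth (assumed rule)
f_hat = kde_gaussian(X, X, h)            # density estimate at the sample points

estimate = np.mean(phi(X) / f_hat)
print(estimate, np.sqrt(np.pi))          # the two values should be close
```

    In this toy setting the printed estimate should be close to $\sqrt{\pi}\approx 1.7725$; the paper's theoretical point is that, under suitable regularity and bandwidth conditions, the error of this estimator shrinks faster than the usual root-$n$ Monte Carlo rate.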