A comparative study between the cubic spline and B-spline interpolation methods in free energy calculations
Numerical methods are essential in computational science, as analytic calculations for large datasets are impractical. Using numerical methods, one can approximate a problem so that it can be solved with basic arithmetic operations. Interpolation is a commonly used method that, among other things, constructs the values of new data points within an interval of known data points. Furthermore, polynomial interpolation of sufficiently high degree can make a data set differentiable. One consequence of using high-degree polynomials is oscillatory behaviour towards the endpoints, known as Runge's phenomenon. Spline interpolation overcomes this obstacle by connecting the data points in a piecewise fashion. However, its complex formulation requires nested iterations in higher dimensions, which is time-consuming. In addition, the calculations have to be repeated to compute each partial derivative at a data point, leading to further slowdown. B-spline interpolation is an alternative representation of the cubic spline method, in which the spline interpolant at a point is expressed as a linear combination of piecewise basis functions. It has been proposed that implementing this formulation can accelerate many scientific computing operations involving interpolation. Nevertheless, there is a lack of detailed comparisons to back up this hypothesis, especially when it comes to computing partial derivatives. Among many fields of scientific research, free energy calculations particularly stand out for their use of interpolation methods. Numerical interpolation has been implemented in free energy methods for many purposes, from calculating intermediate energy states to deriving forces from free energy surfaces. The results of these calculations can provide insight into reaction mechanisms and their thermodynamic properties.
The free energy methods include biased flat-histogram methods, which are especially promising due to their ability to accurately construct free energy profiles in rarely visited regions of reaction spaces. Free Energies from Adaptive Reaction Coordinates (FEARCF), developed by Professor Kevin J. Naidoo, has many advantages over other flat-histogram methods. Because of its treatment of the atoms in reactions, FEARCF makes it easier to apply interpolation methods. It implements cubic spline interpolation to derive biasing forces from the free energy surface, driving the reaction towards regions of higher energy. A major drawback of the method is the slowdown experienced in higher dimensions due to the complicated nature of the cubic spline routine. If the routine is replaced by the more straightforward B-spline interpolation, sampling and generating free energy surfaces can be accelerated. This dissertation aims to perform a comparative study between the cubic spline and B-spline interpolation methods. First, data sets from analytic functions were used instead of numerical data to compare the accuracy of both methods and to compute their percentage errors, taking the functions themselves as reference. These functions were used to evaluate the performance of the two methods at the endpoints, at inflection points, and in regions with a steep gradient. Both interpolation methods generated identical approximated values with a percentage error below the threshold of 1%, although both performed poorly at the endpoints and at points of inflection. Increasing the number of interpolation knots reduced these errors; however, it caused overfitting in other regions. Although no significant speed-up was observed for univariate interpolation, the cubic spline suffered a drastic slowdown in higher dimensions, by factors of up to 10³ in 3D and 10⁵ in 4D interpolations.
The same results applied to classical molecular dynamics simulations with FEARCF, with a speed-up of up to 10³ when B-spline interpolation was implemented. In conclusion, the B-spline interpolation method can enhance the efficiency of free energy calculations where cubic spline interpolation is currently the method in use.
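The accuracy comparison described above can be sketched with off-the-shelf routines: SciPy's `CubicSpline` and its B-spline constructor `make_interp_spline` interpolate the same data and can be checked against an analytic reference. This is an illustrative setup only; the reference function (Runge's function), knot count, and evaluation grid are arbitrary choices, not the dissertation's actual test cases.

```python
import numpy as np
from scipy.interpolate import CubicSpline, make_interp_spline

# Analytic reference: Runge's function, a classic stress test for interpolation.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)

# Interpolation knots and a dense evaluation grid (interior points only).
knots = np.linspace(-1.0, 1.0, 21)
xs = np.linspace(-0.9, 0.9, 500)

cub = CubicSpline(knots, f(knots))              # piecewise cubic spline
bsp = make_interp_spline(knots, f(knots), k=3)  # cubic B-spline representation

# Percentage error of each method, taking the analytic function as reference.
err_cub = 100.0 * np.max(np.abs(cub(xs) - f(xs)) / np.abs(f(xs)))
err_bsp = 100.0 * np.max(np.abs(bsp(xs) - f(xs)) / np.abs(f(xs)))

print(f"cubic spline max % error: {err_cub:.4f}")
print(f"B-spline     max % error: {err_bsp:.4f}")
# Both objects represent the same not-a-knot cubic interpolant,
# so the two error figures agree to machine precision.
```

Both representations also expose derivatives (`cub.derivative()`, `bsp.derivative()`), which is the operation the abstract highlights as the costly step in higher dimensions.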
Isogeometric treatment of large deformation contact and debonding problems with T-splines: a review
Within a setting where isogeometric analysis (IGA) has succeeded in bringing two different research fields together, i.e. Computer Aided Design (CAD) and numerical analysis, T-spline IGA is applied in this work to frictionless contact and mode-I debonding problems between deformable bodies in the context of large deformations. Based on the concept of IGA, smooth basis functions are adopted to describe surface geometries and approximate the numerical solutions, leading to higher accuracy in the contact integral evaluation. The isogeometric discretizations are incorporated into an existing finite element framework by means of Bézier extraction, i.e. a linear operator which maps the Bernstein polynomial basis on Bézier elements to the global isogeometric basis. A recently released commercial T-spline plugin for Rhino is used to build the analysis models adopted in this study. In this context, the continuum is discretized with cubic T-splines, as well as with Non-Uniform Rational B-Splines (NURBS) and Lagrange polynomial elements for comparison purposes, and a Gauss-point-to-surface (GPTS) formulation is combined with the penalty method to treat the contact constraints. The purely geometric enforcement of the non-penetration condition in compression is generalized to encompass both contact and mode-I debonding of interfaces, which is approached by means of cohesive zone (CZ) modeling, as commonly done by the scientific community to analyse the progressive damage of materials and interfaces. Based on these models, non-linear relationships between tractions and relative displacements are assumed. These relationships dictate both the work of separation per unit fracture surface and the peak stress that has to be reached for crack formation. In the generalized GPTS formulation, an automatic switching procedure is used to choose between the cohesive and contact models, depending on the contact status.
Numerical results are first presented and compared in 2D for varying resolutions of the contact and/or cohesive zone, including frictionless sliding and cohesive debonding, all demonstrating the competitive accuracy and performance of T-spline IGA. The superior accuracy of T-spline interpolations with respect to NURBS and Lagrange interpolations for a given number of degrees of freedom (DOFs) is verified throughout. The isogeometric formulation is also extended to 3D bodies, where examples in large deformations based on T-spline discretizations show a high smoothness of the reaction history curves.
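Bézier extraction, as described in the abstract above, hinges on the Bernstein polynomial basis defined on each Bézier element. As a minimal illustration (not the paper's code; the function name and grid are ad hoc), the cubic Bernstein basis can be evaluated directly and checked for the non-negativity and partition-of-unity properties that make the extraction map to the global isogeometric basis well behaved:

```python
import numpy as np
from math import comb

def bernstein_basis(p, xi):
    """Evaluate the degree-p Bernstein polynomials B_{i,p}(xi) on [0, 1]."""
    xi = np.asarray(xi, dtype=float)
    return np.array([comb(p, i) * xi**i * (1.0 - xi)**(p - i)
                     for i in range(p + 1)])

xi = np.linspace(0.0, 1.0, 101)
B = bernstein_basis(3, xi)  # cubic basis, matching the cubic T-splines above

# Non-negative and summing to one at every parametric point.
print(np.all(B >= 0.0), np.allclose(B.sum(axis=0), 1.0))
```

The extraction operator itself is element-specific and comes from the T-spline knot structure, so it is not reproduced here.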
Fast space-variant elliptical filtering using box splines
The efficient realization of linear space-variant (non-convolution) filters is a challenging computational problem in image processing. In this paper, we demonstrate that it is possible to filter an image with a Gaussian-like elliptic window of varying size, elongation and orientation using a fixed number of computations per pixel. The associated algorithm, which is based on a family of smooth compactly supported piecewise polynomials, the radially-uniform box splines, is realized using pre-integration and local finite differences. The radially-uniform box splines are constructed through the repeated convolution of a fixed number of box distributions, which have been suitably scaled and distributed radially in a uniform fashion. The attractive features of these box splines are their asymptotic behavior, their simple covariance structure, and their quasi-separability. They converge to Gaussians as their order increases, and are used to approximate anisotropic Gaussians of varying covariance simply by controlling the scales of the constituent box distributions. Based on the second feature, we develop a technique for continuously controlling the size, elongation and orientation of these Gaussian-like functions. Finally, the quasi-separable structure, along with a certain scaling property of box distributions, is used to efficiently realize the associated space-variant elliptical filtering, which requires O(1) computations per pixel irrespective of the shape and size of the filter.
Comment: 12 figures; IEEE Transactions on Image Processing, vol. 19, 201
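The construction above rests on the fact that repeated convolutions of box distributions converge to a Gaussian (a central-limit effect). A minimal 1D sketch of that convergence, with an arbitrary illustrative kernel width and order (not the paper's 2D radially-uniform construction):

```python
import numpy as np

def iterated_box(order, width=51):
    """n-fold discrete convolution of a box (uniform) kernel with itself."""
    box = np.ones(width) / width      # discrete box distribution
    k = box.copy()
    for _ in range(order - 1):
        k = np.convolve(k, box)       # each convolution raises the spline order
    return k / k.sum()

k4 = iterated_box(4)                  # order-4 (cubic) B-spline kernel
x = np.arange(len(k4)) - (len(k4) - 1) / 2.0

# Matching Gaussian: the variance of an n-fold convolution is n * var(box),
# and var of a discrete uniform of width w is (w^2 - 1) / 12.
var = 4 * (51**2 - 1) / 12.0
g = np.exp(-x**2 / (2.0 * var))
g /= g.sum()

print(np.max(np.abs(k4 - g)))  # already a close match at order 4
```

Even at order 4 the kernel is visually indistinguishable from the matching Gaussian, which is why a small fixed number of box factors suffices in the filtering scheme.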
The rational SPDE approach for Gaussian random fields with general smoothness
A popular approach for modeling and inference in spatial statistics is to
represent Gaussian random fields as solutions to stochastic partial
differential equations (SPDEs) of the form , where
is Gaussian white noise, is a second-order differential
operator, and is a parameter that determines the smoothness of .
However, this approach has been limited to the case ,
which excludes several important models and makes it necessary to keep
fixed during inference.
We propose a new method, the rational SPDE approach, which in spatial
dimension is applicable for any , and thus remedies
the mentioned limitation. The presented scheme combines a finite element
discretization with a rational approximation of the function to
approximate . For the resulting approximation, an explicit rate of
convergence to in mean-square sense is derived. Furthermore, we show that
our method has the same computational benefits as in the restricted case
. Several numerical experiments and a statistical
application are used to illustrate the accuracy of the method, and to show that
it facilitates likelihood-based inference for all model parameters including
.Comment: 28 pages, 4 figure
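The key ingredient of the approach is a rational approximation of x^(-β). As a rough illustration only (the paper uses a more careful construction; the degrees, exponent, and sample interval below are arbitrary choices), a rational fit p(x)/q(x) ≈ x^(-β) can be obtained by linearized least squares with NumPy:

```python
import numpy as np

beta = 0.7        # a non-integer smoothness exponent, so 2*beta is not an integer
m, n = 3, 3       # numerator / denominator degrees (illustrative)
x = np.linspace(0.05, 1.0, 400)
target = x**(-beta)

# Linearize p(x)/q(x) ~ target: solve p(x) - target * (q(x) - 1) = target
# in the least-squares sense, with the normalization q(0-th coeff) := 1.
Vp = np.vander(x, m + 1, increasing=True)          # columns 1, x, ..., x^m
Vq = np.vander(x, n + 1, increasing=True)[:, 1:]   # columns x, ..., x^n
A = np.hstack([Vp, -target[:, None] * Vq])
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
p = coef[:m + 1]
q = np.concatenate([[1.0], coef[m + 1:]])

approx = np.polyval(p[::-1], x) / np.polyval(q[::-1], x)
rel_err = np.max(np.abs(approx - target) / target)
print(f"max relative error of the [{m}/{n}] rational fit: {rel_err:.2e}")
```

Low-degree rational functions approximate fractional powers far better than polynomials of comparable degree do, which is what keeps the resulting discretization sparse and cheap.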
Unifying compactly supported and Matérn covariance functions in spatial statistics
The Matérn family of covariance functions has played a central role in spatial statistics for decades, being a flexible parametric class with one parameter determining the smoothness of the paths of the underlying spatial field. This paper proposes a family of spatial covariance functions which stems from a reparameterization of the generalized Wendland family. As in the Matérn case, the proposed family allows for a continuous parameterization of the smoothness of the underlying Gaussian random field, while additionally being compactly supported. More importantly, we show that the proposed covariance family generalizes the Matérn model, which is attained as a special limit case. This implies that the (reparameterized) generalized Wendland model is more flexible than the Matérn model, with an extra parameter that allows for switching from compactly to globally supported covariance functions. Our numerical experiments elucidate the speed of convergence of the proposed model to the Matérn model. We also inspect the asymptotic distribution of the maximum likelihood estimator of the parameters of the proposed covariance models under both increasing- and fixed-domain asymptotics. The effectiveness of our proposal is illustrated by analyzing a georeferenced dataset of mean temperatures over a region of France, and by re-analyzing a large spatial point-referenced dataset of yearly total precipitation anomalies. (C) 2022 Published by Elsevier Inc.
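To make the contrast between global and compact support concrete, the Matérn covariance and a classic member of the Wendland family can be evaluated side by side. This is an illustrative sketch: parameterizations vary across the literature, and the paper's reparameterized generalized Wendland family is more general than the fixed-smoothness member used here.

```python
import numpy as np
from scipy.special import gamma, kv

def matern(h, sigma2=1.0, phi=1.0, nu=1.5):
    """Matern covariance: globally supported; nu controls path smoothness."""
    h = np.atleast_1d(np.asarray(h, dtype=float))
    c = np.full_like(h, sigma2)   # C(0) = sigma2
    nz = h > 0
    u = h[nz] / phi
    c[nz] = sigma2 * 2.0**(1.0 - nu) / gamma(nu) * u**nu * kv(nu, u)
    return c

def wendland(h, b=1.0):
    """Classic Wendland phi_{3,1} covariance: compactly supported on [0, b]."""
    r = np.abs(np.atleast_1d(np.asarray(h, dtype=float))) / b
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

lags = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
print(matern(lags))    # strictly positive at every lag
print(wendland(lags))  # exactly zero at lags >= b
```

The compact support is what yields sparse covariance matrices for large datasets such as the precipitation anomalies mentioned above, while the Matérn model always produces dense ones.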