17 research outputs found

    Nodal bases for the serendipity family of finite elements

    Using the notion of multivariate lower set interpolation, we construct nodal basis functions for the serendipity family of finite elements, of any order and any dimension. For the purpose of computation, we also show how to express these functions as linear combinations of tensor-product polynomials. (Pre-print of the version that will appear in Foundations of Computational Mathematics.)
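
    As a purely illustrative companion to this abstract, the sketch below builds nodal basis functions on a small 2D lower set of exponents by solving a generalized Vandermonde system, expressing each basis function as a linear combination of tensor-product monomials. The node placement, the exponent set, and all names are assumptions made for the demo, not the paper's construction.

```python
# Illustrative sketch (not the paper's construction): nodal basis functions on a
# small 2D node set, written as linear combinations of tensor-product monomials
# x^a * y^b with exponents (a, b) taken from a lower set.
import numpy as np

# Lower set of exponents: closed under componentwise decrease.
exponents = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (0, 2)]

# One interpolation node per exponent (unisolvence of this choice is assumed).
nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.0), (0.0, 0.5)]

def monomial(a, b):
    return lambda x, y: x**a * y**b

basis = [monomial(a, b) for a, b in exponents]

# Generalized Vandermonde matrix: V[i, j] = p_j(node_i).
V = np.array([[p(x, y) for p in basis] for (x, y) in nodes])

# Column i of C holds the coefficients of phi_i = sum_j C[j, i] * p_j,
# which satisfies the nodal property phi_i(node_k) = delta_ik.
C = np.linalg.solve(V, np.eye(len(nodes)))

def phi(i, x, y):
    return sum(C[j, i] * basis[j](x, y) for j in range(len(basis)))

# Check the Kronecker-delta property: should print the identity matrix.
print(np.round([[phi(i, *node) for node in nodes] for i in range(len(nodes))], 10))
```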

    Application of Lagrange Interpolation Method to Solve First-Order Differential Equation Using Newton Interpolation Approach

    One of the important problems in mathematics is finding analytic and numerical solutions of differential equations by various methods and techniques. Most researchers have tackled ordinary differential equations with numerical approaches such as the Runge-Kutta method, Euler's method, and Taylor's polynomial method, which can run into difficulties, for instance when the solution develops singularities or fails to exist. In this work, we consider Newton's interpolation polynomial method (NIPM) and Lagrange's interpolation polynomial method (LIPM), and combine the two to solve first-order differential equations. The results obtained have minimal approximation error, and the approach is illustrated with a worked example.
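
    The following is a minimal sketch of Lagrange interpolation applied to discrete solution values of a first-order ODE. The sample values come from a plain Euler step and the example equation y' = y is chosen for illustration only; neither is taken from the paper.

```python
# Minimal sketch: interpolate discrete solution values of y' = f(x, y) with a
# Lagrange polynomial to get a continuous approximation between grid points.
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Example ODE: y' = y, y(0) = 1, with exact solution exp(x).
f = lambda x, y: y
xs = np.linspace(0.0, 1.0, 6)
ys = [1.0]
for k in range(len(xs) - 1):
    h = xs[k + 1] - xs[k]
    ys.append(ys[-1] + h * f(xs[k], ys[-1]))   # Euler samples of the solution

# Compare the interpolated value at an off-grid point with the exact solution.
x_mid = 0.55
print(lagrange_eval(xs, ys, x_mid), np.exp(x_mid))
```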

    Multi-point evaluation in higher dimensions

    In this paper, we propose efficient new algorithms for multi-dimensional multi-point evaluation and interpolation on certain subsets of so-called tensor product grids. These point sets naturally occur in the design of efficient multiplication algorithms for finite-dimensional C-algebras of the form A = C[x1,...,xn]/I, where I is finitely generated by monomials of the form x1^i1 ... xn^in; one particularly important example is the algebra of truncated power series C[x1,...,xn]/(x1,...,xn)^d. Similarly to what is known for multi-point evaluation and interpolation in the univariate case, our algorithms have quasi-linear time complexity. As a consequence, we obtain fast multiplication algorithms for algebras A of the above form.
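
    To make the algebra in the abstract concrete, the sketch below multiplies two truncated multivariate power series in C[x1,...,xn]/(x1,...,xn)^d by the naive quadratic method. It only illustrates the truncation rule; it is not the paper's quasi-linear algorithm, and the example series are arbitrary.

```python
# Naive multiplication in the truncated power series algebra
# C[x1, ..., xn]/(x1, ..., xn)^d: keep only monomials of total degree < d.
# This illustrates the algebra only; it is quadratic in the number of terms,
# not the quasi-linear algorithm of the paper.
from collections import defaultdict

def truncated_product(f, g, d):
    """f, g: dicts mapping exponent tuples to coefficients; product mod (x1,...,xn)^d."""
    h = defaultdict(complex)
    for ea, ca in f.items():
        for eb, cb in g.items():
            e = tuple(a + b for a, b in zip(ea, eb))
            if sum(e) < d:
                h[e] += ca * cb
    return dict(h)

# Example in two variables with d = 3: (1 + x + y) * (1 + x*y).
f = {(0, 0): 1, (1, 0): 1, (0, 1): 1}
g = {(0, 0): 1, (1, 1): 1}
print(truncated_product(f, g, d=3))   # x^2*y and x*y^2 are truncated away
```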

    Implicitization of curves and (hyper)surfaces using predicted support

    We reduce the implicitization of rational planar parametric curves and (hyper)surfaces to linear algebra by interpolating the coefficients of the implicit equation. For predicting the implicit support, we focus on methods that exploit input and output structure in the sense of sparse (or toric) elimination theory, namely by computing the Newton polytope of the implicit polynomial via sparse resultant theory. Our algorithm works even in the presence of base points, but in this case the implicit equation is obtained as a factor of the produced polynomial. We implement our methods in Maple, and some in Matlab as well, and study their numerical stability and efficiency on several classes of curves and surfaces. We apply our approach to approximate implicitization and quantify the accuracy of the approximate output, which turns out to be satisfactory on all tested examples; we also relate our measures to the Hausdorff distance. In building a square or rectangular matrix, an important issue is (over)sampling the given curve or surface: we conclude that unitary complexes offer the best tradeoff between speed and accuracy when numerical methods (namely SVD) are employed, whereas for exact kernel computation random integers are the method of choice. We compare our prototype to existing software and find that it is rather competitive.
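
    The sketch below illustrates the linear-algebra reduction described in the abstract on a toy case: sample a rational parametrization of the unit circle, evaluate a guessed monomial support at the samples, and recover the implicit coefficients from the numerical kernel via SVD. The curve, the support of total degree at most 2, and the real sampling are assumptions made for the demo (the paper also discusses sampling at unitary complexes and exact kernel computation).

```python
# Toy implicitization by interpolating the coefficients of the implicit equation:
# build a matrix of monomial evaluations at points sampled on the curve and take
# its numerical kernel with an SVD.
import numpy as np

# Rational parametrization of the unit circle: x = (1-t^2)/(1+t^2), y = 2t/(1+t^2).
def param(t):
    return (1 - t**2) / (1 + t**2), 2 * t / (1 + t**2)

# Guessed support of the implicit polynomial: all monomials of total degree <= 2.
support = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

# Oversample the curve and evaluate every support monomial at each sample point.
ts = np.linspace(-5.0, 5.0, 40)
samples = [param(t) for t in ts]
M = np.array([[x**a * y**b for (a, b) in support] for (x, y) in samples])

# The coefficient vector of the implicit equation spans the kernel of M; take the
# right singular vector associated with the smallest singular value.
_, sigma, Vt = np.linalg.svd(M)
coeffs = Vt[-1]

# Expect, up to scale, x^2 + y^2 - 1 = 0: weight on (0, 0), (2, 0), (0, 2) only.
for (a, b), c in zip(support, coeffs):
    print(f"x^{a} y^{b}: {c / coeffs[-1]: .4f}")
```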