
    Simulation of Harmonic Oscillators on the Lattice

    [EN] This work deals with the simulation of a two-dimensional ideal lattice having simple tetragonal geometry. The harmonic character of the oscillators gives rise to a system of second-order linear differential equations, which can be recast into matrix form. The explicit solutions governing the dynamics of this system can be expressed in terms of matrix trigonometric functions. For the derivation, we employ the Lagrangian formalism to determine the correct solutions, namely those which extremize the underlying action of the system. For the numerical evaluation, we develop diverse state-of-the-art algorithms which efficiently tackle equations involving matrix sine and cosine functions. For this purpose, we introduce two special series related to trigonometric functions; suitably combined, they provide approximate solutions of the system. For the final computation, an algorithm based on Taylor expansion with forward and backward error analysis for computing these series had to be devised. We also implement several MATLAB programs which simulate and visualize the two-dimensional lattice and check its energy conservation.

    This work has been supported by the Spanish Ministerio de Economía y Competitividad, the European Regional Development Fund (ERDF) under grant TIN2017-89314-P, and the Programa de Apoyo a la Investigación y Desarrollo 2018 (PAID-06-18) of the Universitat Politècnica de València under grant SP20180016.

    Tung, M. M.; Ibáñez González, J. J.; Defez Candel, E.; Sastre, J. (2020). Simulation of Harmonic Oscillators on the Lattice. Mathematical Methods in the Applied Sciences, 43(14), 8237-8252. https://doi.org/10.1002/mma.6510
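The abstract's solution formula can be illustrated with a small sketch: for q'' = -A q with symmetric positive-definite A, the exact solution is q(t) = cos(tΩ)q₀ + Ω⁻¹sin(tΩ)v₀ with Ω = √A. The toy 4-mass chain below is a hypothetical stand-in for the paper's 2-D tetragonal lattice, and the matrix trigonometric functions are evaluated via eigendecomposition rather than the Taylor-series algorithms the paper develops:

```python
import numpy as np

# Hypothetical stiffness matrix for a small chain of coupled harmonic
# oscillators (the paper treats a 2-D simple tetragonal lattice instead).
A = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])

w2, V = np.linalg.eigh(A)          # A = V diag(w^2) V^T, eigenfrequencies w
w = np.sqrt(w2)

def matfun(f):
    """Evaluate f(sqrt(A)) through the eigendecomposition of A."""
    return V @ np.diag(f(w)) @ V.T

q0 = np.array([1., 0., 0., 0.])    # initial displacements
v0 = np.array([0., 0.5, 0., 0.])   # initial velocities

def state(t):
    """q(t) = cos(t*Omega) q0 + Omega^{-1} sin(t*Omega) v0, Omega = sqrt(A)."""
    q = matfun(lambda s: np.cos(t*s)) @ q0 + matfun(lambda s: np.sin(t*s)/s) @ v0
    v = -matfun(lambda s: s*np.sin(t*s)) @ q0 + matfun(lambda s: np.cos(t*s)) @ v0
    return q, v

def energy(q, v):
    return 0.5 * v @ v + 0.5 * q @ A @ q

# The exact matrix-trigonometric solution conserves the total energy.
E0 = energy(q0, v0)
for t in (0.5, 1.0, 5.0):
    assert abs(energy(*state(t)) - E0) < 1e-12
```

The same energy-conservation check is the one the paper's MATLAB programs perform on the lattice simulation.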

    Implicitization of curves and (hyper)surfaces using predicted support

    We reduce implicitization of rational planar parametric curves and (hyper)surfaces to linear algebra, by interpolating the coefficients of the implicit equation. For predicting the implicit support, we focus on methods that exploit input and output structure in the sense of sparse (or toric) elimination theory, namely by computing the Newton polytope of the implicit polynomial via sparse resultant theory. Our algorithm works even in the presence of base points, but in this case the implicit equation is obtained as a factor of the produced polynomial. We implement our methods in Maple, and some in Matlab as well, and study their numerical stability and efficiency on several classes of curves and surfaces. We apply our approach to approximate implicitization and quantify the accuracy of the approximate output, which turns out to be satisfactory on all tested examples; we also relate our measures to Hausdorff distance. In building a square or rectangular matrix, an important issue is (over)sampling the given curve or surface: we conclude that unitary complexes offer the best tradeoff between speed and accuracy when numerical methods are employed, namely SVD, whereas for exact kernel computation random integers are the method of choice. We compare our prototype to existing software and find that it is rather competitive.
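The interpolation idea can be sketched on the unit circle, a standard example not taken from the paper. Assuming the implicit support is known (here, all monomials of total degree ≤ 2), one samples the parametrization, builds one column per support monomial, and reads the implicit coefficients off the numerical kernel via SVD. Real sample values are used here for simplicity, rather than the unitary complexes the abstract recommends:

```python
import numpy as np

# Rational parametrization of the unit circle.
t = np.linspace(-3.0, 3.0, 12)              # oversample: 12 points, 6 unknowns
x = (1 - t**2) / (1 + t**2)
y = 2 * t / (1 + t**2)

# Interpolation matrix: one column per monomial in the predicted support
# {1, x, y, x^2, x*y, y^2}.
monomials = [lambda x, y: np.ones_like(x),
             lambda x, y: x, lambda x, y: y,
             lambda x, y: x**2, lambda x, y: x*y, lambda x, y: y**2]
M = np.column_stack([m(x, y) for m in monomials])

# The implicit coefficients span the (numerical) kernel of M: take the
# right singular vector belonging to the smallest singular value.
_, S, Vt = np.linalg.svd(M)
c = Vt[-1]
c /= c[np.argmax(np.abs(c))]                # normalize the largest coefficient

# Up to scaling, this recovers 1 - x^2 - y^2 = 0, i.e. the unit circle.
assert np.allclose(c, [1, 0, 0, -1, 0, -1], atol=1e-8)
```

With exact-on-the-curve samples the smallest singular value is at machine-precision level; for approximate implicitization it instead measures how well the predicted support can fit the data.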

    Spectral tensor-train decomposition

    The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT decomposition and analyze its properties. We obtain results on the convergence of the decomposition, revealing links between the regularity of the function, the dimension of the input space, and the TT ranks. We also show that the regularity of the target function is preserved by the univariate functions (i.e., the "cores") comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting spectral tensor-train decomposition combines the favorable dimension-scaling of the TT decomposition with the spectral convergence rate of polynomial approximations, yielding efficient and accurate surrogates for high-dimensional functions. To construct these decompositions, we use the sampling algorithm TT-DMRG-cross to obtain the TT decomposition of tensors resulting from suitable discretizations of the target function. We assess the performance of the method on a range of numerical examples: a modified set of Genz functions with dimension up to 100, and functions with mixed Fourier modes or with local features. We observe significant improvements in performance over an anisotropic adaptive Smolyak approach. The method is also used to approximate the solution of an elliptic PDE with random input data. The open-source software and examples presented in this work are available online. Comment: 33 pages, 19 figures
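The discrete TT format that the spectral scheme builds on can be sketched with the classical TT-SVD algorithm (sequential truncated SVDs of unfoldings). This is not the TT-DMRG-cross sampling algorithm the paper uses, which avoids ever forming the full tensor; the example below decomposes a small low-rank tensor explicitly:

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    """Decompose a full tensor into tensor-train cores by sequential
    truncated SVDs (the classical TT-SVD algorithm)."""
    shape = T.shape
    cores, r_prev = [], 1
    C = T.reshape(r_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(S > eps * S[0])))      # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.squeeze(axis=(0, -1))

# A structurally low-rank tensor: f(i,j,k) = sin(i) + cos(j) * k on a grid.
i, j, k = np.meshgrid(np.arange(5), np.arange(6), np.arange(7), indexing="ij")
T = np.sin(i) + np.cos(j) * k
cores = tt_svd(T)

assert np.allclose(tt_to_full(cores), T)             # exact reconstruction
ranks = [G.shape[2] for G in cores[:-1]]             # small TT ranks: [2, 2]
```

The spectral variant replaces the discrete cores by univariate polynomial approximations, which is what transfers the regularity of the target function into spectral convergence of the surrogate.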