
    Efficient Explicit Time Stepping of High Order Discontinuous Galerkin Schemes for Waves

    This work presents algorithms for the efficient implementation of discontinuous Galerkin methods with explicit time stepping for acoustic wave propagation on unstructured meshes of quadrilaterals or hexahedra. A crucial step towards efficiency is to evaluate operators in a matrix-free way with sum-factorization kernels. The method allows for general curved geometries and variable coefficients. Temporal discretization is carried out by low-storage explicit Runge-Kutta schemes and the arbitrary derivative (ADER) method. For ADER, we propose a flexible basis change approach that combines cheap face integrals with cell evaluation using collocated nodes and quadrature points. Additionally, a degree reduction for the optimized cell evaluation is presented to decrease the computational cost when evaluating higher order spatial derivatives as required in ADER time stepping. We analyze and compare the performance of state-of-the-art Runge-Kutta schemes and ADER time stepping with the proposed optimizations. ADER involves fewer operations and additionally reaches higher throughput due to higher arithmetic intensity, and hence decreases the required computational time significantly. Comparison of Runge-Kutta and ADER at their respective CFL stability limits renders ADER especially beneficial for higher orders, where the Butcher barrier implies a disproportionate number of stages. Moreover, vector updates in explicit Runge-Kutta schemes are shown to take a substantial amount of the computational time due to their memory intensity.
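    For orientation, here is a minimal sketch of the sum-factorization idea mentioned above, written in Python with NumPy; the basis-to-quadrature matrix B, the degree p, and the quadrature size q are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def apply_2d_sum_factorized(B, u):
        # Equivalent to kron(B, B) @ u.ravel() on one quadrilateral cell, but done
        # as two small 1D contractions instead of one large matrix-vector product:
        # roughly O((p+1)^2*q + (p+1)*q^2) work rather than O((p+1)^2 * q^2).
        tmp = B @ u          # contract the first tensor direction: (q, p+1)
        return tmp @ B.T     # contract the second tensor direction: (q, q)

    rng = np.random.default_rng(0)
    p, q = 4, 6                              # polynomial degree, 1D quadrature points
    B = rng.random((q, p + 1))               # 1D basis-to-quadrature matrix
    u = rng.random((p + 1, p + 1))           # nodal coefficients on one cell
    full = (np.kron(B, B) @ u.ravel()).reshape(q, q)
    assert np.allclose(apply_2d_sum_factorized(B, u), full)

    The same factorization applies direction by direction in 3D on hexahedra, which is what makes the matrix-free evaluation cheap at high polynomial degree.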

    An implicit algorithm for validated enclosures of the solutions to variational equations for ODEs

    We propose a new algorithm for computing validated bounds for the solutions to the first-order variational equations associated to ODEs. These validated solutions are a core ingredient of computer-assisted proofs in the dynamical systems literature. The method uses a high-order Taylor method as a predictor step and an implicit method based on the Hermite-Obreshkov interpolation as a corrector step. The proposed algorithm is an improvement of the C^1-Lohner algorithm proposed by Zgliczyński, and it provides sharper bounds. As an application of the algorithm, we give a computer-assisted proof of the existence of an attractor set in the Rössler system, and we show that the attractor contains an invariant and uniformly hyperbolic subset on which the dynamics is chaotic, that is, conjugate to a subshift of finite type with positive topological entropy.
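    As a minimal, non-rigorous sketch of what the first-order variational equations look like for the Rössler system, the snippet below integrates the flow together with its derivative using a standard floating-point solver (SciPy); the paper's algorithm instead computes validated interval enclosures of these solutions, and the parameter values and time span here are illustrative assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b, c = 0.2, 0.2, 5.7                  # classical Rossler parameters (assumed)

    def rhs(t, state):
        x, y, z = state[:3]
        V = state[3:].reshape(3, 3)          # V(t) = d phi_t(x0) / d x0
        f = [-y - z, x + a * y, b + z * (x - c)]
        Df = np.array([[0.0, -1.0, -1.0],    # Jacobian of the vector field
                       [1.0,   a,   0.0],
                       [  z, 0.0, x - c]])
        return np.concatenate([f, (Df @ V).ravel()])  # variational eq.: dV/dt = Df(phi_t) V

    x0 = np.array([0.0, -6.78, 0.02])        # illustrative initial condition
    state0 = np.concatenate([x0, np.eye(3).ravel()])
    sol = solve_ivp(rhs, (0.0, 6.0), state0, rtol=1e-10, atol=1e-12)
    monodromy = sol.y[3:, -1].reshape(3, 3)  # derivative of the flow at the final time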

    Symmetric tensor decomposition

    We present an algorithm for decomposing a symmetric tensor of dimension n and order d as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables of total degree d as a sum of powers of linear forms (Waring's problem), incidence properties on secant varieties of the Veronese variety, and the representation of linear forms as a linear combination of evaluations at distinct points. Then we reformulate Sylvester's approach from the dual point of view. Exploiting this duality, we propose necessary and sufficient conditions for the existence of such a decomposition of a given rank, using the properties of Hankel (and quasi-Hankel) matrices derived from multivariate polynomials and normal form computations. This leads to the resolution of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with these Hankel matrices. The impact of this contribution is twofold. First, it permits an efficient computation of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproven global convergence (e.g. Alternating Least Squares or gradient descent). Second, it gives tools for understanding uniqueness conditions and for detecting the rank.
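    A minimal sketch of the Sylvester/Hankel idea in the binary case (n = 2), written in Python with NumPy: it recovers the nodes of a Waring decomposition from the kernel of a Hankel matrix and the weights from a Vandermonde solve. It covers only the generic odd-degree case, and the function names and example form are illustrative assumptions rather than the paper's algorithm.

    import numpy as np

    def binary_waring(a):
        # a[k] are the "moment" coefficients of f = sum_j w_j (x + t_j y)^d,
        # i.e. a[k] = sum_j w_j t_j^k.
        d = len(a) - 1
        r = (d + 1) // 2                     # generic rank for odd degree d
        # Hankel (catalecticant) matrix H[i, j] = a[i + j], size (d-r+1) x (r+1)
        H = np.array([[a[i + j] for j in range(r + 1)] for i in range(d - r + 1)], float)
        # A kernel vector of H gives the coefficients of q(t) = sum_j q_j t^j,
        # whose roots are the nodes t_j of the decomposition.
        _, _, Vt = np.linalg.svd(H)
        q = Vt[-1]
        nodes = np.roots(q[::-1])            # np.roots expects highest degree first
        # Weights from the Vandermonde system a_k = sum_j w_j t_j^k, k = 0..r-1
        V = np.vander(nodes, r, increasing=True).T
        weights = np.linalg.lstsq(V, np.array(a[:r], float), rcond=None)[0]
        return nodes, weights

    # Example: f = (x + 2y)^3 + 2 (x - y)^3  ->  a_k = 2^k + 2 (-1)^k
    nodes, weights = binary_waring([3, 0, 6, 6])   # nodes ~ {2, -1}, weights ~ {1, 2}

    The paper's contribution generalizes this kernel computation to n variables via quasi-Hankel matrices and normal forms; the binary case above is only meant to show where the Hankel structure comes from.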