141 research outputs found

    Simulation of Laser Propagation in a Plasma with a Frequency Wave Equation

    The aim of this work is to perform numerical simulations of the propagation of a laser in a plasma. At each time step, one has to solve a Helmholtz equation in a domain which consists of several hundred million cells. To solve this huge linear system, one uses an iterative Krylov method preconditioned by a separable matrix; the corresponding preconditioning system is solved with a block cyclic reduction method. Some remarks on the parallel implementation are also given. Lastly, numerical results are presented, including some features concerning the scalability of the numerical method on a parallel architecture.
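    For illustration, the sketch below sets up a small two-dimensional stand-in for this kind of solve: a Helmholtz operator with a spatially varying plasma density is handled by GMRES, preconditioned with a separable operator in which the density is replaced by its average. The grid size, wavenumber, absorption term, and density profile are assumptions made for the example, and a sparse LU factorization stands in for the block cyclic reduction solve used in the paper.

```python
# Toy sketch (assumed setup, not the paper's solver): GMRES for a Helmholtz
# problem, preconditioned by a separable Helmholtz operator.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 64                       # grid points per direction (toy size)
h = 1.0 / (n + 1)
k = 8.0                      # wavenumber (assumed)
absorb = 2.0                 # assumed absorption, keeps the operator nonsingular

# 2D negative Laplacian built from 1D second differences
D = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
I = sp.identity(n)
lap = sp.kron(D, I) + sp.kron(I, D)

# spatially varying electron density (assumed linear ramp), flattened to the grid
x = np.linspace(h, 1 - h, n)
ne = 0.3 * np.repeat(x, n)
A = lap - k**2 * sp.diags(1.0 - ne) - 1j * k * absorb * sp.identity(n * n)

# separable preconditioner: same operator with the averaged density;
# the sparse LU below stands in for the block cyclic reduction solve
M = lap - k**2 * (1.0 - ne.mean()) * sp.identity(n * n) \
    - 1j * k * absorb * sp.identity(n * n)
M_solve = spla.factorized(M.tocsc())
M_op = spla.LinearOperator(A.shape, matvec=M_solve, dtype=complex)

b = np.ones(n * n, dtype=complex)            # toy source term
x_sol, info = spla.gmres(A.tocsc(), b, M=M_op, restart=50, maxiter=200)
print("converged" if info == 0 else f"gmres info = {info}",
      "| relative residual =", np.linalg.norm(b - A @ x_sol) / np.linalg.norm(b))
```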

    Some fast elliptic solvers on parallel architectures and their complexities

    The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. Here, the mapping of BCR onto distributed-memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, whose parallel computational complexity is lower than that of parallel BCR.
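    As a concrete illustration of the reduction idea, in its simplest scalar and sequential form rather than the parallel block variant analyzed in the paper, the sketch below applies cyclic reduction to a tridiagonal system of size 2**k - 1: every other unknown is eliminated at each level, and the eliminated unknowns are recovered by back-substitution on the way up.

```python
# Scalar cyclic reduction sketch (assumed setup, not the paper's parallel BCR).
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b, super-diagonal c
    and right-hand side d by cyclic reduction.  The size must be n = 2**k - 1 and
    the unused boundary entries a[0], c[-1] must be zero."""
    n = len(b)
    if n == 1:
        return d / b
    odd = np.arange(1, n, 2)                 # unknowns kept in the reduced system
    alpha = -a[odd] / b[odd - 1]
    beta = -c[odd] / b[odd + 1]
    # combine each kept equation with its two neighbours to eliminate them
    ra = alpha * a[odd - 1]
    rb = b[odd] + alpha * c[odd - 1] + beta * a[odd + 1]
    rc = beta * c[odd + 1]
    rd = d[odd] + alpha * d[odd - 1] + beta * d[odd + 1]
    x = np.empty(n)
    x[odd] = cyclic_reduction(ra, rb, rc, rd)
    # back-substitute the eliminated (even-indexed) unknowns
    x[0] = (d[0] - c[0] * x[1]) / b[0]
    x[n - 1] = (d[n - 1] - a[n - 1] * x[n - 2]) / b[n - 1]
    inner = np.arange(2, n - 1, 2)
    x[inner] = (d[inner] - a[inner] * x[inner - 1] - c[inner] * x[inner + 1]) / b[inner]
    return x

# quick check on a small Poisson-like system against a dense solve
n = 2**5 - 1
a = np.full(n, -1.0); a[0] = 0.0
b = np.full(n, 2.0)
c = np.full(n, -1.0); c[-1] = 0.0
d = np.random.default_rng(0).standard_normal(n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
print(np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d)))
```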

    Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure

    The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When standard discretization techniques are used, the size of the linear system grows exponentially with the number of dimensions, making the use of classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches, such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases where the application of the linear operator is expensive. Comment: 24 pages, 8 figures.
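    The sketch below illustrates, under assumed data, the truncated preconditioned Richardson idea that the paper uses as a baseline, in the simplest two-dimensional (matrix) case: each Richardson step for a Laplace-plus-potential operator is preconditioned by an exact solve with the Laplace part and then truncated back to a fixed rank by an SVD. The operator, the rank, the step size, and the right-hand side are illustrative assumptions; the Riemannian and Tucker/TT machinery of the paper is not reproduced here.

```python
# Truncated preconditioned Richardson sketch in the matrix (2D) case
# (assumed setup, not the paper's code).
import numpy as np

n, rank = 100, 12
h = 1.0 / (n + 1)
# 1D discrete Laplacian (Dirichlet BC); c is an assumed constant potential term
L1 = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h**2
c = 50.0

def op(X):
    """Operator in matrix form: L(X) = L1 X + X L1^T + c X."""
    return L1 @ X + X @ L1.T + c * X

rng = np.random.default_rng(1)
F = np.outer(rng.standard_normal(n), rng.standard_normal(n))   # rank-1 right-hand side

# preconditioner: exact solve with the Laplace part via an eigendecomposition of L1
lam, Q = np.linalg.eigh(L1)
denom = lam[:, None] + lam[None, :]
def precond(R):
    return Q @ ((Q.T @ R @ Q) / denom) @ Q.T

def truncate(X, r):
    """Best rank-r approximation via a truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# step size chosen from the spectrum of the preconditioned operator
omega = 2.0 / (2.0 + c / (2 * lam[0]) + c / (2 * lam[-1]))

X = np.zeros((n, n))
for _ in range(30):
    R = F - op(X)
    X = truncate(X + omega * precond(R), rank)   # Richardson step, then rank truncation
print("relative residual:", np.linalg.norm(F - op(X)) / np.linalg.norm(F))
```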