7 research outputs found

    Increased space-parallelism via time-simultaneous Newton-multigrid methods for nonstationary nonlinear PDE problems

    We discuss how ‘parallel-in-space & simultaneous-in-time’ Newton-multigrid approaches can be designed that improve the scaling behavior of the spatial parallelism by reducing latency costs. The idea is to solve many time steps at once, thereby solving fewer but larger systems. These large systems are reordered and interpreted as a space-only problem, leading to a multigrid algorithm with semi-coarsening in space and line smoothing in the time direction. The smoother is further improved by embedding it as a preconditioner in a Krylov subspace method. As a prototypical application, we concentrate on scalar partial differential equations (PDEs) with up to many thousands of time steps, discretized in time by finite differences and in space by finite elements. For linear PDEs, the resulting method is closely related to multigrid waveform relaxation and its theoretical framework. In our parabolic test problems the numerical behavior of this multigrid approach is robust with respect to the spatial and temporal grid size and the number of simultaneously treated time steps. Moreover, we illustrate how corresponding time-simultaneous fixed-point and Newton-type solvers can be derived for nonlinear nonstationary problems, which require the described solution of linearized problems in each outer nonlinear step. As the main result, we are able to generate much larger problem sizes to be treated by a large number of cores, so that the combination of the robustly scaling multigrid solvers with a larger degree of parallelism allows a faster solution procedure for nonstationary problems.
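    The following is a minimal sketch of the "many time steps at once" idea described above, assuming a 1D heat equation discretized with finite differences and implicit Euler; the variable names, the toy problem, and the direct sparse solve are illustrative stand-ins for the paper's space-only multigrid with semi-coarsening in space and line smoothing in time.

```python
# Hedged sketch (not the paper's code): assemble an "all-at-once" system for nt implicit
# Euler steps of the 1D heat equation u_t = u_xx and solve every time step simultaneously.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nx, nt = 64, 32                                   # spatial unknowns, simultaneous time steps
dx, dt = 1.0 / (nx + 1), 1e-3
# Second-order finite-difference Laplacian with homogeneous Dirichlet boundary conditions.
L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx, nx)) / dx**2
A_step = sp.identity(nx) - dt * L                 # one implicit Euler step: A_step u^n = u^(n-1)
# Coupling of consecutive steps sits on the block sub-diagonal of the all-at-once matrix.
A_all = sp.kron(sp.identity(nt), A_step) - sp.kron(sp.eye(nt, nt, k=-1), sp.identity(nx))

x = np.linspace(dx, 1.0 - dx, nx)
rhs = np.zeros(nx * nt)
rhs[:nx] = np.sin(np.pi * x)                      # the initial condition enters the first block row
u_all = spla.spsolve(A_all.tocsc(), rhs)          # all nt time steps computed at once
U = u_all.reshape(nt, nx)                         # row n-1 holds the solution after n steps
```

    Regrouping the same unknowns by spatial node, so that each node carries its whole time line as one vector-valued unknown, roughly corresponds to the reordering that lets the system be treated as a space-only problem on which semi-coarsening in space and line smoothing in time operate.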

    Waveform relaxation methods for stochastic differential equations

    An operator equation X = Π X + G in a Banach space 퓔 of 퓕_t-adapted random elements, describing an initial- or boundary-value problem for a system of stochastic differential equations (SDEs), is considered. Our basic assumption is that the underlying system consists of weakly coupled subsystems. The proof of convergence of the corresponding waveform relaxation methods depends on the property that the spectral radius of an associated matrix is less than one. The entries of this matrix depend on the Lipschitz constants of a decomposition of Π. In proving an existence result for the operator equation we show how the entries of the matrix depend on the right-hand side of the stochastic differential equations. We derive conditions for convergence under "classical" vector-valued Lipschitz continuity of an appropriate splitting of the system of stochastic ODEs. A generalization of these key results for one-sided Lipschitz continuous and anticoercive drift coefficients of the SDEs is also presented. Finally, we consider a system of SDEs with different time scales (singularly perturbed SDEs) as an illustrative example.
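    As a generic illustration of the structure described above (the precise function spaces, splitting of Π, and constants are as in the paper; only the standard Jacobi-type waveform-relaxation pattern over d weakly coupled subsystems is shown):

```latex
\[
  X_i^{(k+1)} \;=\; \Pi_i\bigl(X_1^{(k)},\dots,X_d^{(k)}\bigr) + G_i,
  \qquad i = 1,\dots,d ,
\]
\[
  \bigl\|\Pi_i(X)-\Pi_i(Y)\bigr\| \;\le\; \sum_{j=1}^{d} k_{ij}\,\bigl\|X_j - Y_j\bigr\|
  \quad\Longrightarrow\quad
  \text{the iteration converges whenever } \rho(K) < 1,\ \ K = (k_{ij}).
\]
```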

    Solution of the wave equation using space-time multigrid methods

    Advisor: Prof. Dr. Marcio Augusto Villela Pinto. Co-advisor: Prof. Dr. Sebastião Romero Franco. Doctoral thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Métodos Numéricos em Engenharia. Defense: Curitiba, 10/04/2023. Includes references: p. 109-117. Abstract: This thesis presents the evaluation of different solution approaches for problems modeled by the wave equation, in the 1D and 2D cases. The spatial discretization uses the Finite Difference Method, weighted by a parameter n at different time levels, in order to obtain an implicit solution scheme. On this basis, different sweeps in time are proposed in order to generate robust and efficient methods, ranging from classical Time-Stepping to a sweep that treats space and time simultaneously, Waveform Relaxation. In this work, the Subdomains method is combined with the Waveform Relaxation strategy to reduce the strong oscillations that occur at the beginning of the iterative process. Excellent results are obtained when applying the Multigrid method to this class of problems, since the convergence factors calculated from the approximate solutions of the system of equations resulting from the discretizations are greatly improved. To verify the proposed methodologies and their characteristics, simulations of wave propagation involving one- and two-dimensional problems are presented, in which the discretization errors, effective and apparent orders, convergence factor, complexity orders, and computational time are analyzed.
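    As a point of reference for the Time-Stepping sweep mentioned above, here is a minimal sketch of classical explicit central-difference time-stepping for the 1D wave equation; the thesis itself uses an implicit, parameter-weighted scheme together with Waveform Relaxation and Multigrid, so everything below is only an illustrative baseline with made-up names and values.

```python
# Hedged sketch: explicit central-difference time-stepping for u_tt = c^2 u_xx on [0, 1]
# with homogeneous Dirichlet boundary conditions and zero initial velocity.
import numpy as np

c, nx, nt = 1.0, 201, 400
dx = 1.0 / (nx - 1)
dt = 0.5 * dx / c                      # CFL number 0.5 keeps the explicit scheme stable
x = np.linspace(0.0, 1.0, nx)
u_prev = np.sin(np.pi * x)             # u(x, 0)
u_curr = u_prev.copy()                 # zero initial velocity: u(x, dt) approximated by u(x, 0)

r2 = (c * dt / dx) ** 2
for _ in range(nt):
    u_next = np.zeros_like(u_curr)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_prev, u_curr = u_curr, u_next    # boundary values stay zero
```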

    Nonlinear Preconditioning Methods for Optimization and Parallel-In-Time Methods for 1D Scalar Hyperbolic Partial Differential Equations

    This thesis consists of two main parts: part one addresses problems from nonlinear optimization, and part two addresses the solution of systems of time-dependent differential equations, with both parts describing strategies for accelerating the convergence of iterative methods. In part one we present a nonlinear preconditioning framework for use with nonlinear solvers applied to nonlinear optimization problems, motivated by a generalization of linear left preconditioning and of linear preconditioning via a change of variables for minimizing quadratic objective functions. In the optimization context, nonlinear preconditioning is used to generate a preconditioner direction that either replaces or supplements the gradient vector throughout the optimization algorithm. This framework is used to discuss previously developed nonlinearly preconditioned nonlinear GMRES and nonlinear conjugate gradient (NCG) algorithms, as well as to develop two new nonlinearly preconditioned quasi-Newton methods based on the limited-memory Broyden and limited-memory BFGS (L-BFGS) updates. We show how all of the above methods can be implemented in a manifold optimization context, with a particular emphasis on Grassmann matrix manifolds. These methods are compared by solving the optimization problems defining the canonical polyadic (CP) decomposition and the Tucker higher-order singular value decomposition (HOSVD) for tensors, which are formulated as minimizing the approximation error in the Frobenius norm. Both of these decompositions have alternating least squares (ALS) type fixed-point iterations derived from their optimization problem definitions. While these ALS-type iterations may be slow to converge in practice, they can serve as efficient nonlinear preconditioners for the other optimization methods. As the Tucker HOSVD problem involves orthonormality constraints and lacks unique minimizers, the optimization algorithms are extended from Euclidean space to the manifold setting, where optimization on Grassmann manifolds resolves both of the issues present in the HOSVD problem. The nonlinearly preconditioned methods are compared to the ALS-type preconditioners and to non-preconditioned NCG, L-BFGS, and a trust-region algorithm, using both synthetic and real-life tensor data with varying noise levels, the real data arising from applications in computer vision and handwritten digit recognition. Numerical results show that the nonlinearly preconditioned methods offer substantial improvements in time-to-solution and robustness over state-of-the-art methods for large tensors, in cases where there are significant amounts of noise in the data, and when high-accuracy results are required.
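    A minimal sketch of the preconditioner-direction idea from part one, assuming a toy quadratic objective and one Jacobi sweep standing in for an ALS-type fixed-point iteration; the actual CP/Tucker objectives and the preconditioned NCG/L-BFGS variants are as described above and are not reproduced here.

```python
# Hedged sketch: a fixed-point map P supplies the direction d = P(x) - x, which replaces
# the negative gradient inside a simple descent loop with backtracking line search.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

def f(x):                 # quadratic objective 0.5 x^T A x - b^T x (toy stand-in)
    return 0.5 * x @ A @ x - b @ x

def grad(x):
    return A @ x - b

def P(x):                 # one Jacobi sweep on grad(x) = 0, playing the role of an ALS step
    return x + (b - A @ x) / np.diag(A)

x = np.zeros(2)
for _ in range(50):
    d = P(x) - x          # nonlinearly preconditioned direction, used instead of -grad(x)
    step = 1.0
    while f(x + step * d) > f(x) + 1e-4 * step * (grad(x) @ d):  # Armijo backtracking
        step *= 0.5
    x = x + step * d
```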
In part two we apply a multigrid reduction-in-time (MGRIT) algorithm to scalar one-dimensional hyperbolic partial differential equations. This study is motivated by the observation that sequential time-stepping is an obvious computational bottleneck when attempting to implement highly concurrent algorithms, which makes parallel-in-time methods particularly desirable. Existing parallel-in-time methods have produced significant speedups for parabolic or sufficiently diffusive problems, but can have stability and convergence issues for hyperbolic or advection-dominated problems. Being a multigrid method, MGRIT primarily uses temporal coarsening, but spatial coarsening can also be incorporated to produce cheaper multigrid cycles and to ensure that stability conditions are satisfied on all levels for explicit time-stepping methods. We compare convergence results for the linear advection and diffusion equations, which illustrate the increased difficulty associated with solving hyperbolic problems via parallel-in-time methods. A particular issue that we address is that uniform factor-two spatial coarsening may negatively affect the MGRIT convergence rate, resulting in extremely slow convergence when the wave speed is near zero, even if only locally. This is due to a kind of anisotropy in the nodal connections: small wave speeds make the spatial connections weaker than the temporal connections. Through semi-algebraic mode analysis applied to the combined advection-diffusion equation we illustrate how the norm of the iteration matrix, and hence an upper bound on the convergence rate, varies with the choice of wave speed, diffusion coefficient, space-time grid spacing, and the inclusion or exclusion of spatial coarsening. The use of waveform relaxation multigrid on intermediate, temporally semi-coarsened grids is identified as a potential remedy for the issues introduced by spatial coarsening, with the downside of creating a more intrusive algorithm that cannot easily be combined with existing time-stepping routines for different problems. As a second, less intrusive alternative, we present an adaptive spatial coarsening strategy that prevents the slowdown observed for small local wave speeds and is applicable to the variable-coefficient linear advection equation and the inviscid Burgers equation using first-order explicit or implicit time-stepping methods. Serial numerical results show that this method offers significant improvements over uniform coarsening and converges for the inviscid Burgers equation with and without shocks. Parallel scaling tests indicate that improvements over serial time-stepping strategies are possible when spatial parallelism alone saturates, and that scalability is robust for oscillatory solutions that change on the scale of the grid spacing.
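    A minimal sketch of one two-level MGRIT cycle, assuming the scalar model ODE u' = lam*u with backward-Euler fine and coarse propagators and temporal coarsening factor m; the thesis applies this structure to 1D advection and Burgers equations with (adaptive) spatial coarsening, which is not shown here, and all names and values below are illustrative.

```python
# Hedged sketch: FCF-relaxation plus a sequential coarse-grid correction at the C-points.
import numpy as np

lam, T, nt, m = -1.0, 2.0, 64, 4           # m: temporal coarsening factor
dt = T / nt
phi = 1.0 / (1.0 - lam * dt)               # fine backward-Euler propagator: u_{n} = phi * u_{n-1}
phi_c = 1.0 / (1.0 - lam * m * dt)         # coarse propagator, re-discretized over m fine steps
nc = nt // m                                # number of coarse intervals

u = np.zeros(nt + 1)
u[0] = 1.0                                  # exact value only at t = 0; the rest is the initial guess

def f_relax(u):                             # F-relaxation: propagate inside each coarse interval
    for k in range(nc):
        for j in range(1, m):
            u[k * m + j] = phi * u[k * m + j - 1]

def c_relax(u):                             # C-relaxation: update C-points from their left neighbor
    for k in range(1, nc + 1):
        u[k * m] = phi * u[k * m - 1]

for it in range(5):                         # a few two-level cycles
    f_relax(u); c_relax(u); f_relax(u)      # FCF-relaxation
    # Residual of the C-point equations u_{km} = phi^m * u_{(k-1)m} after F-relaxation.
    r = np.array([phi**m * u[(k - 1) * m] - u[k * m] for k in range(1, nc + 1)])
    e = np.zeros(nc + 1)
    for k in range(1, nc + 1):              # sequential coarse solve with the cheap propagator
        e[k] = phi_c * e[k - 1] + r[k - 1]
    for k in range(1, nc + 1):              # correct the C-points
        u[k * m] += e[k]
f_relax(u)                                  # final F-sweep makes all points consistent
```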