94 research outputs found

    Schnelle Löser für Partielle Differentialgleichungen (Fast Solvers for Partial Differential Equations)

    Get PDF
    This workshop was well attended, with 52 participants from 11 countries on 3 continents providing broad geographic representation. It brought together a good blend of researchers with diverse backgrounds.

    A note on parallel preconditioning for the all-at-once solution of Riesz fractional diffusion equations

    Full text link
    The p-step backward difference formula (BDF) for solving a system of ODEs leads to a class of all-at-once linear systems, which are solved via parallel-in-time preconditioned Krylov subspace solvers (see McDonald, Pestana, and Wathen [SIAM J. Sci. Comput., 40(2) (2018): A1012-A1033] and Lin and Ng [arXiv:2002.01108, 17 pages]). However, these studies ignore the fact that the p-step BDF (p ≥ 2) is not self-starting when it is used to solve time-dependent PDEs. In this note we focus on the 2-step BDF, which is often superior to the trapezoidal rule for solving Riesz fractional diffusion equations; its resulting all-at-once discretized system is a block triangular Toeplitz system with a low-rank perturbation. We first give an estimate of the condition number of the all-at-once system and then adapt the earlier work to construct two block circulant (BC) preconditioners. Both the invertibility of these two BC preconditioners and the eigenvalue distributions of the preconditioned matrices are discussed in detail. An efficient implementation of the BC preconditioners is also presented, especially for handling the computation of the dense structured Jacobi matrices. Finally, numerical experiments involving both one- and two-dimensional Riesz fractional diffusion equations are reported to support our theoretical findings. Comment: 18 pages, 2 figures, 6 tables. Tech. Rep.: Institute of Mathematics, Southwestern University of Finance and Economics. Revised-1: refine/shorten the text and correct some typos; Revised-2: correct some references.
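    To make the block structure concrete, the display below sketches the all-at-once system produced by the 2-step BDF for a generic semi-discretized problem u'(t) = A u(t) + f(t) with time step tau, using one backward Euler step to start. This is a standard textbook form shown for orientation, not the paper's exact discretization of the Riesz fractional operator.

\[
\begin{bmatrix}
I - \tau A      &                        &                        &        \\
-2I             & \tfrac{3}{2}I - \tau A &                        &        \\
\tfrac{1}{2}I   & -2I                    & \tfrac{3}{2}I - \tau A &        \\
                & \ddots                 & \ddots                 & \ddots
\end{bmatrix}
\begin{bmatrix} u^{1} \\ u^{2} \\ u^{3} \\ \vdots \end{bmatrix}
=
\begin{bmatrix} u^{0} + \tau f^{1} \\ \tau f^{2} - \tfrac{1}{2}u^{0} \\ \tau f^{3} \\ \vdots \end{bmatrix}
\]

    The coefficient matrix is block lower triangular Toeplitz except for its first block row (the start-up step), which is the low-rank perturbation mentioned above; a block circulant preconditioner replaces the Toeplitz part by its circulant counterpart, which the FFT block-diagonalizes in the time direction.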

    A fluctuating boundary integral method for Brownian suspensions

    Full text link
    We present a fluctuating boundary integral method (FBIM) for overdamped Brownian Dynamics (BD) of two-dimensional periodic suspensions of rigid particles of complex shape immersed in a Stokes fluid. We develop a novel approach for generating the Brownian displacements that arise in response to thermal fluctuations in the fluid. Our approach relies on a first-kind boundary integral formulation of a mobility problem in which a random surface velocity is prescribed on the particle surface, with zero mean and covariance proportional to the Green's function for Stokes flow (the Stokeslet). This approach yields an algorithm that scales linearly in the number of particles for both deterministic and stochastic dynamics, handles particles of complex shape, achieves high order of accuracy, and can be generalized to three dimensions and other boundary conditions. We show that the Brownian displacements generated by our method obey the discrete fluctuation-dissipation balance (DFDB) relation. Based on a recently developed Positively Split Ewald method [A. M. Fiore, F. Balboa Usabiaga, A. Donev and J. W. Swan, J. Chem. Phys., 146, 124116, 2017], near-field contributions to the Brownian displacements are efficiently approximated by iterative methods in real space, while far-field contributions are rapidly generated by fast Fourier-space methods based on fluctuating hydrodynamics. FBIM provides the key ingredient for time integration of the overdamped Langevin equations for Brownian suspensions of rigid particles. We demonstrate that FBIM obeys DFDB by performing equilibrium BD simulations of suspensions of starfish-shaped bodies using a random finite difference temporal integrator. Comment: Submitted to J. Comput. Phys.
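    The property the abstract emphasizes — Brownian increments with zero mean and covariance proportional to the mobility, so that discrete fluctuation-dissipation balance holds — can be checked with a dense, small-scale sketch. The sampler below (NumPy, Cholesky-based, with made-up test data) only illustrates that covariance requirement; it is not the FBIM algorithm, which avoids forming the mobility matrix by combining boundary-integral solves with Ewald-type splitting.

import numpy as np

def brownian_displacements(M, kT, dt, rng):
    """Sample dq with mean 0 and covariance 2*kT*dt*M (dense illustration only)."""
    L = np.linalg.cholesky(M)                # M = L L^T, M symmetric positive definite
    W = rng.standard_normal(M.shape[0])      # independent standard normal increments
    return np.sqrt(2.0 * kT * dt) * (L @ W)  # covariance = 2*kT*dt * L L^T = 2*kT*dt*M

# Check the sample covariance against 2*kT*dt*M on a made-up SPD "mobility" matrix.
rng = np.random.default_rng(0)
n, kT, dt = 4, 1.0, 1e-3
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)
samples = np.array([brownian_displacements(M, kT, dt, rng) for _ in range(100000)])
print(np.abs(np.cov(samples, rowvar=False) - 2 * kT * dt * M).max())  # small deviation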

    Dense and sparse parallel linear algebra algorithms on graphics processing units

    Full text link
    One line of development followed in the field of supercomputing is the use of special-purpose processors to speed up certain types of computation. In this thesis we study the use of graphics processing units as computational accelerators and apply them to the field of linear algebra. In particular, we work with the SLEPc library to solve large-scale eigenvalue problems and to apply matrix functions in scientific applications. SLEPc is a parallel library based on the MPI standard and is developed with the premise of being scalable, i.e., of allowing larger problems to be solved by increasing the number of processing units.

We address the linear eigenvalue problem, Ax = lambda x in its standard form, using iterative techniques, in particular Krylov methods, with which we compute a small portion of the eigenvalue spectrum. This type of algorithm generates a subspace of reduced size m onto which the problem of large dimension n is projected, with m << n. Once the problem has been projected, it is solved by direct methods, which provide approximations to the eigenvalues of the initial problem. The operations used in the expansion of the subspace depend on whether the desired eigenvalues lie in the exterior or in the interior of the spectrum. When searching for exterior eigenvalues, the expansion is carried out by matrix-vector multiplications. We perform this operation on the GPU, either by using libraries or by implementing functions that exploit the structure of the matrix. For eigenvalues in the interior of the spectrum, the expansion requires solving linear systems of equations. In this thesis we implement several GPU algorithms for solving linear systems of equations for the specific case of matrices with block-tridiagonal structure.

In the computation of matrix functions we distinguish between the direct application of a function to a matrix, f(A), and the action of a matrix function on a vector, f(A)b. The first case involves a dense computation that limits the size of the problem. The second allows us to work with large sparse matrices, and to solve it we again use Krylov methods. The expansion of the subspace is done by matrix-vector multiplications, and we use GPUs in the same way as when solving eigenvalue problems. In this case the projected problem starts at size m but grows by m at each restart of the method. The projected problem is solved by applying a matrix function directly. We have implemented several algorithms to compute the matrix square root and the matrix exponential, in which the use of GPUs speeds up the computation.

Lamas Daviña, A. (2018). Dense and sparse parallel linear algebra algorithms on graphics processing units [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/112425
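    As a concrete illustration of the f(A)b strategy summarized above — expand a small Krylov subspace by matrix-vector products and apply the matrix function to the projected problem — here is a minimal serial NumPy/SciPy sketch of the Arnoldi approximation f(A)b ≈ ||b|| V_m f(H_m) e_1. It is a schematic of the mathematical idea only; the thesis implementations live in SLEPc and offload the expensive kernels to GPUs.

import numpy as np
from scipy.linalg import expm

def arnoldi_fAb(A, b, m, f=expm):
    """Approximate f(A) b by projecting onto an m-dimensional Krylov subspace."""
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: exact subspace found
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]                          # small projected (Hessenberg) matrix
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (f(Hm) @ e1)   # f(A) b  ~=  ||b|| V_m f(H_m) e_1

# Usage: compare with a dense reference on a small, well-behaved test matrix.
rng = np.random.default_rng(1)
A = -np.diag(np.arange(1.0, 101.0)) + 0.01 * rng.standard_normal((100, 100))
b = rng.standard_normal(100)
print(np.linalg.norm(arnoldi_fAb(A, b, 30) - expm(A) @ b))   # small approximation error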

    Preconditioned fast solvers for large linear systems with specific sparse and/or Toeplitz-like structures and applications

    Get PDF
    In this thesis, the design of the preconditioners we propose starts from applications instead of treating the problem in a completely general way. The reason is that not all types of linear systems can be addressed with the same tools. In this sense, the techniques for designing efficient iterative solvers depend mostly on properties inherited from the continuous problem that originated the discretized sequence of matrices. Classical examples are locality and isotropy in the PDE context, whose discrete counterparts are sparsity and constancy along the diagonals, respectively. Therefore, it is often important to take into account the properties of the originating continuous model in order to obtain better performance and to provide an accurate convergence analysis. We consider linear systems that arise in the solution of both linear and nonlinear partial differential equations of both integer and fractional type; for the latter, an introduction to both the theory and the numerical treatment is given. All the algorithms and strategies presented in this thesis are developed with their parallel implementation in mind. In particular, we consider the processor/co-processor framework, in which the main part of the computation is performed on a Graphics Processing Unit (GPU) accelerator. In Part I we introduce our proposal for sparse approximate inverse preconditioners for the solution of time-dependent Partial Differential Equations (PDEs), Chapter 3, and of Fractional Differential Equations (FDEs) containing both classical and fractional terms, Chapter 5. More precisely, we propose a new technique for updating preconditioners for sequences of linear systems arising from PDEs and FDEs, which can also be used to compute functions of large matrices via quadrature formulas in Chapter 4 and for the optimal control of FDEs in Chapter 6. Finally, in Part II, we consider structured preconditioners for quasi-Toeplitz systems. The focus is on the numerical treatment of discretized convection-diffusion equations in Chapter 7 and on the solution of FDEs with linear multistep formulas in boundary value form in Chapter 8.
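    A standard building block behind the Toeplitz-like part of this work is a circulant preconditioner applied through the FFT. The sketch below constructs Strang's circulant approximation of a symmetric Toeplitz matrix and applies its inverse in O(n log n); it is a generic textbook construction offered for orientation (the matrix in the usage check is made up), not code from the thesis.

import numpy as np
from scipy.linalg import circulant

def strang_first_column(t):
    """First column of Strang's circulant approximation to the symmetric
    Toeplitz matrix whose first column is t (t[0] is the diagonal entry)."""
    n = len(t)
    c = np.zeros(n)
    half = n // 2
    c[:half + 1] = t[:half + 1]          # keep the central band of the Toeplitz matrix
    c[half + 1:] = t[1:n - half][::-1]   # wrap it around to preserve the circulant structure
    return c

def circulant_solve(c, r):
    """Solve C z = r with C = circulant(c) in O(n log n) using the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(r) / np.fft.fft(c)))

# Usage: tridiag(-1, 2.1, -1); the 0.1 shift keeps the circulant nonsingular
# (the pure 1D Laplacian would give a zero Fourier eigenvalue).
n = 64
t = np.zeros(n); t[0], t[1] = 2.1, -1.0
c = strang_first_column(t)
r = np.random.default_rng(2).standard_normal(n)
z = circulant_solve(c, r)
print(np.allclose(circulant(c) @ z, r))   # True: FFT solve matches the dense circulant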
