542 research outputs found

    Using GPU to Accelerate Linear Computations in Power System Applications

    Get PDF
    With the development of advanced power system controls, the industrial and research community is becoming more interested in simulating larger interconnected power grids. It is critical to incorporate advanced computing technologies to accelerate these power system computations. Power flow, one of the most fundamental computations in power system analysis, converts the solution of a non-linear system into the solution of a sequence of linear systems via the Newton method or one of its variants. An efficient solution of these linear equations is the key to improving the performance of power flow computation, and hence to accelerating other power system applications built on it, such as optimal power flow and contingency analysis. This dissertation explores iterative linear solvers and applicable preconditioners, with graphics processing unit (GPU) implementations, to accelerate the linear computations within power flow. An iterative conjugate gradient solver with a Chebyshev preconditioner is studied first, and the preconditioner is then extended to a two-step preconditioner. Finally, the conjugate gradient solver and the two-step preconditioner are integrated with MATPOWER to solve the practical fast decoupled power flow (FDPF), and an inexact linear solution method is proposed to further reduce the runtime of FDPF. Performance improvements are reported for these methods and their GPU implementations. The final complete GPU-based FDPF with inexact linear solving achieves nearly a 3x performance improvement over the MATPOWER implementation for a test system with 11,624 buses. A supporting study on quickly estimating the largest eigenvalue of the linear system, which is required by the Chebyshev preconditioner, is presented as well. This dissertation demonstrates the potential of using GPUs with scalable methods in power flow computation.
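
    As a rough illustration of the ingredients named above, the sketch below (NumPy/SciPy, CPU only) runs conjugate gradient preconditioned by a fixed number of Chebyshev iteration steps, with the largest eigenvalue estimated by power iteration. The test matrix, step count, and spectral bounds are illustrative assumptions, not values from the dissertation, and none of the GPU or MATPOWER integration is shown.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

def estimate_lambda_max(A, iters=30, seed=0):
    """Rough largest-eigenvalue estimate by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 1.0
    for _ in range(iters):
        w = A @ v
        lam = np.linalg.norm(w)
        v = w / lam
    return lam

def chebyshev_precond(A, r, lam_min, lam_max, steps=8):
    """Apply `steps` Chebyshev iterations to A z = r, starting from z = 0.

    With a fixed step count this is a fixed polynomial in A, so it can be
    used as an SPD preconditioner inside plain CG."""
    theta = 0.5 * (lam_max + lam_min)
    delta = 0.5 * (lam_max - lam_min)
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    z = np.zeros_like(r)
    res = r.copy()                       # residual of A z = r at z = 0
    d = res / theta
    for _ in range(steps):
        z = z + d
        res = res - A @ d
        rho_next = 1.0 / (2.0 * sigma1 - rho)
        d = rho_next * rho * d + (2.0 * rho_next / delta) * res
        rho = rho_next
    return z

n = 2000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # SPD test matrix
b = np.ones(n)

lam_max = 1.05 * estimate_lambda_max(A)   # slight over-estimate for safety
lam_min = lam_max / 1.0e4                 # assumed lower spectral bound
M = LinearOperator(A.shape, matvec=lambda r: chebyshev_precond(A, r, lam_min, lam_max))

x, info = cg(A, b, M=M, maxiter=500)
print("info:", info, "residual:", np.linalg.norm(b - A @ x))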

    A factored sparse approximate inverse preconditioned conjugate gradient solver on graphics processing units

    Get PDF
    Graphics Processing Units (GPUs) exhibit significantly higher peak performance than conventional CPUs. In general, however, only highly parallel algorithms can exploit their potential. In this scenario, the iterative solution of sparse linear systems of equations can be carried out quite efficiently on a GPU, as it requires only matrix-by-vector products, dot products, and vector updates. To be truly effective, however, any iterative solver needs to be properly preconditioned, and this represents a major bottleneck for a successful GPU implementation. Due to its inherent parallelism, the factored sparse approximate inverse (FSAI) preconditioner is an ideal candidate for the conjugate gradient-like solution of sparse linear systems. However, its GPU implementation requires a nontrivial recasting of multiple computational steps. We present our GPU version of the FSAI preconditioner along with a set of results showing that a noticeable speedup over a highly tuned CPU counterpart is obtained.
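
    To make the FSAI idea concrete, the CPU-only sketch below builds a factored sparse approximate inverse G ≈ inv(L) (for A = L L^T) on the sparsity pattern of the lower triangle of A and uses M^{-1} = G^T G inside conjugate gradients. The pattern choice and the 2D Laplacian test matrix are assumptions for illustration; the paper's GPU recasting of these steps is not reproduced.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

def fsai_lower(A):
    """FSAI factor G (lower triangular) on the pattern of tril(A)."""
    n = A.shape[0]
    lower = sp.tril(A, format="csr")
    rows, cols, vals = [], [], []
    for i in range(n):
        # pattern of row i: columns j <= i where A_ij is nonzero (diagonal included)
        P = np.sort(lower.indices[lower.indptr[i]:lower.indptr[i + 1]])
        Aloc = A[P, :][:, P].toarray()        # small dense SPD block
        e = np.zeros(len(P))
        e[-1] = 1.0                           # unit vector for the diagonal position
        y = np.linalg.solve(Aloc, e)
        y /= np.sqrt(y[-1])                   # scaling so that diag(G A G^T) = I
        rows.extend([i] * len(P)); cols.extend(P); vals.extend(y)
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

# small SPD test problem: 2D Laplacian on a 30 x 30 grid
m = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
A = (sp.kron(sp.identity(m), T) + sp.kron(T, sp.identity(m))).tocsr()
b = np.ones(A.shape[0])

G = fsai_lower(A)
M = LinearOperator(A.shape, matvec=lambda r: G.T @ (G @ r))   # M^{-1} = G^T G

x, info = cg(A, b, M=M, maxiter=300)
print("info:", info, "residual:", np.linalg.norm(b - A @ x))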

    GPU Accelerated Explicit Time Integration Methods for Electro-Quasistatic Fields

    Full text link
    Electro-quasistatic field problems involving nonlinear materials are commonly discretized in space using finite elements. In this paper, it is proposed to solve the resulting system of ordinary differential equations with an explicit Runge-Kutta-Chebyshev time-integration scheme. This avoids the Newton-Raphson iterations that are required by fully implicit time-integration schemes. However, the electro-quasistatic system of ordinary differential equations has a Laplace-type mass matrix, so parts of the explicit time-integration scheme remain implicit. An iterative solver with a constant preconditioner is shown to solve the resulting multiple right-hand side problem efficiently. This approach allows an efficient parallel implementation on a system featuring multiple graphics processing units. Comment: 4 pages, 5 figures.
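
    The key computational pattern described above is repeated solves with one constant matrix (the Laplace-type mass matrix) and many right-hand sides, so a single preconditioner can be set up once and reused, with the previous solution as a warm start. The CPU-only sketch below illustrates that pattern with a toy SPD matrix, a Jacobi preconditioner, and a stand-in loop over right-hand sides; it does not reproduce the paper's finite element discretization, RKC coefficients, or multi-GPU implementation.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

# toy "Laplace-type" SPD matrix standing in for the mass matrix
m = 40
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
M_mat = (sp.kron(sp.identity(m), T) + sp.kron(T, sp.identity(m))).tocsr()
n = M_mat.shape[0]

# constant (Jacobi) preconditioner, built once and reused for every solve
inv_diag = 1.0 / M_mat.diagonal()
precond = LinearOperator(M_mat.shape, matvec=lambda r: inv_diag * r)

rng = np.random.default_rng(0)
base = rng.standard_normal(n)
drift = rng.standard_normal(n)

x = np.zeros(n)                  # warm start carried across solves
total_iters = 0
n_solves = 20
for step in range(n_solves):     # stands in for the stage/time loop
    b = base + 0.05 * step * drift          # slowly varying right-hand side
    iters = [0]
    def count(_xk, c=iters): c[0] += 1
    x, info = cg(M_mat, b, x0=x, M=precond, maxiter=1000, callback=count)
    assert info == 0
    total_iters += iters[0]
print("average PCG iterations per right-hand side:", total_iters / n_solves)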

    Mixed Precision Iterative Refinement with Adaptive Precision Sparse Approximate Inverse Preconditioning

    Full text link
    Hardware trends have motivated the development of mixed precision algorithms in numerical linear algebra, which aim to decrease runtime while maintaining acceptable accuracy. One recent advance is an adaptive precision sparse matrix-vector product routine, which may be used to accelerate the solution of sparse linear systems by iterative methods. This approach is also applicable to inexact preconditioners, such as the sparse approximate inverse preconditioners used in Krylov subspace methods. In this work, we develop an adaptive precision sparse approximate inverse preconditioner and demonstrate its use within a five-precision GMRES-based iterative refinement method. We call this algorithm variant BSPAI-GMRES-IR. We then analyze the conditions for the convergence of BSPAI-GMRES-IR and determine settings under which it produces backward and forward errors similar to those of the existing SPAI-GMRES-IR method, which does not use adaptive precision in preconditioning. Our numerical experiments show that this approach can reduce the cost of storing and applying sparse approximate inverse preconditioners, although a significant reduction in cost may come at the expense of an increased number of GMRES iterations required for convergence.
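
    The sketch below (NumPy/SciPy, CPU only) shows the basic mixed precision iterative refinement pattern this work builds on: the expensive inner solve, here an LU factorization used in place of preconditioned GMRES, runs in float32, while residuals and the solution update are accumulated in float64. The five-precision GMRES-IR scheme and the adaptive-precision sparse approximate inverse (BSPAI) from the paper are not reproduced; this only illustrates the refinement loop.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
x_true = rng.standard_normal(n)
b = A @ x_true

lu32 = lu_factor(A.astype(np.float32))            # low-precision factorization, reused every step

x = np.zeros(n)                                   # working-precision (float64) solution
for it in range(10):
    r = b - A @ x                                 # residual in float64
    d = lu_solve(lu32, r.astype(np.float32))      # correction from the float32 inner solver
    x = x + d.astype(np.float64)
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"step {it}: relative forward error {err:.2e}")
    if err < 1e-14:
        break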

    Distributed Kalman filtering compared to Fourier domain preconditioned conjugate gradient for laser guide star tomography on extremely large telescopes

    Get PDF
    This paper discusses the performance and cost of two computationally efficient Fourier-based tomographic wavefront reconstruction algorithms for wide-field laser guide star (LGS) adaptive optics (AO). The first algorithm is the iterative Fourier domain preconditioned conjugate gradient (FDPCG) algorithm developed by Yang et al. [Appl. Opt. 45, 5281 (2006)], combined with pseudo-open-loop control (POLC). FDPCG’s computational cost is proportional to N log(N), where N denotes the dimensionality of the tomography problem. The second algorithm is the distributed Kalman filter (DKF) developed by Massioni et al. [J. Opt. Soc. Am. A 28, 2298 (2011)], which is a noniterative spatially invariant controller. When implemented in the Fourier domain, DKF’s cost is also proportional to N log(N). Both algorithms are capable of estimating spatial frequency components of the residual phase beyond the wavefront sensor (WFS) cutoff frequency thanks to regularization, thereby reducing WFS spatial aliasing at the expense of more computations. We present performance and cost analyses for the LGS multiconjugate AO system under design for the Thirty Meter Telescope, as well as DKF’s sensitivity to uncertainties in wind profile prior information. We found that, provided the wind profile is known to better than 10% wind speed accuracy and 20 deg wind direction accuracy, DKF, despite its spatial invariance assumptions, delivers a significantly reduced wavefront error compared to the static FDPCG minimum variance estimator combined with POLC. Due to its nonsequential nature and high degree of parallelism, DKF is particularly well suited for real-time implementation on inexpensive off-the-shelf graphics processing units.
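
    As an illustration of the FDPCG structure compared above, the CPU-only sketch below runs preconditioned conjugate gradient on a periodic grid, with the preconditioner applied in the Fourier domain by two FFTs so each application costs O(N log N). The operator (a periodic Laplacian plus regularization plus a small spatially varying term) is a toy stand-in for the LGS tomography operator, assumed only to show the algorithmic pattern; the DKF alternative is not shown.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 128                                    # grid is n x n, so N = n*n unknowns
alpha = 0.1                                # regularization weight (assumed)
rng = np.random.default_rng(0)
w = 0.2 * rng.random((n, n))               # small spatially varying perturbation

def apply_A(vec):
    """System operator: periodic Laplacian + (alpha + w) * identity."""
    u = vec.reshape(n, n)
    lap = (4.0 * u - np.roll(u, 1, 0) - np.roll(u, -1, 0)
                   - np.roll(u, 1, 1) - np.roll(u, -1, 1))
    return (lap + (alpha + w) * u).ravel()

# Fourier symbol of the circulant part (periodic Laplacian + alpha), used as preconditioner
k = np.arange(n)
symbol = (4.0 - 2.0 * np.cos(2.0 * np.pi * k[:, None] / n)
              - 2.0 * np.cos(2.0 * np.pi * k[None, :] / n) + alpha)

def apply_Minv(vec):
    """Approximate inverse applied in the Fourier domain: two FFTs per call."""
    r = vec.reshape(n, n)
    return np.real(np.fft.ifft2(np.fft.fft2(r) / symbol)).ravel()

N = n * n
A = LinearOperator((N, N), matvec=apply_A)
M = LinearOperator((N, N), matvec=apply_Minv)
b = rng.standard_normal(N)

iters = [0]
def count(_xk): iters[0] += 1
x, info = cg(A, b, M=M, maxiter=500, callback=count)
print("info:", info, "PCG iterations:", iters[0],
      "residual:", np.linalg.norm(b - A.matvec(x)))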

    Lattice QCD based on OpenCL

    Get PDF
    We present an OpenCL-based Lattice QCD application using a heatbath algorithm for the pure gauge case and Wilson fermions in the twisted mass formulation. The implementation is platform independent and can be used on AMD or NVIDIA GPUs, as well as on classical CPUs. On the AMD Radeon HD 5870, our double precision dslash implementation performs at 60 GFLOPS over a wide range of lattice sizes. The hybrid Monte Carlo implementation presented reaches a speedup of four over the reference code running on a server CPU. Comment: 19 pages, 11 figures.