
    GMRES-Accelerated ADMM for Quadratic Objectives

    We consider the sequence acceleration problem for the alternating direction method of multipliers (ADMM) applied to a class of equality-constrained problems with strongly convex quadratic objectives, which frequently arise as the Newton subproblem of interior-point methods. Within this context, the ADMM update equations are linear, the iterates are confined to a Krylov subspace, and the Generalized Minimal RESidual (GMRES) algorithm is optimal in its ability to accelerate convergence. The basic ADMM method solves a $\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give theoretical justification and numerical evidence that the GMRES-accelerated variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations, an order-of-magnitude reduction, despite a worst-case bound of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against standard preconditioned Krylov subspace methods for saddle-point problems. The method is embedded within SeDuMi, a popular open-source solver for conic optimization written in MATLAB, and used to solve many large-scale semidefinite programs with error that decreases like $O(1/k^2)$, instead of $O(1/k)$, where $k$ is the iteration index.
    Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on Optimization (SIOPT).
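    The core observation admits a compact sketch: for a quadratic objective the ADMM update is an affine map $u_{k+1} = Mu_k + d$, so its fixed point solves the linear system $(I - M)u = d$, which GMRES can attack directly. The Python below illustrates this with a stand-in proximal-style linear iteration (not the paper's two-block ADMM map or its SeDuMi embedding); the problem data and penalty parameter rho are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 200

# Illustrative strongly convex quadratic: minimize 0.5*x'Qx + c'x.
Q = rng.standard_normal((n, n))
Q = Q @ Q.T + 1e-3 * np.eye(n)          # ill-conditioned Hessian
c = rng.standard_normal(n)

# Stand-in linear iteration u_{k+1} = M u_k + d (a proximal-point map;
# the exact ADMM update differs in detail but is likewise affine).
rho = 1.0
M = rho * np.linalg.inv(Q + rho * np.eye(n))
d = np.linalg.solve(Q + rho * np.eye(n), -c)

# Plain fixed-point iteration: converges, but slowly for ill-conditioned Q.
u = np.zeros(n)
for _ in range(500):
    u = M @ u + d

# Krylov acceleration: the fixed point solves (I - M) u = d.
A = LinearOperator((n, n), matvec=lambda v: v - M @ v)
u_gmres, info = gmres(A, d)

x_star = -np.linalg.solve(Q, c)          # exact minimizer, for comparison
print(info, np.linalg.norm(u - x_star), np.linalg.norm(u_gmres - x_star))
```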

    Fast matrix inversion based on Chebyshev acceleration for linear detection in massive MIMO systems

    To circumvent the prohibitive complexity of linear minimum mean square error detection in a massive multiple-input multiple-output system, several iterative methods have been proposed. However, they can still be too complex and/or lead to non-negligible performance degradation. In this letter, a Chebyshev acceleration technique is proposed to overcome the limitations of iterative methods, accelerating their convergence rates and enhancing their performance. The Chebyshev acceleration method employs a new vector combination, which combines the spectral radius of the iteration matrix with the received signal, and the optimal parameters of Chebyshev acceleration are derived. A detector based on iterative algorithms requires pre-processing and initialisation, which affect the convergence, performance, and complexity. To improve the initialisation, the stair matrix is proposed as the starting point of the iterative methods. The performance results show that the proposed technique outperforms state-of-the-art methods in terms of error rate, while significantly reducing the computational complexity.
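    For context, the classical Chebyshev semi-iteration that underlies such detectors can be sketched in a few lines: it solves the Hermitian positive-definite MMSE system $A\hat{x} = H^H y$ with $A = H^H H + \sigma^2 I$ using only matrix-vector products and two spectral bounds. The sketch below is not the letter's scheme (its stair-matrix initialisation and vector combination are not reproduced); the antenna counts are illustrative and the spectral bounds are computed exactly rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: 128 base-station antennas, 16 single-antenna users.
Nr, Nt = 128, 16
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
sigma2 = 0.1
A = H.conj().T @ H + sigma2 * np.eye(Nt)     # MMSE filtering matrix (Hermitian PD)
y = rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)
b = H.conj().T @ y                           # matched-filter output

# Chebyshev iteration needs bounds on the spectrum of A; in massive MIMO these
# are often estimated from random-matrix theory -- here we compute them exactly.
ev = np.linalg.eigvalsh(A)
lmin, lmax = ev[0], ev[-1]
theta, delta = (lmax + lmin) / 2, (lmax - lmin) / 2

# Three-term Chebyshev recurrence (see e.g. Saad, "Iterative Methods for
# Sparse Linear Systems", Alg. 12.1).
x = np.zeros(Nt, dtype=complex)
r = b - A @ x
sigma1 = theta / delta
rho = 1 / sigma1
d = r / theta
for _ in range(30):
    x = x + d
    r = r - A @ d
    rho_next = 1 / (2 * sigma1 - rho)
    d = rho_next * rho * d + (2 * rho_next / delta) * r
    rho = rho_next

print(np.linalg.norm(A @ x - b))             # residual after 30 accelerated steps
```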

    Pricing Options under Heston’s Stochastic Volatility Model via Accelerated Explicit Finite Differencing Methods

    We present an acceleration technique called Super-Time-Stepping (STS), effective for explicit finite difference schemes describing diffusive processes with nearly symmetric operators. The technique is applied to the two-factor problem of option pricing under stochastic volatility. It is shown to significantly reduce the severity of the stability constraint known as the Courant-Friedrichs-Lewy condition whilst retaining the simplicity of the chosen underlying explicit method. For European and American put options under Heston’s stochastic volatility model we demonstrate degrees of acceleration over standard explicit methods sufficient to achieve comparable, or superior, efficiencies to a benchmark implicit scheme. We conclude that STS is a powerful tool for the numerical pricing of options and propose it as the method of choice for exotic financial instruments in two and multi-factor models.
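    The mechanics of STS are easy to demonstrate on a 1-D heat equation (the paper's two-factor Heston discretization is not reproduced here): one superstep is a sequence of N inner explicit Euler stages whose step sizes are derived from the roots of a Chebyshev polynomial, so the composite step is stable even though individual stages exceed the CFL limit. The substep formula below is the classical one of Alexiades, Amiez and Gremaud; the grid size, N, and damping parameter nu are illustrative.

```python
import numpy as np

nx = 200
dx = 1.0 / nx
dt_expl = 0.5 * dx**2                  # explicit (CFL) stability limit for u_t = u_xx

# Chebyshev-based substeps: tau_j = dt_expl / ((nu - 1) cos((2j-1) pi / (2N)) + 1 + nu).
N, nu = 10, 0.05
j = np.arange(1, N + 1)
tau = dt_expl / ((nu - 1) * np.cos((2 * j - 1) * np.pi / (2 * N)) + 1 + nu)
# One superstep covers sum(tau) ~ N^2 * dt_expl as nu -> 0, i.e. about N times
# more physical time than N ordinary explicit steps.

u = np.exp(-100 * (np.linspace(0, 1, nx) - 0.5) ** 2)   # initial condition
for t in tau:                          # one STS superstep = N inner Euler stages
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + t * lap                    # periodic boundaries via np.roll, for brevity

print(tau.sum() / (N * dt_expl))       # acceleration factor over plain explicit stepping
```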

    Preconditioned Minimal Residual Methods for Chebyshev Spectral Calculations

    The problem of preconditioning the pseudospectral Chebyshev approximation of an elliptic operator is considered. The numerical sensitivity to variations in the coefficients of the operator is investigated for two classes of preconditioning matrices: one arising from finite differences, the other from finite elements. The preconditioned system is solved by a conjugate gradient type method, and by a DuFort-Frankel method with dynamical parameters. The methods are compared on some test problems with the Richardson method and with the minimal residual Richardson method.
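    The finite-difference preconditioning idea can be sketched on the 1-D model problem $-u'' = f$ with Dirichlet boundary conditions: the dense, ill-conditioned Chebyshev collocation operator is preconditioned by a sparse three-point finite-difference operator assembled on the same non-uniform nodes. The sketch below uses preconditioned GMRES for brevity rather than the paper's conjugate-gradient-type or DuFort-Frankel iterations; the grid size is illustrative.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, splu

def cheb(n):
    # Chebyshev collocation differentiation matrix on n+1 points
    # (Trefethen, "Spectral Methods in MATLAB").
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

n = 64
D, x = cheb(n)
A = -(D @ D)[1:-1, 1:-1]                 # pseudospectral -u'' at interior nodes
f = np.ones(n - 1)

# Three-point finite-difference -u'' on the same (non-uniform) interior nodes.
hm = x[:-2] - x[1:-1]                    # gap to left neighbour (x is decreasing)
hp = x[1:-1] - x[2:]                     # gap to right neighbour
lo = -2 / (hm * (hm + hp))
di = 2 / (hm * hp)
up = -2 / (hp * (hm + hp))
P = diags([lo[1:], di, up[:-1]], [-1, 0, 1], format="csc")

Pfac = splu(P)                           # sparse LU of the preconditioner
M = LinearOperator(A.shape, matvec=Pfac.solve)
u, info = gmres(A, f, M=M)
print(info, np.linalg.norm(A @ u - f))
```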

    A comparison of numerical splitting-based methods for Markovian dependability and performability models

    Iterative numerical methods are an important ingredient for the solution of continuous-time Markov dependability models of fault-tolerant systems. In this paper we make a numerical comparison of several splitting-based iterative methods. We consider the computation of the steady-state reward rate on rewarded models. This measure requires the solution of a singular linear system. We consider two classes of models. The first class includes failure/repair models. The second class is more general and includes the modeling of periodic preventive tests of spare components to reduce the probability of latent failures in inactive components. The periodic preventive test is approximated by an Erlang distribution with a sufficient number of stages. We show that for each class of model there is a splitting-based method which is significantly more efficient than the other methods.
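    The flavour of these methods can be shown on a toy failure/repair model (the rates and the Gauss-Seidel splitting below are illustrative choices, not the paper's benchmark): the steady-state vector solves the singular system $\pi Q = 0$ with $\sum_i \pi_i = 1$, and a splitting $Q^T = M - N$ yields the iteration $M x_{k+1} = N x_k$, renormalized at each step.

```python
import numpy as np

# Toy failure/repair CTMC: two identical components, state = number failed.
lam, mu = 1.0, 10.0                     # failure and repair rates (illustrative)
Q = np.array([[-2 * lam, 2 * lam, 0.0],
              [mu, -(mu + lam), lam],
              [0.0, mu, -mu]])

# Gauss-Seidel splitting of A = Q^T: M is the lower triangle (including the
# diagonal, which is nonzero), N = M - A.
A = Q.T
M = np.tril(A)
N = M - A

x = np.ones(3) / 3
for _ in range(200):
    x = np.linalg.solve(M, N @ x)
    x /= x.sum()                        # renormalize within the singular null space

print(x)                                # stationary distribution
print(np.linalg.norm(x @ Q))            # residual of pi Q = 0
```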

    A Parallel Solver for Graph Laplacians

    Problems from graph drawing, spectral clustering, network flow and graph partitioning can all be expressed in terms of graph Laplacian matrices. There are a variety of practical approaches to solving these problems in serial. However, as problem sizes increase and single core speeds stagnate, parallelism is essential to solve such problems quickly. We present an unsmoothed aggregation multigrid method for solving graph Laplacians in a distributed memory setting. We introduce new parallel aggregation and low degree elimination algorithms targeted specifically at irregular degree graphs. These algorithms are expressed in terms of sparse matrix-vector products using generalized sum and product operations. This formulation is amenable to linear algebra using arbitrary distributions and allows us to operate on a 2D sparse matrix distribution, which is necessary for parallel scalability. Our solver outperforms the natural parallel extension of the current state of the art in an algorithmic comparison. We demonstrate scalability to 576 processes and graphs with up to 1.7 billion edges.
    Comment: PASC '18. Code: https://github.com/ligmg/ligm
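    A serial two-level sketch conveys the unsmoothed-aggregation idea (the paper's parallel aggregation, low-degree elimination, and 2-D matrix distribution are not reproduced; the greedy aggregation and grid-graph test below are illustrative): vertices are grouped into aggregates, the prolongator is piecewise constant over aggregates, and the coarse operator is the Galerkin product $P^T A P$.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def graph_laplacian(edges, n):
    # L = D - A from an undirected edge list.
    i, j = zip(*edges)
    Adj = sp.coo_matrix((np.ones(len(edges)), (i, j)), shape=(n, n))
    Adj = (Adj + Adj.T).tocsr()
    return (sp.diags(np.asarray(Adj.sum(axis=1)).ravel()) - Adj).tocsr()

def aggregate(A):
    # Greedy serial aggregation: each unassigned vertex absorbs its
    # unassigned neighbours (a stand-in for the paper's parallel algorithm).
    n = A.shape[0]
    agg = -np.ones(n, dtype=int)
    k = 0
    for v in range(n):
        if agg[v] < 0:
            agg[v] = k
            nbrs = A.indices[A.indptr[v]:A.indptr[v + 1]]
            agg[nbrs[agg[nbrs] < 0]] = k
            k += 1
    # Piecewise-constant prolongator ("unsmoothed" tentative P).
    return sp.csr_matrix((np.ones(n), (np.arange(n), agg)), shape=(n, k))

def two_level(A, b, P, sweeps=50, omega=0.7):
    Ac = (P.T @ A @ P).tocsc()
    # The Laplacian is singular (constant null space); a tiny shift keeps the
    # coarse LU factorization well defined in this sketch.
    coarse = spla.splu(Ac + 1e-8 * sp.eye(Ac.shape[0], format="csc"))
    Dinv = 1.0 / A.diagonal()
    x = np.zeros_like(b)
    for _ in range(sweeps):
        x += omega * Dinv * (b - A @ x)           # weighted-Jacobi pre-smoothing
        x += P @ coarse.solve(P.T @ (b - A @ x))  # Galerkin coarse-grid correction
        x += omega * Dinv * (b - A @ x)           # post-smoothing
    return x

# Test: Laplacian of a 30x30 grid graph; b is orthogonal to the constant vector.
m = 30
edges = [(r * m + c, r * m + c + 1) for r in range(m) for c in range(m - 1)]
edges += [(r * m + c, (r + 1) * m + c) for r in range(m - 1) for c in range(m)]
L = graph_laplacian(edges, m * m)
b = np.random.default_rng(2).standard_normal(m * m)
b -= b.mean()
x = two_level(L, b, aggregate(L))
print(np.linalg.norm(L @ x - b))
```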

    On the equivalence between the Scheduled Relaxation Jacobi method and Richardson's non-stationary method

    The Scheduled Relaxation Jacobi (SRJ) method is an extension of the classical Jacobi iterative method for solving linear systems of equations (Au=b) associated with elliptic problems. It inherits Jacobi's robustness and accelerates its convergence rate by computing a set of P relaxation factors that result from a minimization problem. In a typical SRJ scheme, this set of factors is employed in cycles of M consecutive iterations until a prescribed tolerance is reached. We present the analytic form of the optimal set of relaxation factors for the case in which all of them are strictly different, and find that the resulting algorithm is equivalent to a non-stationary generalized Richardson's method in which the matrix of the system of equations is preconditioned by multiplication with D=diag(A). Our method for estimating the weights has the advantage that the explicit computation of the maximum and minimum eigenvalues of the matrix A (or of the corresponding iteration matrix of the underlying weighted Jacobi scheme) is replaced by the (much easier) calculation of the maximum and minimum frequencies derived from a von Neumann analysis of the continuous elliptic operator. This set of weights is also the optimal one for the general problem, resulting in the fastest convergence of all possible SRJ schemes for a given grid structure. The amplification factor of the method can be found analytically, which allows the exact estimation of the number of iterations needed to achieve a desired tolerance. We also show that, in some cases, the set of weights computed for the optimal SRJ scheme with a fixed cycle size can be used to estimate numerically the optimal value of the parameter ω in the Successive Overrelaxation (SOR) method. Finally, we demonstrate with practical examples that our method also works very well for Poisson-like problems in which a high-order discretization of the Laplacian operator is employed (e.g., a 9- or 17-point discretization). This is of interest since such discretizations do not yield consistently ordered matrices A and, hence, Young's theory cannot be used to predict the optimal value of the SOR parameter. Furthermore, the optimal SRJ schemes deduced here are advantageous over existing SOR implementations for high-order discretizations of the Laplacian operator inasmuch as they do not need to resort to multi-coloring schemes for their parallel implementation.
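    A minimal sketch of this equivalence for the 1-D Poisson problem (grid size, cycle length P, and iteration counts are illustrative): the optimal distinct relaxation factors are the reciprocals of Chebyshev nodes on the spectral interval of the Jacobi-preconditioned operator, and the interval endpoints follow from the von Neumann frequencies of the discrete Laplacian rather than from an eigenvalue computation.

```python
import numpy as np

# 1-D Poisson problem: A u = b with the standard 3-point stencil.
nx = 64
h = 1.0 / nx
A = (2 * np.eye(nx - 1) - np.eye(nx - 1, k=1) - np.eye(nx - 1, k=-1)) / h**2
b = np.ones(nx - 1)
Dinv = h**2 / 2.0                     # D = diag(A) is constant here

# Spectral bounds of D^{-1}A from the von Neumann frequencies of the stencil:
# its eigenvalues are 1 - cos(k*pi*h) for k = 1, ..., nx-1.
kmin = 1 - np.cos(np.pi * h)          # smoothest mode
kmax = 1 + np.cos(np.pi * h)          # most oscillatory mode

# Optimal distinct weights: reciprocals of the P Chebyshev nodes on [kmin, kmax].
P = 8
j = np.arange(1, P + 1)
nodes = (kmax + kmin) / 2 + (kmax - kmin) / 2 * np.cos((2 * j - 1) * np.pi / (2 * P))
omega = 1.0 / nodes

# Non-stationary Richardson on the Jacobi-preconditioned system, run in cycles.
u = np.zeros(nx - 1)
for _ in range(400):
    for w in omega:
        u = u + w * Dinv * (b - A @ u)

print(np.linalg.norm(A @ u - b))
```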
    • 

    corecore