2,901 research outputs found

    A Non-Monotone Conjugate Subgradient Type Method for Minimization of Convex Functions

    We suggest a conjugate subgradient type method without any line search for the minimization of convex nondifferentiable functions. Unlike the usual methods of this class, it does not require a monotone decrease of the goal function, and it substantially reduces the implementation cost of each iteration. At the same time, its step-size procedure takes into account the behavior of the method along the iteration points. Preliminary computational experiments confirm the efficiency of the proposed modification.
    Comment: 11 pages
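    The abstract does not spell out the paper's direction or step-size rules, so the following is only a minimal sketch of a generic conjugate subgradient-type iteration without line search; the momentum coefficient `beta` and the diminishing steps are illustrative assumptions, not the paper's procedure. It tracks the best point seen, since the iterates are not required to decrease monotonically.

```python
import numpy as np

def conjugate_subgradient(f, subgrad, x0, n_iter=500, beta=0.5):
    # Sketch only: `beta` and the 1/k step sizes are assumptions,
    # not the rules from the paper.
    x = np.asarray(x0, dtype=float)
    d = -subgrad(x)                          # start along the negative subgradient
    x_best, f_best = x.copy(), f(x)
    for k in range(1, n_iter + 1):
        t = 1.0 / k                          # predefined step: no line search
        x = x + t * d / max(np.linalg.norm(d), 1e-12)
        g = subgrad(x)
        d = -g + beta * d                    # conjugate-style direction update
        if f(x) < f_best:                    # keep the best point: the method
            x_best, f_best = x.copy(), f(x)  # is not monotone in f
    return x_best, f_best

# Example: minimize the convex nondifferentiable f(x) = ||x - 1||_1.
f = lambda x: np.abs(x - 1.0).sum()
subgrad = lambda x: np.sign(x - 1.0)
x_best, f_best = conjugate_subgradient(f, subgrad, x0=np.zeros(5))
```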

    Non-recursive equivalent of the conjugate gradient method without the need to restart

    A simple alternative to the conjugate gradient (CG) method is presented; it is developed as a special case of the more general iterated Ritz method (IRM) for solving a system of linear equations. The novel algorithm is not based on conjugacy, i.e., it is not necessary to maintain orthogonality between vectors from distant steps. The method is more stable than CG, and restarting techniques are not required. As in CG, only one matrix-vector multiplication is required per step, with appropriate transformations. The algorithm is easily explained by energy considerations, without appealing to A-orthogonality in n-dimensional space. Finally, relaxation factors and preconditioning-like techniques can be adopted easily.
    Comment: 9 pages
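    The IRM-based algorithm itself is not detailed in the abstract, so it is not reproduced here; for reference, a textbook CG iteration is sketched below, which makes the "one matrix-vector multiplication per step" property concrete. All names are local to this sketch.

```python
import numpy as np

def cg(A, b, x0=None, tol=1e-10, max_iter=1000):
    # Textbook conjugate gradient for a symmetric positive definite A.
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                   # the single matrix-vector product per step
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # direction update enforcing A-conjugacy
        rs = rs_new
    return x

# Example on a small SPD system; exact solution is (1/11, 7/11).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(cg(A, b))
```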

    A new, globally convergent Riemannian conjugate gradient method

    This article deals with the conjugate gradient method on a Riemannian manifold, with an interest in global convergence analysis. Existing conjugate gradient algorithms on a manifold endowed with a vector transport need the assumption that the vector transport does not increase the norm of tangent vectors in order to guarantee that the generated sequences have a global convergence property. In this article, the notion of a scaled vector transport is introduced to improve the algorithm, so that the generated sequences have a global convergence property under a relaxed assumption. In the proposed algorithm, the transported vector is rescaled whenever its norm has increased during the transport. Global convergence is proved theoretically and observed numerically with examples. In fact, the numerical experiments show that there exist minimization problems for which the existing algorithm generates divergent sequences but the proposed algorithm generates convergent ones.
    Comment: 22 pages, 8 figures
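    The sketch below illustrates the scaled vector transport idea as the abstract describes it: transport the tangent vector, then rescale it whenever its norm has increased. The `transport` and `norm` callables, and the artificially norm-expanding transport in the example, are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def scaled_transport(transport, norm, x, y, v):
    # Transport v from T_x M to T_y M, then rescale if the norm grew,
    # so the effective transport never increases tangent-vector norms.
    w = transport(x, y, v)
    nv, nw = norm(x, v), norm(y, w)
    if nw > nv:
        w = (nv / nw) * w
    return w

# Example on the unit sphere; the factor 1.5 makes an artificial,
# norm-expanding transport so that the rescaling actually triggers.
transport = lambda x, y, v: 1.5 * (v - (y @ v) * y)  # scaled tangent projection
norm = lambda p, v: np.linalg.norm(v)
x = np.array([1.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0])
w = scaled_transport(transport, norm, x, y, np.array([0.0, 0.5, 0.5]))
```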

    Improving Performance of Iterative Methods by Lossy Checkpointing

    Iterative methods are common approaches for solving large, sparse linear systems, which are fundamental operations in many modern scientific simulations. When large-scale iterative methods run with many ranks in parallel, they must checkpoint their dynamic variables periodically to survive unavoidable fail-stop errors, which requires fast I/O systems and large storage space. Significantly reducing the checkpointing overhead is therefore critical to improving the overall performance of iterative methods. Our contribution is fourfold. (1) We propose a novel lossy checkpointing scheme that can significantly improve the checkpointing performance of iterative methods by leveraging lossy compressors. (2) We formulate a lossy checkpointing performance model and theoretically derive an upper bound on the extra number of iterations caused by the distortion of data in lossy checkpoints, in order to guarantee the performance improvement under the lossy checkpointing scheme. (3) We analyze the impact of lossy checkpointing (i.e., the extra iterations caused by lossy checkpoint files) for multiple types of iterative methods. (4) We evaluate the lossy checkpointing scheme with optimal checkpointing intervals on a high-performance computing environment with 2,048 cores, using the well-known scientific computation package PETSc and a state-of-the-art checkpoint/restart toolkit. Experiments show that our optimized lossy checkpointing scheme can significantly reduce the fault tolerance overhead of iterative methods by 23% to 70% compared with traditional checkpointing and by 20% to 58% compared with lossless-compressed checkpointing, in the presence of system failures.
    Comment: 14 pages, 10 figures, HPDC'18
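    As a hedged illustration of the scheme's shape (not the paper's implementation, which targets PETSc and production compressors), the sketch below checkpoints a lossily compressed copy of the iterate and restarts from it after a simulated failure; the toy uniform quantizer stands in for a real lossy compressor such as SZ or ZFP. The extra iterations incurred by decompression error are what the paper bounds theoretically.

```python
import numpy as np

def quantize(x, nbits=8):
    # Toy lossy compressor (uniform quantization); purely illustrative.
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (2**nbits - 1) or 1.0
    return np.round((x - lo) / scale), lo, scale

def dequantize(q, lo, scale):
    return q * scale + lo

def solve_with_lossy_ckpt(step, x0, n_iter, ckpt_every=50, fail_at=None):
    # Keep a compressed checkpoint of the dynamic state; on a (simulated)
    # fail-stop error, restore the lossy copy and keep iterating.
    x, ckpt = x0.copy(), quantize(x0)
    k = 0
    while k < n_iter:
        if fail_at is not None and k == fail_at:
            x = dequantize(*ckpt)        # restart from the lossy checkpoint
            fail_at = None
        x = step(x)
        k += 1
        if k % ckpt_every == 0:
            ckpt = quantize(x)           # small, cheap lossy checkpoint
    return x

# Example: a contraction with fixed point x = 2, failing once mid-run.
step = lambda x: 0.5 * x + 1.0
x = solve_with_lossy_ckpt(step, np.zeros(4), n_iter=300, ckpt_every=50, fail_at=120)
```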

    Scalability Analysis of Parallel GMRES Implementations

    Applications involving large sparse nonsymmetric linear systems encourage parallel implementations of robust iterative solution methods such as GMRES(k). Two parallel versions of GMRES(k), based on different data distributions and using Householder reflections in the orthogonalization phase, together with variations of these that adapt the restart value k, are analyzed with respect to scalability (their ability to maintain fixed efficiency as the problem size and the number of processors increase). A theoretical algorithm-machine model for scalability is derived and validated by experiments on three parallel computers, each with different machine characteristics.
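    The paper's Householder-based parallel implementations are not reproduced here; a minimal serial usage sketch with SciPy's restarted GMRES shows the role of the restart value k, followed by a naive way to adapt k (an assumption for illustration, not the paper's strategy).

```python
import numpy as np
from scipy.sparse import eye, random as sprandom
from scipy.sparse.linalg import gmres

# An illustrative sparse, nonsymmetric, diagonally dominated system.
n = 500
A = sprandom(n, n, density=0.01, random_state=0) + 4.0 * eye(n)
b = np.ones(n)

# GMRES(k): `restart` caps the Krylov basis at k vectors, bounding memory
# and orthogonalization cost between restart cycles.
x, info = gmres(A, b, restart=30)
print("converged" if info == 0 else f"stopped with info={info}")

# Naive adaptation of the restart value: retry with a larger k when a
# run fails to converge within the iteration budget.
for k in (10, 20, 40):
    x, info = gmres(A, b, restart=k, maxiter=500)
    if info == 0:
        break
```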

    Computational experience with improved conjugate gradient methods for unconstrained minimization
