
    Performance Evaluation of an Extrapolation Method for Ordinary Differential Equations with Error-free Transformation

    The application of error-free transformation (EFT) has recently been developed as a way to solve ill-conditioned problems. Compared with multiple-precision arithmetic, it reduces the number of arithmetic operations required, and it can be implemented using functions supported by a well-tuned BLAS library. In this paper, we propose applying EFT to explicit extrapolation methods for solving initial value problems of ordinary differential equations. The resulting routines prove effective for large linear ODEs and small nonlinear ODEs, especially when the harmonic sequence is used.
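    For background, here is a minimal Python sketch of the two basic error-free transformations that such routines build on: Knuth's TwoSum and Dekker's TwoProduct. This is illustrative only; the paper's routines are built on top of a tuned BLAS, and the function names here are our own.

    def two_sum(a, b):
        # Knuth's TwoSum: s = fl(a + b) and e is the exact rounding error,
        # so a + b == s + e holds exactly in IEEE arithmetic.
        s = a + b
        bv = s - a           # the part of b that actually entered the sum
        av = s - bv          # the part of a that actually entered the sum
        e = (a - av) + (b - bv)
        return s, e

    def split(a):
        # Veltkamp splitting for IEEE doubles (53-bit significand):
        # a == hi + lo, with hi and lo each exactly representable.
        c = 134217729.0 * a  # multiplier is 2**27 + 1
        hi = c - (c - a)
        return hi, a - hi

    def two_prod(a, b):
        # Dekker's TwoProduct: p = fl(a * b) and e = a * b - p exactly.
        p = a * b
        ah, al = split(a)
        bh, bl = split(b)
        e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
        return p, e

    # The error term recovers digits that ordinary summation discards:
    s, e = two_sum(1.0, 2.0**-60)
    assert s == 1.0 and e == 2.0**-60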

    An efficient and accurate decomposition of the Fermi operator

    We present a method to compute the Fermi function of the Hamiltonian for a system of independent fermions, based on an exact decomposition of the grand-canonical potential. This scheme does not rely on the localization of the orbitals and is insensitive to ill-conditioned Hamiltonians. It lends itself naturally to linear scaling once the sparsity of the system's density matrix is exploited. By combining a polynomial expansion with Newton-like iterative techniques, an arbitrarily large number of terms can be employed in the expansion, overcoming some of the difficulties encountered in previous papers. Moreover, this hybrid approach yields a very favorable scaling of the computational cost with increasing inverse temperature, which makes the method competitive with other Fermi operator expansion techniques. After performing an in-depth theoretical analysis of computational cost and accuracy, we test our approach on the DFT Hamiltonian for the metallic phase of the LiAl alloy.
    Comment: 8 pages, 7 figures
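    For intuition, here is a dense Python reference sketch of the quantity being approximated, the Fermi operator f(H) = (I + exp(beta*(H - mu*I)))^{-1}, evaluated by scaling-and-squaring. This is not the paper's linear-scaling scheme: the dense inverse below is exactly what their polynomial/Newton hybrid avoids, and all parameter values are illustrative.

    import numpy as np

    def fermi_operator(H, mu, beta, k=10, taylor_terms=8):
        # Reference evaluation of f(H) = (I + exp(beta*(H - mu*I)))**-1
        # via scaling-and-squaring: exp(A) = (exp(A / 2**k))**(2**k),
        # with the scaled exponential approximated by a short Taylor series.
        n = H.shape[0]
        I = np.eye(n)
        A = beta * (H - mu * I) / 2.0**k
        E = I.copy()
        term = I.copy()
        for j in range(1, taylor_terms + 1):
            term = term @ A / j          # accumulates A**j / j!
            E = E + term
        for _ in range(k):
            E = E @ E                    # undo the 2**k scaling
        # A linear-scaling code would replace this dense inverse with
        # sparse Newton-like iterations; here it is just a reference.
        return np.linalg.inv(I + E)

    # Sanity check against direct diagonalization (illustrative test):
    rng = np.random.default_rng(0)
    H = rng.standard_normal((50, 50)); H = (H + H.T) / 2.0
    w, V = np.linalg.eigh(H)
    ref = (V * (1.0 / (1.0 + np.exp(1.0 * (w - 0.2))))) @ V.T
    print(np.abs(fermi_operator(H, mu=0.2, beta=1.0) - ref).max())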

    Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice

    We introduce a generic scheme for accelerating gradient-based optimization methods in the sense of Nesterov. The approach, called Catalyst, builds upon the inexact accelerated proximal point algorithm for minimizing a convex objective function, and consists of approximately solving a sequence of well-chosen auxiliary problems, leading to faster convergence. One of the keys to achieving acceleration in theory and in practice is to solve these sub-problems with appropriate accuracy, using the right stopping criterion and the right warm-start strategy. We give practical guidelines for using Catalyst and present a comprehensive analysis of its global complexity. We show that Catalyst applies to a large class of algorithms, including gradient descent, block coordinate descent, and incremental algorithms such as SAG, SAGA, SDCA, SVRG, and MISO/Finito, as well as their proximal variants. For all of these methods, we establish faster rates under Catalyst acceleration, for both strongly convex and non-strongly convex objectives. We conclude with extensive experiments showing that acceleration is useful in practice, especially for ill-conditioned problems.
    Comment: link to publisher website: http://jmlr.org/papers/volume18/17-748/17-748.pd
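    To make the scheme concrete, here is a hedged Python sketch of the Catalyst outer loop with plain gradient descent as the inner solver. The updates follow the accelerated proximal point template described in the abstract; the initialization, the fixed inner iteration count, and the parameter choices below are our simplifications, not the paper's recommended stopping criteria.

    import numpy as np

    def catalyst(grad_f, x0, kappa, inner_lr, mu=0.0,
                 outer_iters=50, inner_iters=100):
        # Catalyst outer loop: each step approximately minimizes the
        # auxiliary objective h(x) = f(x) + (kappa/2) * ||x - y||^2,
        # warm-started at the previous iterate, then extrapolates.
        q = mu / (mu + kappa)                   # inverse condition of h
        alpha = 1.0 if mu == 0.0 else np.sqrt(q)
        x = x_prev = np.asarray(x0, dtype=float)
        y = x.copy()
        for _ in range(outer_iters):
            z = x.copy()                        # warm start
            for _ in range(inner_iters):        # inexact inner solve
                z = z - inner_lr * (grad_f(z) + kappa * (z - y))
            x_prev, x = x, z
            # alpha_k solves alpha_k^2 = (1 - alpha_k)*alpha^2 + q*alpha_k
            a2 = alpha * alpha
            alpha_next = 0.5 * (q - a2 + np.sqrt((q - a2) ** 2 + 4.0 * a2))
            beta = alpha * (1.0 - alpha) / (a2 + alpha_next)
            alpha = alpha_next
            y = x + beta * (x - x_prev)         # Nesterov-style extrapolation
        return x

    # Usage (illustrative): an ill-conditioned least-squares problem.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 100)); b = rng.standard_normal(200)
    L = np.linalg.norm(A, 2) ** 2               # smoothness constant of f
    kappa = L / 10.0                            # one tunable choice
    x = catalyst(lambda v: A.T @ (A @ v - b), np.zeros(100),
                 kappa=kappa, inner_lr=1.0 / (L + kappa))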