
    On fixed-point, Krylov, and 2×2 block preconditioners for nonsymmetric problems

    The solution of matrices with 2×2 block structure arises in numerous areas of computational mathematics, such as PDE discretizations based on mixed finite-element methods, constrained optimization problems, or the implicit or steady-state treatment of any system of PDEs with multiple dependent variables. Often, these systems are solved iteratively using Krylov methods and some form of block preconditioner. Under the assumption that one diagonal block is inverted exactly, this paper proves a direct equivalence between convergence of 2×2 block-preconditioned Krylov or fixed-point iterations to a given tolerance and convergence of the underlying preconditioned Schur-complement problem. In particular, the results indicate that an effective Schur-complement preconditioner is a necessary and sufficient condition for rapid convergence of 2×2 block-preconditioned GMRES, for arbitrary relative-residual stopping tolerances. A number of corollaries and related results give new insight into block preconditioning, such as the fact that approximate block-LDU or symmetric block-triangular preconditioners offer minimal reduction in iteration count over block-triangular preconditioners, despite the additional computational cost. Theoretical results are verified numerically on a nonsymmetric steady linearized Navier-Stokes discretization; the experiments also demonstrate that theory based on the assumption of an exact inverse of one diagonal block extends well to the more practical setting of inexact inverses.
    Comment: Accepted to SIMA
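    To make the setting concrete, below is a minimal sketch of block lower-triangular preconditioning for a 2×2 block system solved with GMRES, in the spirit of the preconditioners discussed above. The toy matrix, the exact factorizations of the (1,1) block and the Schur complement, and all names are illustrative assumptions, not the paper's code; in practice the Schur complement is only preconditioned approximately.

```python
# Minimal sketch: GMRES on K = [[A, B], [C, D]] with a block lower-triangular
# preconditioner P = [[A, 0], [C, S]], where S = D - C A^{-1} B is the Schur
# complement. Toy problem and names are illustrative assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.linalg import lu_factor, lu_solve

n = 50
rng = np.random.default_rng(0)
A = sp.eye(n) * 4 + sp.random(n, n, density=0.05, random_state=1)  # (1,1) block
B = sp.random(n, n, density=0.05, random_state=2)                  # (1,2) block
C = sp.random(n, n, density=0.05, random_state=3)                  # (2,1) block
D = sp.eye(n) * 4 + sp.random(n, n, density=0.05, random_state=4)  # (2,2) block
K = sp.bmat([[A, B], [C, D]]).tocsc()
b = rng.standard_normal(2 * n)

A_lu = spla.splu(A.tocsc())                    # "exact" inverse of the (1,1) block
S = D.toarray() - C @ A_lu.solve(B.toarray())  # dense Schur complement (small toy)
S_lu = lu_factor(S)

def apply_precond(r):
    # Solve P z = r: z1 = A^{-1} r1, then z2 = S^{-1} (r2 - C z1).
    z1 = A_lu.solve(r[:n])
    z2 = lu_solve(S_lu, r[n:] - C @ z1)
    return np.concatenate([z1, z2])

M = spla.LinearOperator(K.shape, matvec=apply_precond)
x, info = spla.gmres(K, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(K @ x - b))
```

    With exact inverses of both the (1,1) block and the Schur complement, block-triangular preconditioned GMRES converges in at most two iterations in exact arithmetic; the paper's analysis concerns what happens when the Schur complement is only preconditioned, and how far approximate block-LDU variants can improve on this.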

    Preconditioned WR–LMF-based method for ODE systems

    The waveform relaxation (WR) method was developed as an iterative method for solving large systems of ordinary differential equations (ODEs). Each WR iteration requires solving a system of ODEs. We then introduce the boundary value method (BVM), a relatively new method based on linear multistep formulae, to solve these ODEs. In particular, we apply the generalized minimal residual (GMRES) method with the Strang-type block-circulant preconditioner to the linear systems arising from the application of BVMs in each WR iteration. These techniques are demonstrated to be very effective in speeding up the convergence rate of the resulting iterative processes. Numerical experiments are presented to illustrate the effectiveness of our methods.
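    For orientation, here is a minimal sketch of the Jacobi waveform relaxation iteration that such methods build on, applied to a small linear test system. Everything here is an illustrative assumption, not the paper's solver: each sweep is integrated with a simple trapezoidal rule, whereas the paper discretizes each sweep with a BVM and accelerates the resulting linear solves with Strang-type block-circulant preconditioned GMRES.

```python
# Minimal sketch: Jacobi waveform relaxation for y' = A y, y(0) = y0, with
# A = D + R split into diagonal (kept implicit) and off-diagonal (lagged)
# parts. Each sweep integrates y' = D y + R y_old(t) over the whole window.
import numpy as np

A = np.array([[-2.0, 1.0], [1.0, -2.0]])  # diagonally dominant -> WR converges
D = np.diag(np.diag(A))                    # implicit in the new iterate
R = A - D                                  # lagged at the previous iterate
t = np.linspace(0.0, 1.0, 201)
h = t[1] - t[0]
y = np.tile([1.0, 0.0], (t.size, 1))       # initial waveform: constant in time

for sweep in range(50):
    y_new = y.copy()
    f_old = y @ R.T                        # lagged coupling term R y_old(t_j)
    for j in range(t.size - 1):
        # Trapezoidal step for y' = D y + f_old; D is diagonal, so the
        # implicit solve is componentwise division.
        rhs = y_new[j] + 0.5 * h * (D @ y_new[j] + f_old[j] + f_old[j + 1])
        y_new[j + 1] = rhs / (1.0 - 0.5 * h * np.diag(D))
    if np.max(np.abs(y_new - y)) < 1e-10:
        break
    y = y_new

print("WR sweeps:", sweep + 1, " y(1) ~", y[-1])
```

    The inner time-stepping loop is exactly the part the paper replaces: discretizing a sweep with a BVM turns it into one large, structured linear system over all time points, which is where the block-circulant preconditioner and GMRES come in.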

    BlockDrop: Dynamic Inference Paths in Residual Networks

    Very deep convolutional neural networks offer excellent recognition results, yet their computational expense limits their impact for many real-world applications. We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy. Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. In particular, given a pretrained ResNet, we train a policy network in an associative reinforcement learning setting for the dual reward of utilizing a minimal number of blocks while preserving recognition accuracy. We conduct extensive experiments on CIFAR and ImageNet. The results provide strong quantitative and qualitative evidence that these learned policies not only accelerate inference but also encode meaningful visual information. Built upon a ResNet-101 model, our method achieves an average speedup of 20%, and as high as 36% for some images, while maintaining the same 76.4% top-1 accuracy on ImageNet.
    Comment: CVPR 2018
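    To illustrate the gating idea, here is a minimal PyTorch sketch of residual blocks switched on or off by a per-image binary action vector. All names and the tiny architecture are hypothetical; the authors' BlockDrop trains the policy network with reinforcement learning and actually skips the computation of dropped blocks, whereas this sketch evaluates every block for clarity.

```python
# Minimal sketch: policy-gated residual blocks. A dropped block (action 0)
# reduces to the identity mapping, which is what makes ResNets robust to
# layer dropping. Hypothetical names, not the authors' implementation.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))

class GatedResNet(nn.Module):
    def __init__(self, ch=16, num_blocks=8):
        super().__init__()
        self.blocks = nn.ModuleList(ResBlock(ch) for _ in range(num_blocks))
    def forward(self, x, actions):
        # actions: (batch, num_blocks) in {0, 1}, produced by a policy network.
        # For clarity every block is evaluated here; a real deployment skips
        # the convolutions entirely when an image's gate is 0.
        for i, block in enumerate(self.blocks):
            keep = actions[:, i].view(-1, 1, 1, 1)   # broadcast over C, H, W
            x = keep * block(x) + (1 - keep) * x     # identity when dropped
        return x

net = GatedResNet()
x = torch.randn(4, 16, 32, 32)
actions = torch.bernoulli(torch.full((4, 8), 0.5))   # stand-in for the policy
print(net(x, actions).shape)                         # torch.Size([4, 16, 32, 32])
```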