
    Anderson-accelerated convergence of Picard iterations for incompressible Navier-Stokes equations

    We propose, analyze and test Anderson-accelerated Picard iterations for solving the incompressible Navier-Stokes equations (NSE). Anderson acceleration has recently gained interest as a strategy to accelerate linear and nonlinear iterations, based on including an optimization step in each iteration. We extend the Anderson-acceleration theory to the steady NSE setting and prove that the acceleration improves the convergence rate of the Picard iteration, with the improvement determined by the success of the underlying optimization problem. The convergence is demonstrated in several numerical tests, with particularly marked improvement in the higher Reynolds number regime. Our tests show it can be an enabling technology in the sense that it can provide convergence when both the usual Picard and Newton iterations fail.
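
    To make the accelerated iteration concrete, here is a minimal sketch of depth-m Anderson acceleration for a generic fixed-point map g, with the least-squares step written over residual differences. It is a sketch under stated assumptions: for the steady NSE, g would be one Picard (Oseen) step, and the function name anderson_fixed_point and the toy map are illustrative, not the authors' code.

        import numpy as np

        def anderson_fixed_point(g, x0, m=3, tol=1e-10, max_iter=100):
            # Depth-m Anderson acceleration for x = g(x); m = 0 is plain Picard.
            x = np.asarray(x0, dtype=float)
            X, G = [], []                      # histories of iterates and images
            for k in range(max_iter):
                gx = g(x)
                w = gx - x                     # fixed-point residual
                if np.linalg.norm(w) < tol:
                    return x, k
                X.append(x); G.append(gx)
                mk = min(m, len(X) - 1)
                if mk == 0:
                    x = gx                     # first step: unaccelerated
                else:
                    # least-squares optimization over the last mk residual differences
                    F = np.column_stack([(G[-i-1] - X[-i-1]) - (G[-i-2] - X[-i-2])
                                         for i in range(mk)])
                    gamma, *_ = np.linalg.lstsq(F, w, rcond=None)
                    x = gx - sum(gamma[i] * (G[-i-1] - G[-i-2]) for i in range(mk))
                X, G = X[-(m+1):], G[-(m+1):]  # keep a bounded history window
            return x, max_iter

        # toy usage on a contractive map in R^2
        g = lambda x: np.array([0.5 * np.cos(x[1]), 0.5 * np.sin(x[0])])
        x_star, iters = anderson_fixed_point(g, np.zeros(2))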

    An inexact Noda iteration for computing the smallest eigenpair of a large irreducible monotone matrix

    In this paper, we present an inexact Noda iteration with inner-outer iterations for finding the smallest eigenvalue and the associated eigenvector of an irreducible monotone matrix. The proposed inexact Noda iteration contains two main relaxation steps, one for the smallest eigenvalue and one for the associated eigenvector. We analyze how the relaxation factors in these steps affect the convergence of the outer iterations. By considering two different relaxation factors for solving the inner linear systems involved, we prove that the convergence is globally linear or superlinear, depending on the relaxation factor, which also influences the convergence rate. The proposed inexact Noda iterations are structure preserving and maintain the positivity of the approximate eigenvectors. Numerical examples illustrate that the proposed inexact Noda iterations are practical and always preserve the positivity of the approximate eigenvectors.
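
    As background for the scheme being relaxed, below is a small sketch of the classical (exact) Noda iteration for the Perron pair of a nonnegative irreducible matrix; for an irreducible nonsingular M-matrix B (a standard instance of a monotone matrix), the smallest eigenvalue of B is the reciprocal of the Perron root of B^{-1}. The paper's inexact variant replaces the exact inner solve with an inner iteration governed by relaxation factors; this is an assumed textbook form, not the authors' algorithm.

        import numpy as np

        def noda_iteration(A, tol=1e-12, max_iter=200):
            # Exact Noda iteration for the Perron pair of a nonnegative
            # irreducible matrix A; assumes the start vector is not already
            # the Perron vector, so lam stays strictly above the Perron root.
            n = A.shape[0]
            x = np.ones(n) / np.sqrt(n)
            lam = np.max(A @ x / x)            # upper bound on the Perron root
            for k in range(max_iter):
                y = np.linalg.solve(lam * np.eye(n) - A, x)  # inner linear solve
                x = y / np.linalg.norm(y)      # positivity is preserved
                ratios = (A @ x) / x           # Collatz-Wielandt ratios
                lam_new = np.max(ratios)
                if lam_new - np.min(ratios) < tol:  # Perron root is bracketed
                    return lam_new, x, k
                lam = lam_new
            return lam, x, max_iter

        # usage for a monotone (M-) matrix B: apply the iteration to inv(B),
        # whose Perron root is 1 / lambda_min(B), with the same eigenvector.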

    The finite steps of convergence of the fast thresholding algorithms with feedbacks

    Iterative algorithms based on thresholding, feedback and null space tuning (NST+HT+FB) for sparse signal recovery are exceedingly effective and fast, particularly for large scale problems. The core algorithm is shown to converge in finitely many steps under a (preconditioned) restricted isometry condition. In this paper, we present a new perspective on the analysis of the algorithm: its efficiency can be further quantified by an estimate of the number of iterations needed for guaranteed convergence. The convergence condition of NST+HT+FB is also improved. Moreover, an adaptive scheme (AdptNST+HT+FB) that requires no knowledge of the sparsity level is proposed, together with its convergence guarantee. The number of iterations for finite-step convergence of the AdptNST+HT+FB scheme is also derived. It is further shown that the number of iterations can be significantly reduced by exploiting the structure of the specific sparse signal or of the random measurement matrix.
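
    For readers new to this family, here is a minimal sketch of plain iterative hard thresholding (IHT), the simplest relative of the hard-thresholding core of NST+HT+FB; the feedback and null-space-tuning steps of the actual scheme are omitted, and the step-size choice is a standard safe default rather than the paper's.

        import numpy as np

        def hard_threshold(x, s):
            # keep the s largest-magnitude entries of x, zero out the rest
            idx = np.argsort(np.abs(x))[-s:]
            out = np.zeros_like(x)
            out[idx] = x[idx]
            return out

        def iht(A, b, s, step=None, tol=1e-10, max_iter=500):
            # plain IHT: x <- H_s(x + step * A^T (b - A x))
            if step is None:
                step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe default step size
            x = np.zeros(A.shape[1])
            for k in range(max_iter):
                r = b - A @ x
                if np.linalg.norm(r) < tol:
                    return x, k
                x = hard_threshold(x + step * (A.T @ r), s)
            return x, max_iter

        # toy usage: recover a 5-sparse vector from 60 random measurements
        rng = np.random.default_rng(1)
        A = rng.standard_normal((60, 200)) / np.sqrt(60)
        x_true = np.zeros(200)
        x_true[rng.choice(200, 5, replace=False)] = 1.0
        x_hat, iters = iht(A, A @ x_true, s=5)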

    Strong convergence of modified Ishikawa iterations for nonlinear mappings

    In this paper, we prove a strong convergence theorem for modified Ishikawa iterations for relatively asymptotically nonexpansive mappings in Banach spaces. Our results extend and improve recent results of Nakajo, Takahashi, Kim, Xu, Matsushita and some others.
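
    For orientation, the classical two-step Ishikawa iteration is sketched below; the modified scheme of the paper adds a projection (CQ-type) step to upgrade weak to strong convergence, which this sketch does not include, and the constant step parameters are illustrative assumptions.

        import numpy as np

        def ishikawa(T, x0, alpha=0.5, beta=0.5, tol=1e-10, max_iter=10000):
            # Classical Ishikawa iteration for a fixed point of T:
            #   y_n     = (1 - beta)  x_n + beta  T(x_n)
            #   x_{n+1} = (1 - alpha) x_n + alpha T(y_n)
            x = np.asarray(x0, dtype=float)
            for n in range(max_iter):
                y = (1 - beta) * x + beta * T(x)
                x_new = (1 - alpha) * x + alpha * T(y)
                if np.linalg.norm(x_new - x) < tol:
                    return x_new, n
                x = x_new
            return x, max_iter

        # usage: rotation by 90 degrees is nonexpansive with fixed point 0;
        # plain Picard iteration circles forever, the averaged scheme converges.
        R = np.array([[0.0, -1.0], [1.0, 0.0]])
        x_star, iters = ishikawa(lambda v: R @ v, np.array([1.0, 1.0]))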

    Max-Product for Maximum Weight Matching - Revisited

    We focus on belief propagation for the assignment problem, also known as the maximum weight bipartite matching problem. We provide a constructive proof that the well-known upper bound on the number of iterations (Bayati, Shah and Sharma, 2008) is tight up to a factor of four. Furthermore, we investigate the behavior of belief propagation when convergence is not required. We show that the number of iterations required for a sharp approximation consumes a large portion of the convergence time. Finally, we propose an "approximate belief propagation" algorithm for the assignment problem.
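
    As an illustration, here is one commonly used simplified form of the min-sum (max-product) message updates for the assignment problem; the precise message form, the decision rule, and the helper _max_excluding are assumptions of this sketch, not the exact variant analyzed in the paper.

        import numpy as np

        def _max_excluding(M, axis):
            # for each entry, the max of M along `axis` excluding that entry,
            # computed from the top-2 values along the axis
            order = np.sort(M, axis=axis)
            top1 = np.expand_dims(np.take(order, -1, axis=axis), axis)
            top2 = np.expand_dims(np.take(order, -2, axis=axis), axis)
            return np.where(M == top1, top2, top1)

        def bp_assignment(W, iters=50):
            # min-sum BP on an n x n weight matrix W:
            #   Mab[i, j]: message row i -> column j
            #   Mba[i, j]: message column j -> row i (stored row-indexed)
            n = W.shape[0]
            Mab = np.zeros((n, n))
            Mba = np.zeros((n, n))
            for _ in range(iters):
                Mab, Mba = (W - _max_excluding(Mba, axis=1),
                            W - _max_excluding(Mab, axis=0))
            return np.argmax(Mba, axis=1)   # row i matched to its best column

        # usage: random weights have a unique optimum almost surely
        W = np.random.default_rng(2).random((5, 5))
        print(bp_assignment(W))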

    Spectral Transformation Algorithms for Computing Unstable Modes of Large Scale Power Systems

    In this paper we describe spectral transformation algorithms for the computation of eigenvalues with positive real part of sparse nonsymmetric matrix pencils $(J, L)$, where $L$ is of the form $\begin{pmatrix} M & 0 \\ 0 & 0 \end{pmatrix}$. For this we define a different extension of Möbius transforms to pencils that suppresses the effect of the spurious eigenvalue at infinity on the iterations. These algorithms use a technique of preconditioning the initial vectors by Möbius transforms which, together with shift-invert iterations, accelerates the convergence to the desired eigenvalues. We also show that Möbius transforms can be successfully used to inhibit convergence to a known eigenvalue. Moreover, the procedure has a computational cost similar to power or shift-invert iterations with Möbius transforms: neither is more expensive than the usual shift-invert iterations with pencils. Results from tests with a concrete transient stability model of an interconnected power system whose Jacobian matrix has order 3156 are also reported.
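
    A minimal sketch of the underlying idea, under textbook assumptions: power iteration on the Cayley (Möbius) transform C = (J - sigma L)^{-1}(J + sigma L), which maps an eigenvalue lambda of J x = lambda L x to (lambda + sigma)/(lambda - sigma), sending eigenvalues with positive real part outside the unit circle. The paper's extension additionally damps the spurious infinite eigenvalue introduced by a singular L; this sketch does not.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def cayley_power(J, L, sigma=1.0, iters=200, seed=0):
            # power iteration on C = (J - sigma*L)^{-1} (J + sigma*L);
            # for sigma > 0, modes with Re(lambda) > 0 dominate
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(J.shape[0])
            lu = lu_factor(J - sigma * L)      # factor once, reuse each step
            for _ in range(iters):
                x = lu_solve(lu, (J + sigma * L) @ x)
                x /= np.linalg.norm(x)
            # Rayleigh-quotient-like estimate of lambda from the pencil
            lam = (x @ (J @ x)) / (x @ (L @ x))
            return lam, x

        # usage: singular L gives a spurious infinite eigenvalue; the finite
        # eigenvalue of this pencil is lambda = 1, which has Re(lambda) > 0
        J = np.array([[1.0, 2.0], [0.0, -3.0]])
        L = np.diag([1.0, 0.0])
        lam, x = cayley_power(J, L, sigma=0.5)   # lam is approximately 1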

    On inner iterations of Jacobi-Davidson type methods for large SVD computations

    We make a convergence analysis of the harmonic and refined harmonic extraction versions of Jacobi-Davidson SVD (JDSVD) type methods for computing one or more interior singular triplets of a large matrix $A$. At each outer iteration of these methods, a correction equation, i.e., an inner linear system, is solved approximately by iterative methods, which leads to two inexact JDSVD type methods, as opposed to the exact methods where correction equations are solved exactly. The accuracy of the inner iterations critically affects the convergence and overall efficiency of the inexact JDSVD methods. A central problem is how accurately the correction equations should be solved so as to ensure that both of the inexact JDSVD methods can mimic their exact counterparts well, that is, that they use almost the same outer iterations to achieve convergence. In this paper, similar to the available results on JD type methods for large matrix eigenvalue problems, we prove that each inexact JDSVD method behaves like its exact counterpart if all the correction equations are solved with low or modest accuracy during the outer iterations. Based on this theory, we propose practical stopping criteria for the inner iterations. Numerical experiments confirm our theory and the effectiveness of the inexact algorithms.
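
    The flavor of the result can be seen in a generic inner-outer method: below, inverse iteration on A^T A for the smallest singular triplet, with the inner system solved by CG to a deliberately modest relative tolerance. This illustrates only the low-or-modest-accuracy message, not the JDSVD correction equation or the paper's stopping criteria, and it assumes SciPy >= 1.12 for the rtol keyword of cg.

        import numpy as np
        from scipy.sparse.linalg import cg, LinearOperator

        def inexact_inverse_iteration(A, inner_tol=1e-2, outer_tol=1e-8,
                                      max_outer=50):
            # outer: inverse iteration on A^T A for the smallest singular pair
            # inner: CG solves to a modest relative tolerance only
            m, n = A.shape
            AtA = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v))
            v = np.random.default_rng(0).standard_normal(n)
            v /= np.linalg.norm(v)
            for k in range(max_outer):
                y, _ = cg(AtA, v, rtol=inner_tol)    # inexact inner solve
                v_new = y / np.linalg.norm(y)
                if np.linalg.norm(v_new - np.sign(v_new @ v) * v) < outer_tol:
                    v = v_new
                    break
                v = v_new
            sigma = np.linalg.norm(A @ v)            # smallest singular value
            u = (A @ v) / sigma
            return sigma, u, v

        # usage on a random dense matrix
        A = np.random.default_rng(1).standard_normal((80, 60))
        sigma_min, u, v = inexact_inverse_iteration(A)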

    Convergence rate analysis for averaged fixed point iterations in the presence of Hölder regularity

    In this paper, we establish sublinear and linear convergence of fixed point iterations generated by averaged operators in a Hilbert space. Our results are achieved under a bounded Hölder regularity assumption which generalizes the well-known notion of bounded linear regularity. As an application of our results, we provide a convergence rate analysis for Krasnoselskii-Mann iterations, the cyclic projection algorithm, and the Douglas-Rachford feasibility algorithm along with some variants. In the important case in which the underlying sets are convex sets described by convex polynomials in a finite dimensional space, we show that the Hölder regularity properties are automatically satisfied, from which sublinear convergence follows.
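
    As a concrete instance of an averaged fixed-point iteration from the paper's application list, here is a small sketch of the Douglas-Rachford feasibility iteration for two convex sets; the particular sets and projections are illustrative choices, not taken from the paper.

        import numpy as np

        def project_ball(x, c, r):
            # projection onto the ball of center c and radius r
            d = x - c
            nd = np.linalg.norm(d)
            return x if nd <= r else c + r * d / nd

        def project_halfspace(x, a, b):
            # projection onto the half-space {x : <a, x> <= b}
            viol = a @ x - b
            return x if viol <= 0 else x - viol * a / (a @ a)

        def douglas_rachford(x0, PA, PB, iters=500):
            # Douglas-Rachford map x <- x + PB(2 PA(x) - x) - PA(x),
            # a (1/2)-averaged operator; the shadow sequence PA(x_k)
            # converges to a point of the intersection when it is nonempty
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                pa = PA(x)
                x = x + PB(2 * pa - x) - pa
            return PA(x)

        # usage: intersect a unit ball with a half-space
        PA = lambda x: project_ball(x, np.zeros(2), 1.0)
        PB = lambda x: project_halfspace(x, np.array([1.0, 1.0]), 0.5)
        print(douglas_rachford(np.array([3.0, -2.0]), PA, PB))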

    Solving a non-linear model of HIV infection for CD4+T cells by combining Laplace transformation and Homotopy analysis method

    The aim of this paper is to find the approximate solution of an HIV infection model of CD4+ T cells. To this end, the homotopy analysis transform method (HATM) is applied; the method combines the traditional homotopy analysis method (HAM) with the Laplace transformation. The convergence of the presented method is established by a theorem that shows its capabilities. Numerical results are shown for different numbers of iterations, and the regions of convergence are demonstrated by plotting several h-curves. Furthermore, to show the efficiency and accuracy of the method, the residual error is presented for different numbers of iterations.
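
    For reference, a commonly used form of the CD4+ T-cell infection model in this literature is the system below, where T, I and V denote healthy CD4+ T cells, infected cells and free virus, and N is the number of virions produced per infected cell; the paper's exact notation and parameter values may differ.

        \begin{aligned}
        \frac{dT}{dt} &= q - \alpha T + r T\Bigl(1 - \frac{T + I}{T_{\max}}\Bigr) - k V T,\\
        \frac{dI}{dt} &= k V T - \beta I,\\
        \frac{dV}{dt} &= N \beta I - \gamma V .
        \end{aligned}

    In the HATM, the Laplace transform is applied to each equation before the homotopy series is constructed, and the auxiliary parameter h of the series is what the plotted h-curves vary to locate the region of convergence.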

    A proof that Anderson acceleration improves the convergence rate in linearly converging fixed point methods (but not in those converging quadratically)

    This paper provides the first proof that Anderson acceleration (AA) improves the convergence rate of general fixed point iterations. AA has been used for decades to speed up nonlinear solvers in many applications; however, a rigorous mathematical justification of the improved convergence rate has remained lacking. The key ideas of the analysis presented here are relating the difference of consecutive iterates to residuals, based on performing the inner optimization in a Hilbert space setting, and explicitly defining the gain in the optimization stage to be the ratio of improvement over a step of the unaccelerated fixed point iteration. The main result we prove is that AA improves the convergence rate of a fixed point iteration to first order by a factor of the gain at each step. In addition to improving the convergence rate, our results indicate that AA increases the radius of convergence. Lastly, our estimate shows that while the linear convergence rate is improved, additional quadratic terms arise in the estimate, which explains why AA does not typically improve convergence in quadratically converging fixed point iterations. Results of several numerical tests are given which illustrate the theory.
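
    To make the gain concrete in the simplest case, consider depth-one AA for x = g(x) with residual w_k = g(x_k) - x_k; the display below is a sketch consistent with the abstract's description, not the paper's exact notation.

        \alpha_k = \arg\min_{\alpha} \bigl\| (1-\alpha)\, w_k + \alpha\, w_{k-1} \bigr\|,
        \qquad
        x_{k+1} = (1-\alpha_k)\, g(x_k) + \alpha_k\, g(x_{k-1}),

        \theta_k = \frac{\bigl\| (1-\alpha_k)\, w_k + \alpha_k\, w_{k-1} \bigr\|}{\| w_k \|} \le 1 .

    To first order, each accelerated step then contracts the residual by the extra factor theta_k relative to the unaccelerated step, with the higher-order (quadratic) terms in the estimate accounting for the difference, which is why no improvement is seen for quadratically converging iterations.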