
    A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix

    A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
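
    For orientation, the linear versus quadratic rates contrasted above are usually stated for some error measure E_k of the k-th iterate, e.g. the Frobenius norm of its off-diagonal part; the following generic formulation is my notation for the standard definitions, not taken from the paper:

    E_{k+1} \le c\, E_k \quad (0 < c < 1) \qquad \text{(linear convergence)},
    \qquad
    E_{k+1} \le C\, E_k^{2} \qquad \text{(quadratic convergence)}.

    Quadratic convergence roughly doubles the number of correct digits per sweep once E_k is small, which is why it matters for the final sweeps of a Jacobi-like method.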

    Global and Quadratic Convergence of Newton Hard-Thresholding Pursuit

    Algorithms based on the hard-thresholding principle have been well studied, with sound theoretical guarantees, in compressed sensing and more generally in sparsity-constrained optimization. It is widely observed in existing empirical studies that when a restricted Newton step is used as the debiasing step, hard-thresholding algorithms tend to meet their halting conditions in significantly fewer iterations and are very efficient. Hence, the resulting Newton hard-thresholding algorithms call for stronger theoretical guarantees than their simple hard-thresholding counterparts. This paper provides a theoretical justification for the use of the restricted Newton step. We build our theory and algorithm, Newton Hard-Thresholding Pursuit (NHTP), for sparsity-constrained optimization. Our main result shows that NHTP is quadratically convergent under the standard assumptions of restricted strong convexity and smoothness. We also establish its global convergence to a stationary point under a weaker assumption. In the special case of compressed sensing, NHTP effectively reduces to some of the existing hard-thresholding algorithms with a Newton step. Consequently, our fast convergence result explains why those algorithms perform better with the Newton step than without it. The efficiency of NHTP is demonstrated on both synthetic and real data in compressed sensing and sparse logistic regression.
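
    The sketch below covers only the compressed-sensing special case mentioned in the abstract, where minimising 0.5*||Ax - b||^2 under ||x||_0 <= s makes the restricted Newton (debiasing) step an exact least-squares solve on the current support; it is not the authors' NHTP code, and the function name, step size, and stopping rule are illustrative choices.

    import numpy as np

    def htp_newton(A, b, s, step=1.0, iters=100, tol=1e-10):
        """Hard-thresholding pursuit with a restricted Newton (debiasing) step.

        For f(x) = 0.5*||Ax - b||^2, the Newton step restricted to a fixed
        support is exactly a least-squares solve on the selected columns.
        """
        m, n = A.shape
        x = np.zeros(n)
        for _ in range(iters):
            z = x + step * A.T @ (b - A @ x)           # gradient step
            support = np.argsort(np.abs(z))[-s:]       # keep s largest in magnitude
            sol, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
            x_new = np.zeros(n)
            x_new[support] = sol                       # restricted Newton / debias
            if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
                return x_new
            x = x_new
        return x

    # tiny synthetic test: recover an s-sparse vector from Gaussian measurements
    rng = np.random.default_rng(0)
    m, n, s = 80, 200, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    print(np.linalg.norm(htp_newton(A, A @ x_true, s) - x_true))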

    An efficient method for solving the steady Euler equations

    An efficient numerical procedure is given for solving a set of nonlinear partial differential equations, specifically the steady Euler equations. Solutions of the equations are obtained by Newton's linearization procedure, commonly used to find the roots of nonlinear algebraic equations. In applying the same procedure to a set of differential equations, we give a theorem showing that a quadratic convergence rate can be achieved. While the domain of quadratic convergence depends on the problem studied and is unknown a priori, we show that the first- and second-order derivatives of the flux vectors determine whether the condition for quadratic convergence is satisfied. The first derivatives enter as an implicit operator for yielding new iterates, and the second derivatives indicate the smoothness of the flows considered. Consequently, flows involving shocks are expected to require a larger number of iterations. First-order upwind discretization in conjunction with the Steger-Warming flux-vector splitting is employed for the implicit operator, and a diagonally dominant matrix results. The explicit operator, however, is represented by first- and second-order upwind differencing, using both Steger-Warming's and van Leer's splittings. We discuss the treatment of boundary conditions and the solution procedure for the resulting block matrix system. With a set of test problems for one- and two-dimensional flows, we present a detailed study of the efficiency, accuracy, and convergence of the present method.
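
    The Euler solver itself is too involved to reproduce, so the sketch below only illustrates the Newton linearization the abstract builds on, applied to a small model two-point boundary-value problem of my choosing: the first derivatives of the residual supply the implicit operator J, and the residual norm drops roughly quadratically once the iterate is close to the solution.

    import numpy as np

    # Model problem (not the Euler equations): -u'' + u**3 = 1 on (0, 1) with
    # u(0) = u(1) = 0, central differences on the n interior nodes.
    n = 99
    h = 1.0 / (n + 1)
    D2 = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
          + np.diag(np.ones(n - 1), 1)) / h**2

    def residual(u):
        return -D2 @ u + u**3 - 1.0

    u = np.zeros(n)
    for k in range(8):
        J = -D2 + 3.0 * np.diag(u**2)                # Jacobian: the "implicit operator"
        u = u + np.linalg.solve(J, -residual(u))     # Newton update J*du = -R(u)
        print(k, np.linalg.norm(residual(u)))        # roughly squares near the solution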

    The impact of a natural time change on the convergence of the Crank-Nicolson scheme

    We first analyse the effect of a square-root transformation of the time variable on the convergence of the Crank-Nicolson scheme applied to the heat equation with Dirac delta initial conditions. In the original variables, the scheme is known to diverge as the time step is reduced with the ratio of the time step to the space step held constant, and the value of this ratio controls how fast the divergence occurs. After introducing the square root of time as the new variable, we prove that the numerical scheme for the transformed partial differential equation always converges and that the ratio of the time step to the space step controls the order of convergence, with quadratic convergence achieved when this ratio is below a critical value. Numerical results indicate that the time change, used with an appropriate value of this ratio, also yields quadratic convergence for the calculation of the price, delta, and gamma of standard European and American options without the need for Rannacher start-up steps.
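
    A minimal numerical sketch of the transformed scheme, assuming tau = sqrt(t) so that the heat equation u_t = u_xx becomes u_tau = 2*tau*u_xx, with the coefficient evaluated at the half step; the grid sizes, the discrete delta, and the comparison against the free-space heat kernel are my illustrative choices, not the paper's exact setup.

    import numpy as np

    L, T, nx = 4.0, 0.05, 201                # domain [-L/2, L/2], final time, grid points
    dx = L / (nx - 1)
    x = np.linspace(-L / 2, L / 2, nx)
    lam = 0.5                                # ratio dtau/dx that controls the order
    ntau = int(round(np.sqrt(T) / (lam * dx)))
    dtau = np.sqrt(T) / ntau                 # land exactly on tau = sqrt(T)

    D2 = (np.diag(np.ones(nx - 1), -1) - 2.0 * np.eye(nx)
          + np.diag(np.ones(nx - 1), 1)) / dx**2
    D2[0, :] = 0.0
    D2[-1, :] = 0.0                          # homogeneous Dirichlet boundaries

    u = np.zeros(nx)
    u[nx // 2] = 1.0 / dx                    # discrete Dirac delta

    for n in range(ntau):
        tau_mid = (n + 0.5) * dtau           # coefficient 2*tau taken at the half step
        A = np.eye(nx) - dtau * tau_mid * D2
        B = np.eye(nx) + dtau * tau_mid * D2
        u = np.linalg.solve(A, B @ u)        # Crank-Nicolson step in tau

    exact = np.exp(-x**2 / (4 * T)) / np.sqrt(4 * np.pi * T)
    print(np.max(np.abs(u - exact)))         # refine dx with lam fixed to observe the order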

    An implicit function theorem for non-smooth maps between Fréchet spaces

    We prove an inverse function theorem of Nash-Moser type for maps between Fréchet spaces satisfying tame estimates. In contrast to earlier proofs, we do not use the Newton method, that is, we do not use quadratic convergence to overcome the lack of derivatives. In fact, our theorem holds when the map to be inverted is not C^

    A new modified Newton iteration for computing nonnegative Z-eigenpairs of nonnegative tensors

    We propose a new modification of the Newton iteration for finding some nonnegative Z-eigenpairs of a nonnegative tensor. The method has local quadratic convergence to a nonnegative eigenpair of a nonnegative tensor, under the usual assumption guaranteeing the local quadratic convergence of the original Newton iteration.
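
    The modification itself is not spelled out in the abstract, so the sketch below is only the classical Newton iteration on the Z-eigenpair equations A x^2 = lambda*x, x^T x = 1 for an order-3 tensor, i.e. the scheme whose local quadratic convergence the modified method is said to inherit; the random tensor, the positive starting vector, and the stopping rule are illustrative, and nothing guarantees that this particular run ends at a nonnegative eigenpair.

    import numpy as np

    def z_eigenpair_newton(A, x0, lam0, iters=50, tol=1e-12):
        """Newton iteration for a Z-eigenpair of an order-3 tensor A:
        A x^2 = lam*x, x^T x = 1, with (A x^2)_i = sum_{j,k} A[i,j,k]*x_j*x_k."""
        n = A.shape[0]
        x, lam = x0.astype(float).copy(), float(lam0)
        for _ in range(iters):
            Ax2 = np.einsum('ijk,j,k->i', A, x, x)
            F = np.concatenate([Ax2 - lam * x, [(x @ x - 1.0) / 2.0]])
            if np.linalg.norm(F) < tol:
                break
            # Jacobian of x -> A x^2 for a (possibly non-symmetric) tensor
            dAx2 = np.einsum('ijk,k->ij', A, x) + np.einsum('ikj,k->ij', A, x)
            J = np.block([[dAx2 - lam * np.eye(n), -x[:, None]],
                          [x[None, :], np.zeros((1, 1))]])
            step = np.linalg.solve(J, -F)
            x, lam = x + step[:n], lam + step[n]
        return x, lam

    # tiny example: random nonnegative tensor, started from a positive unit vector
    rng = np.random.default_rng(1)
    A = rng.random((4, 4, 4))
    x0 = np.ones(4) / 2.0
    x, lam = z_eigenpair_newton(A, x0, lam0=x0 @ np.einsum('ijk,j,k->i', A, x0, x0))
    print(lam, np.linalg.norm(np.einsum('ijk,j,k->i', A, x, x) - lam * x))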