
    Parallel iteration schemes for implicit ODEIVP methods

    Monte Carlo methods in PageRank computation: When one iteration is sufficient

    PageRank is one of the principal criteria according to which Google ranks Web pages. PageRank can be interpreted as the frequency with which a random surfer visits a Web page, and thus it reflects the popularity of that page. Google computes the PageRank using the power iteration method, which requires about one week of intensive computations. In the present work we propose and analyze Monte Carlo-type methods for the PageRank computation. The probabilistic Monte Carlo methods have several advantages over the deterministic power iteration method: they provide a good estimate of the PageRank of relatively important pages already after one iteration; they have a natural parallel implementation; and, finally, they allow continuous updating of the PageRank as the structure of the Web changes.
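    The random-surfer estimator described in this abstract can be sketched as follows; the toy graph, walk counts, and dangling-page handling below are illustrative assumptions, not the paper's specific estimators.

```python
import random
from collections import defaultdict

def monte_carlo_pagerank(graph, damping=0.85, walks_per_node=10, max_len=100):
    """Estimate PageRank from random-surfer walks (a sketch of the idea).

    graph: dict mapping each page to its list of out-links.
    Each walk starts at some page and, at every step, follows a random
    out-link with probability `damping`; otherwise (or at a dangling
    page) the walk ends, modelling a teleport. The fraction of all
    recorded visits that a page receives approximates its PageRank.
    """
    visits = defaultdict(int)
    total = 0
    for start in graph:
        for _ in range(walks_per_node):
            page = start
            for _ in range(max_len):
                visits[page] += 1
                total += 1
                if random.random() > damping or not graph[page]:
                    break
                page = random.choice(graph[page])
    return {page: visits[page] / total for page in graph}

# Toy 4-page web graph (illustrative only).
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(monte_carlo_pagerank(web, walks_per_node=1000))
```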

    Computational aspects of helicopter trim analysis and damping levels from Floquet theory

    Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of the Newton iteration used to compute the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods, one with a displacement formulation and one with a mixed formulation of displacements and momenta. These three methods broadly represent the two main approaches to trim analysis: adaptation of initial-value and of finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and the in-parallel schemes are used, and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of the damped Newton iteration, including the divergence problems observed earlier in trim analysis, is demonstrated through the maximum condition number of the Jacobian matrices of the iterative scheme and through the virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
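    The damped Newton iteration at the core of this trim analysis can be sketched generically as below; the residual function, forward-difference Jacobian, and fixed damping parameter are placeholder assumptions, not the paper's trim equations or its optimal damping selection.

```python
import numpy as np

def damped_newton(residual, x0, damping=0.8, tol=1e-10, max_iter=50, eps=1e-7):
    """Damped Newton iteration x <- x - damping * J(x)^{-1} r(x).

    `residual` maps an n-vector to an n-vector (e.g. the mismatch of a
    periodicity condition); the Jacobian is approximated here by forward
    differences. A damping parameter in (0, 1] trades speed of
    convergence for robustness against divergence.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, x.size))          # forward-difference Jacobian
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual(x + dx) - r) / eps
        x = x - damping * np.linalg.solve(J, r)
    return x

# Toy fixed-point style condition x - g(x) = 0 standing in for a periodicity condition.
g = lambda x: np.array([0.5 * np.cos(x[1]), 0.5 * np.sin(x[0])])
print(damped_newton(lambda x: x - g(x), [1.0, 1.0]))
```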

    Explicit symmetric Runge-Kutta-Nyström methods for parallel computers

    In this paper, we are concerned with parallel predictor-corrector (PC) iteration of Runge-Kutta-Nyström (RKN) methods in P(EC)^mE mode for integrating initial value problems for the special second-order equation y″(t) = f(y(t)). We consider symmetric Runge-Kutta-Nyström (SRKN) corrector methods based on direct collocation techniques which optimize the rate of convergence of the PC iteration process. The resulting PISRKN methods (parallel iterated SRKN methods) are shown to be much more efficient than the PC iteration process applied to the Gauss-Legendre RKN correctors.
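    A generic P(EC)^mE sweep for an implicit RKN corrector applied to y″ = f(y) might look like the sketch below; the one-stage collocation corrector used in the example is only an illustrative choice, not one of the optimized SRKN correctors constructed in the paper.

```python
import numpy as np

def pece_rkn_step(f, y, yp, h, A, b, bp, c, m=3):
    """One P(EC)^mE step of an s-stage RKN corrector for y'' = f(y).

    Predictor: Y_i = y + c_i*h*yp.  Each corrector sweep recomputes
    Y <- y + c*h*yp + h^2 * A @ F; within a sweep the stage evaluations
    f(Y_i) are mutually independent, which is what parallel iterated
    (PISRKN-type) schemes exploit across processors.
    """
    s = len(c)
    Y = np.array([y + c[i] * h * yp for i in range(s)])      # P
    F = np.array([f(Yi) for Yi in Y])                        # E
    for _ in range(m):                                       # (EC)^m
        Y = np.array([y + c[i] * h * yp + h * h * (A[i] @ F) for i in range(s)])
        F = np.array([f(Yi) for Yi in Y])                    # stage-parallel evaluations
    return y + h * yp + h * h * (b @ F), yp + h * (bp @ F)   # step update

# Illustrative corrector: one-stage Gauss RKN collocation at c = 1/2.
A, b, bp, c = np.array([[0.125]]), np.array([0.5]), np.array([1.0]), np.array([0.5])
y, yp, h = 1.0, 0.0, 0.1
for _ in range(10):
    y, yp = pece_rkn_step(lambda u: -u, y, yp, h, A, b, bp, c)
print(y, np.cos(1.0))   # y'' = -y with y(0)=1, y'(0)=0: compare with cos(1)
```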

    Convergence Rates with Inexact Non-expansive Operators

    In this paper, we present a convergence rate analysis for the inexact Krasnosel'skii-Mann iteration built from nonexpansive operators. Our results include two main parts: we first establish global pointwise and ergodic iteration-complexity bounds, and then, under a metric subregularity assumption, we establish local linear convergence of the distance of the iterates to the set of fixed points. The obtained iteration-complexity result can be applied to analyze the convergence rate of various monotone operator splitting methods in the literature, including the Forward-Backward, the Generalized Forward-Backward, the Douglas-Rachford, the alternating direction method of multipliers (ADMM), and Primal-Dual splitting methods. For these methods, we also develop easily verifiable termination criteria for finding an approximate solution, which can be seen as a generalization of the termination criterion for the classical gradient descent method. We finally develop a parallel analysis for the non-stationary Krasnosel'skii-Mann iteration. The usefulness of our results is illustrated by applying them to a large class of structured monotone inclusion and convex optimization problems. Experiments on some large-scale inverse problems in signal and image processing are shown. Comment: This is an extended version of the work presented in http://arxiv.org/abs/1310.6636, and is accepted by Mathematical Programming.
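    A minimal sketch of the (inexact) Krasnosel'skii-Mann iteration itself is given below; the operator, relaxation parameter, and error model are illustrative assumptions, not the splitting methods analysed in the paper.

```python
import numpy as np

def km_iteration(T, x0, lam=0.5, n_iter=200, noise=0.0, seed=0):
    """Inexact Krasnosel'skii-Mann iteration x_{k+1} = (1-lam)*x_k + lam*(T(x_k) + e_k).

    T must be nonexpansive; `noise` scales a summable error sequence
    e_k ~ O(1/k^2) that mimics inexact evaluation of T.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        e = noise / k**2 * rng.standard_normal(x.shape)
        x = (1 - lam) * x + lam * (T(x) + e)
    return x

# Nonexpansive T: composition of two projections, whose fixed points form the
# intersection of the unit ball and the half-space {x : x[0] >= 0.5}.
proj_ball = lambda x: x if np.linalg.norm(x) <= 1 else x / np.linalg.norm(x)
proj_half = lambda x: x - min(0.0, x[0] - 0.5) * np.array([1.0, 0.0])
print(km_iteration(lambda x: proj_ball(proj_half(x)), [2.0, 2.0], noise=1e-3))
```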

    Parallel Self-Consistent-Field Calculations via Chebyshev-Filtered Subspace Acceleration

    Solving the Kohn-Sham eigenvalue problem constitutes the most computationally expensive part of self-consistent density functional theory (DFT) calculations. In a previous paper, we proposed a nonlinear Chebyshev-filtered subspace iteration method, which avoids computing explicit eigenvectors except at the first SCF iteration. The method may be viewed as an approach to solving the original nonlinear Kohn-Sham equation by a nonlinear subspace iteration technique, without emphasizing the intermediate linearized Kohn-Sham eigenvalue problem. It reaches self-consistency within a similar number of SCF iterations as eigensolver-based approaches. However, replacing the standard diagonalization at each SCF iteration by a Chebyshev subspace filtering step results in a significant speedup over methods based on standard diagonalization. Here, we discuss an approach for implementing this method in a multi-processor, parallel environment. Numerical results are presented to show that the method enables a class of highly challenging DFT calculations that were not feasible before.
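    A single-step, purely linear sketch of the Chebyshev subspace filtering idea follows; the toy symmetric matrix, filter degree, and spectral-bound heuristic are assumptions for illustration and stand in for the Kohn-Sham Hamiltonian and the bound estimates used in an actual SCF loop.

```python
import numpy as np

def chebyshev_filter(H, V, degree, a, b):
    """Apply a degree-`degree` Chebyshev filter p(H) to the block of vectors V.

    The unwanted interval [a, b] of H's spectrum is mapped to [-1, 1],
    where Chebyshev polynomials stay bounded, while components below `a`
    are strongly amplified, so the filtered block leans towards the
    lowest eigenvectors without an explicit diagonalization.
    """
    e, c = (b - a) / 2.0, (b + a) / 2.0        # half-width and centre of [a, b]
    Y = (H @ V - c * V) / e                    # degree-1 term of the recurrence
    for _ in range(2, degree + 1):
        Y, V = 2.0 * (H @ Y - c * Y) / e - V, Y
    return Y

def filtered_subspace_step(H, V, degree=8):
    """Filter, orthonormalize, then Rayleigh-Ritz: one subspace-iteration step."""
    evals = np.linalg.eigvalsh(H)              # affordable for a toy dense H only
    a, b = np.median(evals), evals[-1]         # crude spectral bounds (a heuristic)
    V = chebyshev_filter(H, V, degree, a, b)
    V, _ = np.linalg.qr(V)                     # orthonormalize the filtered basis
    w, Q = np.linalg.eigh(V.T @ H @ V)         # Rayleigh-Ritz projection
    return V @ Q, w

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
H = (M + M.T) / 2                              # toy symmetric 'Hamiltonian'
V, w = filtered_subspace_step(H, rng.standard_normal((50, 4)))
print(w)                                       # compare with np.linalg.eigvalsh(H)[:4]
```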