
    Finite iterative algorithms for solving generalized coupled Sylvester systems – Part I: One-sided and generalized coupled Sylvester matrix equations over generalized reflexive solutions

    The generalized coupled Sylvester systems play a fundamental role in a wide range of applications in several areas, such as stability theory, control theory, perturbation analysis, and other fields of pure and applied mathematics. Iterative methods are an important way to solve generalized coupled Sylvester systems. In this two-part article, finite iterative methods are proposed for solving one-sided (or two-sided) and generalized coupled Sylvester matrix equations and the corresponding optimal approximation problem over generalized reflexive solutions (or reflexive solutions). In Part I, an iterative algorithm is constructed to solve the one-sided coupled Sylvester matrix equations $(AY - ZB,\ CY - ZD) = (E, F)$ over generalized reflexive matrices $Y$ and $Z$. When the matrix equations are consistent, then for any initial generalized reflexive matrix pair $[Y_1, Z_1]$ the generalized reflexive solutions can be obtained by the iterative algorithm within a finite number of iteration steps in the absence of round-off errors, and the least-Frobenius-norm generalized reflexive solution pair can be obtained by choosing a special kind of initial matrix pair. The unique optimal approximation generalized reflexive solution pair $[\hat{Y}, \hat{Z}]$ to a given matrix pair $[Y_0, Z_0]$ in the Frobenius norm can be derived by finding the least-norm generalized reflexive solution pair $[\tilde{Y}^{*}, \tilde{Z}^{*}]$ of two new corresponding generalized coupled Sylvester matrix equations $(A\tilde{Y} - \tilde{Z}B,\ C\tilde{Y} - \tilde{Z}D) = (\tilde{E}, \tilde{F})$, where $\tilde{E} = E - AY_0 + Z_0 B$ and $\tilde{F} = F - CY_0 + Z_0 D$. Several numerical examples are given to show the effectiveness of the presented iterative algorithm.
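
    The paper's finite iterative algorithm keeps its iterates generalized reflexive; as a rough baseline for small dense test cases, the coupled equations can instead be vectorized with Kronecker products and solved in the minimum-norm least-squares sense, ignoring the reflexive structure. The sketch below, including the helper name solve_coupled_sylvester and the square dimensions, is an illustrative assumption, not the algorithm of the paper.

```python
import numpy as np

def solve_coupled_sylvester(A, B, C, D, E, F):
    """Brute-force baseline for (A Y - Z B, C Y - Z D) = (E, F).

    Vectorizes both equations with Kronecker products and returns the
    minimum-norm least-squares pair (Y, Z).  This ignores the generalized
    reflexive structure treated in the paper and scales poorly, so it is
    only meant as a reference for small test problems.
    """
    n = A.shape[1]          # square case for brevity: Y and Z are n x n
    I = np.eye(n)
    # vec(A Y) = (I kron A) vec(Y),  vec(Z B) = (B^T kron I) vec(Z)
    M = np.block([
        [np.kron(I, A), -np.kron(B.T, I)],
        [np.kron(I, C), -np.kron(D.T, I)],
    ])
    rhs = np.concatenate([E.flatten(order="F"), F.flatten(order="F")])
    sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    Y = sol[: n * n].reshape(n, n, order="F")
    Z = sol[n * n :].reshape(n, n, order="F")
    return Y, Z

# Small consistency check on a random consistent system.
rng = np.random.default_rng(0)
n = 4
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
Y_true, Z_true = rng.standard_normal((n, n)), rng.standard_normal((n, n))
E, F = A @ Y_true - Z_true @ B, C @ Y_true - Z_true @ D
Y, Z = solve_coupled_sylvester(A, B, C, D, E, F)
print(np.linalg.norm(A @ Y - Z @ B - E), np.linalg.norm(C @ Y - Z @ D - F))
```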

    Solution of polynomial Lyapunov and Sylvester equations

    A two-variable polynomial approach for solving the one-variable polynomial Lyapunov and Sylvester equations is proposed. Lifting the problem from the one-variable to the two-variable context gives rise to associated lifted equations which live on finite-dimensional vector spaces. This allows for the design of an iterative solution method inspired by the method of Faddeev for the computation of matrix resolvents. The resulting algorithms are especially suitable for applications requiring symbolic or exact computation.
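
    The abstract only names the method of Faddeev as its inspiration; for readers unfamiliar with it, here is a minimal sketch of the classical Faddeev-LeVerrier recursion for the matrix resolvent. It is a generic illustration, not the lifted two-variable algorithm of the paper.

```python
import numpy as np

def faddeev_leverrier(A):
    """Classical Faddeev-LeVerrier recursion.

    Returns (N, c) with
        (s I - A)^{-1} = (N[0] s^{n-1} + ... + N[n-1]) / (s^n + c[0] s^{n-1} + ... + c[n-1]),
    i.e. the matrix numerator polynomial and the characteristic-polynomial
    coefficients, using only matrix products and traces (exact in rational
    arithmetic, hence suited to symbolic or exact computation).
    """
    n = A.shape[0]
    N = [np.eye(n)]
    c = []
    for k in range(1, n + 1):
        AN = A @ N[-1]
        ck = -np.trace(AN) / k
        c.append(ck)
        if k < n:
            N.append(AN + ck * np.eye(n))
    return N, c

# Quick check: evaluate the resolvent at s = 2 and compare with a direct inverse.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
N, c = faddeev_leverrier(A)
s = 2.0
num = sum(Nk * s ** (len(N) - 1 - k) for k, Nk in enumerate(N))
den = s ** A.shape[0] + sum(ck * s ** (A.shape[0] - 1 - k) for k, ck in enumerate(c))
print(np.allclose(num / den, np.linalg.inv(s * np.eye(2) - A)))
```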

    Preconditioning of Improved and "Perfect" Fermion Actions

    We construct a locally-lexicographic SSOR (ll-SSOR) preconditioner to accelerate the parallel iterative solution of linear systems of equations for two improved discretizations of lattice fermions: (i) the Sheikholeslami-Wohlert scheme, where a non-constant block-diagonal term is added to the Wilson fermion matrix, and (ii) renormalization-group-improved actions, which incorporate couplings beyond nearest neighbors of the lattice fermion fields. In case (i) we find the block ll-SSOR scheme to be more effective, by a factor of about 2, than odd-even preconditioned solvers in terms of convergence rates, at beta=6.0. For type (ii) actions, we show that our preconditioner accelerates the iterative solution of a linear system of hypercube fermions by a factor of 3 to 4.

    Comment: 27 pages, LaTeX, 17 figures included
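
    The abstract does not spell out the preconditioner itself; as a reminder of how a plain (non-lattice, single-process) SSOR preconditioner is applied inside a Krylov solver, here is a small sketch using SciPy's GMRES. The test matrix and the relaxation parameter omega are illustrative assumptions; the paper's locally-lexicographic ordering and block structure are not reproduced.

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, gmres

def ssor_preconditioner(A, omega=1.0):
    """Return M^{-1} as a LinearOperator for the SSOR splitting
    M = (D/omega + L) * (omega/(2-omega)) * D^{-1} * (D/omega + U),
    applied via one forward and one backward triangular solve."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    lower = D / omega + L
    upper = D / omega + U
    scale = (2.0 - omega) / omega

    def apply(r):
        y = solve_triangular(lower, r, lower=True)   # forward sweep
        y = scale * (np.diag(D) * y)                 # scale by ((2-omega)/omega) D
        return solve_triangular(upper, y, lower=False)  # backward sweep

    n = A.shape[0]
    return LinearOperator((n, n), matvec=apply, dtype=A.dtype)

# Toy example: a diagonally dominant system solved with an SSOR-preconditioned GMRES.
rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)) * 0.1 + np.eye(n) * n
b = rng.standard_normal(n)
x, info = gmres(A, b, M=ssor_preconditioner(A, omega=1.2))
print(info, np.linalg.norm(A @ x - b))
```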

    Development of iterative techniques for the solution of unsteady compressible viscous flows

    Efficient iterative solution methods are being developed for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes, and the extra work they require can be structured to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach, based on the classical conjugate gradient method and known as the GMRES (Generalized Minimum Residual) algorithm, is also investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, its suitability is investigated for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm, the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors to be a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a non-iterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies and matrix additions and subtractions, can all be vectorized and parallelized efficiently.
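
    To make the description above concrete, here is a minimal dense single-cycle GMRES(N) in NumPy: it builds N orthonormal Arnoldi vectors and minimizes the L2 norm of the residual over their span, which is exactly the behavior the abstract contrasts with Newton's method. The solver, problem size, and choice of N are illustrative assumptions, not the flow codes described in the report.

```python
import numpy as np

def gmres_cycle(matvec, b, x0, n_vectors=10, tol=1e-10):
    """One GMRES cycle: express the update as a combination of n_vectors
    orthonormal Krylov (Arnoldi) vectors, chosen to minimize the 2-norm
    of the residual b - A(x0 + dx)."""
    r0 = b - matvec(x0)
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    m = n_vectors
    V = np.zeros((len(b), m + 1))          # orthonormal basis vectors
    H = np.zeros((m + 1, m))               # upper Hessenberg matrix
    V[:, 0] = r0 / beta
    for j in range(m):
        w = matvec(V[:, j])
        for i in range(j + 1):             # modified Gram-Schmidt orthogonalization
            H[i, j] = np.dot(V[:, i], w)
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < tol:              # happy breakdown: exact solution reached
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # Small (m+1) x m least-squares problem for the combination coefficients.
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[: m + 1, :m], e1, rcond=None)
    return x0 + V[:, :m] @ y

# Toy linear problem standing in for one linearized implicit time step.
rng = np.random.default_rng(2)
n = 50
A = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.1
b = rng.standard_normal(n)
x = gmres_cycle(lambda v: A @ v, b, np.zeros(n), n_vectors=15)
print(np.linalg.norm(A @ x - b))
```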

    An incremental strategy for calculating consistent discrete CFD sensitivity derivatives

    In this preliminary study involving advanced computational fluid dynamics (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
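
    As a generic illustration of the 'delta' (correction) form the abstract refers to, the sketch below iterates on an increment: a hypothetical approximate, better-conditioned operator P is solved against the current residual and the result is added to the solution, so the converged answer satisfies the original system regardless of the approximation in P. This is a schematic defect-correction loop, not the sensitivity-analysis code described in the study.

```python
import numpy as np

def incremental_solve(A, b, P, n_iter=50, tol=1e-12):
    """Delta/correction form: repeatedly solve P * dx = b - A x and
    update x += dx.  Only the residual uses the exact matrix A, so the
    fixed point solves A x = b even though P is only an approximation."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        r = b - A @ x                      # residual of the *original* system
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(P, r)         # increment from the approximate operator
        x += dx
    return x

# Toy example: P keeps only the tridiagonal part of A (an assumed, cheaper operator).
rng = np.random.default_rng(3)
n = 100
A = np.eye(n) * 5 + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1) \
    + rng.standard_normal((n, n)) * 0.05
P = np.triu(np.tril(A, 1), -1)             # tridiagonal approximation of A
b = rng.standard_normal(n)
x = incremental_solve(A, b, P)
print(np.linalg.norm(A @ x - b))
```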

    Numerical iterative methods for nonlinear problems.

    The primary focus of the research in this thesis is the construction of iterative methods for nonlinear problems arising in different disciplines. The present manuscript develops iterative schemes for scalar nonlinear equations, for computing the generalized inverse of a matrix, for general classes of systems of nonlinear equations, and for specific systems of nonlinear equations associated with ordinary and partial differential equations. Our treatment of the considered iterative schemes consists of two parts: in the first, called the 'construction part', we define the solution method; in the second, we establish the proof of local convergence and derive the convergence order using symbolic algebra tools. The quantitative measure in terms of floating-point operations and the quality of the computed solution, when real nonlinear problems are considered, provide the efficiency comparison between the proposed and the existing iterative schemes. In the case of systems of nonlinear equations, the multi-step extensions are formed in such a way that very economical iterative methods are obtained from a computational viewpoint. In particular, in the multi-step versions of an iterative method for systems of nonlinear equations, the computation of new Jacobian inverses is avoided, which makes the iterative process computationally very fast. When considering special systems of nonlinear equations associated with ordinary and partial differential equations, we can use higher-order Fréchet derivatives thanks to the special type of nonlinearity; from a computational viewpoint, such an approach has to be avoided for general systems of nonlinear equations due to the high computational cost. Aside from nonlinear equations, an efficient matrix iteration method is developed and implemented for the calculation of the weighted Moore-Penrose inverse. Finally, a variety of nonlinear problems have been numerically tested in order to show the correctness and the computational efficiency of the developed iterative algorithms.
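
    The multi-step idea described above can be illustrated with a generic frozen-Jacobian scheme: one LU factorization of the Jacobian is reused for several corrector substeps, so each substep costs only a residual evaluation and two triangular solves. The scheme, step counts, and the test problem below are illustrative assumptions; the thesis develops its own, higher-order variants.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def frozen_jacobian_newton(fun, jac, x0, inner_steps=3, outer_iters=20, tol=1e-12):
    """Generic multi-step Newton-type iteration: factor the Jacobian once per
    outer iteration and reuse the factorization for several inner corrections,
    avoiding repeated Jacobian formation and inversion."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        lu = lu_factor(jac(x))            # one factorization per outer iteration
        for _ in range(inner_steps):      # cheap multi-step corrections
            r = fun(x)
            if np.linalg.norm(r) < tol:
                return x
            x = x - lu_solve(lu, r)
    return x

# Toy system: x0^2 + x1 - 3 = 0,  x0 + x1^2 - 5 = 0  (a root is (1, 2)).
def fun(x):
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

def jac(x):
    return np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])

root = frozen_jacobian_newton(fun, jac, x0=[1.5, 1.5])
print(root, np.linalg.norm(fun(root)))
```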