5 research outputs found

    Riemannian Gradient Algorithm for the Numerical Solution of Linear Matrix Equations

    A Riemannian gradient algorithm, based on the geometric structure of the manifold of all positive definite matrices, is proposed for computing the numerical solution of the linear matrix equation $Q = X + \sum_{i=1}^{m} A_i^{T} X A_i$. In this algorithm, the geodesic distance on the curved Riemannian manifold is taken as the objective function and the geodesic curve is treated as the convergence path. The optimal variable step sizes corresponding to the minimum value of the objective function are also provided in order to improve the convergence speed. Furthermore, the convergence speed of the Riemannian gradient algorithm is compared with that of the traditional conjugate gradient method in two simulation examples, and the proposed algorithm is found to converge faster than the conjugate gradient method.
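    To make the equation concrete, the following is a minimal NumPy sketch that solves $Q = X + \sum_{i=1}^{m} A_i^{T} X A_i$ by direct Kronecker-product vectorization. This is a dense baseline for small problems, not the Riemannian gradient algorithm of the abstract; the function name solve_sum_stein and the test data are hypothetical.

```python
import numpy as np

def solve_sum_stein(Q, As):
    """Solve Q = X + sum_i A_i^T X A_i by direct vectorization.

    Dense O(n^6) baseline: uses vec(A^T X A) = (A^T kron A^T) vec(X)
    with column-major (Fortran) vec. Intended only to make the
    equation explicit and to check iterative solvers on small cases.
    """
    n = Q.shape[0]
    M = np.eye(n * n)
    for A in As:
        M += np.kron(A.T, A.T)
    x = np.linalg.solve(M, Q.flatten(order="F"))
    return x.reshape((n, n), order="F")

# Hypothetical small example: scaled random A_i and a positive definite X.
rng = np.random.default_rng(0)
n, m = 4, 2
As = [0.3 * rng.standard_normal((n, n)) for _ in range(m)]
X_true = 0.1 * rng.standard_normal((n, n))
X_true = (X_true + X_true.T) / 2 + n * np.eye(n)   # symmetric positive definite
Q = X_true + sum(A.T @ X_true @ A for A in As)
X = solve_sum_stein(Q, As)
print(np.linalg.norm(X - X_true))                  # near machine precision
```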

    A Note on the ⊤-Stein Matrix Equation

    This note is concerned with the linear matrix equation $X = AX^{\top}B + C$, where $(\cdot)^{\top}$ denotes the transpose of a matrix. The first part of the paper sets forth necessary and sufficient conditions for the unique solvability of the equation. The second part provides a comprehensive treatment of the relationship between the theory of the generalized eigenvalue problem and the theory of this linear matrix equation. The final part begins with a brief review of numerical methods for solving the equation. In connection with these methods, the residual of a computed solution is discussed, and an expression related to the backward error of an approximate solution is obtained; it shows that a small backward error implies a small residual. As with other linear matrix equations, perturbation bounds for the solution are also proposed in this work.
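    As an illustration of the equation itself, the sketch below solves $X = AX^{\top}B + C$ by vectorization with a commutation (perfect-shuffle) matrix. It is a dense baseline, not one of the numerical methods reviewed in the note; it assumes the coefficient matrix $I - (B^{\top} \otimes A)K$ is nonsingular, i.e. that the unique-solvability condition holds, and the names solve_t_stein and commutation_matrix are hypothetical.

```python
import numpy as np

def commutation_matrix(n):
    """Permutation K with K @ vec(X) = vec(X.T) for n-by-n X (column-major vec)."""
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[i * n + j, j * n + i] = 1.0
    return K

def solve_t_stein(A, B, C):
    """Solve X = A X^T B + C by vectorization.

    Uses vec(A X^T B) = (B^T kron A) K vec(X); assumes the resulting
    linear system I - (B^T kron A) K is nonsingular, i.e. the equation
    is uniquely solvable.
    """
    n = A.shape[0]
    K = commutation_matrix(n)
    M = np.eye(n * n) - np.kron(B.T, A) @ K
    x = np.linalg.solve(M, C.flatten(order="F"))
    return x.reshape((n, n), order="F")

# Hypothetical example with small coefficient norms so the solution is unique.
rng = np.random.default_rng(1)
n = 4
A = 0.4 * rng.standard_normal((n, n))
B = 0.4 * rng.standard_normal((n, n))
X_true = rng.standard_normal((n, n))
C = X_true - A @ X_true.T @ B
X = solve_t_stein(A, B, C)
print(np.linalg.norm(X - X_true))   # near machine precision
```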