
    Matrična funkcija predznaka (The Matrix Sign Function)

    Well-known functions (such as the sign, square root, and logarithm) are sometimes applied not only to numbers but also to matrices. In this article we give definitions of such functions and examine their basic properties. We then turn to the matrix sign function, which is relatively simple, yet complex enough to demonstrate how one works with such functions. Finally, we describe two algorithms for computing the values of this function, which the interested reader can adapt to compute other functions.
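The abstract does not say which two algorithms the article describes, but the classical method for the matrix sign function is the Newton iteration X_{k+1} = (X_k + X_k^{-1})/2 with X_0 = A. A minimal sketch (the function name is mine, not from the article):

```python
import numpy as np

def matrix_sign_newton(A, tol=1e-12, max_iter=100):
    """Matrix sign function via the classical Newton iteration
    X_{k+1} = (X_k + X_k^{-1}) / 2, started from X_0 = A.
    Converges quadratically when A has no eigenvalues on the
    imaginary axis."""
    X = np.asarray(A, dtype=float)
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_new - X, 'fro') <= tol * np.linalg.norm(X_new, 'fro'):
            return X_new
        X = X_new
    return X

# Example: a symmetric matrix with one positive and one negative eigenvalue.
A = np.array([[2.0,  1.0],
              [1.0, -3.0]])
S = matrix_sign_newton(A)
# The matrix sign is involutory: S @ S equals the identity.
```

The same iteration, with a suitable starting matrix, computes other functions as well (e.g. the square root), which is the kind of adaptation the abstract alludes to.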

    Theory and algorithms for matrix problems with positive semidefinite constraints

    This thesis presents new theoretical results and algorithms for two matrix problems with positive semidefinite constraints: it adds to the well-established nearest correlation matrix problem, and introduces a class of semidefinite Lagrangian subspaces. First, we propose shrinking, a method for restoring positive semidefiniteness of an indefinite matrix M_0 that computes the optimal parameter α_* in a convex combination of M_0 and a chosen positive semidefinite target matrix. We describe three algorithms for computing α_*, and then focus on the case of keeping fixed a positive semidefinite leading principal submatrix of an indefinite approximation of a correlation matrix, showing how this structure can be exploited to reduce the cost of two of the algorithms. We describe how weights can be used to construct a natural choice of the target matrix, and show that they can be incorporated without any change to the computational methods, in contrast to the nearest correlation matrix problem. Numerical experiments show that shrinking can be at least an order of magnitude faster than computing the nearest correlation matrix, and so is preferable in time-critical applications. Second, we focus on estimating the Frobenius-norm distance d_corr(A) = ‖A − ncm(A)‖_F of a symmetric matrix A to its nearest correlation matrix ncm(A) without first computing the latter. The goal is to enable a user to identify an invalid correlation matrix relatively cheaply and to decide whether to revisit its construction or to compute a replacement. We present the currently available lower and upper bounds for d_corr(A), derive several new upper bounds, discuss the computational cost of all the bounds, and test their accuracy on a collection of invalid correlation matrices. The experiments show that several of our bounds are well suited to gauging the correct order of magnitude of d_corr(A), which is perfectly satisfactory for practical applications.
    Third, we show how Anderson acceleration can be used to speed up the convergence of the alternating projections method for computing the nearest correlation matrix, and that the acceleration remains effective when it is applied to the variants of the problem in which specified elements are fixed or a lower bound is imposed on the smallest eigenvalue. This is particularly significant for the nearest correlation matrix problem with fixed elements, because no Newton method with guaranteed convergence is available for it. Moreover, alternating projections is a general method for finding a point in the intersection of several sets, and this appears to be the first demonstration that such methods can benefit from Anderson acceleration. Finally, we introduce semidefinite Lagrangian subspaces, describe their connection to the unique positive semidefinite solution of an algebraic Riccati equation, and show that these subspaces can be represented by a subset I ⊆ {1, 2, …, n} and a Hermitian matrix X ∈ C^{n×n} that is a generalization of a quasidefinite matrix. We further obtain a semidefiniteness-preserving version of an optimization algorithm introduced by Mehrmann and Poloni [SIAM J. Matrix Anal. Appl., 33 (2012), pp. 780–805] to compute a pair (I_opt, X_opt) with M = max_{i,j} |(X_opt)_{ij}| as small as possible, which improves numerical stability in several contexts.
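The fixed-point iteration that Anderson acceleration is applied to is the classical alternating projections method with Dykstra's correction (Higham, 2002) for the nearest correlation matrix. A plain, unaccelerated sketch of that baseline (function names are mine):

```python
import numpy as np

def proj_psd(A):
    """Project a symmetric matrix onto the PSD cone (Frobenius norm)."""
    w, Q = np.linalg.eigh(A)
    return (Q * np.maximum(w, 0.0)) @ Q.T

def proj_unit_diag(A):
    """Project onto the set of symmetric matrices with unit diagonal."""
    B = A.copy()
    np.fill_diagonal(B, 1.0)
    return B

def nearest_corr_ap(A, tol=1e-10, max_iter=2000):
    """Alternating projections with Dykstra's correction for the
    nearest correlation matrix.  Unaccelerated baseline; the thesis
    applies Anderson acceleration on top of this iteration."""
    Y = A.copy()
    dS = np.zeros_like(A)
    for _ in range(max_iter):
        R = Y - dS              # remove Dykstra's correction
        X = proj_psd(R)         # project onto the PSD cone
        dS = X - R              # update the correction
        Y_new = proj_unit_diag(X)
        if np.linalg.norm(Y_new - Y, 'fro') <= tol * max(1.0, np.linalg.norm(Y, 'fro')):
            return Y_new
        Y = Y_new
    return Y

# An indefinite matrix with unit diagonal (an "invalid" correlation matrix).
A = np.array([[1.0,  0.9,  0.7],
              [0.9,  1.0, -0.9],
              [0.7, -0.9,  1.0]])
N = nearest_corr_ap(A)
```

The Dykstra correction `dS` is what makes the iteration converge to the nearest point in the intersection, rather than just some point in it; Anderson acceleration treats the whole loop body as a fixed-point map and extrapolates over its history.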

    Principal pivot transforms of quasidefinite matrices and semidefinite Lagrangian subspaces

    Lagrangian subspaces are linear subspaces that appear naturally in control theory applications, especially in the context of algebraic Riccati equations. We introduce a class of semidefinite Lagrangian subspaces and show that these subspaces can be represented by a subset I ⊆ {1, 2, …, n} and a Hermitian matrix X ∈ C^{n×n} with the property that the submatrix X_II is negative semidefinite and the submatrix X_{I^c I^c} is positive semidefinite. A matrix X with these definiteness properties is called I-semidefinite; it is a generalization of a quasidefinite matrix. Under mild hypotheses, which hold true in most applications, the Lagrangian subspace associated with the stabilizing solution of an algebraic Riccati equation is semidefinite; in addition, we show that there is a bijection between Hamiltonian and symplectic pencils and semidefinite Lagrangian subspaces, hence this structure is ubiquitous in control theory. The (symmetric) principal pivot transform (PPT) is a map used by Mehrmann and Poloni [SIAM J. Matrix Anal. Appl., 33 (2012), pp. 780–805] to convert between two different pairs (I, X) and (J, X') representing the same Lagrangian subspace. For a semidefinite Lagrangian subspace, we prove that the symmetric PPT of an I-semidefinite matrix X is a J-semidefinite matrix X', and we derive an implementation of the transformation X ↦ X' that both exploits the definiteness properties of X and guarantees the definiteness of the submatrices of X' in finite arithmetic. We use the resulting formulas to obtain a semidefiniteness-preserving version of an optimization algorithm introduced by Mehrmann and Poloni to compute a pair (I_opt, X_opt) with M = max_{i,j} |(X_opt)_{ij}| as small as possible. Using semidefiniteness allows one to obtain a stronger bound on M than in the general case.

    Bounds for the Distance to the Nearest Correlation Matrix

    In a wide range of practical problems correlation matrices are formed in such a way that, while symmetry and a unit diagonal are assured, they may lack positive semidefiniteness. We derive a variety of new upper bounds for the distance from an arbitrary symmetric matrix to the nearest correlation matrix. The bounds are of two main classes: those based on the eigensystem and those based on a modified Cholesky factorization. Bounds from both classes have a computational cost of O(n³) flops for a matrix of order n but are much less expensive to evaluate than the nearest correlation matrix itself. For unit-diagonal A with |a_ij| ≤ 1 for all i ≠ j, the eigensystem bounds are shown to overestimate the distance by a factor of at most 1 + n√n. We show that for a collection of matrices from the literature and from practical applications the eigensystem-based bounds are often good order-of-magnitude estimates of the actual distance; indeed the best upper bound is never more than a factor 5 larger than a related lower bound. The modified Cholesky bounds are less sharp but also less expensive, and they provide an efficient way to test for definiteness of the putative correlation matrix. Both classes of bounds enable a user to identify an invalid correlation matrix relatively cheaply and to decide whether to revisit its construction or to compute a replacement, such as the nearest correlation matrix.
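A simple, well-known eigensystem-based quantity of the kind the abstract discusses is the distance from A to the PSD cone: since the nearest correlation matrix lies in that cone, this distance is a lower bound on d_corr(A). A sketch (this is the standard lower bound, not one of the paper's new upper bounds):

```python
import numpy as np

def dist_psd_cone(A):
    """Frobenius-norm distance from symmetric A to the PSD cone:
    the norm of the negative eigenvalue part.  Costs one symmetric
    eigendecomposition, O(n^3) flops."""
    w = np.linalg.eigvalsh((A + A.T) / 2)
    return np.sqrt(np.sum(np.minimum(w, 0.0) ** 2))

# Classic invalid correlation matrix: unit diagonal, |a_ij| <= 1,
# but eigenvalues 1 - sqrt(2), 1, 1 + sqrt(2), so it is indefinite.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
lb = dist_psd_cone(A)   # equals sqrt(2) - 1 for this matrix
```

A quantity like this costs one eigendecomposition, versus the iterative algorithm needed for the nearest correlation matrix itself, which is the cheapness the abstract refers to.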

    Restoring Definiteness via Shrinking, with an Application to Correlation Matrices with a Fixed Block

    Indefinite estimates of positive semidefinite matrices arise in many data analysis applications involving covariance matrices and correlation matrices. We develop a method for restoring positive semidefiniteness of an indefinite estimate based on the process of shrinking, which finds a convex linear combination S(α) = αM_1 + (1 − α)M_0 of the original matrix M_0 and a target positive semidefinite matrix M_1. We describe three algorithms for computing the optimal shrinking parameter α_* = min{α ∈ [0, 1] : S(α) is positive semidefinite}. One algorithm is based on the bisection method, using Cholesky factorization to test definiteness; a second employs Newton's method; and a third finds the smallest eigenvalue of a symmetric definite generalized eigenvalue problem. We show that weights that reflect confidence in the individual entries of M_0 can be used to construct a natural choice of the target matrix M_1. We treat in detail a problem variant in which a positive semidefinite leading principal submatrix of M_0 remains fixed, showing how the fixed block can be exploited to reduce the cost of the bisection and generalized eigenvalue methods. Numerical experiments show that, when applied to estimates of correlation matrices, shrinking can be at least an order of magnitude faster than computing the nearest correlation matrix.
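The bisection variant of the three algorithms is easy to sketch: since S(1) = M_1 is PSD and S(α) is PSD for all α ≥ α_*, one can bisect on α, testing definiteness with an attempted Cholesky factorization. An illustrative, unoptimized version using the identity as target (the paper also constructs weighted targets; function names are mine):

```python
import numpy as np

def is_psd(M):
    """Definiteness test via attempted Cholesky factorization, as in
    the bisection algorithm (fails for indefinite M)."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

def shrink_bisection(M0, M1=None, tol=1e-8):
    """Bisection sketch for the optimal shrinking parameter
    alpha_* = min{alpha in [0,1] : alpha*M1 + (1-alpha)*M0 is PSD}.
    Assumes M1 is positive definite; default target M1 = I."""
    n = M0.shape[0]
    if M1 is None:
        M1 = np.eye(n)
    if is_psd(M0):
        return 0.0
    lo, hi = 0.0, 1.0          # S(1) = M1 is PSD by assumption
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_psd(mid * M1 + (1 - mid) * M0):
            hi = mid           # PSD: alpha_* is at or below mid
        else:
            lo = mid           # indefinite: alpha_* is above mid
    return hi

# Indefinite estimate of a correlation matrix.
M0 = np.array([[1.0,  0.9,  0.7],
               [0.9,  1.0, -0.9],
               [0.7, -0.9,  1.0]])
alpha = shrink_bisection(M0)
```

Each bisection step costs one Cholesky attempt, O(n³) flops, and the iteration count depends only on the tolerance, which is why exploiting a fixed PSD leading block (to shrink the factorization work per step) pays off.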