
    On the closest stable/unstable nonnegative matrix and related stability radii

    We consider the problem of computing the closest stable/unstable non-negative matrix to a given real matrix. This problem is important in the study of linear dynamical systems and numerical methods. The distance between matrices is measured in the Frobenius norm. The problem is addressed for two types of stability: Schur stability (the matrix is stable if its spectral radius is smaller than one) and Hurwitz stability (the matrix is stable if its spectral abscissa is negative). We show that the closest unstable matrix can always be found explicitly. For the closest stable matrix, we present an iterative algorithm which converges to a local minimum at a linear rate. It is shown that the total number of local minima can be exponential in the dimension. Numerical results and complexity estimates are presented.
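
The two stability notions above are decided directly by the eigenvalues; a minimal NumPy sketch (function names are mine, not from the paper):

```python
import numpy as np

def is_schur_stable(A):
    """Schur stable: spectral radius (largest |eigenvalue|) below one."""
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1.0)

def is_hurwitz_stable(A):
    """Hurwitz stable: spectral abscissa (largest real part) negative."""
    return bool(np.max(np.real(np.linalg.eigvals(A))) < 0.0)

A = np.array([[0.5, 0.2],
              [0.1, 0.4]])   # non-negative, eigenvalues 0.6 and 0.3
print(is_schur_stable(A))    # True
print(is_hurwitz_stable(A))  # False: spectral abscissa is 0.6 > 0
```

Note that a matrix can be Schur stable without being Hurwitz stable (and vice versa), which is why the paper treats the two cases separately.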

    On approximating the nearest Ω-stable matrix

    In this paper, we consider the problem of approximating a given matrix by a matrix whose eigenvalues lie in some specific region Ω of the complex plane. More precisely, we consider three types of regions and their intersections: conic sectors, vertical strips, and disks. We refer to this problem as the nearest Ω-stable matrix problem. It includes as special cases the sets of stable matrices for continuous-time and discrete-time linear time-invariant systems. In order to achieve this goal, we parametrize the problem using dissipative Hamiltonian matrices and linear matrix inequalities. This leads to a reformulation of the problem with a convex feasible set. By applying a block coordinate descent method to this reformulation, we are able to compute solutions to the approximation problem, which is illustrated on some examples.
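
Whether a spectrum lies in one of the three region types is a direct eigenvalue test; a small sketch with a dict-based region encoding of my own (not the paper's LMI machinery):

```python
import numpy as np

def eigs_in_region(A, region):
    """Test whether every eigenvalue of A lies in the given region of C."""
    lam = np.linalg.eigvals(A)
    kind = region["type"]
    if kind == "disk":      # |z - c| <= r
        return bool(np.all(np.abs(lam - region["center"]) <= region["radius"]))
    if kind == "strip":     # a <= Re z <= b
        return bool(np.all((region["a"] <= lam.real) & (lam.real <= region["b"])))
    if kind == "sector":    # cone about the negative real axis, half-angle theta
        return bool(np.all(np.abs(lam.imag) <= np.tan(region["theta"]) * (-lam.real)))
    raise ValueError(f"unknown region type: {kind}")

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])   # eigenvalues -1 and -2
print(eigs_in_region(A, {"type": "strip", "a": -3.0, "b": -0.5}))  # True
```

Intersections of regions can be tested by calling the function once per region; the hard part addressed by the paper is not this membership test but finding the nearest matrix satisfying it.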

    Approximating the nearest stable discrete-time system

    In this paper, we consider the problem of stabilizing discrete-time linear systems by computing a nearby stable matrix to an unstable one. To do so, we provide a new characterization of the set of stable matrices. We show that a matrix A is stable if and only if it can be written as A = S^{-1}UBS, where S is positive definite, U is orthogonal, and B is a positive semidefinite contraction (that is, the singular values of B are less than or equal to 1). This characterization results in an equivalent non-convex optimization problem with a feasible set onto which it is easy to project. We propose a very efficient fast projected gradient method to tackle the problem in the variables (S, U, B) and generate locally optimal solutions. We show the effectiveness of the proposed method compared to other approaches.
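
The characterization can be read generatively: any choice of S positive definite, U orthogonal, and B a PSD contraction yields a Schur-stable matrix, since A = S^{-1}(UB)S is similar to UB and ρ(UB) ≤ ‖UB‖₂ = ‖B‖₂ ≤ 1. A sketch (the clipping helper is my own, not the paper's projected-gradient method):

```python
import numpy as np

def project_to_contraction(B):
    """Symmetrize and clip eigenvalues to [0, 1]: yields a positive
    semidefinite matrix whose singular values are at most 1."""
    w, V = np.linalg.eigh((B + B.T) / 2)
    return V @ np.diag(np.clip(w, 0.0, 1.0)) @ V.T

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
S = M @ M.T + n * np.eye(n)                       # positive definite
U, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthogonal
B = project_to_contraction(rng.standard_normal((n, n)))

A = np.linalg.solve(S, U @ B @ S)                 # A = S^{-1} U B S
rho = np.max(np.abs(np.linalg.eigvals(A)))        # guaranteed <= 1
```

The ease of projecting onto each of the three factor sets separately (Cholesky-like cone for S, polar factor for U, eigenvalue clipping for B) is what makes the projected gradient scheme of the paper practical.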

    A note on approximating the nearest stable discrete-time descriptor system with fixed rank

    Consider a discrete-time linear time-invariant descriptor system Ex(k+1) = Ax(k) for k ∈ Z_+. In this paper, we tackle for the first time the problem of stabilizing such systems by computing a nearby regular, index-one stable system Ê x(k+1) = Â x(k) with rank(Ê) = r. We reformulate this highly non-convex problem into an equivalent optimization problem with a relatively simple feasible set onto which it is easy to project. This allows us to employ a block coordinate descent method to obtain a nearby regular, index-one stable system. We illustrate the effectiveness of the algorithm on several examples.
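
Regularity of the pencil (E, A), one of the constraints above, means det(sE − A) is not identically zero; it can be screened numerically by sampling a few random complex points. A heuristic sketch of my own, not the paper's reformulation:

```python
import numpy as np

def pencil_is_regular(E, A, trials=5, seed=0):
    """Heuristic: (E, A) is regular iff det(sE - A) is not identically
    zero, so evaluate the determinant at a few random complex points s."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        s = complex(rng.standard_normal(), rng.standard_normal())
        if abs(np.linalg.det(s * E - A)) > 1e-10:
            return True
    return False

E = np.diag([1.0, 1.0, 0.0])   # rank(E) = 2: genuine descriptor dynamics
A = np.eye(3)                  # det(sE - A) = -(s - 1)^2, not identically 0

E_sing = np.array([[1.0, 0.0], [0.0, 0.0]])
A_sing = np.array([[1.0, 0.0], [0.0, 0.0]])   # det(sE - A) == 0 for every s
print(pencil_is_regular(E, A), pencil_is_regular(E_sing, A_sing))
```

A random point almost surely avoids the finitely many roots of det(sE − A), so a single nonzero evaluation certifies regularity; the identically-zero (singular pencil) case fails every trial.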

    Maximal acyclic subgraphs and closest stable matrices

    We develop a matrix approach to the Maximal Acyclic Subgraph (MAS) problem by reducing it to finding the closest nilpotent matrix to the matrix of the graph. Using recent results on the closest Schur stable systems and on minimising the spectral radius over special sets of non-negative matrices, we obtain an algorithm for finding an approximate solution of MAS. Numerical results for graphs with 50 to 1500 vertices demonstrate its fast convergence and give an approximation ratio larger than 0.6 in most cases. The same method gives the precise solution for the following weakened version of MAS: finding the minimal r such that the graph can be made acyclic by cutting at most r incoming edges from each vertex. Several modifications are also considered: when each vertex is assigned its own maximal number r_i of cut edges, and when some edges are "untouchable". Some applications are discussed.
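
The reduction rests on a standard fact: a directed graph is acyclic iff its adjacency matrix is nilpotent, equivalently has spectral radius zero (a nonzero spectral radius implies closed walks, hence a cycle). A small sketch:

```python
import numpy as np

def is_acyclic(adj):
    """A directed graph is acyclic iff its adjacency matrix is nilpotent,
    i.e. its spectral radius is zero (no closed walks of any length)."""
    return bool(np.max(np.abs(np.linalg.eigvals(adj.astype(float)))) < 1e-9)

dag = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]])    # edges 0->1, 0->2, 1->2: acyclic
cyc = np.array([[0, 1],
                [1, 0]])       # a 2-cycle
print(is_acyclic(dag), is_acyclic(cyc))  # True False
```

MAS then becomes a matrix nearness problem: delete as little of the adjacency matrix as possible so that the remainder is nilpotent.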

    Stabilising the Metzler matrices with applications to dynamical systems

    Metzler matrices play a crucial role in positive linear dynamical systems. Finding the closest stable Metzler matrix to an unstable one (and vice versa) is an important issue with many applications. The stability considered here is in the sense of Hurwitz, and the distance between matrices is measured in the l_∞, l_1, and max norms. We provide either explicit solutions or efficient algorithms for obtaining the closest (un)stable matrix. The procedure for finding the closest stable Metzler matrix is based on the recently introduced selective greedy spectral method for optimizing the Perron eigenvalue. Originally intended for non-negative matrices, the method is generalized here to Metzler matrices. The efficiency of the new algorithms is demonstrated in examples and by numerical experiments in dimensions up to 2000. Applications to dynamical systems, linear switching systems, and sign-matrices are considered.
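
The generalization from non-negative to Metzler matrices hinges on a diagonal shift: adding s·I with s large enough makes a Metzler matrix non-negative without changing eigenvectors, so its spectral abscissa is the Perron root of the shifted matrix minus s. A sketch (function names are mine):

```python
import numpy as np

def is_metzler(A):
    """Metzler: every off-diagonal entry is non-negative."""
    off = A - np.diag(np.diag(A))
    return bool(np.all(off >= 0))

def metzler_spectral_abscissa(A):
    """Shift a Metzler matrix by s*I so it becomes non-negative; the
    spectral abscissa equals the Perron root of the shift minus s."""
    s = max(0.0, -float(np.min(np.diag(A))))
    rho = np.max(np.abs(np.linalg.eigvals(A + s * np.eye(A.shape[0]))))
    return rho - s

A = np.array([[-3.0, 1.0],
              [2.0, -4.0]])   # Metzler; eigenvalues -2 and -5
print(is_metzler(A), metzler_spectral_abscissa(A) < 0)  # True True
```

Hurwitz stability of a Metzler matrix thus reduces to a Perron-eigenvalue computation, which is what the selective greedy spectral method optimizes.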

    The greedy strategy in optimizing the Perron eigenvalue

    We address the problems of minimizing and of maximizing the spectral radius over a compact family of non-negative matrices. These problems, being hard in general, can be efficiently solved for some special families. We consider the so-called product families, where each matrix is composed of rows chosen independently from given sets. A recently introduced greedy method works very fast. However, it is applicable mostly to strictly positive matrices. For sparse matrices, it often diverges and gives a wrong answer. We present the "selective greedy method" that works equally well for all non-negative product families, including sparse ones. For this method, we prove a quadratic rate of convergence and demonstrate its efficiency in numerical examples. The numerical examples are realised for two cases: finite uncertainty sets and polyhedral uncertainty sets given by systems of linear inequalities. In dimensions up to 2000, the matrices with minimal/maximal spectral radii in product families are found within a few iterations. Applications to dynamical systems and to graph theory are considered.
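
The product-family structure lets the spectral radius be optimized row by row. The following naive exhaustive coordinate-descent sketch for finite row sets illustrates the setting only; it is not the selective greedy method of the paper, which uses Perron eigenvectors to pick rows far more cheaply:

```python
import numpy as np

def spectral_radius(A):
    return np.max(np.abs(np.linalg.eigvals(A)))

def coordinate_descent_min_radius(row_sets, sweeps=10):
    """Each row of the matrix is chosen from its own finite candidate
    set; sweep the rows, keeping any choice that lowers the radius."""
    n = len(row_sets)
    choice = [0] * n
    A = np.array([rs[0] for rs in row_sets], dtype=float)
    for _ in range(sweeps):
        improved = False
        for i in range(n):
            best = spectral_radius(A)
            for k, row in enumerate(row_sets[i]):
                A[i] = row
                if spectral_radius(A) < best - 1e-12:
                    best, choice[i] = spectral_radius(A), k
                    improved = True
            A[i] = row_sets[i][choice[i]]   # restore the best row found
        if not improved:
            break
    return A, spectral_radius(A)

rows = [[[1.0, 2.0], [0.5, 0.1]],   # candidate rows for row 0
        [[2.0, 1.0], [0.2, 0.3]]]   # candidate rows for row 1
A_min, rho_min = coordinate_descent_min_radius(rows)
```

For this toy family the sweep finds the combination with spectral radius ≈ 0.573; the paper's contribution is making such row-wise descent provably convergent and fast for sparse non-negative families, where naive greedy choices can cycle or stall.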

    Nearest Ω-stable matrix via Riemannian optimization

    We study the problem of finding the nearest Ω-stable matrix to a given matrix A, i.e., the nearest matrix with all its eigenvalues in a prescribed closed set Ω. Distances are measured in the Frobenius norm. An important special case is finding the nearest Hurwitz-stable or Schur-stable matrix, which has applications in systems theory. We describe a reformulation of the task as an optimization problem on the Riemannian manifold of orthogonal (or unitary) matrices. The problem can then be solved using standard methods from the theory of Riemannian optimization. The resulting algorithm is remarkably fast on small-scale and medium-scale matrices, and directly returns a Schur factorization of the minimizer, sidestepping the numerical difficulties associated with eigenvalues of high multiplicity.
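
A crude baseline for the Schur-stable case (Ω the closed unit disk) clips the eigenvalues of a diagonalizable matrix into the disk and rebuilds; this is not the paper's Riemannian method and is generally far from the true nearest matrix, but it makes the target constraint concrete:

```python
import numpy as np

def clip_eigs_to_unit_disk(A):
    """Heuristic baseline (not the paper's method): for a diagonalizable
    A, radially project its eigenvalues onto the closed unit disk and
    rebuild.  The result has spectral radius <= 1, but the Frobenius
    distance to A is generally not minimal."""
    lam, V = np.linalg.eig(A.astype(complex))
    mags = np.abs(lam)
    lam[mags > 1] /= mags[mags > 1]
    return V @ np.diag(lam) @ np.linalg.inv(V)

A = np.array([[1.5, 1.0],
              [0.0, 0.5]])     # eigenvalues 1.5 and 0.5: not Schur stable
As = clip_eigs_to_unit_disk(A)
```

The weakness of this baseline, an ill-conditioned eigenvector matrix V when eigenvalues cluster, is exactly the difficulty the paper's Schur-factorization-based Riemannian approach avoids.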