
    Low-Rank Inducing Norms with Optimality Interpretations

    Optimization problems with rank constraints appear in many diverse fields such as control, machine learning, and image analysis. Since the rank constraint is non-convex, these problems are often approximately solved via convex relaxations. Nuclear norm regularization is the prevailing convexifying technique for dealing with these types of problems. This paper introduces a family of low-rank inducing norms and regularizers which includes the nuclear norm as a special case. A posteriori guarantees on solving an underlying rank-constrained optimization problem with these convex relaxations are provided. We evaluate the performance of the low-rank inducing norms on three matrix completion problems. In all examples, the nuclear norm heuristic is outperformed by convex relaxations based on other low-rank inducing norms. For two of the problems, there exist low-rank inducing norms that succeed in recovering the partially unknown matrix, while the nuclear norm fails. These low-rank inducing norms are shown to be representable as semi-definite programs. Moreover, they have cheaply computable proximal mappings, which makes it possible to also solve large problems using first-order methods.
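
As an illustration of the baseline the abstract refers to, the nuclear norm heuristic for matrix completion can be run with a proximal gradient loop whose prox step is singular value thresholding. The sketch below is a minimal, generic version of that heuristic, not the low-rank inducing norms proposed in the paper; the matrix sizes, step size, and regularization weight are illustrative choices.

```python
import numpy as np

def svt(X, tau):
    # Proximal operator of tau * nuclear norm: soft-threshold singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(M, mask, tau=0.5, step=1.0, iters=500):
    # Minimize 0.5 * ||mask * (X - M)||_F^2 + tau * ||X||_* by proximal gradient.
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = mask * (X - M)            # gradient of the data-fit term
        X = svt(X - step * grad, step * tau)
    return X

# Rank-1 ground truth with roughly 60% of entries observed.
rng = np.random.default_rng(0)
u, v = rng.standard_normal((8, 1)), rng.standard_normal((1, 8))
M = u @ v
mask = rng.random(M.shape) < 0.6
X = complete(M * mask, mask)
```

The key property is that the prox of the nuclear norm acts only on the singular values, which is what makes first-order methods cheap at scale.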

    Local Convergence of Proximal Splitting Methods for Rank Constrained Problems

    We analyze the local convergence of proximal splitting algorithms for optimization problems that are convex apart from a rank constraint. To this end, we derive conditions under which the proximal operator of a function involving the rank constraint is locally identical to the proximal operator of its convex envelope, hence implying local convergence. The conditions imply that the non-convex algorithms locally converge to a solution whenever a convex relaxation involving the convex envelope can be expected to solve the non-convex problem.
    Comment: To be presented at the 56th IEEE Conference on Decision and Control, Melbourne, Dec. 2017.
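
A concrete instance of the proximal operator in question: for the indicator function of the rank constraint, the prox is the Euclidean projection onto the set of matrices of rank at most r, computed by a truncated SVD (Eckart–Young). The snippet below is an illustrative sketch, not the paper's analysis; note that this projection is set-valued when the r-th and (r+1)-th singular values coincide, and the truncated SVD picks one element of that set.

```python
import numpy as np

def prox_rank_indicator(X, r):
    # Prox of the indicator of {X : rank(X) <= r} at X: the Euclidean
    # projection, obtained by zeroing all but the r largest singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt   # columns of U scaled by the thresholded singular values

A = np.diag([3.0, 2.0, 0.1])
P = prox_rank_indicator(A, 2)  # best rank-2 approximation of A
```

The paper's local-convergence conditions concern exactly when this non-convex prox agrees with the prox of the convex envelope near a solution.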

    On second-order cone positive systems

    Internal positivity offers a computationally cheap certificate for external (input-output) positivity of a linear time-invariant system. However, the drawback with this certificate lies in its realization dependency. Firstly, computing such a realization requires finding a polyhedral cone with a potentially high number of extremal generators, which lifts the dimension of the state-space representation significantly. Secondly, not all externally positive systems possess an internally positive realization. Thirdly, in many typical applications such as controller design, system identification, and model order reduction, internal positivity is not preserved. To overcome these drawbacks, we present a tractable sufficient certificate of external positivity based on second-order cones. This certificate does not require any special state-space realization: if it succeeds with a possibly non-minimal realization, then it will do so with any minimal realization. While there exist systems for which this certificate is also necessary, we also demonstrate how to construct systems where both second-order and polyhedral cones, as well as other certificates, fail. Nonetheless, in contrast to other realization-independent certificates, the present one appears to be favourable in terms of applicability and conservatism. Three applications are discussed as representative examples to underline its potential. We show how the certificate can be used to find externally positive approximations of nearly externally positive systems and demonstrate that this may help to reduce system identification errors. The same algorithm is then used to design state-feedback controllers that provide closed-loop external positivity, a common approach to avoiding over- and undershooting of the step response. Lastly, we present modifications to generalized balanced truncation such that external positivity is preserved where our certificate applies.
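
For context on the definitions above: external positivity of a discrete-time SISO system means its impulse response is entrywise nonnegative, and internal positivity (a realization with entrywise nonnegative A, B, C, D) is a sufficient condition for it. The sketch below checks this on a hypothetical internally positive realization; it only illustrates the definitions, not the second-order cone certificate developed in the paper.

```python
import numpy as np

def impulse_response(A, B, C, D, n=50):
    # Discrete-time SISO impulse response: h[0] = D, h[k] = C A^(k-1) B for k >= 1.
    h = [D.item()]
    x = B.copy()
    for _ in range(n - 1):
        h.append((C @ x).item())
        x = A @ x
    return np.array(h)

# Hypothetical internally positive realization: A, B, C, D entrywise nonnegative,
# hence the system is externally positive (nonnegative impulse response).
A = np.array([[0.5, 0.2], [0.0, 0.4]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])
h = impulse_response(A, B, C, D)
assert np.all(h >= 0)
```

Checking finitely many impulse response samples is only a necessary condition for external positivity in general, which is why certificate-based approaches such as the one in the paper are needed.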
