
    Distributed control of chemical process networks

    Optimality interpretations for atomic norms

    Atomic norms occur frequently in data science and engineering problems such as matrix completion, sparse linear regression, system identification, and many more. These norms are often used to convexify non-convex optimization problems that are convex apart from the solution lying in a non-convex set of so-called atoms. When the convex part is a linear constraint, the ability of several atomic norms to solve the original non-convex problem has been analyzed by means of tangent cones. This paper presents an alternative route for this analysis by showing that atomic norm convexifications always provide an optimal convex relaxation for some related non-convex problems. As a result, we obtain the following benefits: (i) treatment of arbitrary convex constraints, (ii) potentially obtaining solutions to the non-convex problem with a posteriori success certificates, and (iii) utilization of additional prior knowledge through the design or learning of the non-convex problem.
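    As an illustration of the atomic-norm idea (illustrative, not taken from the paper): for the atom set consisting of the signed unit vectors ±e_i, the atomic norm (the gauge of the convex hull of the atoms) is the familiar l1 norm, which is why l1 regularization promotes sparse solutions. A minimal numpy sketch:

```python
import numpy as np

def atomic_norm_l1(x):
    # For the atom set A = {+e_i, -e_i}, x decomposes as a conic
    # combination x = sum_i |x_i| * (sign(x_i) e_i); the atomic norm
    # is the minimal coefficient sum, which here is simply the l1 norm.
    coeffs = np.abs(x)  # coefficient placed on atom sign(x_i) * e_i
    return coeffs.sum()

x = np.array([3.0, -4.0, 0.0])
print(atomic_norm_l1(x))  # -> 7.0, i.e. the l1 norm of x
```

    The same gauge construction with rank-one matrices as atoms yields the nuclear norm, which connects this abstract to the low-rank papers below.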

    Low-rank inducing norms with optimality interpretations

    Optimization problems with rank constraints appear in many diverse fields such as control, machine learning, and image analysis. Since the rank constraint is nonconvex, these problems are often approximately solved via convex relaxations. Nuclear norm regularization is the prevailing convexifying technique for dealing with these types of problems. This paper introduces a family of low-rank inducing norms and regularizers which includes the nuclear norm as a special case. A posteriori guarantees on solving an underlying rank constrained optimization problem with these convex relaxations are provided. We evaluate the performance of the low-rank inducing norms on three matrix completion problems. In all examples, the nuclear norm heuristic is outperformed by convex relaxations based on other low-rank inducing norms. For two of the problems, there exist low-rank inducing norms that succeed in recovering the partially unknown matrix while the nuclear norm fails. These low-rank inducing norms are shown to be representable as semidefinite programs. Moreover, these norms have cheaply computable proximal mappings, which makes it possible to solve even large problems using first-order methods.
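    A small sketch of the best-known member of this family, the nuclear norm (the sum of singular values, which serves as a convex surrogate for rank); the example is illustrative and not from the paper:

```python
import numpy as np

def nuclear_norm(X):
    # Sum of singular values: the standard convex surrogate for rank(X),
    # and the special case of the low-rank inducing family mentioned above.
    return np.linalg.svd(X, compute_uv=False).sum()

# Sanity check: for a rank-1 matrix the nuclear norm equals
# the Frobenius norm, since there is a single nonzero singular value.
u = np.array([[1.0], [2.0]])
v = np.array([[3.0, 4.0]])
X = u @ v
print(np.isclose(nuclear_norm(X), np.linalg.norm(X, 'fro')))  # -> True
```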

    Local convergence of proximal splitting methods for rank constrained problems

    We analyze the local convergence of proximal splitting algorithms for optimization problems that are convex apart from a rank constraint. To this end, we give conditions under which the proximal operator of a function involving the rank constraint is locally identical to the proximal operator of its convex envelope, which implies local convergence. The conditions imply that the non-convex algorithms locally converge to a solution whenever a convex relaxation involving the convex envelope can be expected to solve the non-convex problem.
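    For intuition (illustrative, not from the paper): the non-convex proximal operator involved here is, in its simplest form, the projection onto the set {rank(X) ≤ r}, which by the Eckart–Young theorem is the truncated SVD:

```python
import numpy as np

def project_rank(X, r):
    # Projection onto {rank <= r}, i.e. the proximal operator of the
    # rank-constraint indicator: zero out all but the r largest
    # singular values (Eckart-Young, optimal in Frobenius norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

X = np.diag([3.0, 2.0, 1.0])
Y = project_rank(X, 2)
print(np.linalg.matrix_rank(Y))  # -> 2
```

    The paper's conditions concern when such a non-convex prox agrees locally with the prox of the convex envelope, so that the splitting iteration behaves as if it were convex near the solution.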

    Efficient Proximal Mapping Computation for Unitarily Invariant Low-Rank Inducing Norms

    Low-rank inducing unitarily invariant norms have been introduced to convexify problems with low-rank/sparsity constraints. They are the convex envelopes of a unitarily invariant norm and the indicator function of an upper-bounding rank constraint. The most well-known member of this family is the so-called nuclear norm. To solve optimization problems involving such norms with proximal splitting methods, efficient ways of evaluating the proximal mapping of the low-rank inducing norms are needed. This is known for the nuclear norm, but not for most other members of the low-rank inducing family. This work supplies a framework that reduces the proximal mapping evaluation to a nested binary search, in which each iteration requires the solution of a much simpler problem. This simpler problem can often be solved analytically, as is demonstrated for the so-called low-rank inducing Frobenius and spectral norms. Moreover, the framework makes it possible to compute the proximal mapping of compositions of these norms with increasing convex functions, as well as the projections onto their epigraphs. This has the additional advantage that we can also handle compositions of increasing convex functions and low-rank inducing norms in proximal splitting methods.
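    For the nuclear norm, which the abstract notes is the known case, the proximal mapping is singular-value soft-thresholding; the other members of the family require the paper's nested binary search instead. A minimal sketch of the known case:

```python
import numpy as np

def prox_nuclear(X, t):
    # prox of t * ||.||_* at X: shrink each singular value toward
    # zero by t (soft-thresholding), keeping the singular vectors.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

X = np.diag([3.0, 1.0, 0.5])
# Singular values 3, 1, 0.5 become 2, 0, 0 after thresholding by t = 1.
print(np.round(prox_nuclear(X, 1.0), 6))
```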

    Low-Rank Optimization with Convex Constraints

    The problem of low-rank approximation with convex constraints, which appears in data analysis, system identification, model order reduction, low-order controller design, and low-complexity modeling, is considered. Given a matrix, the objective is to find a low-rank approximation that meets rank and convex constraints while minimizing the distance to the matrix in the squared Frobenius norm. In many situations, this nonconvex problem is convexified by nuclear-norm regularization. However, we will see that the approximations obtained by this method may be far from optimal. In this paper, we propose an alternative convex relaxation that uses the convex envelope of the squared Frobenius norm and the rank constraint. With this approach, easily verifiable conditions are obtained under which the solutions to the convex relaxation and the original nonconvex problem coincide. A semidefinite programming representation of the convex envelope is derived, which allows us to apply this approach to several known problems. Our example on optimal low-rank Hankel approximation/model reduction illustrates that the proposed convex relaxation performs consistently better than nuclear-norm regularization and may outperform balanced truncation.
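    A toy illustration (not from the paper) of why nuclear-norm regularization may be far from optimal: its proximal step shrinks all singular values, whereas the best unconstrained rank-r approximation, the truncated SVD, keeps the leading singular values intact:

```python
import numpy as np

X = np.diag([3.0, 2.0, 0.1])
U, s, Vt = np.linalg.svd(X)

# Best rank-2 approximation: keep the two largest singular values as-is.
trunc = (U * np.where(np.arange(3) < 2, s, 0.0)) @ Vt
# Nuclear-norm proximal step: shrinks every singular value by 0.5,
# biasing even the components it keeps.
shrunk = (U * np.maximum(s - 0.5, 0.0)) @ Vt

err_trunc = np.linalg.norm(X - trunc, 'fro')    # -> ~0.1
err_shrunk = np.linalg.norm(X - shrunk, 'fro')  # -> noticeably larger
print(err_trunc < err_shrunk)  # -> True
```

    The paper's convex envelope of the squared Frobenius distance and the rank constraint is designed to avoid exactly this shrinkage bias while remaining convex.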

    Distributed Robust Model Predictive Control of Interconnected Polytopic Systems

    A suboptimal approach to distributed robust MPC for uncertain systems consisting of polytopic subsystems with coupled dynamics, subject to both state and input constraints, is proposed. The robustness is defined in terms of optimizing a cost function accumulated over the uncertainty while satisfying the state constraints for a finite subset of uncertainties. The approach reformulates the original centralized robust MPC problem as a quadratic programming problem, which is solved by distributed iterations of the dual accelerated gradient method. A stopping condition allows the iterations to terminate once the desired performance, stability, and feasibility can be guaranteed, which makes the approach suitable for an embedded robust MPC implementation. The developed method is illustrated on a simulation example of an uncertain system consisting of two interconnected polytopic subsystems.
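    A minimal sketch of the dual-gradient idea underlying such schemes, on a hypothetical equality-constrained QP (the problem data and step size are illustrative; the paper uses an accelerated variant with a performance-based stopping condition):

```python
import numpy as np

# Toy QP: minimize 0.5 * ||x||^2  subject to  A x = b.
# Dual gradient ascent updates the multiplier using the constraint
# residual of the primal minimizer of the Lagrangian -- the residual
# computation is what distributes across coupled subsystems.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
lam = np.zeros(1)
alpha = 0.5  # step size, chosen below 2 / ||A A^T|| for convergence

for _ in range(200):
    x = -A.T @ lam                    # primal minimizer for fixed multiplier
    lam = lam + alpha * (A @ x - b)   # dual gradient (ascent) step

print(np.round(x, 4))  # -> [0.5 0.5], the constrained minimizer
```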