
    Low-Rank Inducing Norms with Optimality Interpretations

    Optimization problems with rank constraints appear in many diverse fields such as control, machine learning, and image analysis. Since the rank constraint is non-convex, these problems are often approximately solved via convex relaxations, and nuclear norm regularization is the prevailing convexification technique for this type of problem. This paper introduces a family of low-rank inducing norms and regularizers that includes the nuclear norm as a special case. A posteriori guarantees on solving an underlying rank-constrained optimization problem with these convex relaxations are provided. We evaluate the performance of the low-rank inducing norms on three matrix completion problems. In all examples, the nuclear norm heuristic is outperformed by convex relaxations based on other low-rank inducing norms; for two of the problems there exist low-rank inducing norms that succeed in recovering the partially unknown matrix while the nuclear norm fails. These low-rank inducing norms are shown to be representable as semi-definite programs. Moreover, they have cheaply computable proximal mappings, which makes it possible to solve problems of large size using first-order methods.
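    To ground the comparison, here is a minimal sketch of the nuclear-norm matrix completion heuristic that the paper benchmarks against, assuming cvxpy and numpy are available; the paper's own low-rank inducing norms are not implemented here, and all names are illustrative:

```python
# Nuclear-norm matrix completion baseline (a sketch, not the paper's method).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # true rank-r matrix
mask = (rng.random((n, n)) < 0.5).astype(float)                # observed-entry pattern

X = cp.Variable((n, n))
problem = cp.Problem(cp.Minimize(cp.normNuc(X)),       # nuclear norm as rank surrogate
                     [cp.multiply(mask, X - M) == 0])  # agree on observed entries
problem.solve()
print("rank of recovered matrix:", np.linalg.matrix_rank(X.value, tol=1e-6))
```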

    Rank Reduction with Convex Constraints

    This thesis addresses problems that require low-rank solutions under convex constraints. In particular, the focus lies on model reduction of positive systems, as well as finite-dimensional optimization problems that are convex apart from a low-rank constraint. Traditional model reduction techniques try to minimize the error between the original and the reduced system; typically, however, the resulting reduced models no longer fulfill physically meaningful constraints. This thesis considers the problem of model reduction with internal and external positivity constraints. Both problems are solved by means of balanced truncation. While internal positivity is shown to be preserved by a symmetry property, external positivity preservation is accomplished by deriving a modified balancing approach based on ellipsoidal cone invariance. In essence, positivity-preserving model reduction attempts to find an infinite-dimensional low-rank approximation that preserves nonnegativity as well as Hankel structure. Due to the non-convexity of the low-rank constraint, this problem is challenging even in a finite-dimensional setting.

    In addition to model reduction, the present work also considers such finite-dimensional low-rank optimization problems with convex constraints. These problems frequently appear in applications such as image compression, multivariate linear regression, matrix completion, and many more. The main idea of this thesis is to derive the largest convex minorizers of rank-constrained unitarily invariant norms. These minorizers can be used to construct optimal convex relaxations for the original non-convex problem. Unlike other methods such as nuclear norm regularization, this approach benefits from verifiable a posteriori conditions under which a solution to the convex relaxation and the corresponding non-convex problem coincide. This is shown to apply to various numerical examples of well-known low-rank optimization problems; in particular, the proposed convex relaxation performs significantly better than nuclear norm regularization. Moreover, a careful choice among the proposed convex relaxations can have a tremendous positive impact on matrix completion.

    Computational tractability of the proposed approach is accomplished in two ways. First, the considered relaxations are shown to be representable by semi-definite programs. Second, it is shown how to compute the proximal mappings for both the convex relaxations and the non-convex problem. This makes it possible to apply first-order methods such as Douglas-Rachford splitting. In addition to the convex case, where global convergence of this algorithm is guaranteed, conditions for local convergence in the non-convex setting are presented. Finally, it is shown that the findings of this thesis also extend to the general class of so-called atomic norms, which allows other non-convex constraints to be covered.
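    Since the thesis's algorithms hinge on cheap proximal mappings, a generic sketch of the Douglas-Rachford iteration it mentions may be useful; this is the textbook scheme in numpy, not the thesis's specific relaxations, and the nuclear-norm prox below is just one example of a proximal mapping one might plug in:

```python
import numpy as np

def douglas_rachford(prox_f, prox_g, z0, t=1.0, iters=500):
    """Textbook Douglas-Rachford splitting for minimizing f(x) + g(x)."""
    z = z0
    for _ in range(iters):
        x = prox_f(z, t)            # prox step on f
        y = prox_g(2 * x - z, t)    # prox step on the reflection
        z = z + y - x               # fixed-point update
    return prox_f(z, t)             # converges to a minimizer in the convex case

def prox_nuclear(Z, t):
    """Prox of t * (nuclear norm): soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```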

    Non-Convex Rank Minimization via an Empirical Bayesian Approach

    In many applications that require matrix solutions of minimal rank, the underlying cost function is non-convex, leading to an intractable, NP-hard optimization problem. Consequently, the convex nuclear norm is frequently used as a surrogate penalty term for matrix rank. The problem is that in many practical scenarios there is no longer any guarantee that we can correctly estimate generative low-rank matrices of interest, theoretical special cases notwithstanding. This paper therefore proposes an alternative empirical Bayesian procedure built upon a variational approximation that, unlike the nuclear norm, retains the same globally minimizing point estimate as the rank function under many useful constraints. However, locally minimizing solutions are largely smoothed away via marginalization, allowing the algorithm to succeed when standard convex relaxations completely fail. While the proposed methodology is generally applicable to a wide range of low-rank applications, we focus our attention on the robust principal component analysis (RPCA) problem, which involves estimating an unknown low-rank matrix with unknown sparse corruptions. Theoretical and empirical evidence is presented to show that our method is potentially superior to related MAP-based approaches, for which the convex principal component pursuit (PCP) algorithm (Candes et al., 2011) can be viewed as a special case. (Comment: 10 pages, 6 figures, UAI 2012 paper)
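    For reference, here is a sketch of the convex PCP baseline (Candes et al., 2011) that the paper compares against, written with cvxpy; the regularization weight follows the standard 1/sqrt(max(n, m)) choice, and this formulation is ours rather than the paper's:

```python
import cvxpy as cp
import numpy as np

def pcp(M, lam=None):
    """Principal component pursuit: split M into low-rank L plus sparse S."""
    n, m = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(n, m))   # standard weight from the PCP literature
    L = cp.Variable((n, m))
    S = cp.Variable((n, m))
    problem = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                         [L + S == M])
    problem.solve()
    return L.value, S.value
```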

    Resource allocation for transmit hybrid beamforming in decoupled millimeter wave multiuser-MIMO downlink

    This paper presents a study on joint radio resource allocation and hybrid precoding in multicarrier massive multiple-input multiple-output (MIMO) communications for 5G cellular networks. We present a resource allocation algorithm to maximize the proportional-fairness (PF) spectral efficiency under per-subchannel power and beamforming rank constraints. Two heuristic algorithms are designed. The proportional-fairness hybrid beamforming algorithm provides the transmit precoder with a proportionally fair spectral efficiency among users for the desired number of radio-frequency (RF) chains. We then transform the optimization problem, whose rank constraint encodes the number of RF chains, into a convex semidefinite programming (SDP) problem, which can be solved by standard techniques. Inspired by this convex SDP formulation, a low-complexity, two-step, PF-relaxed optimization algorithm is provided for the formulated convex optimization problem. Simulation results show that the proposed suboptimal solution to the relaxed optimization problem is near-optimal for signal-to-noise ratios (SNR) of at most 10 dB and has a performance gap of no more than 2.33 b/s/Hz over the SNR range 0-25 dB. It also outperforms the maximum-throughput and PF-based hybrid beamforming schemes in sum spectral efficiency, individual spectral efficiency, and fairness index.
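    To illustrate the rank-relaxation step (recasting a rank-constrained beamforming problem as a convex SDP), here is a generic single-user semidefinite relaxation sketch in cvxpy; it is not the paper's multiuser formulation, and the toy channel model and all names are our assumptions:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n = 4
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # toy channel vector
H = np.outer(h, h.conj())                                 # rank-one channel matrix

W = cp.Variable((n, n), hermitian=True)                   # lifts w @ w.conj().T
problem = cp.Problem(cp.Maximize(cp.real(cp.trace(H @ W))),
                     [cp.real(cp.trace(W)) <= 1,          # transmit power budget
                      W >> 0])                            # PSD replaces rank-one
problem.solve()
# If W has rank one, its leading eigenvector is the beamformer; otherwise
# eigenvector rounding or randomization recovers a feasible solution.
```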

    A new perspective on low-rank optimization

    A key question in many low-rank problems throughout optimization, machine learning, and statistics is how to characterize the convex hulls of simple low-rank sets and judiciously apply these convex hulls to obtain strong yet computationally tractable relaxations. We invoke the matrix perspective function, the matrix analog of the perspective function, to explicitly characterize the convex hull of epigraphs of simple matrix convex functions under low-rank constraints. Further, we combine the matrix perspective function with orthogonal projection matrices (the matrix analog of binary variables), which capture the row space of a matrix, to develop a matrix perspective reformulation technique that reliably obtains strong relaxations for a variety of low-rank problems, including reduced-rank regression, non-negative matrix factorization, and factor analysis. Moreover, we establish that these relaxations can be modeled via semidefinite constraints and thus optimized over tractably. The proposed approach parallels and generalizes the perspective reformulation technique in mixed-integer optimization and leads to new relaxations for a broad class of problems.
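    For orientation, the scalar perspective function and the quadratic matrix instance behind such semidefinite models (our notation; the paper's general definition may differ) can be written as:

```latex
% Scalar perspective of a convex f: jointly convex in (x, t) for t > 0
g(x, t) = t \, f\!\left(\tfrac{x}{t}\right)
% Quadratic matrix instance: the perspective of f(X) = X^\top X is
% g(X, Y) = X^\top Y^{-1} X for Y \succ 0, whose epigraph is
% SDP-representable via the Schur complement:
T \succeq X^\top Y^{-1} X
\quad\Longleftrightarrow\quad
\begin{pmatrix} Y & X \\ X^\top & T \end{pmatrix} \succeq 0 .
```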

    Reduced Complexity Filtering with Stochastic Dominance Bounds: A Convex Optimization Approach

    This paper uses stochastic dominance principles to construct upper and lower sample-path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low-rank stochastic matrices so that the optimal filters using these matrices provably lower- and upper-bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices have low rank (say R), the computational cost of evaluating the filtering bounds is O(XR) instead of O(X^2). A Monte Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.
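    The complexity claim can be made concrete with a one-step sketch of the HMM filter under a factored transition matrix; this numpy fragment is illustrative only and assumes a rank-R factorization P = U V^T is given:

```python
import numpy as np

def filter_step(pi, U, V, b_y):
    """One HMM filter update with a low-rank transition matrix P = U @ V.T.

    pi  : current filtered distribution over the X states
    U, V: X-by-R factors of the transition matrix
    b_y : likelihoods B[x, y] of the current observation y
    """
    pred = V @ (U.T @ pi)     # predictor P.T @ pi in O(XR) instead of O(X^2)
    post = b_y * pred         # weight by the observation likelihoods
    return post / post.sum()  # normalize to a probability vector
```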

    Hidden convexity, optimization, and algorithms on rotation matrices

    This paper studies hidden convexity properties associated with constrained optimization problems over the set of rotation matrices SO(n). Such problems are nonconvex due to the constraint X ∈ SO(n). Nonetheless, we show that certain linear images of SO(n) are convex, opening up the possibility of convex optimization algorithms with provable guarantees for these problems. Our main technical contributions show that any two-dimensional image of SO(n) is convex and that the projection of SO(n) onto its strict upper triangular entries is convex. These results allow us to construct exact convex reformulations for constrained optimization problems over SO(n) with a single constraint or with constraints defined by low-rank matrices. Both of these results are optimal in a formal sense.
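    A quick numerical sanity check of the two-dimensional-image claim (our construction, not the paper's proof): sample random rotations, push each through two fixed linear functionals, and the resulting planar point cloud fills out a convex region.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))  # linear functionals X -> tr(A^T X), tr(B^T X)
B = rng.standard_normal((n, n))

def random_rotation(n):
    """Random rotation via QR of a Gaussian matrix, forced into SO(n)."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    Q = Q @ np.diag(np.sign(np.diag(R)))  # make the QR factor well-defined
    if np.linalg.det(Q) < 0:
        Q[:, [0, 1]] = Q[:, [1, 0]]       # swap two columns to get det +1
    return Q

pts = np.array([(np.sum(A * X), np.sum(B * X))
                for X in (random_rotation(n) for _ in range(2000))])
# Plotting pts (e.g., with matplotlib) shows a convex planar region.
```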