
    On the Burer-Monteiro method for general semidefinite programs

    Consider a semidefinite program (SDP) involving an $n \times n$ positive semidefinite matrix $X$. The Burer-Monteiro method uses the substitution $X = YY^\top$ to obtain a nonconvex optimization problem in terms of an $n \times p$ matrix $Y$. Boumal et al. showed that this nonconvex method provably solves equality-constrained SDPs with a generic cost matrix when $p \gtrsim \sqrt{2m}$, where $m$ is the number of constraints. In this note we extend their result to arbitrary SDPs, possibly involving inequalities or multiple semidefinite constraints. We derive similar guarantees for a fixed cost matrix and generic constraints. We illustrate applications to matrix sensing and integer quadratic minimization.
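As a rough illustration of the substitution the abstract describes (a sketch only, not the paper's code), the following toy applies the Burer-Monteiro factorization to a small max-cut-style SDP: minimize $\langle C, X\rangle$ subject to $X \succeq 0$ and $\mathrm{diag}(X) = 1$, by writing $X = YY^\top$ and keeping the rows of $Y$ unit-norm. The cost matrix (a 4-cycle adjacency matrix) and the projected-gradient scheme are assumptions chosen for the example.

```python
# Burer-Monteiro sketch on a toy max-cut-style SDP (illustrative only):
#   minimize <C, X>  s.t.  X PSD, diag(X) = 1,
# via X = Y Y^T with unit-norm rows of Y and projected gradient descent.
import math
import random

random.seed(0)
n, p = 4, 3  # m = n = 4 diagonal constraints, so p = 3 > sqrt(2m) ~ 2.83

# symmetric cost matrix: adjacency of a 4-cycle (a bipartite toy graph)
C = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]

Y = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]

def normalize_rows(Y):
    # project each row onto the unit sphere, enforcing diag(Y Y^T) = 1
    for i in range(n):
        r = math.sqrt(sum(v * v for v in Y[i]))
        Y[i] = [v / r for v in Y[i]]

normalize_rows(Y)

def objective(Y):
    # <C, Y Y^T> = sum_{i,j} C_ij <y_i, y_j>
    return sum(C[i][j] * sum(Y[i][k] * Y[j][k] for k in range(p))
               for i in range(n) for j in range(n))

step = 0.1
for _ in range(2000):
    # gradient of <C, Y Y^T> with respect to Y is 2 C Y
    G = [[2 * sum(C[i][j] * Y[j][k] for j in range(n)) for k in range(p)]
         for i in range(n)]
    Y = [[Y[i][k] - step * G[i][k] for k in range(p)] for i in range(n)]
    normalize_rows(Y)  # project back onto the constraint set
```

For this bipartite graph the factorized iterates drive adjacent rows toward antipodal unit vectors, approaching the SDP optimum $\langle C, X\rangle = -8$ while working with $np = 12$ variables instead of a full $4 \times 4$ PSD matrix.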

    Stochastic Frank-Wolfe Methods for Nonconvex Optimization

    We study Frank-Wolfe methods for nonconvex stochastic and finite-sum optimization problems. Frank-Wolfe methods (in the convex case) have recently gained tremendous interest in the machine learning and optimization communities due to their projection-free property and their ability to exploit structured constraints. However, our understanding of these algorithms in the nonconvex setting is fairly limited. In this paper, we propose nonconvex stochastic Frank-Wolfe methods and analyze their convergence properties. For objective functions that decompose into a finite sum, we leverage ideas from variance reduction techniques for convex optimization to obtain new variance-reduced nonconvex Frank-Wolfe methods that have provably faster convergence than the classical Frank-Wolfe method. Finally, we show that the faster convergence rates of our variance-reduced methods also translate into improved convergence rates for the stochastic setting.
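To make the projection-free property concrete, here is a minimal sketch of the classical (deterministic, convex) Frank-Wolfe baseline the abstract builds on, not the paper's stochastic or variance-reduced variants. The toy objective and constraint set are assumptions: minimize $\|x - b\|^2$ over the probability simplex, where the linear minimization oracle is just an argmin over gradient coordinates rather than a projection.

```python
# Classical Frank-Wolfe sketch on a toy problem (illustrative only):
#   minimize f(x) = ||x - b||^2 over the probability simplex.
b = [0.2, 0.5, 0.3]
x = [1.0, 0.0, 0.0]  # start at a vertex of the simplex

def f(x):
    return sum((xi - bi) ** 2 for xi, bi in zip(x, b))

for k in range(5000):
    grad = [2.0 * (xi - bi) for xi, bi in zip(x, b)]
    # linear minimization oracle over the simplex: the vertex e_j with
    # the smallest gradient coordinate -- no projection step needed
    j = min(range(len(x)), key=lambda i: grad[i])
    s = [1.0 if i == j else 0.0 for i in range(len(x))]
    gamma = 2.0 / (k + 2)  # standard diminishing step size
    x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
```

Each iterate is a convex combination of simplex vertices, so feasibility is maintained for free; the stochastic variants studied in the paper replace the exact gradient with (variance-reduced) stochastic estimates while keeping this oracle structure.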

    Hidden Convexity in Partially Separable Optimization

    The paper identifies classes of nonconvex optimization problems whose convex relaxations have optimal solutions which at the same time are global optimal solutions of the original nonconvex problems. Such a hidden convexity property had so far been limited to quadratically constrained quadratic problems with one or two constraints. We extend it here to problems with some partially separable structure. Among other things, the new hidden convexity results open up the possibility of solving multi-stage robust optimization problems using certain nonlinear decision rules.

    Keywords: convex relaxation of nonconvex problems; hidden convexity; partially separable functions; robust optimization
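The classic single-constraint instance of this hidden convexity property (one of the cases covered by the prior results the abstract says it extends) is the trust-region subproblem:

```latex
% Trust-region subproblem: nonconvex when A is indefinite,
% yet its SDP relaxation is tight (by the S-lemma), so solving
% the convex relaxation recovers a global optimum.
\[
\min_{x \in \mathbb{R}^n} \; x^\top A x + 2 b^\top x
\quad \text{s.t.} \quad \|x\|_2^2 \le 1 .
\]
```

Although the objective is nonconvex for indefinite $A$, the optimal value of its SDP relaxation coincides with the original problem's, which is exactly the relaxation-exactness property the paper extends to partially separable problems.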

    SWIPT techniques for multiuser MIMO broadcast systems

    In this paper, we present an approach to solve the nonconvex optimization problem that arises when designing the transmit covariance matrices in multiuser multiple-input multiple-output (MIMO) broadcast networks implementing simultaneous wireless information and power transfer (SWIPT). The MIMO SWIPT design is formulated as a nonconvex optimization problem in which the system sum rate is optimized subject to per-user harvesting constraints. Two different approaches are proposed. The first is based on a classical gradient method for constrained optimization. The second is based on difference-of-convex (DC) programming. The idea behind this approach is to obtain a convex function that approximates the nonconvex objective and then solve a series of convex subproblems that eventually yield a locally optimal solution of the general nonconvex problem. The solution obtained from the proposed approach is compared to the classical block-diagonalization (BD) strategy, typically used to handle the nonconvex multiuser MIMO problem by forcing zero inter-user interference. Simulation results show that the proposed approach simultaneously improves both the system sum rate and the power harvested by the users. In terms of computational time, the proposed DC programming approach outperforms the classical gradient methods.
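The DC idea described in the abstract, linearize the concave part of the objective and solve a sequence of convex subproblems, can be sketched on a hypothetical scalar toy problem (not the paper's SWIPT design): minimize $f(x) = x^4 - x^2$, written as the difference of the convex functions $g(x) = x^4$ and $h(x) = x^2$.

```python
# DC programming (convex-concave procedure) sketch on a scalar toy:
#   minimize f(x) = x**4 - x**2 = g(x) - h(x),
# with g(x) = x**4 and h(x) = x**2 both convex. At each iterate x_k we
# linearize -h, getting a convex surrogate g(x) - h'(x_k) * x (+ const),
# and solve that convex subproblem in closed form.
x = 2.0  # initial point
for _ in range(50):
    slope = 2.0 * x  # h'(x_k)
    # convex subproblem: minimize x**4 - slope * x
    # first-order condition 4 x**3 = slope gives the closed-form minimizer
    x = (slope / 4.0) ** (1.0 / 3.0)
# the iterates converge to a stationary point of f: x = 1/sqrt(2)
```

Each surrogate upper-bounds $f$ and touches it at $x_k$, so the objective decreases monotonically, mirroring how the paper's DC approach drives the nonconvex sum-rate problem to a locally optimal solution.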