210 research outputs found

    SOS-convex Semi-algebraic Programs and its Applications to Robust Optimization: A Tractable Class of Nonsmooth Convex Optimization

    In this paper, we introduce a new class of nonsmooth convex functions, called SOS-convex semi-algebraic functions, extending the recently proposed notion of SOS-convex polynomials. This class of nonsmooth convex functions covers many common nonsmooth functions arising in applications, such as the Euclidean norm, the maximum eigenvalue function, and least squares functions with ℓ1-regularization or elastic net regularization used in statistics and compressed sensing. We show that, under commonly used strict feasibility conditions, the optimal value and an optimal solution of SOS-convex semi-algebraic programs can be found by solving a single semi-definite programming (SDP) problem. We achieve these results by using tools from semi-algebraic geometry, the convex-concave minimax theorem, and a recently established Jensen-type inequality for SOS-convex polynomials. As an application, we outline how the derived results can be applied to show that robust SOS-convex optimization problems under restricted spectrahedron data uncertainty enjoy exact SDP relaxations. This extends the existing exact SDP relaxation result for restricted ellipsoidal data uncertainty and answers the open questions left in [Optimization Letters 9, 1-18 (2015)] on how to recover a robust solution from the semi-definite programming relaxation in this broader setting.
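    To make the "single SDP" theme concrete: one of the nonsmooth convex functions named above, the maximum eigenvalue of an affine symmetric matrix pencil, can already be minimized by solving one semi-definite program. The sketch below (in cvxpy, with random symmetric data standing in for a real instance) illustrates only that special case, not the paper's reformulation of general SOS-convex semi-algebraic programs.

        # Minimal sketch: minimize the (nonsmooth) maximum eigenvalue of an
        # affine symmetric pencil A(x) = A0 + x1*A1 + x2*A2 as a single SDP:
        #     minimize t  subject to  t*I - A(x) positive semi-definite.
        # A0, A1, A2 below are random symmetric matrices, purely illustrative.
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n = 4
        def rand_sym(n):
            M = rng.standard_normal((n, n))
            return (M + M.T) / 2

        A0, A1, A2 = rand_sym(n), rand_sym(n), rand_sym(n)

        x = cp.Variable(2)
        t = cp.Variable()
        pencil = A0 + x[0] * A1 + x[1] * A2
        constraints = [t * np.eye(n) >> pencil]      # PSD (linear matrix inequality)
        prob = cp.Problem(cp.Minimize(t), constraints)
        prob.solve()                                 # any SDP-capable solver
        print("minimized lambda_max:", prob.value, "at x =", x.value)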

    Signomial and Polynomial Optimization via Relative Entropy and Partial Dualization

    We describe a generalization of the Sums-of-AM/GM Exponential (SAGE) relaxation methodology for obtaining bounds on constrained signomial and polynomial optimization problems. Our approach leverages the fact that relative-entropy-based SAGE certificates conveniently and transparently blend with convex duality, in a manner that Sums-of-Squares certificates do not. This more general approach not only retains key properties of ordinary SAGE relaxations (e.g. sparsity preservation), but also inspires a novel perspective-based method of solution recovery. We illustrate the utility of our methodology with a range of examples from the global optimization literature, along with a publicly available software package. (Comment: software at https://rileyjmurray.github.io/sageopt/; version 2 makes a minor simplification to Section 4.2.)
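    For orientation, the relative-entropy certificates underlying SAGE can be stated for a single AGE signomial: f(x) = sum_i c_i exp(alpha_i . x) with at most one possibly-negative coefficient c_k is globally nonnegative exactly when some nu >= 0 balances the exponents and satisfies a relative-entropy inequality against the positive coefficients. The cvxpy sketch below checks that condition on a toy AM/GM example; it is a hand-rolled illustration of one AGE certificate, not the sageopt package linked above.

        # Sketch: feasibility of an AGE (AM/GM-Exponential) nonnegativity certificate
        # for f(x) = sum_i c_i * exp(alpha_i . x) with one possibly-negative
        # coefficient c_k.  Condition: exists nu >= 0 with
        #     sum_{i != k} nu_i * alpha_i == (sum nu_i) * alpha_k   and
        #     sum_{i != k} nu_i * log(nu_i / (e * c_i)) <= c_k.
        import numpy as np
        import cvxpy as cp

        # Toy example: e^{x1} + e^{x2} - 1.9 e^{(x1+x2)/2}, nonnegative since
        # the AM/GM inequality gives e^{x1} + e^{x2} >= 2 e^{(x1+x2)/2}.
        alpha = np.array([[1.0, 0.0],
                          [0.0, 1.0],
                          [0.5, 0.5]])
        c = np.array([1.0, 1.0, -1.9])
        k = 2                                  # index of the possibly-negative term

        pos = [i for i in range(len(c)) if i != k]
        nu = cp.Variable(len(pos), nonneg=True)

        balance = alpha[pos].T @ nu == cp.sum(nu) * alpha[k]
        entropy = cp.sum(cp.rel_entr(nu, np.e * c[pos])) <= c[k]   # rel_entr(x,y) = x*log(x/y)

        prob = cp.Problem(cp.Minimize(0), [balance, entropy])
        prob.solve()
        print("AGE certificate found:", prob.status == cp.OPTIMAL, "nu =", nu.value)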

    Signomial and polynomial optimization via relative entropy and partial dualization

    We describe a generalization of the Sums-of-AM/GM-Exponential (SAGE) methodology for relative entropy relaxations of constrained signomial and polynomial optimization problems. Our approach leverages the fact that SAGE certificates conveniently and transparently blend with convex duality, in a way that enables partial dualization of certain structured constraints. This more general approach retains key properties of ordinary SAGE relaxations (e.g. sparsity preservation), and inspires a projective method of solution recovery which respects partial dualization. We illustrate the utility of our methodology with a range of examples from the global optimization literature, along with a publicly available software package.
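    Building on the same certificates, the basic unconstrained SAGE relaxation lower-bounds a signomial's minimum by the largest gamma for which f - gamma decomposes into AGE pieces. The cvxpy sketch below does this for a one-variable toy problem; it is a minimal hand-rolled version of the idea, not the partial-dualization machinery or the publicly available package described in the abstract.

        # Sketch: the unconstrained SAGE lower bound  max{ gamma : f - gamma is SAGE }
        # for the toy signomial f(x) = e^{2x} + e^{-2x}, modeled directly in cvxpy.
        # A SAGE certificate writes the shifted coefficient vector as a sum of AGE
        # vectors, one per term, each with its own relative-entropy condition.
        import numpy as np
        import cvxpy as cp

        alpha = np.array([[2.0], [-2.0], [0.0]])   # exponents (last row: constant term)
        c = np.array([1.0, 1.0, 0.0])              # f = 1*e^{2x} + 1*e^{-2x} + 0
        m = len(c)
        const_idx = 2                              # gamma is subtracted from this term

        gamma = cp.Variable()
        C = cp.Variable((m, m))                    # row k: coefficients of the k-th AGE piece
        constraints = [cp.sum(C, axis=0) == c - gamma * np.eye(m)[const_idx]]

        for k in range(m):
            others = [i for i in range(m) if i != k]
            nu = cp.Variable(m - 1, nonneg=True)
            constraints += [
                C[k, others] >= 0,                                        # only entry k may be negative
                alpha[others].T @ nu == cp.sum(nu) * alpha[k],            # exponent balance
                cp.sum(cp.rel_entr(nu, np.e * C[k, others])) <= C[k, k],  # relative-entropy bound
            ]

        prob = cp.Problem(cp.Maximize(gamma), constraints)
        prob.solve()
        print("SAGE lower bound:", gamma.value)    # ~2.0; the true minimum of f is 2, at x = 0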

    A Scalable Algorithm For Sparse Portfolio Selection

    The sparse portfolio selection problem is one of the most famous and frequently studied problems in the optimization and financial economics literatures. In a universe of risky assets, the goal is to construct a portfolio with maximal expected return and minimum variance, subject to an upper bound on the number of positions, linear inequalities, and minimum investment constraints. Existing certifiably optimal approaches to this problem do not converge within a practical amount of time at real-world problem sizes with more than 400 securities. In this paper, we propose a more scalable approach. By imposing a ridge regularization term, we reformulate the problem as a convex binary optimization problem, which is solvable via an efficient outer-approximation procedure. We propose various techniques for improving the performance of the procedure, including a heuristic that supplies high-quality warm starts, a preprocessing technique for decreasing the gap at the root node, and an analytic technique for strengthening our cuts. We also study the problem's Boolean relaxation, establish that it is second-order-cone representable, and supply a sufficient condition for its tightness. In numerical experiments, we establish that the outer-approximation procedure gives rise to dramatic speedups for sparse portfolio selection problems. (Comment: submitted to the INFORMS Journal on Computing.)
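    For reference, the underlying problem (before the paper's ridge regularization and outer-approximation machinery) is the cardinality-constrained mean-variance program sketched below in cvxpy with synthetic data. This naive mixed-integer formulation is only practical at small scale, which is precisely the gap the paper targets.

        # Sketch: cardinality-constrained mean-variance portfolio selection as a
        # mixed-integer convex program (a scalarized return/risk trade-off).
        # All data is synthetic; this is the direct formulation, not the paper's
        # scalable outer-approximation algorithm.
        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        n, k = 20, 5                           # universe size, max number of positions
        mu = rng.normal(0.08, 0.04, n)         # expected returns (synthetic)
        F = rng.standard_normal((n, n))
        Sigma = F @ F.T / n + 0.01 * np.eye(n) # a positive-definite covariance
        risk_aversion = 5.0
        l_min = 0.02                           # minimum investment per selected asset

        x = cp.Variable(n)                     # portfolio weights
        z = cp.Variable(n, boolean=True)       # z_i = 1 if asset i is held
        constraints = [
            cp.sum(x) == 1,
            x >= l_min * z,                    # minimum investment if selected
            x <= z,                            # x_i = 0 unless z_i = 1 (weights in [0, 1])
            cp.sum(z) <= k,                    # sparsity: at most k positions
        ]
        objective = cp.Maximize(mu @ x - risk_aversion * cp.quad_form(x, Sigma))
        prob = cp.Problem(objective, constraints)
        # Requires a solver that handles mixed-integer convex programs to be
        # installed (for example GUROBI or MOSEK through cvxpy).
        prob.solve()
        print("selected assets:", np.flatnonzero(z.value > 0.5), "objective:", prob.value)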