
    Subsampling Algorithms for Semidefinite Programming

    We derive a stochastic gradient algorithm for semidefinite optimization using randomization techniques. The algorithm uses subsampling to reduce the computational cost of each iteration, and the subsampling ratio explicitly controls granularity, i.e., the tradeoff between cost per iteration and total number of iterations. Furthermore, the total computational cost is directly proportional to the complexity (i.e., rank) of the solution. We study numerical performance on some large-scale problems arising in statistical learning. Comment: Final version, to appear in Stochastic Systems.
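
    As a sketch of the granularity tradeoff described above: the following generic subsampled stochastic gradient loop (illustrative only, not the paper's specific SDP algorithm; `grad_i`, `ratio`, and `lr` are assumed names) shows how the subsampling ratio trades cost per iteration against the number of iterations.

    ```python
    import numpy as np

    def subsampled_sgd(grad_i, x0, m, ratio=0.1, steps=1000, lr=1e-2, seed=0):
        """Generic subsampled stochastic gradient loop.

        grad_i(x, i) returns the gradient of the i-th of m summands;
        `ratio` is the subsampling ratio: smaller values give cheaper
        iterations but require more of them (the granularity tradeoff).
        """
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float).copy()
        batch = max(1, int(ratio * m))
        for _ in range(steps):
            idx = rng.choice(m, size=batch, replace=False)
            g = np.mean([grad_i(x, i) for i in idx], axis=0)
            x -= lr * g
        return x
    ```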

    Primal and dual active-set methods for convex quadratic programming

    Computational methods are proposed for solving a convex quadratic program (QP). Active-set methods are defined for a particular primal and dual formulation of a QP with general equality constraints and simple lower bounds on the variables. In the first part of the paper, two methods are proposed, one primal and one dual. These methods generate a sequence of iterates that are feasible with respect to the equality constraints associated with the optimality conditions of the primal-dual form. The primal method maintains feasibility of the primal inequalities while driving the infeasibilities of the dual inequalities to zero. The dual method maintains feasibility of the dual inequalities while moving to satisfy the primal inequalities. In each of these methods, the search directions satisfy a KKT system of equations formed from Hessian and constraint components associated with an appropriate column basis. The composition of the basis is specified by an active-set strategy that guarantees the nonsingularity of each set of KKT equations. Each of the proposed methods is a conventional active-set method in the sense that an initial primal- or dual-feasible point is required. In the second part of the paper, it is shown how the quadratic program may be solved as a coupled pair of primal and dual quadratic programs created from the original by simultaneously shifting the simple-bound constraints and adding a penalty term to the objective function. Any conventional column basis may be made optimal for such a primal-dual pair of shifted-penalized problems. The shifts are then updated using the solution of either the primal or the dual shifted problem. An obvious application of this approach is to solve a shifted dual QP to define an initial feasible point for the primal (or vice versa). The computational performance of each of the proposed methods is evaluated on a set of convex problems. Comment: The final publication is available at Springer via http://dx.doi.org/10.1007/s10107-015-0966-
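
    For orientation, a KKT system of the kind referenced above can be sketched for the generic formulation min_x c^T x + (1/2) x^T H x subject to Ax = b, x >= 0 (the notation here is illustrative, not the paper's exact system). With B indexing the current basic (free) variables, g = c + Hx the objective gradient, p_B the search direction in the basic variables, and q the multiplier step:

    ```latex
    \begin{pmatrix} H_{BB} & A_B^{T} \\ A_B & 0 \end{pmatrix}
    \begin{pmatrix} p_B \\ -q \end{pmatrix}
    = - \begin{pmatrix} g_B \\ 0 \end{pmatrix}
    ```

    The active-set strategy's role, as the abstract notes, is to keep this system nonsingular as the basis B changes.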

    Smooth Optimization with Approximate Gradient

    We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is only computed up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means in some instances computing only a few leading eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method's computational cost. This also allows sparse problems to be solved efficiently using sparse maximum eigenvalue packages. Comment: Title changed from "Smooth Optimization for Sparse Semidefinite Programs". New figures, tests. Final version.
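
    A minimal sketch of the setting (a constant-step accelerated gradient loop with an inexact oracle; `approx_grad` and the momentum rule are illustrative, not the paper's exact scheme):

    ```python
    import numpy as np

    def nesterov_inexact(approx_grad, x0, L, steps=500):
        """Nesterov-style accelerated gradient with an approximate oracle.

        approx_grad(x) returns the gradient up to a small, uniformly
        bounded error; L is a Lipschitz constant of the true gradient.
        """
        x = y = np.asarray(x0, dtype=float).copy()
        for k in range(steps):
            x_next = y - approx_grad(y) / L              # (inexact) gradient step
            y = x_next + (k / (k + 3.0)) * (x_next - x)  # extrapolation
            x = x_next
        return x
    ```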

    On the low-rank approach for semidefinite programs arising in synchronization and community detection

    To address difficult optimization problems, convex relaxations based on semidefinite programming are now commonplace in many fields. Although solvable in polynomial time, large semidefinite programs tend to be computationally challenging. Over a decade ago, exploiting the fact that in many applications of interest the desired solutions are low rank, Burer and Monteiro proposed a heuristic to solve such semidefinite programs by restricting the search space to low-rank matrices. The accompanying theory does not explain the extent of the empirical success. We focus on Synchronization and Community Detection problems and provide theoretical guarantees shedding light on the remarkable efficiency of this heuristic. Comment: 22 pages, Proceedings of The 29th Conference on Learning Theory (COLT), New York, NY, June 23-26, 2016.
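
    A minimal sketch of the Burer-Monteiro heuristic for the SDPs in question, min <C, X> s.t. diag(X) = 1, X PSD (the form arising in synchronization and community detection). The plain projected-gradient loop and step size below are illustrative; work in this line typically analyzes Riemannian methods on the same factored search space.

    ```python
    import numpy as np

    def burer_monteiro(C, p, steps=2000, lr=1e-2, seed=0):
        """Low-rank heuristic: factor X = Y Y^T with Y of size n x p and
        enforce diag(Y Y^T) = 1 by renormalizing the rows of Y."""
        rng = np.random.default_rng(seed)
        n = C.shape[0]
        Y = rng.standard_normal((n, p))
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)
        for _ in range(steps):
            G = 2.0 * C @ Y                    # gradient of <C, YY^T> (C symmetric)
            Y -= lr * G
            Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # back to diag = 1
        return Y @ Y.T
    ```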

    A direct formulation for sparse PCA using semidefinite programming

    We examine the problem of approximating, in the Frobenius-norm sense, a positive semidefinite symmetric matrix by a rank-one matrix, with an upper bound on the cardinality of its eigenvector. The problem arises in the decomposition of a covariance matrix into sparse factors, and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, where cardinality is constrained, and derive a semidefinite programming based relaxation for our problem. We also discuss Nesterov's smooth minimization technique applied to the SDP arising in the direct sparse PCA method. Comment: Final version, to appear in SIAM Review.
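
    The relaxation has a well-known form. Writing Σ for the covariance matrix and k for the cardinality bound, the variational problem max { x^T Σ x : ||x||_2 = 1, card(x) <= k } lifts, via X ~ x x^T, to an SDP (sketched here in the standard sparse-PCA notation; see the paper for the exact statement):

    ```latex
    \max_{X}\ \mathbf{Tr}(\Sigma X)
    \quad \text{s.t.} \quad
    \mathbf{Tr}(X) = 1, \qquad
    \mathbf{1}^{T} |X| \mathbf{1} \le k, \qquad
    X \succeq 0
    ```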

    Backscatter analysis based algorithms for increasing transmission through highly-scattering random media using phase-only modulated wavefronts

    Recent theoretical and experimental advances have shed light on the existence of so-called 'perfectly transmitting' wavefronts with transmission coefficients close to 1 in strongly backscattering random media. These perfectly transmitting eigen-wavefronts can be synthesized by spatial amplitude and phase modulation. Here, we consider the problem of transmission enhancement using phase-only modulated wavefronts. We develop physically realizable iterative and non-iterative algorithms for increasing the transmission through such random media using backscatter analysis. We theoretically show that, despite the phase-only modulation constraint, the non-iterative algorithms will achieve at least about 25π% (about 78.5%) transmission, assuming there is at least one perfectly transmitting eigen-wavefront and that the singular vectors of the transmission matrix obey a maximum entropy principle so that they are isotropically random. We numerically analyze the limits of phase-only modulated transmission in 2-D with fully spectrally accurate simulators and provide rigorous numerical evidence confirming our theoretical prediction in random media with periodic boundary conditions composed of hundreds of thousands of non-absorbing scatterers. We show via numerical simulations that the iterative algorithms we have developed converge rapidly, yielding highly transmitting wavefronts using relatively few measurements of the backscatter field. Specifically, the best performing iterative algorithm yields approximately 70% transmission using just 15-20 measurements in the regime where the non-iterative algorithms yield approximately 78.5% transmission but require measuring the entire modal reflection matrix. Comment: Revised version contains results of additional simulations.
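
    The simplest strategy of the non-iterative kind analyzed above can be illustrated as follows (a generic sketch assuming the transmission matrix itself is known, whereas the paper's algorithms work from backscatter measurements): take the leading right singular vector of the transmission matrix, the optimal fully modulated input, and keep only its phases.

    ```python
    import numpy as np

    def phase_only_wavefront(T):
        """Phase-only projection of the best transmitting input.

        T: complex transmission matrix.  The leading right singular
        vector is the optimal amplitude-and-phase input; discarding
        its amplitudes yields a wavefront realizable with phase-only
        modulation.
        """
        _, _, Vh = np.linalg.svd(T)
        v1 = Vh[0].conj()                      # leading right singular vector
        return np.exp(1j * np.angle(v1)) / np.sqrt(v1.size)  # unit total power
    ```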

    Sparse PCA: Convex Relaxations, Algorithms and Applications

    Given a sample covariance matrix, we examine the problem of maximizing the variance explained by a linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This is known as sparse principal component analysis and has a wide array of applications in machine learning and engineering. Unfortunately, this problem is also combinatorially hard, and we discuss convex relaxation techniques that efficiently produce good approximate solutions. We then describe several algorithms solving these relaxations as well as greedy algorithms that iteratively improve the solution quality. Finally, we illustrate sparse PCA in several applications, ranging from senate voting and finance to news data. Comment: To appear in "Handbook on Semidefinite, Cone and Polynomial Optimization", M. Anjos and J.B. Lasserre, editors. This revision includes ROC curves for the greedy algorithms.
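
    A hedged sketch of the greedy approach mentioned above (forward selection on the support set; the exact variant in the chapter may differ):

    ```python
    import numpy as np

    def greedy_sparse_pca(Sigma, k):
        """Grow a support of size k, each round adding the variable that
        maximizes the largest eigenvalue of the principal submatrix."""
        n = Sigma.shape[0]
        support = []
        for _ in range(k):
            best_i, best_val = None, -np.inf
            for i in range(n):
                if i in support:
                    continue
                idx = support + [i]
                val = np.linalg.eigvalsh(Sigma[np.ix_(idx, idx)])[-1]
                if val > best_val:
                    best_i, best_val = i, val
            support.append(best_i)
        return sorted(support), best_val       # support and explained variance
    ```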

    A First-order Method for Monotone Stochastic Variational Inequalities on Semidefinite Matrix Spaces

    Motivated by multi-user optimization problems and non-cooperative Nash games in stochastic regimes, we consider stochastic variational inequality (SVI) problems on matrix spaces where the variables are positive semidefinite matrices and the mapping is merely monotone. Much of the interest in the theory of variational inequalities (VIs) has focused on addressing VIs on vector spaces. Yet most existing methods either rely on strong assumptions or require a two-loop framework where, at each iteration, a projection problem (itself a semidefinite optimization problem) needs to be solved. Motivated by this gap, we develop a stochastic mirror descent method in which the distance-generating function is chosen to be the quantum entropy. This is a single-loop first-order method in the sense that it requires only a gradient-type update at each iteration. The novelty of this work lies in the convergence analysis, which is carried out by employing an auxiliary sequence of stochastic matrices. Our contribution is three-fold: (i) under this setting and employing averaging techniques, we show that the iterates generated by the algorithm converge to a weak solution of the SVI; (ii) moreover, we derive a convergence rate in terms of the expected value of a suitably defined gap function; (iii) we implement the developed method for solving a multiple-input multiple-output multi-cell cellular wireless network composed of seven hexagonal cells and present numerical experiments supporting the convergence of the proposed method.
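
    The core update can be sketched as a matrix exponentiated gradient step (shown on the unit-trace spectraplex for concreteness; the paper's full method additionally involves averaging and an auxiliary matrix sequence):

    ```python
    import numpy as np

    def quantum_entropy_md_step(X, G, gamma):
        """One mirror descent step with the quantum (von Neumann) entropy
        as distance-generating function on {X PSD, trace(X) = 1}:
            X+ = exp(log X - gamma*G) / trace(exp(log X - gamma*G)).
        Assumes X symmetric positive definite and G symmetric.
        """
        w, V = np.linalg.eigh(X)
        logX = (V * np.log(w)) @ V.T           # matrix logarithm of X
        w2, V2 = np.linalg.eigh(logX - gamma * G)
        M = (V2 * np.exp(w2)) @ V2.T           # matrix exponential
        return M / np.trace(M)
    ```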

    Reference-less measurement of the transmission matrix of a highly scattering material using a DMD and phase retrieval techniques

    This paper investigates experimental means of measuring the transmission matrix (TM) of a highly scattering medium with the simplest optical setup. Spatial light modulation is performed by a digital micromirror device (DMD), allowing high rates and high pixel counts but only binary amplitude modulation. We use intensity measurements only, thus avoiding the need for a reference beam. The phase of the TM therefore has to be estimated through signal processing techniques for phase retrieval. Here, we compare four different phase retrieval principles on noisy experimental data. We validate our estimates of the TM on three criteria: quality of prediction, distribution of singular values, and quality of focusing. Results indicate that Bayesian phase retrieval algorithms with variational approaches provide a good tradeoff between computational complexity and precision of the estimates.
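
    As a baseline for the phase retrieval step (a generic Gerchberg-Saxton-style alternating projection for one row of the TM; the paper compares more refined methods, e.g. Bayesian ones, and the variable names here are assumptions):

    ```python
    import numpy as np

    def retrieve_tm_row(A, b, iters=200, seed=0):
        """Recover one TM row t (up to a global phase) from amplitudes.

        A : (m, n) known DMD illumination patterns (binary amplitudes).
        b : (m,)   measured amplitudes |A @ t|, i.e. square roots of
                   the recorded intensities at one camera pixel.
        """
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        t = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        Ap = np.linalg.pinv(A)                 # precomputed least-squares inverse
        for _ in range(iters):
            z = A @ t
            z = b * np.exp(1j * np.angle(z))   # impose measured magnitudes
            t = Ap @ z                         # least-squares back-projection
        return t
    ```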

    Convex Optimization without Projection Steps

    For the general problem of minimizing a convex function over a compact convex domain, we investigate a simple iterative approximation algorithm based on the method of Frank & Wolfe (1956), which does not need projection steps in order to stay inside the optimization domain. Instead of a projection step, the linearized problem defined by a current subgradient is solved, which gives a step direction that naturally stays in the domain. Our framework generalizes the sparse greedy algorithm of Frank & Wolfe and its primal-dual analysis by Clarkson (2010) (and the low-rank SDP approach by Hazan (2008)) to arbitrary convex domains. We give a convergence proof guaranteeing ε-small duality gap after O(1/ε) iterations. The method allows us to understand the sparsity of approximate solutions for any l1-regularized convex optimization problem (and for optimization over the simplex), expressed as a function of the approximation quality. We obtain matching upper and lower bounds of Θ(1/ε) for the sparsity for l1-problems. The same bounds apply to low-rank semidefinite optimization with bounded trace, showing that rank O(1/ε) is best possible here as well. As another application, we obtain sparse matrices with O(1/ε) non-zero entries as ε-approximate solutions when optimizing any convex function over a class of diagonally dominant symmetric matrices. We show that our proposed first-order method also applies to nuclear norm and max-norm matrix optimization problems. For nuclear norm regularized optimization, such as matrix completion and low-rank recovery, we demonstrate the practical efficiency and scalability of our algorithm for large matrix problems, e.g. on the Netflix dataset. For general convex optimization over bounded matrix max-norm, our algorithm is, to the best of our knowledge, the first with a convergence guarantee.
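
    A minimal sketch of the projection-free step over one of the domains discussed, the probability simplex (`grad` is a user-supplied gradient oracle; the step size is the standard 2/(k+2) rule):

    ```python
    import numpy as np

    def frank_wolfe_simplex(grad, x0, steps=1000):
        """Frank-Wolfe over the simplex (x0 must lie in the simplex).

        The linearized subproblem min_s <grad(x), s> over the simplex is
        solved by a vertex, so iterates stay feasible without projection
        and after k steps have at most k+1 nonzeros -- the sparsity
        behavior discussed above.
        """
        x = np.asarray(x0, dtype=float).copy()
        for k in range(steps):
            g = grad(x)
            s = np.zeros_like(x)
            s[np.argmin(g)] = 1.0              # vertex minimizing <g, s>
            gamma = 2.0 / (k + 2.0)
            x = (1.0 - gamma) * x + gamma * s
        return x
    ```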