    Convex Optimization via Domain-Driven Barriers and Primal-Dual Interior-Point Methods

    This thesis studies the theory and implementation of infeasible-start primal-dual interior-point methods for convex optimization problems. Convex optimization has applications in many fields of engineering and science, such as data analysis, control theory, signal processing, relaxation and randomization, and robust optimization. In addition to strong and elegant theory, the potential for creating efficient and robust software has made convex optimization very popular. Primal-dual algorithms have yielded efficient solvers for convex optimization problems in conic form over symmetric cones (linear programming (LP), second-order cone programming (SOCP), and semidefinite programming (SDP)). However, many other highly demanded convex optimization problems lack comparable solvers. To close this gap, we introduce a general optimization setup, called Domain-Driven, that covers many interesting classes of optimization. Domain-Driven means our techniques are applied directly to the given "good" formulation, without forcing a reformulation in conic form; at the same time, the approach naturally handles cone constraints and hence the conic form. A problem is in the Domain-Driven setup if it can be formulated as minimizing a linear function over a convex set, where the convex set is equipped with an efficient self-concordant barrier whose Legendre-Fenchel conjugate is easy to evaluate.

    We show how general this setup is by providing several interesting classes of examples. LP, SOCP, and SDP are covered by the Domain-Driven setup. More generally, consider all convex cones with the property that both the cone and its dual admit efficiently computable self-concordant barriers; the Domain-Driven setup can handle any conic optimization problem formulated using direct sums of these cones and their duals. We then show how to construct interesting convex sets as direct sums of epigraphs of univariate convex functions. This construction, as a special case, contains problems such as geometric programming, p-norm optimization, and entropy programming, whose solutions are in great demand in engineering and science. Another interesting class of convex sets whose associated optimization problems fit the Domain-Driven setup is the generalized epigraph of a matrix norm; as a special case, this allows us to minimize the nuclear norm over a linear subspace, which has applications in machine learning and big data. The Domain-Driven setup contains combinations of all the above problems; for example, we can have a problem with LP and SDP constraints, combined with constraints defined by univariate convex functions or the epigraph of a matrix norm.

    We review the literature on infeasible-start algorithms and discuss the pros and cons of different methods to show where our algorithms stand among them. This thesis contains a chapter on several properties of self-concordant functions; since we are dealing with general convex sets, many of these properties are used frequently in the design and analysis of our algorithms. We introduce a notion of duality gap for the Domain-Driven setup that reduces to the conventional duality gap when the problem is a conic optimization problem, and prove some general results. Then, to solve our problems, we construct infeasible-start primal-dual central paths. A critical part of achieving the current best iteration-complexity bounds is designing algorithms that follow the path efficiently. The algorithms we design are predictor-corrector algorithms.
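    In symbols, the Domain-Driven setup described above can be sketched as follows (the notation here is the standard one for self-concordant barriers and is our choice, not quoted from the thesis):

        \min_x \; \langle c, x \rangle \quad \text{subject to} \quad Ax \in D,

    where D is a closed convex set whose interior is the domain of a self-concordant barrier \Phi, i.e., a smooth convex function tending to +\infty on the boundary of D and satisfying

        |D^3 \Phi(x)[h, h, h]| \le 2 \, \big( D^2 \Phi(x)[h, h] \big)^{3/2} \quad \text{for all } h,

    and whose Legendre-Fenchel conjugate

        \Phi_*(y) = \sup_x \{ \langle y, x \rangle - \Phi(x) \}

    is easy to evaluate.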
    Determining the status of a general convex optimization problem (as being unbounded, infeasible, having an optimal solution, etc.) is much more complicated than it is for LP. We classify the seven possible statuses for our problem as: solvable, strictly primal-dual feasible, strictly primal infeasible, strongly primal infeasible, strictly primal unbounded, strongly primal unbounded, and ill-conditioned. We discuss the certificates our algorithms return (relying heavily on duality) for each of these cases and analyze the number of iterations required to return such certificates. For infeasibility and unboundedness, we define a weak and a strict detector. We prove that our algorithms return these certificates (solve the problem) in polynomial time, with the current best theoretical complexity bounds. The complexity results are new for the infeasible-start models used. The patterns our algorithms can detect and the corresponding iteration-complexity bounds are comparable to the current best results available for infeasible-start conic optimization, which, to the best of our knowledge, are those of Nesterov, Todd, and Ye (1999).

    On the applications, computation, and software front, based on our algorithms, we created a Matlab-based code, called DDS, that solves a large class of problems including LP, SOCP, SDP, quadratically constrained quadratic programming (QCQP), geometric programming, and entropy programming; more classes can be added. Even though the code is not finalized, this chapter offers a glimpse of the possibilities. The generality of the code lets us solve problems that CVX (a modeling system for convex optimization) does not even recognize as convex. The DDS code accepts constraints representing the epigraph of a matrix norm, which, as mentioned above, covers minimizing the nuclear norm over a linear subspace. For the accepted classes of convex optimization problems, we explain the format of the input. We give formulae for computing the gradients and Hessians of the corresponding self-concordant barriers and their Legendre-Fenchel conjugates, and discuss the methods we use to compute them efficiently and robustly. We present several numerical results from applying the DDS code to our constructed examples as well as to problems from well-known libraries such as the DIMACS library of mixed semidefinite-quadratic-linear programs. We also discuss various numerical challenges and our approaches to overcoming them.
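    To make the barrier path-following idea concrete, the following self-contained Python sketch follows the central path of a small LP with the logarithmic barrier. It is a toy illustration only: it is not the thesis's Domain-Driven predictor-corrector algorithm and not the DDS code, and the problem data, step sizes, and tolerances are assumptions chosen for demonstration.

        import numpy as np

        def barrier_step(c, A, b, x, t):
            """One damped Newton step on t*c^T x + F(x),
            where F(x) = -sum(log(b - A x)) is self-concordant."""
            s = b - A @ x                      # slacks, must stay positive
            g = t * c + A.T @ (1.0 / s)        # gradient of the barrier problem
            H = A.T @ np.diag(1.0 / s**2) @ A  # Hessian of the barrier
            dx = np.linalg.solve(H, -g)        # Newton direction
            lam = np.sqrt(dx @ (H @ dx))       # Newton decrement
            return x + dx / (1.0 + lam), lam   # damped step keeps x strictly feasible

        def solve_lp(c, A, b, x0, t0=1.0, mu=1.5, tol=1e-6):
            """Short-step outer loop: re-center (corrector), then increase t
            to move along the central path (predictor)."""
            x, t = x0.copy(), t0
            m = len(b)
            while m / t > tol:                 # duality-gap bound for the log barrier
                for _ in range(50):            # centering iterations
                    x, lam = barrier_step(c, A, b, x, t)
                    if lam < 1e-6:
                        break
                t *= mu
            return x

        # Example: min x1 + x2 over the box 0 <= x <= 1 (optimum at the origin).
        A = np.vstack([np.eye(2), -np.eye(2)])
        b = np.array([1.0, 1.0, 0.0, 0.0])
        x = solve_lp(np.ones(2), A, b, x0=np.full(2, 0.5))
        print(np.round(x, 6))                  # approximately [0, 0]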

    Variational Spectral Analysis

    We present results on smooth and nonsmooth variational properties of symmetric functions of the eigenvalues of a real symmetric matrix argument, as well as absolutely symmetric functions of the singular values of a real rectangular matrix. Such results underpin the theory of optimization problems involving such functions. We answer the question of when a symmetric function of the eigenvalues admits a quadratic expansion around a matrix, and then the stronger question of when it is twice differentiable. We develop simple formulae for the most important nonsmooth subdifferentials of functions depending on the singular values of a real rectangular matrix argument and give several examples. The analysis of these two classes of functions can be generalized to various larger abstract frameworks. In particular, we investigate how functions depending on the eigenvalues or the singular values of a matrix argument may be viewed as the composition of symmetric functions with the roots of hyperbolic polynomials. We extend the relationship between hyperbolic polynomials and self-concordant barriers (an extremely important class of functions in contemporary interior-point methods for convex optimization) by exhibiting a new class of self-concordant barriers obtainable from hyperbolic polynomials.
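    As a small illustration of the variational calculus such results underpin, the following Python sketch numerically checks the classical gradient formula for a spectral function F(X) = f(lambda(X)) with f symmetric, namely grad F(X) = U diag(f'(lambda)) U^T for an eigendecomposition X = U diag(lambda) U^T. The particular choice f(lambda) = sum(lambda^4) and the use of NumPy are our assumptions for demonstration, not taken from the thesis.

        import numpy as np

        rng = np.random.default_rng(0)
        B = rng.standard_normal((4, 4))
        X = (B + B.T) / 2                          # symmetric matrix argument

        lam, U = np.linalg.eigh(X)
        grad = U @ np.diag(4 * lam**3) @ U.T       # closed-form gradient of F(X) = sum(lam**4)

        # Finite-difference comparison in a random symmetric direction D.
        D = rng.standard_normal((4, 4))
        D = (D + D.T) / 2
        eps = 1e-6
        f = lambda M: np.sum(np.linalg.eigvalsh(M) ** 4)
        fd = (f(X + eps * D) - f(X - eps * D)) / (2 * eps)
        print(abs(fd - np.sum(grad * D)))          # small: closed form matches the finite difference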