6 research outputs found

    A note on the use of vector barrier parameters for interior-point methods

    A key feature to ensure desirable convergence properties in an interior-point method is the appropriate choice of an updating rule for the barrier parameter. In this work we analyze and describe updating rules based on the use of a vector of barrier parameters. We show that these updating rules are well defined and satisfy sufficient conditions to ensure convergence to the correct limit points. We also present some numerical results that illustrate the improved performance of these strategies compared to the use of a scalar barrier parameter.
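    The per-coordinate idea can be illustrated with a minimal sketch: each inequality gets its own barrier parameter mu[i], which is shrunk independently between centering steps. The function names, the simple geometric shrink rule, and the toy problem below are illustrative assumptions, not the updating rules analyzed in the paper.

```python
import numpy as np

def vector_barrier_minimize(grad, hess, x0, mu0, shrink=0.2, tol=1e-8):
    """Barrier method for min f(x) s.t. x >= 0, with a separate barrier
    parameter mu[i] per coordinate (a 'vector barrier parameter').
    Each outer iteration approximately minimizes
        f(x) - sum_i mu[i] * log(x[i])
    by damped Newton steps, then shrinks each mu[i] independently."""
    x, mu = x0.copy(), mu0.copy()
    for _ in range(60):                      # outer (barrier) iterations
        for _ in range(50):                  # inner Newton iterations
            g = grad(x) - mu / x             # gradient of barrier subproblem
            if np.linalg.norm(g) < 1e-10:
                break
            H = hess(x) + np.diag(mu / x**2) # Hessian of barrier subproblem
            dx = np.linalg.solve(H, -g)
            t = 1.0                          # backtrack to keep x > 0
            while np.any(x + t * dx <= 0):
                t *= 0.5
            x = x + t * dx
        # a vector rule may shrink each mu[i] at its own rate; here we
        # use one uniform geometric factor purely for illustration
        mu = shrink * mu
        if mu.max() < tol:
            break
    return x

# toy problem: min (x1 - 2)^2 + (x2 + 1)^2  s.t.  x >= 0  ->  x* = (2, 0)
grad = lambda x: 2 * (x - np.array([2.0, -1.0]))
hess = lambda x: 2 * np.eye(2)
x = vector_barrier_minimize(grad, hess, np.array([1.0, 1.0]), np.ones(2))
```

    The second coordinate's constraint is active at the optimum, which is exactly the situation where letting its mu[i] follow its own schedule (rather than a shared scalar) can pay off.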

    Multi-Parameter Surfaces of Analytic Centers and Long-Step Surface-Following Interior Point Methods

    We develop a long-step polynomial-time version of the Method of Analytic Centers for nonlinear convex problems. The method traces a multi-parameter surface of analytic centers rather than the usual path, which allows it to handle cases with noncentered and possibly infeasible starting points.

    Interior-point algorithms for convex optimization based on primal-dual metrics

    We propose and analyse primal-dual interior-point algorithms for convex optimization problems in conic form. The families of algorithms we analyse are so-called short-step algorithms, and they match the current best iteration complexity bounds for the primal-dual symmetric interior-point algorithms of Nesterov and Todd, for symmetric cone programming problems with given self-scaled barriers. Our results apply to any self-concordant barrier for any convex cone. We also prove that certain specializations of our algorithms to hyperbolic cone programming problems (which lie strictly between symmetric cone programming and general convex optimization problems in terms of generality) can take advantage of the favourable special structure of hyperbolic barriers. We make new connections to Riemannian geometry, integrals over operator spaces, and Gaussian quadrature, and strengthen the connection of our algorithms to quasi-Newton updates and hence to first-order methods in general.
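    For context on what "short-step" means, the sketch below shows the simplest (linear programming) instance of the idea: each iteration takes a single Newton step toward the central-path target sigma * mu with sigma = 1 - 0.4/sqrt(n), so the duality gap shrinks geometrically. This is a generic textbook-style iteration under illustrative names, not the authors' algorithm.

```python
import numpy as np

def pd_short_step_lp(A, b, c, x, y, s, iters=60):
    """Short-step primal-dual path following for an LP in standard form:
       min c@x  s.t.  A@x = b, x >= 0   (dual: A.T@y + s = c, s >= 0).
    One full Newton step per iteration on the perturbed KKT system."""
    m, n = A.shape
    sigma = 1.0 - 0.4 / np.sqrt(n)       # short-step centering parameter
    for _ in range(iters):
        mu = (x @ s) / n
        rd = A.T @ y + s - c             # dual residual
        rp = A @ x - b                   # primal residual
        rc = x * s - sigma * mu          # perturbed complementarity residual
        KKT = np.block([
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        d = np.linalg.solve(KKT, -np.concatenate([rd, rp, rc]))
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        alpha = 1.0                      # safeguard: keep (x, s) strictly positive
        while np.any(x + alpha * dx <= 0) or np.any(s + alpha * ds <= 0):
            alpha *= 0.5
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# tiny example: min x1 + 2*x2  s.t.  x1 + x2 = 2, x >= 0  ->  x* = (2, 0)
A = np.array([[1.0, 1.0]]); b = np.array([2.0]); c = np.array([1.0, 2.0])
x, y, s = pd_short_step_lp(A, b, c, np.array([1.0, 1.0]), np.zeros(1), c.copy())
```

    Because sigma is close to 1, the iterates stay near the central path and the full Newton step is (almost) always admissible, which is what yields the O(sqrt(n) log(1/eps)) iteration bounds the abstract refers to.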

    Convex Optimization via Domain-Driven Barriers and Primal-Dual Interior-Point Methods

    This thesis studies the theory and implementation of infeasible-start primal-dual interior-point methods for convex optimization problems. Convex optimization has applications in many fields of engineering and science such as data analysis, control theory, signal processing, relaxation and randomization, and robust optimization. In addition to strong and elegant theories, the potential for creating efficient and robust software has made convex optimization very popular. Primal-dual algorithms have yielded efficient solvers for convex optimization problems in conic form over symmetric cones (linear programming (LP), second-order cone programming (SOCP), and semidefinite programming (SDP)). However, many other highly demanded convex optimization problems lack comparable solvers. To close this gap, we have introduced a general optimization setup, called \emph{Domain-Driven}, that covers many interesting classes of optimization. Domain-Driven means our techniques are directly applied to the given "good" formulation without a forced reformulation in a conic form. Moreover, this approach also naturally handles cone constraints and hence the conic form. A problem is in the Domain-Driven setup if it can be formulated as minimizing a linear function over a convex set, where the convex set is equipped with an efficient self-concordant barrier with an easy-to-evaluate Legendre-Fenchel conjugate. We show how general this setup is by providing several interesting classes of examples. LP, SOCP, and SDP are covered by the Domain-Driven setup. More generally, consider all convex cones with the property that both the cone and its dual admit efficiently computable self-concordant barriers. Then our Domain-Driven setup can handle any conic optimization problem formulated using direct sums of these cones and their duals. We then show how to construct interesting convex sets as the direct sum of the epigraphs of univariate convex functions. 
This construction, as a special case, contains problems such as geometric programming, p-norm optimization, and entropy programming, the solutions of which are in great demand in engineering and science. Another interesting class of convex sets covered by the Domain-Driven setup is the generalized epigraph of a matrix norm. This, as a special case, allows us to minimize the nuclear norm over a linear subspace, which has applications in machine learning and big data. The Domain-Driven setup contains the combination of all the above problems; for example, we can have a problem with LP and SDP constraints, combined with ones defined by univariate convex functions or the epigraph of a matrix norm. We review the literature on infeasible-start algorithms and discuss the pros and cons of different methods to show where our algorithms stand among them. This thesis contains a chapter about several properties of self-concordant functions. Since we are dealing with general convex sets, many of these properties are used frequently in the design and analysis of our algorithms. We introduce a notion of duality gap for the Domain-Driven setup that reduces to the conventional duality gap if the problem is a conic optimization problem, and prove some general results. Then, to solve our problems, we construct infeasible-start primal-dual central paths. A critical part of achieving the current best iteration complexity bounds is designing algorithms that follow the path efficiently. The algorithms we design are predictor-corrector algorithms. Determining the status of a general convex optimization problem (as being unbounded, infeasible, having optimal solutions, etc.) is much more complicated than that of LP. We classify the possible statuses (seven possibilities) for our problem as: solvable, strictly primal-dual feasible, strictly and strongly primal infeasible, strictly and strongly primal unbounded, and ill-conditioned. 
We discuss the certificates our algorithms return (heavily relying on duality) for each of these cases and analyze the number of iterations required to return such certificates. For infeasibility and unboundedness, we define a weak and a strict detector. We prove that our algorithms return these certificates (solve the problem) in polynomial time, with the current best theoretical complexity bounds. The complexity results are new for the infeasible-start models used. The different patterns that can be detected by our algorithms and the iteration complexity bounds for them are comparable to the current best results available for infeasible-start conic optimization, which to the best of our knowledge is the work of Nesterov-Todd-Ye (1999). On the applications, computation, and software front, based on our algorithms, we created a Matlab-based code, called DDS, that solves a large class of problems including LP, SOCP, SDP, quadratically constrained quadratic programming (QCQP), geometric programming, and entropy programming, with more to be added. Even though the code is not finalized, this chapter shows a glimpse of the possibilities. The generality of the code lets us solve problems that CVX (a modeling system for convex optimization) does not even recognize as convex. The DDS code accepts constraints representing the epigraph of a matrix norm, which, as we mentioned, covers minimizing the nuclear norm over a linear subspace. For acceptable classes of convex optimization problems, we explain the format of the input. We give formulas for computing the gradient and Hessian of the corresponding self-concordant barriers and their Legendre-Fenchel conjugates, and discuss the methods we use to compute them efficiently and robustly. We present several numerical results of applying the DDS code to our constructed examples and also to problems from well-known libraries such as the DIMACS library of mixed semidefinite-quadratic-linear programs. 
We also discuss various numerical challenges and our approaches for resolving them.
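    The key primitive the Domain-Driven setup asks for is a self-concordant barrier F whose Legendre-Fenchel conjugate F_*(y) = sup_x { y*x - F(x) } is easy to evaluate. For the simplest domain, the half-line x > 0 with F(x) = -ln(x), the conjugate has the closed form F_*(y) = -1 - ln(-y) for y < 0. The small check below (names and the brute-force grid are illustrative, not from the thesis) verifies that closed form numerically.

```python
import numpy as np

# barrier for the half-line x > 0 and its Legendre-Fenchel conjugate
F = lambda x: -np.log(x)            # self-concordant barrier
F_conj = lambda y: -1 - np.log(-y)  # closed-form conjugate, valid for y < 0

# brute-force the supremum  sup_x { y*x - F(x) }  on a fine grid and
# compare with the closed form (maximizer is x = -1/y, well inside the grid)
xs = np.linspace(1e-4, 50.0, 200_001)
for y in (-0.5, -1.0, -3.0):
    sup_numeric = np.max(y * xs - F(xs))
    assert abs(sup_numeric - F_conj(y)) < 1e-3
```

    Barriers whose conjugates are this cheap to evaluate are what let the infeasible-start primal-dual machinery above work directly on the given formulation instead of forcing a conic reformulation.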