    A polynomial-time inexact interior-point method for convex quadratic symmetric cone programming

    Abstract. In this paper, we design an inexact primal-dual infeasible path-following algorithm for convex quadratic programming over symmetric cones. Our algorithm and its polynomial iteration complexity analysis give a unified treatment for a number of previous algorithms and their complexity analysis. In particular, our algorithm and analysis include the one designed for linear semidefinite programming in "Math. Prog. 99 (2004), pp. 261-282". Under a mild condition on the inexactness of the search direction at each interior-point iteration, we show that the algorithm can find an $\epsilon$-approximate solution in $O(n^2\log(1/\epsilon))$ iterations, where $n$ is the rank of the underlying Euclidean Jordan algebra.
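
    The paper's inexact symmetric-cone machinery is not reproduced here; as a rough orientation only, the sketch below runs a textbook primal-dual path-following loop for a convex QP over the simplest symmetric cone, the nonnegative orthant, with search directions computed exactly. All problem data and parameter choices (Q, A, b, c, the centering parameter sigma, the step damping) are illustrative assumptions.

```python
# A minimal sketch, not the paper's method: exact-direction primal-dual
# path-following for a convex QP over the nonnegative orthant:
#     minimize 0.5*x'Qx + c'x   subject to   Ax = b,  x >= 0.
# The paper's point is that the search directions may be computed inexactly
# while keeping a polynomial iteration bound; here they are solved exactly.
import numpy as np

def pd_path_following(Q, A, b, c, tol=1e-8, sigma=0.5, max_iter=200):
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)   # infeasible interior start
    for _ in range(max_iter):
        mu = x @ s / n                               # duality measure
        if mu < tol:
            break
        r_dual = Q @ x + c + A.T @ y - s             # stationarity residual
        r_prim = A @ x - b                           # primal feasibility residual
        r_cent = x * s - sigma * mu                  # centering residual
        X, S = np.diag(x), np.diag(s)
        KKT = np.block([[Q, A.T,               -np.eye(n)],
                        [A, np.zeros((m, m)),   np.zeros((m, n))],
                        [S, np.zeros((n, m)),   X]])
        d = np.linalg.solve(KKT, -np.concatenate([r_dual, r_prim, r_cent]))
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        alpha = 1.0                                  # damped step: stay interior
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# Tiny example: minimize x1^2 + x2^2 - x1 - 2*x2  s.t.  x1 + x2 = 1, x >= 0
Q = 2.0 * np.eye(2)
A = np.array([[1.0, 1.0]])
x_opt, _, _ = pd_path_following(Q, A, np.array([1.0]), np.array([-1.0, -2.0]))
print(np.round(x_opt, 4))   # approx [0.25, 0.75]
```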

    Convex Combinatorial Optimization

    We introduce the convex combinatorial optimization problem, a far-reaching generalization of the standard linear combinatorial optimization problem. We show that it is strongly polynomial time solvable over any edge-guaranteed family, and discuss several applications.
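
    The abstract does not spell out the model; a common formulation (assumed here) is to maximize a convex function $c(Wx)$ of a low-dimensional linear projection $Wx$ over a combinatorial family $S$ of 0/1 vectors. The brute-force enumeration below only illustrates that objective on made-up data ($S$, $W$ and $c$ are all assumptions); it is exponential and is not the paper's strongly polynomial method.

```python
# Assumed formulation: maximize a convex function c(Wx) of the projection Wx
# over a family S of 0/1 vectors.  Brute force is used purely for illustration.
import itertools
import numpy as np

n, k = 6, 3
# Example family S: indicator vectors of all k-element subsets of {0,...,n-1}
S = [np.array([1.0 if i in T else 0.0 for i in range(n)])
     for T in itertools.combinations(range(n), k)]

W = np.array([[1.0, 2.0, 0.5, 3.0, 1.5, 2.5],    # assumed 2 x n weighting
              [2.0, 0.5, 3.0, 1.0, 2.0, 0.5]])

def c(y):
    return float(np.sum(y ** 2))                 # a convex objective on R^2

best = max(S, key=lambda x: c(W @ x))
print("maximizer:", best.astype(int), "value:", c(W @ best))
```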

    Strongly polynomial algorithm for a class of minimum-cost flow problems with separable convex objectives

    A well-studied nonlinear extension of the minimum-cost flow problem is to minimize the objective $\sum_{ij\in E} C_{ij}(f_{ij})$ over feasible flows $f$, where on every arc $ij$ of the network, $C_{ij}$ is a convex function. We give a strongly polynomial algorithm for the case when all $C_{ij}$'s are convex quadratic functions, settling an open problem raised e.g. by Hochbaum [1994]. We also give strongly polynomial algorithms for computing market equilibria in Fisher markets with linear utilities and with spending constraint utilities, that can be formulated in this framework (see Shmyrev [2009], Devanur et al. [2011]). For the latter class this resolves an open question raised by Vazirani [2010]. The running time is $O(m^4\log m)$ for quadratic costs, $O(n^4+n^2(m+n\log n)\log n)$ for Fisher's markets with linear utilities and $O(mn^3+m^2(m+n\log n)\log m)$ for spending constraint utilities. All these algorithms are presented in a common framework that addresses the general problem setting. Whereas it is impossible to give a strongly polynomial algorithm for the general problem even in an approximate sense (see Hochbaum [1994]), we show that assuming the existence of certain black-box oracles, one can give an algorithm using a strongly polynomial number of arithmetic operations and oracle calls only. The particular algorithms can be derived by implementing these oracles in the respective settings.
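
    To make the problem setting concrete, the sketch below states a tiny separable convex quadratic flow instance and solves it with a generic off-the-shelf solver (scipy's SLSQP); it is not the paper's strongly polynomial combinatorial algorithm, and the graph, costs, capacities and supplies are made-up example data.

```python
# A minimal sketch of the problem setting only: a tiny separable convex
# quadratic flow instance solved with an off-the-shelf NLP solver, not the
# paper's strongly polynomial combinatorial algorithm.
import numpy as np
from scipy.optimize import minimize

arcs = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]      # digraph on nodes 0..3
n_nodes, n_arcs = 4, len(arcs)

# Node-arc incidence matrix: +1 at the tail, -1 at the head of each arc
B = np.zeros((n_nodes, n_arcs))
for e, (u, v) in enumerate(arcs):
    B[u, e], B[v, e] = 1.0, -1.0

supply = np.array([2.0, 0.0, 0.0, -2.0])             # ship 2 units from node 0 to node 3
qa = np.array([1.0, 2.0, 1.5, 1.0, 0.5])             # quadratic coefficients (convex)
qb = np.array([0.0, 1.0, 0.0, 2.0, 0.0])             # linear coefficients

def cost(f):                                         # sum_{ij in E} C_ij(f_ij)
    return float(np.sum(qa * f ** 2 + qb * f))

def grad(f):
    return 2.0 * qa * f + qb

res = minimize(cost, x0=np.zeros(n_arcs), jac=grad, method="SLSQP",
               bounds=[(0.0, 3.0)] * n_arcs,         # assumed arc capacities
               constraints=[{"type": "eq", "fun": lambda f: B @ f - supply}])
print("optimal flow per arc:", np.round(res.x, 4))
```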

    An Algorithmic Theory of Integer Programming

    We study the general integer programming problem where the number of variables $n$ is a variable part of the input. We consider two natural parameters of the constraint matrix $A$: its numeric measure $a$ and its sparsity measure $d$. We show that integer programming can be solved in time $g(a,d)\textrm{poly}(n,L)$, where $g$ is some computable function of the parameters $a$ and $d$, and $L$ is the binary encoding length of the input. In particular, integer programming is fixed-parameter tractable parameterized by $a$ and $d$, and is solvable in polynomial time for every fixed $a$ and $d$. Our results also extend to nonlinear separable convex objective functions. Moreover, for linear objectives, we derive a strongly-polynomial algorithm, that is, with running time $g(a,d)\textrm{poly}(n)$, independent of the rest of the input data. We obtain these results by developing an algorithmic framework based on the idea of iterative augmentation: starting from an initial feasible solution, we show how to quickly find augmenting steps which rapidly converge to an optimum. A central notion in this framework is the Graver basis of the matrix $A$, which constitutes a set of fundamental augmenting steps. The iterative augmentation idea is then enhanced via the use of other techniques such as new and improved bounds on the Graver basis, rapid solution of integer programs with bounded variables, proximity theorems and a new proximity-scaling algorithm, the notion of a reduced objective function, and others. As a consequence of our work, we advance the state of the art of solving block-structured integer programs. In particular, we develop near-linear time algorithms for $n$-fold, tree-fold, and $2$-stage stochastic integer programs. We also discuss some of the many applications of these classes.

    Comment: Revision 2: strengthened dual treedepth lower bound; simplified proximity-scaling algorithm.
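
    As a rough illustration of iterative augmentation, the sketch below runs greedy Graver-basis augmentation on a toy instance; the Graver basis of the single-row matrix $A=[1\ 1\ 1]$ is written down by hand (it consists of the vectors $e_i-e_j$), and the objective, bounds and starting point are assumptions. Bounding Graver elements and finding good augmenting steps quickly is the paper's contribution and is not shown.

```python
# A toy illustration of iterative (Graver-basis) augmentation.  The Graver
# basis below is hand-listed for the single-row matrix A = [1 1 1]
# (it is {e_i - e_j : i != j}); the paper's contribution (bounding Graver
# elements and finding good augmenting steps quickly) is not shown.
import itertools
import numpy as np

A = np.array([[1, 1, 1]])
b = np.array([6])
lower, upper = np.zeros(3, dtype=int), np.full(3, 6)
c = np.array([3, 1, 2])                     # linear objective, to be minimized

graver = [np.eye(3, dtype=int)[i] - np.eye(3, dtype=int)[j]
          for i, j in itertools.permutations(range(3), 2)]

def feasible(x):
    return (A @ x == b).all() and (lower <= x).all() and (x <= upper).all()

x = np.array([6, 0, 0])                     # an initial feasible point
improved = True
while improved:                             # augment until no Graver step helps
    improved = False
    for g in graver:
        step = 1                            # greedy: longest improving feasible step
        while feasible(x + step * g) and c @ (x + step * g) < c @ x:
            step += 1
        if step > 1:
            x = x + (step - 1) * g
            improved = True
print("augmentation reached x =", x, "objective =", c @ x)   # x = [0 6 0], 6
```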

    An update on the Hirsch conjecture

    The Hirsch conjecture was posed in 1957 in a letter from Warren M. Hirsch to George Dantzig. It states that the graph of a $d$-dimensional polytope with $n$ facets cannot have diameter greater than $n-d$. Despite being one of the most fundamental, basic and old problems in polytope theory, what we know is quite scarce. Most notably, no polynomial upper bound is known for the diameters that are conjectured to be linear. In contrast, very few polytopes are known where the bound $n-d$ is attained. This paper collects known results and remarks both on the positive and on the negative side of the conjecture. Some proofs are included, but only those that we hope are accessible to a general mathematical audience without introducing too many technicalities.

    Comment: 28 pages, 6 figures. Many proofs have been taken out from version 2 and put into the appendix arXiv:0912.423

    Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier

    This paper explores a surprising equivalence between two seemingly-distinct convex optimization methods. We show that simulated annealing, a well-studied random walk algorithm, is directly equivalent, in a certain sense, to the central path interior point algorithm for the entropic universal barrier function. This connection exhibits several benefits. First, we are able to improve the state-of-the-art time complexity for convex optimization under the membership oracle model. We improve the analysis of the randomized algorithm of Kalai and Vempala by utilizing tools developed by Nesterov and Nemirovskii that underlie the central path following interior point algorithm. We are able to tighten the temperature schedule for simulated annealing, which gives an improved running time, reducing it by a square root of the dimension in certain instances. Second, we get an efficient randomized interior point method with an efficiently computable universal barrier for any convex set described by a membership oracle. Previously, efficiently computable barriers were known only for particular convex sets.
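
    The sketch below is a schematic rendering of membership-oracle simulated annealing: hit-and-run chord proposals with a Metropolis filter and a geometric cooling schedule, over an assumed example body (the unit ball) and an assumed linear objective. The paper's tightened temperature schedule, the entropic barrier, and the resulting complexity bounds are not reproduced.

```python
# A schematic sketch only: simulated annealing over a convex body K given by a
# membership oracle, using hit-and-run chord proposals with a Metropolis filter
# and geometric cooling.  The example oracle (unit ball), objective, and all
# schedule constants are assumptions; the paper's analysis is not reproduced.
import numpy as np

rng = np.random.default_rng(0)

def membership(x):                          # example oracle: the unit ball
    return np.linalg.norm(x) <= 1.0

def chord_end(x, d, oracle, t_max=4.0, iters=40):
    lo, hi = 0.0, t_max                     # bisection for the boundary along d
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if oracle(x + mid * d) else (lo, mid)
    return lo

def anneal(c, x0, oracle, T0=1.0, cooling=0.95, steps_per_T=50, T_min=1e-3):
    x, T = np.array(x0, dtype=float), T0
    while T > T_min:
        for _ in range(steps_per_T):
            d = rng.normal(size=x.size)
            d /= np.linalg.norm(d)                    # random chord direction
            t = rng.uniform(-chord_end(x, -d, oracle), chord_end(x, d, oracle))
            y = x + t * d                             # uniform proposal on the chord
            if rng.random() < np.exp(min(0.0, -(c @ y - c @ x) / T)):
                x = y                                 # Metropolis acceptance
        T *= cooling                                  # geometric cooling schedule
    return x

c = np.array([1.0, -2.0, 0.5])                        # assumed linear objective
print(np.round(anneal(c, np.zeros(3), membership), 3))  # true minimizer is -c/||c||
```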