
    An interior-point method for the single-facility location problem with mixed norms using a conic formulation

    We consider the single-facility location problem with mixed norms, i.e. the problem of minimizing the sum of the distances from a point to a set of fixed points in R^n, where each distance can be measured according to a different p-norm. We show how this problem can be expressed in a structured conic format by decomposing the nonlinear components of the objective into a series of constraints involving three-dimensional cones. Using the availability of a self-concordant barrier for these cones, we present a polynomial-time algorithm (a long-step path-following interior-point scheme) to solve the problem up to a given accuracy. Finally, we report computational results for this algorithm and compare with standard nonlinear optimization solvers applied to this problem.
    Keywords: nonsymmetric conic optimization, conic reformulation, convex optimization, sum of norms minimization, single-facility location problems, interior-point methods
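
    The problem statement above is concrete enough to sketch in code. The snippet below is a minimal illustration using cvxpy, which performs its own conic reformulation of the norms; it is not the authors' specialized long-step path-following solver, and the anchor points and norm choices are made-up example data.

    ```python
    import numpy as np
    import cvxpy as cp

    # Example data (assumed, for illustration): five fixed points in R^3,
    # each with its own p-norm for measuring distance to the facility.
    rng = np.random.default_rng(1)
    anchors = rng.standard_normal((5, 3))
    norms = [1, 2, 2, 3, np.inf]

    # Minimize the sum of mixed-norm distances from x to the anchors.
    # cvxpy canonicalizes each p-norm into cone constraints internally,
    # echoing (but not reproducing) the paper's decomposition into
    # three-dimensional cones.
    x = cp.Variable(3)
    objective = cp.sum([cp.norm(x - a, p) for a, p in zip(anchors, norms)])
    prob = cp.Problem(cp.Minimize(objective))
    prob.solve()

    print("optimal facility location:", x.value)
    print("total mixed-norm distance:", prob.value)
    ```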

    Projectively self-concordant barriers on convex sets

    Self-concordance is the most important property required for barriers in convex programming. We introduce an alternative, stronger notion, which we call projective self-concordance, define the corresponding Dikin sets by a quadratic inequality, and develop the associated duality theory. Our notion is equivariant with respect to the group of projective transformations, which is larger than the affine group corresponding to the classical notion. Our Dikin sets are larger than the classical Dikin ellipsoids, depend on the gradient of the barrier at the center point, are non-symmetric, and may even be unbounded. From the derivatives of the barrier at a given point we construct a quadratic set which overbounds the underlying convex set, which is not possible for the classical notion of self-concordance. This opens the way to designing algorithms which take larger steps and hence have a faster convergence rate than traditional interior-point methods. We give many examples of convex sets with projectively self-concordant barriers.
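
    For reference, the classical notions that this abstract strengthens can be stated as follows (standard definitions, not the paper's projective ones):

    ```latex
    % F is self-concordant on the interior of K if, for all x and directions h,
    \[
      \bigl| D^3 F(x)[h,h,h] \bigr| \;\le\; 2 \bigl( D^2 F(x)[h,h] \bigr)^{3/2},
    \]
    % and the classical Dikin ellipsoid at x is then always contained in K:
    \[
      W(x) = \bigl\{\, y : (y - x)^{\top} \nabla^{2} F(x) \, (y - x) \le 1 \,\bigr\} \subseteq K .
    \]
    ```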

    Volumetric Spanners: an Efficient Exploration Basis for Learning

    Numerous machine learning problems require an exploration basis: a mechanism to explore the action space. We define a novel geometric notion of exploration basis with low variance, called volumetric spanners, and give efficient algorithms to construct such a basis. We show how efficient volumetric spanners give rise to the first efficient and optimal regret algorithm for bandit linear optimization over general convex sets. Previously, such results were known only for specific convex sets, or under special conditions such as the existence of an efficient self-concordant barrier for the underlying set.
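
    Constructing a volumetric spanner is the technical heart of the paper and is not reproduced here; as a concrete stand-in, the sketch below computes the closely related (and weaker) barycentric spanner of Awerbuch and Kleinberg by greedy determinant swapping, which shows what an exploration basis looks like in code. The data and parameter choices are illustrative.

    ```python
    import numpy as np

    def barycentric_spanner(X, C=2.0):
        """Greedy C-approximate barycentric spanner of the rows of X (m x n).

        Returns indices of n rows such that every row of X is a combination
        of the chosen rows with coefficients in [-C, C]. Assumes the rows
        span R^n; C > 1 guarantees polynomial termination.
        """
        m, n = X.shape
        B = np.eye(n)                 # working basis, one row per slot
        idx = [-1] * n
        # Phase 1: fill each slot with a determinant-maximizing point.
        for j in range(n):
            dets = []
            for i in range(m):
                Bj = B.copy(); Bj[j] = X[i]
                dets.append(abs(np.linalg.det(Bj)))
            idx[j] = int(np.argmax(dets))
            B[j] = X[idx[j]]
        # Phase 2: keep swapping while some point grows |det| by a factor > C.
        improved = True
        while improved:
            improved = False
            base = abs(np.linalg.det(B))
            for j in range(n):
                for i in range(m):
                    Bj = B.copy(); Bj[j] = X[i]
                    if abs(np.linalg.det(Bj)) > C * base:
                        B, idx[j] = Bj, i
                        base = abs(np.linalg.det(B))
                        improved = True
        return idx

    points = np.random.default_rng(0).standard_normal((50, 4))
    print(barycentric_spanner(points))   # indices of 4 spanning rows
    ```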

    Interior Point Methods with a Gradient Oracle

    We provide an interior point method based on quasi-Newton iterations, which only requires first-order access to a strongly self-concordant barrier function. To achieve this, we extend the techniques of Dunagan-Harvey [STOC '07] to maintain a preconditioner, while using only first-order information. We measure the quality of this preconditioner in terms of its relative eccentricity to the unknown Hessian matrix, and we generalize these techniques to convex functions with a slowly-changing Hessian. We combine this with an interior point method to show that, given first-order access to an appropriate barrier function for a convex set $K$, we can solve well-conditioned linear optimization problems over $K$ to $\varepsilon$ precision in time $\widetilde{O}((\mathcal{T}+n^{2})\sqrt{n\nu}\log(1/\varepsilon))$, where $\nu$ is the self-concordance parameter of the barrier function, and $\mathcal{T}$ is the time required to make a gradient query. As a consequence we show that:
    • Linear optimization over $n$-dimensional convex sets can be solved in time $\widetilde{O}((\mathcal{T}n+n^{3})\log(1/\varepsilon))$. This parallels the running time achieved by state-of-the-art algorithms for cutting plane methods, when replacing separation oracles with first-order oracles for an appropriate barrier function.
    • We can solve semidefinite programs involving $m \geq n$ matrices in $\mathbb{R}^{n\times n}$ in time $\widetilde{O}(mn^{4}+m^{1.25}n^{3.5}\log(1/\varepsilon))$, improving over the state-of-the-art algorithms in the case where $m=\Omega(n^{\frac{3.5}{\omega-1.25}})$.
    Along the way we develop a host of tools allowing us to control the evolution of our potential functions, using techniques from matrix analysis and Schur convexity.
    Comment: STOC 202
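
    As a toy illustration of "first-order access to a barrier," the sketch below runs a basic barrier method on the box (0,1)^n, where the inner centering steps use BFGS updates built purely from gradient queries. This is an assumption-laden stand-in: the set, barrier, and quasi-Newton inner loop are chosen for simplicity and are not the paper's preconditioner-maintenance scheme.

    ```python
    import numpy as np

    # Log barrier for the (assumed) example set, the open box (0,1)^n.
    def F(x):
        if np.any(x <= 0) or np.any(x >= 1):
            return np.inf             # outside the barrier's domain
        return -np.log(x).sum() - np.log1p(-x).sum()

    def gradF(x):                     # the only barrier information the inner loop uses
        return -1.0 / x + 1.0 / (1.0 - x)

    def centering(c, t, x, tol=1e-8, max_iter=500):
        """Approximately minimize t*c@x + F(x) using gradient-built BFGS steps."""
        H = np.eye(x.size)            # inverse-Hessian surrogate
        g = t * c + gradF(x)
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            d = -H @ g                # descent direction (H stays positive definite)
            s, fx = 1.0, t * c @ x + F(x)
            # Armijo backtracking; F is +inf outside the box, so the step
            # automatically remains strictly feasible.
            while t * c @ (x + s * d) + F(x + s * d) > fx + 1e-4 * s * (g @ d):
                s *= 0.5
            x_new = x + s * d
            g_new = t * c + gradF(x_new)
            y, sv = g_new - g, x_new - x
            rho = 1.0 / (y @ sv)
            if np.isfinite(rho) and rho > 0:      # standard curvature check
                I = np.eye(x.size)
                H = (I - rho * np.outer(sv, y)) @ H @ (I - rho * np.outer(y, sv)) \
                    + rho * np.outer(sv, sv)
            x, g = x_new, g_new
        return x

    def barrier_method(c, n, t=1.0, growth=4.0, outer=12):
        x = np.full(n, 0.5)           # analytic center of the box
        for _ in range(outer):
            x = centering(c, t, x)
            t *= growth
        return x

    print(barrier_method(np.array([1.0, -2.0, 0.5]), 3))   # approaches (0, 1, 0)
    ```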

    Interior point methods and simulated annealing for nonsymmetric conic optimization

    This thesis explores four methods for convex optimization. The first two are an interior point method and a simulated annealing algorithm that share a theoretical foundation. This connection is due to the interior point method’s use of the so-called entropic barrier, whose derivatives can be approximated through sampling. Here, the sampling is carried out with a technique known as hit-and-run. By carefully analyzing the properties of hit-and-run sampling, it is shown that both the interior point method and the simulated annealing algorithm can solve a convex optimization problem in the membership oracle setting, with the number of oracle calls bounded by a polynomial in the input size. The third method is an analytic center cutting plane method that shows promising performance for copositive optimization. It outperforms the first two methods by a significant margin on the problem of separating a matrix from the completely positive cone. The final method is based on Mosek’s algorithm for nonsymmetric conic optimization. Using their scaling matrix, search direction, and neighborhood, we define a method that converges to a near-optimal solution in polynomial time.
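
    The hit-and-run primitive mentioned above is simple to state in code. Below is a minimal sketch of a uniform-target hit-and-run walk using only a membership oracle, with the Euclidean unit ball as an assumed stand-in body; the thesis runs the walk against more general target distributions to approximate derivatives of the entropic barrier.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def member(x):
        """Membership oracle for the (assumed) body: the closed unit ball."""
        return np.linalg.norm(x) <= 1.0

    def chord_end(x, d, iters=40):
        """Approximate the largest s >= 0 with x + s*d in the body, by bisection."""
        lo, hi = 0.0, 1.0
        while member(x + hi * d):     # expand until a point outside is found
            hi *= 2.0
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if member(x + mid * d) else (lo, mid)
        return lo

    def hit_and_run(x, steps=1000):
        """Uniform hit-and-run: random direction, then a uniform point on the chord."""
        for _ in range(steps):
            d = rng.standard_normal(x.size)
            d /= np.linalg.norm(d)
            s_plus = chord_end(x, d)      # forward endpoint of the chord through x
            s_minus = chord_end(x, -d)    # backward endpoint
            x = x + rng.uniform(-s_minus, s_plus) * d
        return x

    print(hit_and_run(np.zeros(3)))   # an (approximately) uniform sample from the ball
    ```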