
    Computing Optimal Experimental Designs via Interior Point Method

    In this paper, we study optimal experimental design problems with a broad class of smooth convex optimality criteria, including the classical A-, D-, and $p$-th mean criteria. In particular, we propose an interior point (IP) method for these problems and establish its global convergence. Furthermore, by exploiting the structure of the Hessian matrix of the aforementioned optimality criteria, we derive an explicit formula for computing its rank. Using this result, we then show that the Newton direction arising in the IP method can be computed efficiently via the Sherman-Morrison-Woodbury formula when the size of the moment matrix is small relative to the sample size. Finally, we compare our IP method with the widely used multiplicative algorithm introduced by Silvey et al. [29]. The computational results show that the IP method generally outperforms the multiplicative algorithm in both speed and solution quality.
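    The rank result above is what makes the Newton step cheap: once the barrier Hessian splits into a diagonal term plus a low-rank factor, the Sherman-Morrison-Woodbury identity reduces an n-by-n solve to a k-by-k one. A minimal Python sketch of that identity (the diag(d) + U U^T form and the function name are illustrative assumptions, not the paper's notation):

    import numpy as np

    def smw_solve(d, U, r):
        """Solve (diag(d) + U @ U.T) p = r via Sherman-Morrison-Woodbury.

        With U of shape (n, k) and k << n, the cost is O(n k^2) rather
        than the O(n^3) of a dense factorization -- the regime the paper
        identifies when the moment matrix is small relative to the
        sample size.
        """
        Dinv_r = r / d                  # diag(d)^{-1} r
        Dinv_U = U / d[:, None]         # diag(d)^{-1} U
        k = U.shape[1]
        S = np.eye(k) + U.T @ Dinv_U    # small k-by-k capacitance matrix
        return Dinv_r - Dinv_U @ np.linalg.solve(S, U.T @ Dinv_r)

    The output agrees with np.linalg.solve(np.diag(d) + U @ U.T, r) up to rounding, which is an easy sanity check on the identity.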

    Approximate Convex Optimization by Online Game Playing

    Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an $\epsilon$-approximate solution is proportional to $\frac{1}{\epsilon^2}$. Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for fractional packing linear programs which runs in $\frac{1}{\epsilon}$ iterations. The latter algorithm requires solving a convex quadratic program at every iteration, an optimization subroutine which dominates the theoretical running time. We give an algorithm for convex programs with strictly convex constraints which runs in time proportional to $\frac{1}{\epsilon}$. The algorithm does not require solving any quadratic program, but uses gradient steps and elementary operations only. Problems which have strictly convex constraints include maximum entropy frequency estimation, portfolio optimization with loss risk constraints, and various computational problems in signal processing. As a side product, we also obtain a simpler version of Bienstock and Iyengar's result for general linear programming, with similar running time. We derive these algorithms using a new framework for deriving convex optimization algorithms from online game playing algorithms, which may be of independent interest.
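    The reduction alluded to above is usually a repeated game between a primal player choosing the point x and a dual player weighting the constraints; when both sides play low-regret strategies, the averaged iterates become approximately feasible. A generic Python sketch of such a primal-dual game (this is the standard online-learning reduction rather than the paper's exact algorithm; all names and step sizes are illustrative):

    import numpy as np

    def approx_feasibility(fs, grads, x0, T, eta_x, eta_p, project):
        """Seek x with f_i(x) <= 0 for all i via a primal-dual game.

        Dual player: multiplicative weights over the m constraints.
        Primal player: a projected gradient step on the weighted sum.
        Returns the average of the primal iterates.
        """
        m = len(fs)
        p = np.ones(m) / m
        x = x0.astype(float)
        avg = np.zeros_like(x)
        for _ in range(T):
            # Primal: descend on the p-weighted constraint violation.
            g = sum(pi * gi(x) for pi, gi in zip(p, grads))
            x = project(x - eta_x * g)
            # Dual: upweight the currently most violated constraints.
            losses = np.array([f(x) for f in fs])
            p = p * np.exp(eta_p * losses)
            p = p / p.sum()
            avg += x / T
        return avg

    With generic step sizes this recovers the usual $1/\epsilon^2$ rate; the paper's point is that strict convexity of the constraints can be exploited to bring the iteration count down to $1/\epsilon$, much as strongly convex online losses admit logarithmic instead of $\sqrt{T}$ regret.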

    Adapting the interior point method for the solution of LPs on serial, coarse grain parallel and massively parallel computers

    In this paper we describe a unified scheme for implementing an interior point method (IPM) over a range of computer architectures. In the inner iteration of the IPM, a search direction is computed using Newton's method. Computationally, this involves solving a sparse symmetric positive definite (SSPD) system of equations. The choice of direct and indirect methods for the solution of this system, and the design of data structures to take advantage of serial, coarse grain parallel and massively parallel computer architectures, are considered in detail. We put forward arguments as to why integration of the system within a sparse simplex solver is important and outline how the system is designed to achieve this integration.
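    For LPs, the SSPD system in question is typically the normal-equations matrix $A \Theta A^T$, where $\Theta$ is a diagonal scaling that changes at each interior point. A brief Python/SciPy sketch of the direct-versus-indirect choice (the function name, and sparse LU standing in for a sparse Cholesky factorization, are assumptions for illustration):

    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def newton_direction(A, theta, r, direct=True):
        """Solve the SSPD normal-equations system (A diag(theta) A^T) dy = r.

        direct=True uses a sparse factorization; direct=False uses
        conjugate gradients, the 'indirect' option, which needs only
        matrix-vector products and so maps more naturally onto
        massively parallel hardware.
        """
        M = (A @ sp.diags(theta) @ A.T).tocsc()
        if direct:
            return spla.splu(M).solve(r)
        dy, info = spla.cg(M, r)
        if info != 0:
            raise RuntimeError("CG did not converge")
        return dy

    The trade-off mirrors the paper's theme: the right method and data structures depend as much on the target architecture as on the problem itself.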

    A Quantum Interior Point Method for LPs and SDPs

    We present a quantum interior point method with worst case running time $\widetilde{O}(\frac{n^{2.5}}{\xi^{2}} \mu \kappa^3 \log (1/\epsilon))$ for SDPs and $\widetilde{O}(\frac{n^{1.5}}{\xi^{2}} \mu \kappa^3 \log (1/\epsilon))$ for LPs, where the output of our algorithm is a pair of matrices $(S,Y)$ that are $\epsilon$-optimal $\xi$-approximate SDP solutions. The factor $\mu$ is at most $\sqrt{2}n$ for SDPs and $\sqrt{2n}$ for LPs, and $\kappa$ is an upper bound on the condition number of the intermediate solution matrices. For the case where the intermediate matrices for the interior point method are well conditioned, our method provides a polynomial speedup over the best known classical SDP solvers and interior point based LP solvers, which have a worst case running time of $O(n^{6})$ and $O(n^{3.5})$ respectively. Our results build upon recently developed techniques for quantum linear algebra and pave the way for the development of quantum algorithms for a variety of applications in optimization and machine learning.
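    To make the claimed polynomial speedup concrete, here is a back-of-the-envelope comparison (our arithmetic, not the paper's, treating $\kappa$ and $1/\xi$ as constants in the well-conditioned regime) obtained by substituting the stated bounds on $\mu$:

    % LPs: \mu \le \sqrt{2n}
    \widetilde{O}\bigl(n^{1.5}\sqrt{2n}\bigr) = \widetilde{O}(n^{2})
      \quad\text{vs.}\quad O(n^{3.5})\ \text{classically};
    % SDPs: \mu \le \sqrt{2}\,n
    \widetilde{O}\bigl(n^{2.5}\cdot\sqrt{2}\,n\bigr) = \widetilde{O}(n^{3.5})
      \quad\text{vs.}\quad O(n^{6})\ \text{classically}.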