
    Comparative analysis of the affine scaling and Karmarkar's polynomial-time algorithms for linear programming

    The simplex method is the well-known, non-polynomial solution technique for linear programming problems. However, computational testing has shown that Karmarkar's polynomial projective interior point method may perform better than the simplex method on many classes of problems, especially large ones. The affine scaling algorithm is a variant of Karmarkar's algorithm. In this paper, we compare the affine scaling and Karmarkar algorithms on the same test LP problems.
    Keywords: Polynomial-time, Complexity bound, Primal LP, Dual LP, Basic Solution, Degenerate Solution, Affine Space, Simplex and Polytope
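
    As a concrete reference point for the comparison, the sketch below shows one standard form of the primal affine scaling iteration for $\min c^T x$ subject to $Ax = b$, $x > 0$. The function name, stopping rule, and step fraction are illustrative assumptions, not the implementation tested in the paper.

```python
import numpy as np

def affine_scaling(A, b, c, x, gamma=0.9, tol=1e-8, max_iter=500):
    """Sketch of the primal affine scaling iteration for
        min c^T x  s.t.  A x = b,  x > 0,
    started from a strictly feasible x (A assumed full row rank)."""
    for _ in range(max_iter):
        D2 = np.diag(x**2)                               # D^2 with D = diag(x)
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)    # dual estimate
        r = c - A.T @ w                                  # reduced costs
        if (r >= -tol).all() and x @ r < tol:            # near dual feasibility,
            return x                                     # small duality gap
        d = -D2 @ r                                      # scaled descent direction
        if (d >= 0).all():
            raise ValueError("LP appears unbounded")
        alpha = gamma * min(-xi / di for xi, di in zip(x, d) if di < 0)
        x = x + alpha * d                                # stay strictly interior
    return x
```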

    A Decomposition Algorithm for Nested Resource Allocation Problems

    We propose an exact polynomial algorithm for a resource allocation problem with convex costs and constraints on partial sums of resource consumptions, in the presence of either continuous or integer variables. No assumption of strict convexity or differentiability is needed. The method solves a hierarchy of resource allocation subproblems, whose solutions are used to convert constraints on sums of resources into bounds for separate variables at higher levels. The resulting time complexity for the integer problem is $O(n \log m \log(B/n))$, and the complexity of obtaining an $\epsilon$-approximate solution for the continuous case is $O(n \log m \log(B/\epsilon))$, $n$ being the number of variables, $m$ the number of ascending constraints (such that $m < n$), $\epsilon$ a desired precision, and $B$ the total resource. This algorithm attains the best-known complexity when $m = n$, and improves on it when $\log m = o(\log n)$. Extensive experimental analyses are conducted with four recent algorithms on various continuous problems arising from theory and practice. The proposed method achieves higher performance than previous algorithms, solving all problems with up to one million variables in less than one minute on a modern computer.
    Comment: Working Paper -- MIT, 23 pages
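
    The paper's method decomposes the nested problem into a hierarchy of simpler resource allocation subproblems. As an illustration of that building block only (not the decomposition itself), the sketch below gives the textbook greedy algorithm for the single-constraint integer case with convex costs; the helper `marginal` and the example data are assumptions for illustration.

```python
import heapq

def greedy_rap(marginal, n, B):
    """Textbook greedy for the simplest integer resource allocation subproblem
        min sum_i f_i(x_i)  s.t.  sum_i x_i = B,  x_i >= 0 integer,
    with each f_i convex: repeatedly give one unit to the variable with the
    cheapest marginal cost. marginal(i, x) returns f_i(x+1) - f_i(x); convexity
    makes these nondecreasing in x, which justifies the greedy exchange argument."""
    x = [0] * n
    heap = [(marginal(i, 0), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(B):
        _, i = heapq.heappop(heap)              # cheapest next unit
        x[i] += 1
        heapq.heappush(heap, (marginal(i, x[i]), i))
    return x

# Example: f_i(x) = (x - t_i)^2 with targets t = [3, 1, 5] and budget B = 6.
targets = [3, 1, 5]
marg = lambda i, x: (x + 1 - targets[i])**2 - (x - targets[i])**2
print(greedy_rap(marg, 3, 6))   # -> [2, 0, 4], shifting each target down equally
```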

    A Noninterior Path-Following Algorithm for Solving a Class of Multiobjective Programming Problems

    Multiobjective programming problems have been widely applied in areas ranging from the optimal design of automotive engines to economics and military strategy. In this paper, we propose a noninterior path-following algorithm to solve a class of multiobjective programming problems. Under suitable conditions, a smooth path is proven to exist; this yields a constructive proof of the existence of solutions and leads to an implementable, globally convergent algorithm. Several numerical examples illustrate the results of this paper.
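
    Noninterior (smoothing) path-following methods typically replace the complementarity condition $a \ge 0$, $b \ge 0$, $ab = 0$ with a smooth equation parametrized by $\mu > 0$ and trace its solutions as $\mu \to 0$. The sketch below illustrates this general idea on a toy scalar system using the Chen-Harker-Kanzow-Smale smoothing function; it is not the specific homotopy map constructed in the paper.

```python
import numpy as np

def chks(a, b, mu):
    """Chen-Harker-Kanzow-Smale smoothing function: chks(a, b, mu) = 0
    holds iff a > 0, b > 0 and a*b = mu**2, so driving mu -> 0 recovers
    the complementarity condition a >= 0, b >= 0, a*b = 0."""
    return a + b - np.sqrt((a - b)**2 + 4.0 * mu**2)

# Follow the smooth path for the toy system  a + b = 1, a >= 0, b >= 0, a*b = 0
# (its exact solutions are (a, b) = (1, 0) and (0, 1)).
a, mu = 0.75, 0.4
while mu > 1e-10:
    for _ in range(50):                         # Newton on F(a) = chks(a, 1-a, mu)
        s = np.sqrt((2.0*a - 1.0)**2 + 4.0 * mu**2)
        F, dF = 1.0 - s, -2.0 * (2.0*a - 1.0) / s
        if abs(F) < 1e-12:
            break
        a -= F / dF
    mu *= 0.5                                   # drive the smoothing parameter down
print(a, 1.0 - a)                               # approaches the solution (1, 0)
```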

    Hessian barrier algorithms for linearly constrained optimization problems

    In this paper, we propose an interior-point method for linearly constrained optimization problems (possibly nonconvex). The method - which we call the Hessian barrier algorithm (HBA) - combines a forward Euler discretization of Hessian Riemannian gradient flows with an Armijo backtracking step-size policy. In this way, HBA can be seen as an alternative to mirror descent (MD), and contains as special cases the affine scaling algorithm, regularized Newton processes, and several other iterative solution methods. Our main result is that, modulo a non-degeneracy condition, the algorithm converges to the problem's set of critical points; hence, in the convex case, the algorithm converges globally to the problem's minimum set. In the case of linearly constrained quadratic programs (not necessarily convex), we also show that the method's convergence rate is $\mathcal{O}(1/k^\rho)$ for some $\rho \in (0,1]$ that depends only on the choice of kernel function (i.e., not on the problem's primitives). These theoretical results are validated by numerical experiments on standard non-convex test functions and large-scale traffic assignment problems.
    Comment: 27 pages, 6 figures
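
    As an illustration of the affine scaling special case mentioned above, the following is a minimal sketch of an HBA-style update on the positive orthant with the log-barrier kernel $h(x) = -\sum_i \log x_i$, whose Hessian is $\mathrm{diag}(1/x_i^2)$: the Riemannian gradient step becomes an elementwise rescaled gradient step, combined with Armijo backtracking. Function names and constants are illustrative assumptions, not the authors' code.

```python
import numpy as np

def hba_orthant(grad_f, f, x, alpha0=1.0, beta=0.5, sigma=1e-4, iters=100):
    """HBA-style sketch on {x > 0} with the log-barrier kernel h(x) = -sum(log x):
    H(x) = diag(1/x_i^2), so the step is x+ = x - alpha * x**2 * grad_f(x)
    (elementwise), an affine-scaling-like update, with Armijo backtracking."""
    for _ in range(iters):
        g = grad_f(x)
        d = -(x**2) * g                        # -H(x)^{-1} grad f(x), elementwise
        alpha, fx = alpha0, f(x)
        while True:                            # Armijo backtracking line search
            x_new = x + alpha * d
            if (x_new > 0).all() and f(x_new) <= fx + sigma * alpha * (g @ d):
                break                          # feasible and sufficient decrease
            alpha *= beta
        x = x_new
    return x

# Toy use: minimize f(x) = ||x - t||^2 over x > 0, with t = [1.0, -2.0].
t = np.array([1.0, -2.0])
x = hba_orthant(lambda x: 2*(x - t), lambda x: np.sum((x - t)**2), x=np.ones(2))
print(x)    # first coordinate near 1, second pushed toward the boundary 0
```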

    Equilibrate Parametrization: Optimal Metric Selection with Provable One-iteration Convergence for $l_1$-minimization

    Incorporating a non-Euclidean variable metric into first-order algorithms is known to bring enhancement. However, owing to the lack of an optimal choice, this enhancement appears significantly underestimated. In this work, we establish a metric selection principle by optimizing a convergence-rate upper bound. For general $l_1$-minimization, we propose an optimal metric choice with guaranteed closed-form expressions. Equipped with such a variable metric, we prove that the optimal solution to the $l_1$ problem is obtained via a single proximal operator evaluation. Our technique applies to a large class of fixed-point algorithms, particularly the ADMM, which is popular, general, and requires minimal assumptions. The key to our success is the employment of an unscaled/equilibrate upper bound. We show that there exists an implicit scaling that poses a hidden obstacle to optimizing parameters; this turns out to be a fundamental issue induced by the classical parametrization, which always associates the parameter with the range of a function/operator. This is not a natural choice, causing certain symmetry losses, definition inconsistencies, and unnecessary complications, with the well-known Moreau identity being the best example. We propose equilibrate parametrization, which associates the parameter with the domain of a function, and with both the domain and range of a monotone operator. A series of powerful results is obtained owing to the new parametrization. Quite remarkably, the preconditioning technique can be shown to be equivalent to the metric selection issue.
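
    For intuition about how a variable metric enters the proximal step, the sketch below evaluates the proximal operator of $\lambda \|x\|_1$ under a diagonal metric $W = \mathrm{diag}(w)$, which reduces to elementwise soft-thresholding with per-coordinate thresholds $\lambda / w_i$. The weights here are placeholders; the paper's closed-form optimal metric is not reproduced.

```python
import numpy as np

def prox_l1_diag_metric(v, lam, w):
    """Proximal operator of lam*||x||_1 in the diagonal metric W = diag(w):
        argmin_x  lam*||x||_1 + 0.5 * sum_i w_i * (x_i - v_i)**2,
    i.e. elementwise soft-thresholding with per-coordinate threshold lam/w_i."""
    return np.sign(v) * np.maximum(np.abs(v) - lam / w, 0.0)

v = np.array([3.0, -0.2, 1.5])
print(prox_l1_diag_metric(v, lam=1.0, w=np.array([1.0, 2.0, 10.0])))
# -> [2.  -0.   1.4]; a larger w_i means weaker shrinkage on coordinate i
```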

    OSQP: An Operator Splitting Solver for Quadratic Programs

    We present a general-purpose solver for convex quadratic programs based on the alternating direction method of multipliers, employing a novel operator splitting technique that requires the solution of a quasi-definite linear system with the same coefficient matrix at almost every iteration. Our algorithm is very robust, placing no requirements on the problem data such as positive definiteness of the objective function or linear independence of the constraint functions. It can be configured to be division-free once an initial matrix factorization is carried out, making it suitable for real-time applications in embedded systems. In addition, our technique is the first operator splitting method for quadratic programs able to reliably detect primal and dual infeasible problems from the algorithm iterates. The method also supports factorization caching and warm starting, making it particularly efficient when solving parametrized problems arising in finance, control, and machine learning. Our open-source C implementation OSQP has a small footprint, is library-free, and has been extensively tested on many problem instances from a wide variety of application areas. It is typically ten times faster than competing interior-point methods, and sometimes much more when factorization caching or warm starting is used. OSQP has already shown a large impact with tens of thousands of users both in academia and in large corporations.
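
    A minimal usage sketch of the solver's open-source Python interface on a toy problem in OSQP's standard form (the data values are illustrative):

```python
import numpy as np
import scipy.sparse as sp
import osqp

# OSQP's standard form:
#   minimize   0.5 * x^T P x + q^T x
#   subject to l <= A x <= u
P = sp.csc_matrix([[4.0, 1.0], [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sp.csc_matrix([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

prob = osqp.OSQP()
prob.setup(P, q, A, l, u)          # factorizes the KKT system once
res = prob.solve()
print(res.info.status, res.x)

# Factorization caching pays off when only the vectors change, as when
# re-solving a parametrized problem with a new linear term:
prob.update(q=np.array([2.0, 3.0]))
res2 = prob.solve()
```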