
    Primal and dual multi-objective linear programming algorithms for linear multiplicative programmes

    Multiplicative programming problems (MPPs) are global optimization problems known to be NP-hard. In this paper, we employ algorithms developed to compute the entire set of nondominated points of multi-objective linear programmes (MOLPs) to solve linear MPPs. First, we improve our own objective space cut and bound algorithm for convex MPPs in the special case of linear MPPs by solving only one linear programme in each iteration, instead of two as in the previous version. We call this algorithm, which is based on Benson's outer approximation algorithm for MOLPs, the primal objective space algorithm. Then, based on the dual variant of Benson's algorithm, we propose a dual objective space algorithm for solving linear MPPs. The dual algorithm also requires solving only one linear programme in each iteration. We prove the correctness of the dual algorithm, and through computational experiments we compare our algorithms with a recent global optimization algorithm for linear MPPs from the literature, as well as with two general global optimization solvers, demonstrating the superiority of the new algorithms in terms of computation time. Thus, we show that multi-objective optimization techniques can be beneficial for solving difficult single-objective global optimization problems.
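
    As a rough illustration of the MOLP connection this abstract exploits, the sketch below approximately solves a linear MPP with two positive linear factors by scanning weighted-sum scalarizations of the associated bi-objective LP. The function name and weight grid are assumptions for illustration; this crude stand-in only samples what the paper's primal and dual objective space algorithms compute exactly.

```python
# Hypothetical sketch, not the paper's algorithm: for a linear MPP
#   min (c1^T x)(c2^T x)  s.t.  A x <= b, x >= 0,  with both factors > 0,
# the optimum is attained at a nondominated extreme point of the
# bi-objective LP  min (c1^T x, c2^T x).  Scanning weighted-sum
# scalarizations over a finite weight grid samples those extreme points.
import numpy as np
from scipy.optimize import linprog

def linear_mpp_by_weight_scan(c1, c2, A, b, n_weights=101):
    best_x, best_val = None, np.inf
    for lam in np.linspace(0.0, 1.0, n_weights):
        # Each weighted-sum LP optimum is a (weakly) efficient point of
        # the bi-objective LP; linprog's default bounds enforce x >= 0.
        res = linprog(lam * c1 + (1.0 - lam) * c2, A_ub=A, b_ub=b)
        if res.status == 0:
            val = (c1 @ res.x) * (c2 @ res.x)
            if val < best_val:
                best_x, best_val = res.x, val
    return best_x, best_val
```

    Along any edge of the nondominated set one factor increases while the other decreases, so their product is concave there and attains its minimum at an endpoint; this is why restricting attention to extreme nondominated points suffices for linear MPPs.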

    Multiplicative Updates for Nonnegative Quadratic Programming

    Many problems in neural computation and statistical learning involve optimizations with nonnegativity constraints. In this article, we study convex problems in quadratic programming where the optimization is confined to an axis-aligned region in the nonnegative orthant. For these problems, we derive multiplicative updates that improve the value of the objective function at each iteration and converge monotonically to the global minimum. The updates have a simple closed form and do not involve any heuristics or free parameters that must be tuned to ensure convergence. Despite their simplicity, they differ strikingly in form from other multiplicative updates used in machine learning. We provide complete proofs of convergence for these updates and describe their application to problems in signal processing and pattern recognition.
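
    For concreteness, here is a minimal sketch of a multiplicative update of the kind described, written for the assumed standard form min F(v) = (1/2) v^T A v + b^T v subject to v >= 0, with A split into its positive and negative parts. The function name, iteration count, and epsilon guard are illustrative, not taken from the article.

```python
# Minimal sketch in the assumed standard form
#   min F(v) = 0.5 * v^T A v + b^T v   subject to v >= 0,
# with symmetric A split as A = Ap - Am, where Ap holds the positive
# entries of A and Am the magnitudes of the negative ones.  The update
# has a simple closed form with no step size or other free parameter.
import numpy as np

def multiplicative_qp(A, b, v0, n_iter=500):
    Ap = np.maximum(A, 0.0)   # positive part of A
    Am = np.maximum(-A, 0.0)  # magnitudes of the negative entries
    v = np.asarray(v0, dtype=float).copy()
    for _ in range(n_iter):
        p = Ap @ v
        m = Am @ v
        # Elementwise multiplicative factor; the small epsilon guards
        # the division when a component of Ap @ v is zero.
        v = v * (-b + np.sqrt(b * b + 4.0 * p * m)) / (2.0 * p + 1e-12)
    return v
```

    Since sqrt(b_i^2 + 4 p_i m_i) >= |b_i|, the multiplicative factor is always nonnegative, so an iterate started in the nonnegative orthant stays there.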

    Computing Optimal Experimental Designs via Interior Point Method

    In this paper, we study optimal experimental design problems with a broad class of smooth convex optimality criteria, including the classical A-, D-, and pth mean criteria. In particular, we propose an interior point (IP) method for them and establish its global convergence. Furthermore, by exploiting the structure of the Hessian matrix of the aforementioned optimality criteria, we derive an explicit formula for computing its rank. Using this result, we then show that the Newton direction arising in the IP method can be computed efficiently via the Sherman-Morrison-Woodbury formula when the size of the moment matrix is small relative to the sample size. Finally, we compare our IP method with the widely used multiplicative algorithm introduced by Silvey et al. [29]. The computational results show that the IP method generally outperforms the multiplicative algorithm in both speed and solution quality.
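
    To make the comparison concrete, the sketch below implements the multiplicative algorithm used as the baseline here, specialized to D-optimality (the general algorithm covers other criteria as well); the function name, uniform starting design, and iteration count are assumptions for illustration.

```python
# Sketch of the multiplicative baseline, specialized to D-optimal
# design: given design points x_1..x_n (the rows of X), find weights
# w >= 0 with sum(w) = 1 maximizing log det M(w), where
#   M(w) = sum_i w_i x_i x_i^T.
import numpy as np

def d_optimal_multiplicative(X, n_iter=200):
    n, m = X.shape
    w = np.full(n, 1.0 / n)          # start from the uniform design
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)   # moment matrix M(w)
        # d_i = x_i^T M(w)^{-1} x_i, the prediction variance at x_i
        d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)
        w = w * d / m                # update preserves sum(w) = 1
    return w
```

    The update preserves the simplex constraint because sum_i w_i d_i = trace(M^{-1} M) = m; by the general equivalence theorem, max_i d_i = m at a D-optimal design, which makes a convenient stopping check.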