
    Depth-first simplicial partition for copositivity detection, with an application to MaxClique

    Detection of copositivity plays an important role in combinatorial and quadratic optimization. Recently, an algorithm for copositivity detection by simplicial partition was proposed. In this paper, we develop an improved depth-first simplicial partition algorithm that reduces memory requirements significantly and therefore enables copositivity checks of much larger matrices – of size up to a few thousand instead of a few hundred. The algorithm has been investigated experimentally on a number of MaxClique problems as well as on generated random problems. We present numerical results showing that the algorithm is much faster than a recently published linear algebraic algorithm for copositivity detection based on the traditional idea of checking properties of principal submatrices. We also show that the algorithm works very well for solving MaxClique problems through copositivity checks.
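
    The partition idea is straightforward to prototype. Below is a minimal Python sketch of a depth-first simplicial-partition check built on the standard vertex criteria (if V A Vᵀ is entrywise nonnegative on a sub-simplex, the quadratic form is nonnegative there; a negative diagonal entry certifies non-copositivity) together with longest-edge bisection. It illustrates the general scheme under those assumptions; it is not the paper's implementation, and the function name, tolerance and depth budget are illustrative.

    import numpy as np

    def is_copositive(A, tol=1e-8, max_depth=40):
        """Depth-first simplicial-partition copositivity check (sketch).

        On a sub-simplex with vertex rows V, entrywise nonnegativity of
        V @ A @ V.T proves the form nonnegative there, while a negative
        diagonal entry exhibits a violating point.  Undecided simplices
        are bisected along their longest edge and explored depth-first,
        so memory grows only with the recursion depth.
        """
        n = A.shape[0]
        # Start from the standard simplex: vertices are the unit vectors.
        stack = [(np.eye(n), 0)]
        while stack:
            V, depth = stack.pop()
            Q = V @ A @ V.T                      # pairwise values v_i^T A v_j
            if np.diag(Q).min() < -tol:
                return False                     # vertex with x^T A x < 0
            if Q.min() >= -tol:
                continue                         # nonnegative on this simplex
            if depth >= max_depth:
                return None                      # undecided within the budget
            # Bisect the longest edge (i, j) at its midpoint.
            i, j = max(((a, b) for a in range(n) for b in range(a + 1, n)),
                       key=lambda e: np.linalg.norm(V[e[0]] - V[e[1]]))
            mid = 0.5 * (V[i] + V[j])
            for k in (i, j):
                W = V.copy()
                W[k] = mid
                stack.append((W, depth + 1))
        return True

    Strictly copositive and non-copositive matrices are decided after finitely many subdivisions; matrices on the boundary of the copositive cone may exhaust the depth budget, which is why the sketch can return an undecided result.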

    Copositivity and constrained fractional quadratic programs

    Abstract: We provide completely positive and copositive optimization formulations for the Constrained Fractional Quadratic Problem (CFQP) and the Standard Fractional Quadratic Problem (StFQP). Based on these formulations, semidefinite programming (SDP) relaxations are derived for finding good lower bounds for these fractional programs, which can be used in a global optimization branch-and-bound approach. Applications of the CFQP and StFQP, related to the correction of infeasible linear systems and to eigenvalue complementarity problems, are also discussed.
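
    As a rough illustration of how such relaxations produce lower bounds, the sketch below takes a generic completely positive formulation of the homogenized StFQP, min <A, X> subject to <B, X> = 1 with X completely positive, and relaxes the completely positive cone to the doubly nonnegative cone (X positive semidefinite and entrywise nonnegative). This follows the standard relaxation pattern only; it assumes B is entrywise positive so the denominator is positive on the simplex, and it is not necessarily the paper's exact formulation. The function name is illustrative.

    import cvxpy as cp

    def stfqp_dnn_lower_bound(A, B):
        """Doubly nonnegative (DNN) relaxation bound for
        min { x'Ax / x'Bx : x in the standard simplex } (sketch)."""
        n = A.shape[0]
        X = cp.Variable((n, n), symmetric=True)
        constraints = [X >> 0,                  # positive semidefinite
                       X >= 0,                  # entrywise nonnegative
                       cp.trace(B @ X) == 1]    # normalized denominator
        prob = cp.Problem(cp.Minimize(cp.trace(A @ X)), constraints)
        prob.solve()
        return prob.value

    Since the doubly nonnegative cone contains the completely positive cone, the optimal value of this relaxation is a valid lower bound, which is what a branch-and-bound scheme consumes at each node.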

    Proceedings of the XIII Global Optimization Workshop: GOW'16

    [Excerpt] Preface: Past Global Optimization Workshops have been held in Sopron (1985 and 1990), Szeged (WGO, 1995), Florence (GO’99, 1999), Hanmer Springs (Let’s GO, 2001), Santorini (Frontiers in GO, 2003), San José (GO’05, 2005), Mykonos (AGO’07, 2007), Skukuza (SAGO’08, 2008), Toulouse (TOGO’10, 2010), Natal (NAGO’12, 2012) and Málaga (MAGO’14, 2014), with the aim of stimulating discussion between senior and junior researchers on the topic of global optimization. In 2016, the XIII Global Optimization Workshop (GOW’16) takes place in Braga and is organized by three researchers from the University of Minho. Two of them belong to the Systems Engineering and Operational Research Group of the Algoritmi Research Centre, and the other to the Statistics, Applied Probability and Operational Research Group of the Centre of Mathematics. The event received more than 50 submissions from 15 countries in Europe, South America and North America. We want to express our gratitude to the invited speaker Panos Pardalos for accepting the invitation and sharing his expertise, helping us to meet the workshop objectives. GOW’16 would not have been possible without the valuable contributions of the authors and the International Scientific Committee members. We thank you all. This proceedings book intends to present an overview of the topics that will be addressed in the workshop, with the goal of contributing to interesting and fruitful discussions between the authors and participants. After the event, high-quality papers can be submitted to a special issue of the Journal of Global Optimization dedicated to the workshop. [...]

    Copositivity tests based on the linear complementarity problem

    Copositivity tests are presented based on new necessary and sufficient conditions requiring the solution of linear complementarity problems (LCPs). Methodologies involving Lemke's method, an enumerative algorithm and a linear mixed-integer programming formulation are proposed to solve the required LCPs. A new necessary condition for (strict) copositivity based on solving a linear program (LP) is also discussed, which can be used as a preprocessing step. The algorithms with these three different variants are thoroughly applied to test matrices from the literature and to max-clique instances with matrices of dimension up to 496 × 496. We compare our procedures with three other copositivity tests from the literature as well as with a general global optimization solver. The numerical results are very promising: they are as good as, and in many cases better than, the results reported elsewhere.

    Mathematics subject classifications (MSC 2010): 15B48 Positive matrices and their generalizations, cones of matrices; 90C33 Complementarity and equilibrium problems and variational inequalities (finite dimensions); 65F30 Other matrix algorithms; 65K99 None of the above, but in this section; 90C26 Nonconvex programming, global optimization
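
    For context, the criterion behind such tests can be stated through the standard quadratic program: a symmetric A is copositive if and only if min { x'Ax : x >= 0, e'x = 1 } is nonnegative, and the KKT system of this program is itself an LCP, which is the connection LCP-based tests exploit. The sketch below merely searches for a violating point with a local solver from random starts; it is a hypothetical illustration of the criterion, not the paper's Lemke, enumerative, or mixed-integer procedures.

    import numpy as np
    from scipy.optimize import minimize

    def copositivity_heuristic(A, n_starts=20, tol=1e-9, seed=0):
        """Search for x in the simplex with x'Ax < 0 (sketch).

        Returns (False, x) when a violating point is found, so a False
        answer is a certificate of non-copositivity; (True, None) only
        means no violation was found and is not a proof.
        """
        n = A.shape[0]
        rng = np.random.default_rng(seed)
        obj = lambda x: x @ A @ x
        grad = lambda x: 2 * A @ x
        cons = [{"type": "eq", "fun": lambda x: x.sum() - 1,
                 "jac": lambda x: np.ones(n)}]
        for _ in range(n_starts):
            x0 = rng.dirichlet(np.ones(n))      # random simplex point
            res = minimize(obj, x0, jac=grad, bounds=[(0, 1)] * n,
                           constraints=cons, method="SLSQP")
            if res.fun < -tol:
                return False, res.x
        return True, None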

    On the Exhaustivity of Simplicial Partitioning

    Abstract: During the last 40 years, simplicial partitioning has shown itself to be highly useful, including in the field of nonlinear optimisation. In this article, we consider results on the exhaustivity of simplicial partitioning schemes. We consider conjectures on this exhaustivity which seem at first glance to be true (two of which have been stated as true in published articles). However, we provide counterexamples to these conjectures. We also provide a new simplicial partitioning scheme which offers considerable freedom whilst guaranteeing exhaustivity. Mathematics Subject Classification: 65K99; 90C2
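
    For contrast with the schemes the counterexamples defeat, longest-edge bisection is the classical subdivision rule usually invoked when exhaustivity is needed: along any nested chain of sub-simplices it produces, diameters converge to zero. The sketch below (a minimal illustration, not the new scheme proposed in the article) performs one bisection step and demonstrates the shrinking diameter.

    import numpy as np

    def bisect_longest_edge(V):
        """Split the simplex with vertex rows V into two sub-simplices
        by bisecting its longest edge at the midpoint (sketch)."""
        m = V.shape[0]
        i, j = max(((a, b) for a in range(m) for b in range(a + 1, m)),
                   key=lambda e: np.linalg.norm(V[e[0]] - V[e[1]]))
        mid = 0.5 * (V[i] + V[j])
        left, right = V.copy(), V.copy()
        left[i], right[j] = mid, mid
        return left, right

    # Diameters along a nested chain shrink toward zero:
    V = np.eye(3)                       # standard 2-simplex in R^3
    for _ in range(12):
        V, _ = bisect_longest_edge(V)   # follow one child depth-first
    diam = max(np.linalg.norm(V[a] - V[b]) for a in range(3) for b in range(3))
    print(f"diameter after 12 bisections: {diam:.4f}")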

    Approximation Algorithms for Mixed Integer Non-Linear Optimization Problems

    Get PDF
    Mixed integer non-linear optimization (MINLO) problems are usually NP-hard. Although obtaining feasible solutions is relatively easy via heuristic or local search methods, it is still challenging to guarantee the quality (i.e., the gap to the optimal value) of a given feasible solution in a tractable fashion, even under mild assumptions. In this thesis, we propose efficient mixed integer linear programming based algorithms for finding feasible solutions and proving the quality of these solutions for three widely applied MINLO problems.

    In Chapter 1, we study the sparse principal component analysis (SPCA) problem. SPCA is a dimensionality reduction tool in statistics. Compared with classical principal component analysis (PCA), SPCA enhances interpretability by incorporating an additional sparsity constraint on the feature weights (factor loadings). However, unlike PCA, solving the SPCA problem to optimality is NP-hard. Most conventional methods for SPCA are heuristics with no guarantees, such as certificates of optimality on the solution quality via associated dual bounds. We present a convex integer programming (IP) framework to derive dual bounds based on the ℓ1-relaxation of SPCA. We show a theoretical worst-case guarantee for the dual bounds provided by the convex IP. Based on numerical results, we empirically illustrate that our convex IP framework outperforms existing SPCA methods in both accuracy and efficiency of finding dual bounds. Moreover, the dual bounds obtained in computations are significantly better than the worst-case theoretical guarantees.

    Chapter 2 focuses on solving a non-trivial generalization of SPCA: the row sparse principal component analysis (rsPCA) problem. Solving rsPCA means finding the top-r leading principal components of a covariance matrix such that all these principal components share the same support set with cardinality at most k. In this chapter, we propose: (a) a convex integer programming relaxation of rsPCA that gives upper (dual) bounds for rsPCA, and (b) a new local search algorithm for finding primal feasible solutions for rsPCA. We also show that, in the worst case, the dual bounds provided by the convex IP are within an affine function of the optimal value. We demonstrate our techniques applied to large-scale covariance matrices.

    In Chapter 3, we consider a fundamental training problem of finding the best-fitting ReLU with respect to the square loss, also called "ReLU regression" in machine learning. We begin by proving the NP-hardness of ReLU regression. We then present an approximation algorithm to solve it, whose running time is O(n^k), where n is the number of samples and k is a predefined integer constant serving as an algorithm parameter. We analyze the performance of this algorithm under two regimes and show that: (1) given an arbitrary set of training samples, the algorithm guarantees an (n/k)-approximation for the ReLU regression problem; to the best of our knowledge, this is the first algorithm that guarantees an approximation ratio for an arbitrary data scenario; thus, in the ideal case (i.e., when the training error is zero) the approximation algorithm achieves the globally optimal solution for the ReLU regression problem; and (2) given training samples with Gaussian noise, the same approximation algorithm achieves a much better asymptotic approximation ratio that is independent of the number of samples n. Extensive numerical studies show that our approximation algorithm can perform better than the classical gradient descent algorithm in ReLU regression. Moreover, numerical results also imply that the proposed approximation algorithm could provide a good initialization for gradient descent and significantly improve performance.
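
    As a point of reference for the comparison in the last paragraph, a minimal NumPy sketch of the classical gradient-descent baseline for ReLU regression is given below; the loss is the mean of (max(0, x_i'w) - y_i)^2. This is a generic baseline, not the O(n^k) approximation algorithm of Chapter 3, and the function name and hyperparameters are illustrative.

    import numpy as np

    def relu_regression_gd(X, y, w0=None, lr=1e-2, iters=5000, seed=0):
        """Gradient descent on the mean square loss of a ReLU fit (sketch).

        Passing w0 (e.g., the output of a stronger method) instead of a
        random start is the initialization strategy the abstract alludes to.
        """
        n, d = X.shape
        rng = np.random.default_rng(seed)
        w = rng.normal(scale=1.0 / np.sqrt(d), size=d) if w0 is None else w0.copy()
        for _ in range(iters):
            z = X @ w
            r = np.maximum(z, 0.0) - y          # residuals of the ReLU fit
            g = X.T @ (2.0 * r * (z > 0)) / n   # gradient of the mean loss
            w -= lr * g
        return w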