
    On the Complexity of Optimization over the Standard Simplex

    We review complexity results for minimizing polynomials over the standard simplex and unit hypercube. In addition, we show that there exists a polynomial-time approximation scheme (PTAS) for minimizing Lipschitz continuous functions and functions with uniformly bounded Hessians over the standard simplex. This extends an earlier result by De Klerk, Laurent and Parrilo [A PTAS for the minimization of polynomials of fixed degree over the simplex, Theoretical Computer Science, to appear].
    Keywords: global optimization; standard simplex; PTAS; multivariate Bernstein approximation; semidefinite programming
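    To fix notation for the abstracts that follow, a brief LaTeX note is included below; the relative-accuracy criterion is the one typically used in this line of work and is stated here as background, not quoted from the paper.

```latex
% Standard simplex and the relative-accuracy notion behind the PTAS results
% (background sketch; notation is generic, not the paper's own).
\[
  \Delta_n = \Bigl\{ x \in \mathbb{R}^n_{+} : \sum_{i=1}^n x_i = 1 \Bigr\},
  \qquad
  f_{\min} = \min_{x \in \Delta_n} f(x), \qquad
  f_{\max} = \max_{x \in \Delta_n} f(x).
\]
A PTAS in this setting means that, for every fixed $\epsilon > 0$, one can compute
in polynomial time a point $x_\epsilon \in \Delta_n$ with
\[
  f(x_\epsilon) - f_{\min} \;\le\; \epsilon \,\bigl(f_{\max} - f_{\min}\bigr).
\]
```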

    A refined error analysis for fixed-degree polynomial optimization over the simplex

    We consider the problem of minimizing a fixed-degree polynomial over the standard simplex. This problem is well known to be NP-hard, since it contains the maximum stable set problem in combinatorial optimization as a special case. In this paper, we revisit a known upper bound obtained by taking the minimum value on a regular grid, and a known lower bound based on Pólya's representation theorem. More precisely, we consider the difference between these two bounds and we provide upper bounds for this difference in terms of the range of function values. Our results refine the known upper bounds in the quadratic and cubic cases, and they asymptotically refine the known upper bound in the general case. Comment: 13 pages
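    A schematic statement of the two bounds compared above may help; the symbols below are placeholders rather than the paper's own notation.

```latex
% Grid upper bound versus Polya-type lower bound (schematic placeholders).
\[
  \Delta(n,r) = \{ x \in \Delta_n : r x \in \mathbb{N}^n \},
  \qquad
  f_{\Delta(n,r)} := \min_{x \in \Delta(n,r)} f(x) \;\ge\; f_{\min} \;\ge\; f^{(r)}_{\mathrm{lb}},
\]
where $f^{(r)}_{\mathrm{lb}}$ denotes the lower bound obtained from P\'olya's
representation theorem. The error analysis bounds the gap between the two by a
multiple of the range of function values,
\[
  f_{\Delta(n,r)} - f^{(r)}_{\mathrm{lb}} \;\le\; c(n,d,r)\,\bigl(f_{\max} - f_{\min}\bigr),
\]
with a factor $c(n,d,r)$, depending on the dimension $n$, the degree $d$ and the
grid parameter $r$, that the paper sharpens.
```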

    An error analysis for polynomial optimization over the simplex based on the multivariate hypergeometric distribution

    We study the minimization of fixed-degree polynomials over the simplex. This problem is well known to be NP-hard, as it contains the maximum stable set problem in graph theory as a special case. In this paper, we consider a rational approximation obtained by taking the minimum over the regular grid, which consists of rational points with denominator r (for given r). We show that the associated convergence rate is O(1/r^2) for quadratic polynomials. For general polynomials, if there exists a rational global minimizer over the simplex, we show that the convergence rate is also of the order O(1/r^2). Our results answer a question posed by De Klerk et al. (2013) and improve on previously known O(1/r) bounds in the quadratic case. Comment: 17 pages
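    To make the regular-grid construction concrete, here is a minimal Python sketch (the quadratic form and the dimension are hypothetical test data, not taken from the paper) that enumerates the grid of simplex points with denominator r and reports the grid minimum, the upper bound whose O(1/r^2) convergence is analyzed above.

```python
import itertools
import numpy as np

def grid_minimum(f, n, r):
    """Minimum of f over the regular grid: points of the standard simplex
    whose coordinates are integer multiples of 1/r."""
    best = np.inf
    # Enumerate all n-tuples of non-negative integers summing to r.
    for k in itertools.product(range(r + 1), repeat=n):
        if sum(k) == r:
            best = min(best, f(np.array(k) / r))
    return best

# Hypothetical quadratic form on the 3-dimensional simplex.
Q = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.5, 0.3],
              [0.0, 0.3, 2.0]])
f = lambda x: x @ Q @ x

for r in (2, 4, 8, 16):
    print(r, grid_minimum(f, n=3, r=r))
```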

    Convergence analysis for Lasserre's measure-based hierarchy of upper bounds for polynomial optimization

    We consider the problem of minimizing a continuous function f over a compact set K. We analyze a hierarchy of upper bounds proposed by Lasserre in [SIAM J. Optim. 21(3) (2011), pp. 864–885], obtained by searching for an optimal probability density function.
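    The abstract refers to Lasserre's measure-based upper bounds; a sketch of that hierarchy, as it is usually stated, is given below (the choice of reference measure and the degree indexing may differ from the paper's presentation).

```latex
% Lasserre's measure-based upper bounds (sketch; details may differ from the paper).
\[
  f^{(r)} \;=\; \min_{\sigma \in \Sigma_r} \int_K f(x)\,\sigma(x)\,d\mu(x)
  \quad \text{subject to} \quad \int_K \sigma(x)\,d\mu(x) = 1,
\]
where $\mu$ is a fixed reference measure on $K$ and $\Sigma_r$ denotes the
sum-of-squares polynomials of degree at most $2r$. Each $f^{(r)}$ is an upper
bound on $f_{\min} = \min_{x \in K} f(x)$, and the bounds decrease to $f_{\min}$
as $r \to \infty$.
```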

    Combining Kernel Functions in Supervised Learning Models.

    The research activity has mainly dealt with supervised Machine Learning algorithms, specifically within the context of kernel methods. A kernel function is a positive definite function mapping data from the original input space into a higher-dimensional Hilbert space. Unlike classical linear methods, where problems are solved by seeking a linear function that separates points in the input space, kernel methods share the same basic approach: the original input data are mapped into a higher-dimensional feature space in which the new coordinates are never computed explicitly, only the inner products between input points. In this way, kernel methods make it possible to handle non-linearly separable data sets while still using linear models, i.e. Machine Learning methods that fit the data with a linear function, in the feature space. Instead of employing one single kernel function, Multiple Kernel Learning algorithms tackle the problem of kernel selection by using a combination of preset base kernels, and Infinite Kernel Learning further extends this idea by allowing a combination of possibly infinitely many base kernels. The core idea of the research activity is to employ a novel combination of kernel functions within existing or modified supervised Machine Learning frameworks. Specifically, we considered two frameworks: the Extreme Learning Machine, which has the structure of a classical feedforward Neural Network but with the hidden-node variables assigned randomly at the start of the algorithm, and the Support Vector Machine, a class of linear algorithms based on separating the data with a hyperplane of maximal margin. The first proposed model extends the classical Extreme Learning Machine formulation with a combination of possibly infinitely many base kernels, via a two-step algorithm. The second result uses a preexisting multi-task kernel function in a novel Support Vector Machine framework. Multi-task learning is the Machine Learning problem of solving more than one task at the same time, with the main goal of exploiting the relationships among the tasks. To be able to use the existing multi-task kernel function, we constructed a new framework based on the classical Support Vector Machine that accounts for every multi-task correlation factor.
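    Since the abstract centres on combining preset base kernels rather than using a single kernel, the following minimal Python sketch illustrates the basic mechanism (assuming scikit-learn; the weights, base kernels and toy data are illustrative, not the thesis's model): a convex combination of positive-definite kernels is again a valid kernel and can be passed to an SVM as a precomputed Gram matrix.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(60, 5)), rng.integers(0, 2, 60)
X_test = rng.normal(size=(10, 5))

def combined_kernel(A, B, w=0.5, gamma=0.5):
    # A convex combination of two positive-definite base kernels is itself
    # positive definite, so it can be used as a precomputed SVM kernel.
    return w * rbf_kernel(A, B, gamma=gamma) + (1 - w) * linear_kernel(A, B)

clf = SVC(kernel="precomputed")
clf.fit(combined_kernel(X_train, X_train), y_train)    # Gram matrix on training data
pred = clf.predict(combined_kernel(X_test, X_train))   # cross-Gram matrix for prediction
```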