
    A new Lenstra-type Algorithm for Quasiconvex Polynomial Integer Minimization with Complexity 2^O(n log n)

    We study the integer minimization of a quasiconvex polynomial with quasiconvex polynomial constraints. We propose a new algorithm that improves upon the best known algorithm, due to Heinz (Journal of Complexity, 2005). The improvement is achieved by applying a modern Lenstra-type algorithm, finding optimal ellipsoid roundings, and considering sparse encodings of polynomials. In the bounded case, our algorithm attains a time complexity of s (r l M d)^{O(1)} 2^{2n log_2(n) + O(n)}, where M is a bound on the number of monomials in each polynomial and r is the binary encoding length of a bound on the feasible region. In the general case, the complexity is s l^{O(1)} d^{O(n)} 2^{2n log_2(n) + O(n)}. In both cases we assume d >= 2 is a bound on the total degree of the polynomials and l bounds the maximum binary encoding size of the input. Comment: 28 pages, 10 figures
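    For scale, note that naive exhaustive enumeration runs in time exponential in the bound on the feasible region itself, rather than in n log n. The sketch below is that baseline on a toy quasiconvex instance (illustrative names and instance; this is not the paper's algorithm):

    ```python
    # Naive baseline: enumerate all integer points of the box [-B, B]^n, i.e.
    # (2B+1)^n candidates -- exponential in the region bound, unlike the
    # 2^{2n log_2(n) + O(n)}-type bounds of the Lenstra-type algorithm.
    import itertools

    def brute_force_min(f, constraints, n, B):
        """Minimize f over integer x in [-B, B]^n with g(x) <= 0 for all g."""
        best = (float("inf"), None)
        for x in itertools.product(range(-B, B + 1), repeat=n):
            if all(g(x) <= 0 for g in constraints):
                best = min(best, (f(x), x))
        return best

    # Toy instance: minimize the quasiconvex (x0^2 + x1^2)^2 s.t. x0 + x1 >= 1.
    f = lambda x: (x[0] ** 2 + x[1] ** 2) ** 2
    g = lambda x: 1 - x[0] - x[1]              # g(x) <= 0  <=>  x0 + x1 >= 1
    print(brute_force_min(f, [g], n=2, B=5))   # -> (1, (0, 1))
    ```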

    Minimizing Cubic and Homogeneous Polynomials over Integers in the Plane

    We complete the complexity classification by degree of minimizing a polynomial over the integer points in a polyhedron in $\mathbb{R}^2$. Previous work shows that optimizing a quadratic polynomial over the integer points in a polyhedral region in $\mathbb{R}^2$ can be done in polynomial time, while optimizing a quartic polynomial in the same type of region is NP-hard. We close the gap by showing that this problem can be solved in polynomial time for cubic polynomials. Furthermore, we show that the problem of minimizing a homogeneous polynomial of any fixed degree over the integer points in a bounded polyhedron in $\mathbb{R}^2$ is solvable in polynomial time. We show that this holds for polynomials that can be translated into homogeneous polynomials, even when the translation vector is unknown. We demonstrate that such problems in the unbounded case can have smallest optimal solutions of exponential size in the size of the input, thus requiring a compact representation of solutions for a general polynomial-time algorithm for the unbounded case.
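    As a concrete picture of the problem setting (not of the paper's polynomial-time method), the sketch below enumerates the integer points of a small bounded polyhedron in $\mathbb{R}^2$ and minimizes a cubic over them; the polyhedron, objective, and names are illustrative:

    ```python
    def integer_points(A, b, lo, hi):
        """Integer points of {x in Z^2 : Ax <= b} inside the box [lo, hi]^2."""
        for x0 in range(lo, hi + 1):
            for x1 in range(lo, hi + 1):
                if all(a0 * x0 + a1 * x1 <= bi for (a0, a1), bi in zip(A, b)):
                    yield (x0, x1)

    # Cubic f(x) = x0^3 - 3*x0*x1 + x1^3 over the triangle
    # {x : x0 >= 0, x1 >= 0, x0 + x1 <= 4}.
    A = [(-1, 0), (0, -1), (1, 1)]
    b = [0, 0, 4]
    f = lambda x: x[0] ** 3 - 3 * x[0] * x[1] + x[1] ** 3
    print(min((f(p), p) for p in integer_points(A, b, 0, 4)))  # -> (-1, (1, 1))
    ```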

    Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods

    The convex feasibility problem (CFP) is at the core of the modeling of many problems in various areas of science. Subgradient projection methods are important tools for solving the CFP because they enable the use of subgradient calculations instead of orthogonal projections onto the individual sets of the problem. Working in a real Hilbert space, we show that the sequential subgradient projection method is perturbation resilient. By this we mean that under appropriate conditions the sequence generated by the method converges weakly, and sometimes also strongly, to a point in the intersection of the given subsets of the feasibility problem, despite certain perturbations which are allowed in each iterative step. Unlike previous works on solving the convex feasibility problem, the involved functions, which induce the feasibility problem's subsets, need not be convex. Instead, we allow them to belong to a wider and richer class of functions satisfying a weaker condition, which we call "zero-convexity". This class, which is introduced and discussed here, holds promise for solving optimization problems in various areas, especially in non-smooth and non-convex optimization. The relevance of this study to approximate minimization and to the recent superiorization methodology for constrained optimization is explained. Comment: Mathematical Programming Series A, accepted for publication
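    For orientation, here is a minimal sketch of the classical cyclic (sequential) subgradient projection iteration, assuming convex f_i with known subgradients; the paper's generalization to zero-convex functions and its analysis under per-iteration perturbations are not reproduced, and all names are illustrative:

    ```python
    # Seek x with f_i(x) <= 0 for all i, i.e. a point in the intersection of the
    # sublevel sets C_i = {x : f_i(x) <= 0}, by cycling through the constraints.
    import numpy as np

    def cyclic_subgradient_projections(fs, subgrads, x0, relax=1.0, iters=200):
        x = np.asarray(x0, dtype=float)
        for k in range(iters):
            i = k % len(fs)
            val = fs[i](x)
            if val > 0:                       # step only when x is outside C_i
                g = subgrads[i](x)
                x = x - relax * (val / np.dot(g, g)) * g
        return x

    # Example: intersect the half-plane {x0 + x1 <= 1} with the unit disk.
    fs = [lambda x: x[0] + x[1] - 1,
          lambda x: np.dot(x, x) - 1]
    subgrads = [lambda x: np.array([1.0, 1.0]),   # gradient of x0 + x1 - 1
                lambda x: 2 * x]                  # gradient of ||x||^2 - 1
    print(cyclic_subgradient_projections(fs, subgrads, x0=[3.0, 3.0]))
    ```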

    Design of First-Order Optimization Algorithms via Sum-of-Squares Programming

    In this paper, we propose a framework based on sum-of-squares programming to design iterative first-order optimization algorithms for smooth and strongly convex problems. Our starting point is to develop a polynomial matrix inequality as a sufficient condition for exponential convergence of the algorithm. The entries of this matrix are polynomial functions of the unknown parameters (exponential decay rate, stepsize, momentum coefficient, etc.). We then formulate a polynomial optimization problem in which the objective is to optimize the exponential decay rate over the parameters of the algorithm. Finally, we use sum-of-squares programming as a tractable relaxation of the proposed polynomial optimization problem. We illustrate the utility of the proposed framework by designing a first-order algorithm that shares the same structure as Nesterov's accelerated gradient method.
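    The template being tuned can be pictured as below: a two-parameter momentum method with the same structure as Nesterov's accelerated gradient. Here the stepsize and momentum coefficient are hard-coded to textbook values for an L-smooth, m-strongly convex objective rather than obtained from the sum-of-squares program, and the test problem is illustrative:

    ```python
    import numpy as np

    def accelerated_gradient(grad, x0, L, m, iters=100):
        alpha = 1.0 / L                                      # stepsize
        kappa = L / m
        beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # momentum coefficient
        x = y = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x_next = y - alpha * grad(y)        # gradient step at extrapolated point
            y = x_next + beta * (x_next - x)    # momentum extrapolation
            x = x_next
        return x

    # Quadratic test problem f(x) = 0.5 * x^T diag(1, 10) x, so m = 1, L = 10.
    grad = lambda x: np.array([1.0, 10.0]) * x
    print(accelerated_gradient(grad, x0=[5.0, 5.0], L=10.0, m=1.0))  # -> ~[0, 0]
    ```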

    Nonlinear Optimization over a Weighted Independence System

    We consider the problem of optimizing a nonlinear objective function over a weighted independence system presented by a linear-optimization oracle. We provide a polynomial-time algorithm that determines an r-best solution for nonlinear functions of the total weight of an independent set, where r is a constant that depends on certain Frobenius numbers of the individual weights and is independent of the size of the ground set. In contrast, we show that finding an optimal (0-best) solution requires exponential time even in a very special case of the problem.
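    For background on the Frobenius numbers that determine r: for two coprime positive integer weights a and b, the largest total weight not representable as a nonnegative integer combination of them has the closed form ab - a - b. A small sketch (illustrative names; unrelated to the paper's oracle model):

    ```python
    from math import gcd

    def frobenius_two(a, b):
        """Largest integer not of the form x*a + y*b with x, y >= 0 (a, b coprime)."""
        assert gcd(a, b) == 1, "closed form requires coprime weights"
        return a * b - a - b

    def representable(a, b, limit):
        """All totals x*a + y*b (x, y >= 0) up to `limit`."""
        return sorted({x * a + y * b
                       for x in range(limit // a + 1)
                       for y in range(limit // b + 1)
                       if x * a + y * b <= limit})

    print(frobenius_two(3, 5))      # -> 7
    print(representable(3, 5, 12))  # -> [0, 3, 5, 6, 8, 9, 10, 11, 12]
    ```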

    Quasiconvex Programming

    We define quasiconvex programming, a form of generalized linear programming in which one seeks the point minimizing the pointwise maximum of a collection of quasiconvex functions. We survey algorithms for solving quasiconvex programs either numerically or via generalizations of the dual simplex method from linear programming, and describe varied applications of this geometric optimization technique in meshing, scientific computation, information visualization, automated algorithm analysis, and robust statistics. Comment: 33 pages, 14 figures
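    In one dimension the objective structure is easy to picture: a pointwise maximum of quasiconvex functions is again quasiconvex, hence unimodal on an interval, so ternary search minimizes it. The minimal sketch below shows only this simplest instance (illustrative functions; it is not one of the surveyed higher-dimensional methods):

    ```python
    def ternary_search_min(f, lo, hi, iters=100):
        """Minimize a unimodal f on [lo, hi] by shrinking the interval by 1/3."""
        for _ in range(iters):
            m1 = lo + (hi - lo) / 3
            m2 = hi - (hi - lo) / 3
            if f(m1) < f(m2):
                hi = m2          # the minimizer lies in [lo, m2]
            else:
                lo = m1          # the minimizer lies in [m1, hi]
        return (lo + hi) / 2

    fs = [lambda x: abs(x - 2),         # quasiconvex
          lambda x: (x + 1) ** 2,       # convex, hence quasiconvex
          lambda x: -1 / (1 + x * x)]   # quasiconvex (unimodal)
    F = lambda x: max(f(x) for f in fs)
    x_star = ternary_search_min(F, -10.0, 10.0)
    print(x_star, F(x_star))            # -> ~0.303, ~1.697
    ```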