5,281 research outputs found

    A Framework for Globally Optimizing Mixed-Integer Signomial Programs

    Mixed-integer signomial optimization problems have broad applicability in engineering. Extending the Global Mixed-Integer Quadratic Optimizer, GloMIQO (Misener and Floudas, J. Glob. Optim., 2012, doi:10.1007/s10898-012-9874-7), this manuscript documents a computational framework for deterministically addressing mixed-integer signomial optimization problems to ε-global optimality. This framework generalizes the GloMIQO strategies of (1) reformulating user input, (2) detecting special mathematical structure, and (3) globally optimizing the mixed-integer nonconvex program. Novel contributions of this paper include: flattening an expression tree towards term-based data structures; introducing additional nonconvex terms to interlink expressions; integrating a dynamic implementation of the reformulation-linearization technique into the branch-and-cut tree; and designing term-based underestimators that specialize relaxation strategies according to variable bounds in the current tree node. Computational results are presented, including a comparison of the framework with several state-of-the-art solvers. © 2013 Springer Science+Business Media New York
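    One relaxation ingredient mentioned in the abstract, term-based underestimators whose tightness depends on the variable bounds at the current tree node, can be illustrated with the classical McCormick inequalities for a single bilinear term w = x·y. The sketch below is a generic illustration under that standard technique, not the authors' implementation; the function name and the tuple encoding of the cuts are assumptions.

```python
def mccormick_cuts(x_lo, x_hi, y_lo, y_hi):
    """Return the four McCormick inequalities relaxing w = x*y over the box
    [x_lo, x_hi] x [y_lo, y_hi], encoded as (a_x, a_y, a_w, rhs) meaning
    a_x*x + a_y*y + a_w*w <= rhs.  Tighter variable bounds at the current
    branch-and-bound node yield a tighter relaxation of the bilinear term.
    """
    return [
        # w >= x_lo*y + y_lo*x - x_lo*y_lo
        (y_lo, x_lo, -1.0, x_lo * y_lo),
        # w >= x_hi*y + y_hi*x - x_hi*y_hi
        (y_hi, x_hi, -1.0, x_hi * y_hi),
        # w <= x_hi*y + y_lo*x - x_hi*y_lo
        (-y_lo, -x_hi, 1.0, -x_hi * y_lo),
        # w <= x_lo*y + y_hi*x - x_lo*y_hi
        (-y_hi, -x_lo, 1.0, -x_lo * y_hi),
    ]
```

    Over the unit box [0,1] × [0,1], for example, the four cuts reduce to w ≥ 0, w ≥ x + y − 1, w ≤ x, and w ≤ y, which is why re-deriving them from the bounds at each node pays off.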

    Using Functional Programming to recognize Named Structure in an Optimization Problem: Application to Pooling

    Branch-and-cut optimization solvers typically apply generic algorithms, e.g., cutting planes or primal heuristics, to expedite performance on many mathematical optimization problems. However, solver software receives an input optimization problem only as vectors of equations and constraints that carry no structural information. This article proposes automatically detecting named special structure using the pattern matching features of functional programming. Specifically, we deduce the industrially-relevant nonconvex nonlinear Pooling Problem within a mixed-integer nonlinear optimization problem and show that we can uncover pooling structure even in optimization problems that are not pooling problems. Previous work has shown that preprocessing heuristics can find network structures; we show that we can additionally detect nonlinear pooling patterns. Finding named structures allows us to apply, to generic optimization problems, cutting planes or primal heuristics developed for the named structure. To demonstrate the recognition algorithm, we use the recognized structure to apply primal heuristics to a test set of standard pooling problems.
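    The paper applies the pattern-matching features of a functional language to the solver's own representation of equations and constraints. As a purely illustrative analogue, Python 3.10 structural pattern matching can pick out the bilinear products that signal pooling-like structure; the Var/Mul/Sum node types and the function name below are assumptions, not the paper's data structures or code.

```python
from dataclasses import dataclass

# Toy expression nodes, assumed for illustration only.
@dataclass
class Var:
    name: str

@dataclass
class Mul:
    left: object
    right: object

@dataclass
class Sum:
    terms: tuple

def bilinear_pairs(expr):
    """Collect the (variable, variable) products occurring in an expression;
    such bilinear flow*quality terms are the tell-tale sign of pooling
    structure that later recognition stages would try to assemble into
    a pooling network."""
    match expr:
        case Mul(Var(a), Var(b)):
            return [(a, b)]
        case Sum(terms):
            return [p for t in terms for p in bilinear_pairs(t)]
        case Mul(left, right):
            return bilinear_pairs(left) + bilinear_pairs(right)
        case _:
            return []
```

    For instance, bilinear_pairs(Sum((Mul(Var("q1"), Var("y1")), Var("z")))) returns [("q1", "y1")], flagging a candidate quality-times-flow product.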

    Generalized Nonconvex Nonsmooth Low-Rank Minimization

    As surrogate functions of the $L_0$-norm, many nonconvex penalty functions have been proposed to enhance sparse vector recovery. It is straightforward to extend these nonconvex penalty functions to the singular values of a matrix to enhance low-rank matrix recovery. However, unlike the convex case, solving the nonconvex low-rank minimization problem is much more challenging than the nonconvex sparse minimization problem. We observe that all the existing nonconvex penalty functions are concave and monotonically increasing on $[0,\infty)$; thus their gradients are nonincreasing functions. Based on this property, we propose an Iteratively Reweighted Nuclear Norm (IRNN) algorithm to solve the nonconvex nonsmooth low-rank minimization problem. IRNN iteratively solves a Weighted Singular Value Thresholding (WSVT) problem. By setting the weight vector to the gradient of the concave penalty function, the WSVT problem has a closed-form solution. In theory, we prove that IRNN decreases the objective function value monotonically and that any limit point is a stationary point. Extensive experiments on both synthetic data and real images demonstrate that IRNN enhances low-rank matrix recovery compared with state-of-the-art convex algorithms.
    Comment: IEEE International Conference on Computer Vision and Pattern Recognition, 201
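    A minimal NumPy sketch of the two ingredients described above, worked out for the simple denoising objective $\min_X \tfrac{1}{2}\|X - X_0\|_F^2 + \sum_i g(\sigma_i(X))$ rather than a general loss; the function names and the grad_penalty callback are assumptions for illustration, not the authors' code.

```python
import numpy as np

def wsvt(M, weights, mu=1.0):
    """Weighted singular value thresholding: closed-form solution of
    min_X 0.5*||X - M||_F^2 + mu * sum_i w_i * sigma_i(X), valid when the
    weights are nondecreasing, as they are for the gradient of a concave
    penalty evaluated at the (decreasing) singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - mu * weights, 0.0)) @ Vt

def irnn_denoise(X_obs, grad_penalty, mu=1.0, iters=100):
    """Skeleton of the IRNN loop for
    min_X 0.5*||X - X_obs||_F^2 + sum_i g(sigma_i(X)):
    linearize the concave penalty g at the current singular values, then
    solve the resulting weighted nuclear norm subproblem by WSVT."""
    X = X_obs.copy()
    for _ in range(iters):
        sigma = np.linalg.svd(X, compute_uv=False)
        w = grad_penalty(sigma)   # g concave => g' nonincreasing in sigma
        X = wsvt(X_obs, w, mu)
    return X

# Example concave penalty: g(sigma) = lam*log(sigma + gamma), whose gradient
# lam/(sigma + gamma) supplies the per-singular-value weights.
log_grad = lambda s, lam=1.0, gamma=1e-2: lam / (s + gamma)
```

    Because the weights shrink large singular values less than small ones, the reweighted step penalizes the dominant components more gently than the plain nuclear norm, which is the mechanism behind the improved recovery reported in the paper.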

    A local branching heuristic for MINLPs

    Local branching is an improvement heuristic, developed within the context of branch-and-bound algorithms for MILPs, which has proved to be very effective in practice. For the binary case, it defines a neighbourhood of the current incumbent solution by allowing only a few binary variables to flip their value, through the addition of a local branching constraint. The neighbourhood is then explored with a branch-and-bound solver. We propose a local branching scheme for (nonconvex) MINLPs which is based on iteratively solving MILPs and NLPs. Preliminary computational experiments show that this approach is able to improve the incumbent solution on the majority of the test instances, requiring only a short CPU time. Moreover, based on the same scheme, we provide algorithmic ideas for a primal heuristic whose purpose is to find a first feasible solution.
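    For the binary case, the local branching constraint described above is simply a Hamming-ball restriction around the incumbent. A minimal, solver-independent sketch of how it can be built (the function name and the plain-coefficient encoding are assumptions for illustration):

```python
def local_branching_cut(x_bar, k):
    """Coefficients (a, rhs) of the binary local branching constraint
        sum_{j: x_bar_j = 1} (1 - x_j) + sum_{j: x_bar_j = 0} x_j <= k,
    rewritten in the linear form sum_j a_j * x_j <= rhs so it can be added
    to any MILP/MINLP model as an ordinary linear constraint.  It restricts
    the search to solutions within Hamming distance k of the incumbent."""
    ones = sum(1 for v in x_bar if v >= 0.5)            # size of the incumbent's support
    a = [-1.0 if v >= 0.5 else 1.0 for v in x_bar]      # -1 where x_bar_j = 1, +1 elsewhere
    return a, k - ones
```

    For an incumbent x̄ = (1, 0, 1) and k = 2 this yields −x₁ + x₂ − x₃ ≤ 0, i.e. at most two binaries may flip, which is the neighbourhood the MILP/NLP iterations then explore.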