
    Enumerative Branching with Less Repetition

    We can compactly represent large sets of solutions for problems with discrete decision variables by using decision diagrams. With them, we can efficiently identify optimal solutions for different objective functions. In fact, a decision diagram naturally arises from the branch-and-bound tree that we could use to enumerate these solutions if we merge nodes from which the same solutions are obtained on the remaining variables. However, we would like to avoid the repetitive work of finding the same solutions from branching on different nodes at the same level of that tree. Instead, we would like to explore just one of these equivalent nodes and then infer that the same solutions would have been found if we explored the other nodes. In this work, we show how to identify such equivalences, and thus directly construct a reduced decision diagram, in integer programs where the left-hand sides of all constraints consist of additively separable functions. First, we extend an existing result regarding problems with a single linear constraint and integer coefficients. Second, we show necessary conditions with which we can isolate a single explored node as the only candidate to be equivalent to each unexplored node in problems with multiple constraints. Third, we present a sufficient condition that confirms whether such a pair of nodes is indeed equivalent, and we demonstrate how to induce that condition through preprocessing. Finally, we report computational results on integer linear programming problems from the MIPLIB benchmark. Our approach often constructs smaller decision diagrams faster and with less branching.
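
    The single-constraint result that this work extends can be illustrated with a short sketch. The Python snippet below (a minimal illustration, not the paper's method) builds a layered decision diagram for a 0-1 knapsack constraint with integer coefficients, merging nodes at the same layer whose clipped residual capacities coincide; clipping at the total remaining weight is a simple sufficient condition for two nodes to accept the same completions.

```python
def build_dd(weights, capacity):
    """Reduced decision diagram for {x in {0,1}^n : sum(w[i]*x[i]) <= capacity}.

    Nodes on each layer are keyed by residual capacity, clipped at the total
    weight still to come: beyond that point every completion is feasible, so
    such states are equivalent and get merged into a single node.
    """
    n = len(weights)
    suffix = [0] * (n + 1)  # suffix[i] = weights[i] + ... + weights[n-1]
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + weights[i]

    layers = [{min(capacity, suffix[0]): {}}]  # root node
    for i in range(n):
        next_layer = {}
        for state in layers[i]:
            arcs = {}
            for bit in (0, 1):  # branch on variable x_i
                residual = state - bit * weights[i]
                if residual < 0:
                    continue  # infeasible branch: no arc
                child = min(residual, suffix[i + 1])  # merge equivalent states
                arcs[bit] = child
                next_layer.setdefault(child, {})
            layers[i][state] = arcs
        layers.append(next_layer)
    return layers

layers = build_dd([3, 5, 4], capacity=7)
print([len(layer) for layer in layers])  # layer widths after merging: [1, 2, 2, 1]
```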

    Reformulating the Disjunctive Cut Generating Linear Program

    Lift-and-project cuts can be obtained by defining an elegant optimization problem over the space of valid inequalities, the cut generating linear program (CGLP). A CGLP has two main ingredients: (i) an objective function, which invariably maximizes the violation with respect to a fractional solution x to be separated; and (ii) a normalization constraint, which limits the scale in which cuts are represented. One would expect that CGLP optima entail the best cuts, but the normalization may distort how cuts are compared, and the cutting plane may not be a supporting hyperplane with respect to the closure of valid inequalities from the CGLP. This work proposes the reverse polar CGLP (RP-CGLP), which switches the roles conventionally played by the objective and the normalization: the violation with respect to x is fixed to a positive constant, whereas we minimize the slack for a point p that cannot be separated by the valid inequalities. Cuts from RP-CGLP optima define supporting hyperplanes of the immediate closure. When that closure is full-dimensional, the face defined by the cut lies on facets first intersected by a ray from x to p, all of which correspond to cutting planes from RP-CGLP optima if p is an interior point. In fact, these are the cuts minimizing the ratio between the slack for p and the violation for x. We show how to derive such cuts directly from the simplex tableau in the case of split disjunctions and report experiments on adapting the CglLandP cut generator library to the RP-CGLP formulation.
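
    To make the role swap concrete, here is a schematic LaTeX sketch of both formulations for a split disjunction (pi'x <= pi_0) or (pi'x >= pi_0 + 1) over P = {x : Ax >= b}, written for cuts of the form alpha'x >= beta; the particular normalization and sign conventions below are assumptions, as they vary across presentations.

```latex
% Conventional CGLP: maximize the violation at the fractional point \bar{x}
% subject to a normalization constraint on the multipliers.
\begin{aligned}
\max_{\alpha,\beta,u,u_0,v,v_0}\;& \beta - \alpha^{\top}\bar{x}\\
\text{s.t.}\;\;& \alpha = A^{\top}u - u_0\pi, && \beta \le b^{\top}u - u_0\pi_0,\\
& \alpha = A^{\top}v + v_0\pi, && \beta \le b^{\top}v + v_0(\pi_0 + 1),\\
& \textstyle\sum_i u_i + \sum_i v_i + u_0 + v_0 = 1, && u, v, u_0, v_0 \ge 0.
\end{aligned}

% RP-CGLP: the violation at \bar{x} is pinned to a positive constant and the
% slack at a point p that no valid cut separates becomes the objective.
\begin{aligned}
\min_{\alpha,\beta,u,u_0,v,v_0}\;& \alpha^{\top}p - \beta\\
\text{s.t.}\;\;& \beta - \alpha^{\top}\bar{x} = 1,\quad
\text{plus the same multiplier constraints as above.}
\end{aligned}
```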

    Seamless Benchmarking of Mathematical Optimization Problems and Metadata Extensions

    Public libraries of problems such as the Mixed Integer Programming Library (MIPLIB) are fundamental to creating a common benchmark for measuring algorithmic advances across mathematical optimization solvers. They also often provide metadata on problem structure, hardness with respect to state-of-the-art solvers, and solutions with the best objective function value on record. In this short paper, we discuss some ways in which such metadata can be leveraged to create a seamless testing experience. In particular, we present MIPLIBing: a Python library that automatically downloads queried subsets from the current versions of MIPLIB, MINLPLib, and QPLIB, provides a centralized local cache across projects, and tracks the best solution values and bounds on record for each problem. While inspired by similar use cases from other areas, we reflect on the specific needs of mathematical optimization and discuss opportunities to extend benchmark sets to facilitate experimentation with different model structures.
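
    As a hedged illustration of the centralized-cache idea (not the actual MIPLIBing API, whose names may differ), the sketch below downloads an instance on first use and serves every later request from a local cache shared across projects; the URL pattern, cache location, and instance name are assumptions.

```python
import urllib.request
from pathlib import Path

# Assumed URL pattern; the real MIPLIB instance location may differ.
MIPLIB_URL = "https://miplib.zib.de/WebData/instances/{name}.mps.gz"
CACHE_DIR = Path.home() / ".cache" / "opt-benchmarks"  # shared across projects

def fetch_instance(name: str) -> Path:
    """Return a local path to the named instance, downloading it on a cache miss."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    target = CACHE_DIR / f"{name}.mps.gz"
    if not target.exists():  # centralized cache: download at most once
        urllib.request.urlretrieve(MIPLIB_URL.format(name=name), target)
    return target

print(fetch_instance("gen-ip054"))  # cached after the first call
```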

    A Study of Correlation in Markets Using Complex Networks

    Undergraduate monograph, Universidade de Brasília, Faculty of Economics, Administration, Accounting, and Information Science and Documentation, Department of Economics, 2012. In this work, we investigate the topological properties of banking networks and construct the minimum spanning tree (MST), which is based on the concept of ultrametricity, using the correlation matrix for a large number of banking variables. The empirical results suggest that private and foreign banks tend to form groups within the network and, furthermore, that banks of different sizes are also strongly linked to one another and tend to form clusters. These results are robust to the use of different variables in the construction of the network, such as bank profitability, assets, equity, revenues, and loans. We also use the MST methodology and its taxonomic tree to investigate the topological properties of the network formed by the term structure of Brazilian interest rates, using the correlation matrix between interest rates of different maturities. We show that the short-term interest rate is the most important one within the interest rate network, which is consistent with the expectations hypothesis of interest rates, and we find, in addition, that the Brazilian interest rate network forms clusters by maturity.
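
    The MST construction described above follows a standard recipe: map the correlation matrix to a distance matrix via the ultrametricity-compatible transform d_ij = sqrt(2(1 - rho_ij)) and extract a minimum spanning tree. A minimal Python sketch on a toy correlation matrix (the numbers are made up for illustration):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Toy correlation matrix for four hypothetical series (banks or rates).
rho = np.array([
    [1.0, 0.8, 0.3, 0.1],
    [0.8, 1.0, 0.4, 0.2],
    [0.3, 0.4, 1.0, 0.7],
    [0.1, 0.2, 0.7, 1.0],
])

# Standard transform from correlations to distances in [0, 2].
dist = np.sqrt(2.0 * (1.0 - rho))
np.fill_diagonal(dist, 0.0)

mst = minimum_spanning_tree(dist)  # MST on the complete distance graph
rows, cols = mst.nonzero()
for i, j in zip(rows, cols):
    print(f"edge {i}-{j}: distance {mst[i, j]:.3f}")
```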

    Scaling Up Exact Neural Network Compression by ReLU Stability

    We can compress a rectifier network while exactly preserving its underlying functionality with respect to a given input domain if some of its neurons are stable. However, current approaches to determining the stability of neurons with Rectified Linear Unit (ReLU) activations require solving, or finding a good approximation to, multiple discrete optimization problems. In this work, we introduce an algorithm based on solving a single optimization problem to identify all stable neurons. Our approach is a median of 183 times faster than the state-of-the-art method on CIFAR-10, which allows us to explore exact compression on deeper (5 x 100) and wider (2 x 800) networks within minutes. For classifiers trained under an amount of L1 regularization that does not worsen accuracy, we can remove up to 56% of the connections on the CIFAR-10 dataset. The code is available at https://github.com/yuxwind/ExactCompression.
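
    For intuition about what "stable" means here (the paper's single-problem formulation is different and exact), the sketch below uses simple interval arithmetic over a box input domain: a neuron whose pre-activation bounds never change sign is stable, so its ReLU can be replaced by the identity or by zero. Interval bounds are only a sufficient check and would miss neurons that an exact method proves stable.

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Pre-activation bounds of W x + b over the box lo <= x <= hi."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid = W @ center + b
    dev = np.abs(W) @ radius  # worst-case deviation from the box center
    return mid - dev, mid + dev

rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 4)), rng.normal(size=8)
lo, hi = np.zeros(4), np.ones(4)  # e.g., normalized inputs in [0, 1]

lb, ub = interval_bounds(W, b, lo, hi)
stably_active = lb >= 0    # ReLU acts as the identity: absorb into next layer
stably_inactive = ub <= 0  # ReLU always outputs 0: prune the unit outright
print(stably_active.sum(), "stably active;", stably_inactive.sum(), "stably inactive")
```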

    When Deep Learning Meets Polyhedral Theory: A Survey

    In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks in tasks such as computer vision and natural language processing. Meanwhile, the structure of neural networks converged back to simpler representations based on piecewise constant and piecewise linear functions such as the Rectified Linear Unit (ReLU), which became the most commonly used type of activation function in neural networks. That made certain types of network structure, such as the typical fully-connected feedforward neural network, amenable to analysis through polyhedral theory and to the application of methodologies such as Linear Programming (LP) and Mixed-Integer Linear Programming (MILP) for a variety of purposes. In this paper, we survey the main topics emerging from this fast-paced area of work, which bring a fresh perspective to understanding neural networks in more detail as well as to applying linear optimization techniques to train, verify, and reduce the size of such networks.
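
    A canonical example of the polyhedral connection surveyed here is the standard big-M MILP encoding of a single ReLU unit y = max(0, w'x + b), sketched below under the assumption that finite bounds L <= w'x + b <= U are valid over the input domain:

```latex
% MILP encoding of y = \max(0, w^{\top}x + b) with binary z and assumed
% valid bounds L \le w^{\top}x + b \le U on the input domain.
\begin{aligned}
& y \ge w^{\top}x + b, \qquad y \ge 0,\\
& y \le w^{\top}x + b - L(1 - z),\\
& y \le U z,\\
& z \in \{0, 1\}.
\end{aligned}
% z = 1 forces y = w^{\top}x + b \ge 0; z = 0 forces y = 0 and w^{\top}x + b \le 0.
```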