
    When Deep Learning Meets Polyhedral Theory: A Survey

    In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks in tasks such as computer vision and natural language processing. Meanwhile, the structure of neural networks converged back to simpler representations based on piecewise constant and piecewise linear functions such as the Rectified Linear Unit (ReLU), which became the most commonly used type of activation function in neural networks. That made certain types of network structure, such as the typical fully-connected feedforward neural network, amenable to analysis through polyhedral theory and to the application of methodologies such as Linear Programming (LP) and Mixed-Integer Linear Programming (MILP) for a variety of purposes. In this paper, we survey the main topics emerging from this fast-paced area of work, which bring a fresh perspective to understanding neural networks in more detail as well as to applying linear optimization techniques to train, verify, and reduce the size of such networks.
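
    As a concrete aside on how MILP enters this line of work (a standard textbook-style encoding, not something specific to the survey): a single ReLU neuron y = max(0, w^T x + b) with assumed finite pre-activation bounds L <= w^T x + b <= U, where L < 0 < U, admits the well-known big-M formulation with one binary variable z:

        \begin{aligned}
        & y \ge w^\top x + b, \\
        & y \ge 0, \\
        & y \le w^\top x + b - L\,(1 - z), \\
        & y \le U\,z, \\
        & z \in \{0, 1\}.
        \end{aligned}

    Relaxing z from {0, 1} to [0, 1] gives an LP relaxation; the bounds L and U are assumptions here and are typically obtained in practice by interval arithmetic or auxiliary optimization over earlier layers.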

    Beating the SDP bound for the floor layout problem: A simple combinatorial idea

    For many mixed-integer programming (MIP) problems, high-quality dual bounds can be obtained either through advanced formulation techniques coupled with a state-of-the-art MIP solver, or through semi-definite programming (SDP) relaxation hierarchies. In this paper, we introduce an alternative bounding approach that exploits the ‘combinatorial implosion’ effect by solving portions of the original problem and aggregating this information to obtain a global dual bound. We apply this technique to the one-dimensional and two-dimensional floor layout problems and compare it with the bounds generated both by state-of-the-art MIP solvers and by SDP relaxations. Specifically, we prove that the bounds obtained through the proposed technique are at least as good as those obtained through SDP relaxations, and present computational results showing that these bounds can be significantly stronger and easier to compute than those produced by these alternative strategies, particularly for very difficult problem instances.
    Funding: United States. National Science Foundation. Graduate Research Fellowship Program (Grant 1122374); United States. National Science Foundation. Graduate Research Fellowship Program (Grant CMMI-1351619).
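
    A toy sketch of the aggregation idea described in the abstract (illustrative only: the widths, cost matrix, and subset size k are made up, and a brute-force enumerator stands in for a MIP solver). For a one-dimensional layout with nonnegative pairwise costs, restricting any feasible full layout to a subset of boxes remains feasible for that subset's subproblem, so averaging the subproblem optima over all subsets of a fixed size yields a valid global lower bound:

        from itertools import combinations, permutations
        from math import comb

        def layout_cost(order, widths, costs):
            # Place the boxes contiguously in the given order and sum the
            # weighted center-to-center distances (the 1D layout objective).
            pos, x = {}, 0.0
            for i in order:
                pos[i] = x + widths[i] / 2.0
                x += widths[i]
            return sum(c * abs(pos[i] - pos[j]) for (i, j), c in costs.items())

        def subproblem_opt(boxes, widths, costs):
            # Brute-force optimum over contiguous orderings of `boxes`.
            # With nonnegative costs, some optimal sub-layout is contiguous,
            # so this equals the true subproblem optimum.
            sub = {(i, j): c for (i, j), c in costs.items()
                   if i in boxes and j in boxes}
            return min(layout_cost(p, widths, sub) for p in permutations(boxes))

        def aggregated_lower_bound(n, widths, costs, k):
            # Each pair of boxes appears in comb(n-2, k-2) of the k-box
            # subsets, and restricting a feasible full layout to any subset
            # stays feasible, so the scaled sum of subproblem optima is a
            # valid lower bound on the full optimum.
            total = sum(subproblem_opt(S, widths, costs)
                        for S in combinations(range(n), k))
            return total / comb(n - 2, k - 2)

        if __name__ == "__main__":
            n = 6
            widths = {i: 1.0 for i in range(n)}
            costs = {(i, j): (i * 7 + j * 3) % 5 + 1
                     for i in range(n) for j in range(i + 1, n)}
            print("aggregated bound:", aggregated_lower_bound(n, widths, costs, k=4))
            print("true optimum:    ", subproblem_opt(tuple(range(n)), widths, costs))

    Each k-box subproblem is far cheaper than the full problem (the ‘combinatorial implosion’ effect), and the scaling constant simply counts how often each pair of boxes is covered by the subproblems.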

    The Convex Relaxation Barrier, Revisited: Tightened Single-Neuron Relaxations for Neural Network Verification

    We improve the effectiveness of propagation- and linear-optimization-based neural network verification algorithms with a new tightened convex relaxation for ReLU neurons. Unlike previous single-neuron relaxations which focus only on the univariate input space of the ReLU, our method considers the multivariate input space of the affine pre-activation function preceding the ReLU. Using results from submodularity and convex geometry, we derive an explicit description of the tightest possible convex relaxation when this multivariate input is over a box domain. We show that our convex relaxation is significantly stronger than the commonly used univariate-input relaxation which has been proposed as a natural convex relaxation barrier for verification. While our description of the relaxation may require an exponential number of inequalities, we show that they can be separated in linear time and hence can be efficiently incorporated into optimization algorithms on an as-needed basis. Based on this novel relaxation, we design two polynomial-time algorithms for neural network verification: a linear-programming-based algorithm that leverages the full power of our relaxation, and a fast propagation algorithm that generalizes existing approaches. In both cases, we show that for a modest increase in computational effort, our strengthened relaxation enables us to verify a significantly larger number of instances compared to similar algorithms.
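
    For context only (this is the standard baseline from the verification literature, not a restatement of the paper's new relaxation): the commonly used univariate-input ‘triangle’ relaxation of a ReLU y = max(0, z), under assumed pre-activation bounds l <= z <= u with l < 0 < u, is

        y \ge 0, \qquad y \ge z, \qquad y \le \frac{u\,(z - l)}{u - l}, \qquad l \le z \le u,

    where z = w^\top x + b. The paper's relaxation instead works directly with the box domain of the multivariate input x feeding into z, which yields exponentially many inequalities that can nevertheless be separated in linear time.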