
    Neural Network Verification as Piecewise Linear Optimization: Formulations for the Composition of Staircase Functions

    We present a technique for neural network verification using mixed-integer programming (MIP) formulations. We derive a strong formulation for each neuron in a network that uses piecewise linear activation functions. Because, in general, these formulations may require an exponential number of inequalities, we also derive a separation procedure that runs in super-linear time in the input dimension. We first introduce and develop our technique on the class of staircase functions, which generalizes the ReLU, binarized, and quantized activation functions. We then use the results for staircase activation functions to obtain a separation method for general piecewise linear activation functions. Empirically, our strong formulation and separation technique reduce the computational time of exact MIP-based verification and improve the false negative rate of inexact verifiers that rely on the relaxation of the MIP formulation.
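
    As background for how a single neuron becomes a set of MIP constraints, the sketch below shows the standard big-M encoding of one ReLU neuron y = max(0, w·x + b) over a box input domain. This is the textbook formulation, not the paper's strong formulation or separation procedure; the weights, bounds, and the use of PuLP/CBC are illustrative assumptions.

```python
# Minimal sketch: big-M MIP encoding of one ReLU neuron (NOT the paper's
# strong formulation). Pre-activation bounds L, U come from interval
# arithmetic; all numbers below are illustrative.
import pulp

w = [1.0, -2.0]                       # illustrative weights
b = 0.5                               # illustrative bias
x_bounds = [(-1.0, 1.0), (-1.0, 1.0)]  # input box

# Interval-arithmetic bounds on the pre-activation z = w.x + b.
L = b + sum(min(wi * lo, wi * hi) for wi, (lo, hi) in zip(w, x_bounds))
U = b + sum(max(wi * lo, wi * hi) for wi, (lo, hi) in zip(w, x_bounds))

prob = pulp.LpProblem("relu_neuron", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", lo, hi) for i, (lo, hi) in enumerate(x_bounds)]
y = pulp.LpVariable("y", 0, max(U, 0.0))
sigma = pulp.LpVariable("sigma", cat=pulp.LpBinary)  # 1 iff the neuron is active

prob += 1.0 * y  # verification-style query: maximize the neuron's output

z = pulp.lpSum(wi * xi for wi, xi in zip(w, x)) + b  # pre-activation expression

# Big-M constraints; exact whenever L <= z <= U are valid bounds.
prob += y >= z
prob += y <= z - L * (1 - sigma)
prob += y <= U * sigma

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("max y =", pulp.value(y))
```

    In a full verifier, one such block is generated per neuron and the blocks are chained layer by layer; the paper's contribution is a tighter (strong) formulation plus a separation routine for the exponentially many inequalities it may need.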

    Strong mixed-integer formulations for the floor layout problem

    The floor layout problem (FLP) tasks a designer with positioning a collection of rectangular boxes on a fixed floor so as to minimize the total communication cost between the components. While several mixed-integer programming (MIP) formulations for this problem have been developed, it remains extremely challenging from a computational perspective. This work takes a systematic approach to constructing MIP formulations and valid inequalities for the FLP that unifies and recovers all known formulations for it. In addition, the approach yields new formulations that can provide a significant computational advantage and can solve previously unsolved instances. While the construction approach focuses on the FLP, it also exemplifies generic formulation techniques that should prove useful for broader classes of problems.
    United States. National Science Foundation. Graduate Research Fellowship Program (Grant 1122374); United States. National Science Foundation (Grant CMMI-1351619).
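
    To make the modeling challenge concrete, here is a simplified textbook-style MIP for a layout variant with fixed box dimensions: binary variables pick a relative position (left/right/below/above) for each pair of boxes, big-M constraints enforce non-overlap, and the objective is the communication-weighted Manhattan distance between box centers. This is not one of the paper's formulations; the data, the fixed-dimension simplification, and the use of PuLP/CBC are assumptions for illustration.

```python
# Minimal sketch of a simplified floor layout MIP (fixed box sizes),
# using big-M non-overlap disjunctions. All data are illustrative.
import itertools
import pulp

W, H = 10.0, 10.0
boxes = {"A": (3.0, 2.0), "B": (2.0, 4.0), "C": (4.0, 3.0)}  # name: (width, height)
comm = {("A", "B"): 1.0, ("A", "C"): 2.0, ("B", "C"): 1.0}   # communication weights

prob = pulp.LpProblem("floor_layout", pulp.LpMinimize)
cx = {i: pulp.LpVariable(f"cx_{i}", boxes[i][0] / 2, W - boxes[i][0] / 2) for i in boxes}
cy = {i: pulp.LpVariable(f"cy_{i}", boxes[i][1] / 2, H - boxes[i][1] / 2) for i in boxes}

obj_terms = []
for i, j in itertools.combinations(boxes, 2):
    c = comm.get((i, j), comm.get((j, i), 0.0))
    dx = pulp.LpVariable(f"dx_{i}{j}", 0)  # linearizes |cx_i - cx_j|
    dy = pulp.LpVariable(f"dy_{i}{j}", 0)  # linearizes |cy_i - cy_j|
    prob += dx >= cx[i] - cx[j]
    prob += dx >= cx[j] - cx[i]
    prob += dy >= cy[i] - cy[j]
    prob += dy >= cy[j] - cy[i]
    obj_terms.append(c * (dx + dy))

    # Non-overlap: at least one of four relative positions must hold,
    # enforced with big-M constants equal to the floor dimensions.
    left, right, below, above = (pulp.LpVariable(f"{n}_{i}{j}", cat=pulp.LpBinary)
                                 for n in ("left", "right", "below", "above"))
    half_w = (boxes[i][0] + boxes[j][0]) / 2
    half_h = (boxes[i][1] + boxes[j][1]) / 2
    prob += cx[i] + half_w <= cx[j] + W * (1 - left)   # i entirely left of j
    prob += cx[j] + half_w <= cx[i] + W * (1 - right)  # i entirely right of j
    prob += cy[i] + half_h <= cy[j] + H * (1 - below)  # i entirely below j
    prob += cy[j] + half_h <= cy[i] + H * (1 - above)  # i entirely above j
    prob += left + right + below + above >= 1

prob += pulp.lpSum(obj_terms)  # objective: weighted Manhattan distances
prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in boxes:
    print(i, pulp.value(cx[i]), pulp.value(cy[i]))
```

    The weakness of such big-M disjunctions is exactly what motivates the paper's systematic construction of stronger formulations and valid inequalities.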

    When Deep Learning Meets Polyhedral Theory: A Survey

    In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks in tasks such as computer vision and natural language processing. Meanwhile, the structure of neural networks converged back to simpler representations based on piecewise constant and piecewise linear functions such as the Rectified Linear Unit (ReLU), which became the most commonly used type of activation function in neural networks. That made certain types of network structure, such as the typical fully-connected feedforward neural network, amenable to analysis through polyhedral theory and to the application of methodologies such as Linear Programming (LP) and Mixed-Integer Linear Programming (MILP) for a variety of purposes. In this paper, we survey the main topics emerging from this fast-paced area of work, which bring a fresh perspective to understanding neural networks in more detail as well as to applying linear optimization techniques to train, verify, and reduce the size of such networks.
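
    The polyhedral viewpoint rests on the fact that a ReLU network computes a piecewise linear function. The small sketch below (not taken from the survey; the weights are arbitrary) illustrates this for a one-hidden-layer network with scalar input, whose breakpoints occur where individual neurons switch between their active and inactive pieces.

```python
# Illustrative only: a one-hidden-layer ReLU network with scalar input is
# piecewise linear, with breakpoints where each pre-activation crosses zero.
import numpy as np

w1 = np.array([1.0, -2.0, 0.5])   # hidden-layer weights (scalar input)
b1 = np.array([0.3, 1.0, -0.2])   # hidden-layer biases
w2 = np.array([2.0, 1.0, -1.5])   # output weights
b2 = 0.1

def net(x):
    # Forward pass: ReLU hidden layer followed by a linear output.
    return w2 @ np.maximum(0.0, w1 * x + b1) + b2

# Breakpoints: inputs where a neuron's pre-activation w1_i * x + b1_i is zero.
breakpoints = sorted(-b1[w1 != 0] / w1[w1 != 0])
print("breakpoints:", breakpoints)

# Between consecutive breakpoints the function is affine; sampling a few
# points in each segment shows the constant slope on each piece.
xs = np.linspace(min(breakpoints) - 1, max(breakpoints) + 1, 9)
print([(float(x), float(net(x))) for x in xs])
```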