    Extended Formulations via Decision Diagrams

    We propose a general algorithm for constructing an extended formulation for any given set of linear constraints with integer coefficients. Our algorithm consists of two phases: first construct a decision diagram $(V,E)$ that represents the given $m \times n$ constraint matrix, and then build an equivalent set of $|E|$ linear constraints over $n+|V|$ variables. That is, the size of the resulting extended formulation depends not on the number $m$ of the original constraints explicitly, but on the size of its decision diagram representation. Therefore, we may significantly reduce the computation time for optimization problems with integer constraint matrices by solving them under the extended formulations, especially when we obtain concise decision diagram representations of the matrices. We can apply our method to $1$-norm regularized hard margin optimization over the binary instance space $\{0,1\}^n$, which can be formulated as a linear programming problem with $m$ constraints with $\{-1,0,1\}$-valued coefficients over $n$ variables, where $m$ is the size of the given sample. Furthermore, by introducing slack variables over the edges of the decision diagram, we establish a variant formulation for soft margin optimization. We demonstrate the effectiveness of our extended formulations on integer programming and $1$-norm regularized soft margin optimization tasks over synthetic and real datasets.
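    As a point of reference, the following is a minimal sketch of the original, non-extended LP mentioned in the abstract: $1$-norm regularized hard-margin optimization over binary instances, whose constraint matrix has entries in $\{-1,0,1\}$. The decision-diagram construction itself is not reproduced here; the function and variable names (and the toy data) are illustrative assumptions.

```python
# Minimal sketch (not the authors' decision-diagram construction):
# 1-norm regularized hard-margin LP over binary instances x_i in {0,1}^n,
# labels y_i in {-1,+1}.  minimize ||w||_1  s.t.  y_i (w . x_i + b) >= 1.
import numpy as np
from scipy.optimize import linprog

def hard_margin_1norm(X, y):
    m, n = X.shape
    # Split w = w_plus - w_minus with w_plus, w_minus >= 0; b is free.
    c = np.concatenate([np.ones(n), np.ones(n), [0.0]])   # objective = ||w||_1
    # y_i (w . x_i + b) >= 1  <=>  -y_i x_i.w+ + y_i x_i.w- - y_i b <= -1
    A_ub = np.hstack([-y[:, None] * X, y[:, None] * X, -y[:, None]])
    b_ub = -np.ones(m)                       # one row per example, entries in {-1,0,1}
    bounds = [(0, None)] * (2 * n) + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    w = res.x[:n] - res.x[n:2 * n]
    return w, res.x[-1]

# Toy usage on a linearly separable sample.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])
y = np.array([1, 1, -1, -1])
print(hard_margin_1norm(X, y))
```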

    Boosting over non-deterministic ZDDs

    We propose a new approach to large-scale machine learning, learning over compressed data: first compress the training data somehow, and then employ various machine learning algorithms on the compressed data, with the hope that the computation time is significantly reduced when the training data is well compressed. As a first step toward this approach, we consider a variant of the Zero-Suppressed Binary Decision Diagram (ZDD) as the data structure for representing the training data, which generalizes the ZDD by incorporating non-determinism. For the learning algorithm to be employed, we consider a boosting algorithm called AdaBoost∗ and its precursor AdaBoost. In this work, we give efficient implementations of the boosting algorithms whose running times (per iteration) are linear in the size of the given ZDD.
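    For reference, below is a minimal sketch of the standard AdaBoost update over uncompressed data, which is what the paper's ZDD-based implementation accelerates. The non-deterministic ZDD data structure and the linear-time-per-iteration implementation are not reproduced here; the choice of weak learner (single-feature stumps) and all names are illustrative assumptions.

```python
# Reference sketch: plain AdaBoost over uncompressed binary data.
import numpy as np

def adaboost(X, y, T=10):
    """X: (m, n) binary instances, y: (m,) labels in {-1, +1}."""
    m, n = X.shape
    D = np.full(m, 1.0 / m)            # distribution over examples
    hypotheses = []                    # list of (feature, sign, alpha)
    for _ in range(T):
        # Weak learner: best single-feature stump h(x) = sign * (2*x[j] - 1).
        best = None
        for j in range(n):
            for sign in (+1, -1):
                pred = sign * (2 * X[:, j] - 1)
                eps = np.sum(D[pred != y])
                if best is None or eps < best[0]:
                    best = (eps, j, sign, pred)
        eps, j, sign, pred = best
        eps = min(max(eps, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - eps) / eps)
        hypotheses.append((j, sign, alpha))
        D *= np.exp(-alpha * y * pred)     # exponential reweighting
        D /= D.sum()
    return hypotheses

def predict(hypotheses, X):
    score = sum(alpha * sign * (2 * X[:, j] - 1) for j, sign, alpha in hypotheses)
    return np.sign(score)

# Toy usage.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])
y = np.array([1, 1, -1, -1])
print(predict(adaboost(X, y, T=5), X))
```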
