
    Online Combinatorial Linear Optimization via a Frank-Wolfe-based Metarounding Algorithm

    Metarounding is an approach for converting an approximation algorithm for linear optimization over a combinatorial class into an online linear optimization algorithm for the same class. We propose a new metarounding algorithm under the natural assumption that a relaxation-based approximation algorithm exists for the combinatorial class. Our algorithm is significantly more efficient in both theory and practice.
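    As a rough illustration of the Frank-Wolfe flavor of metarounding, the sketch below assumes a hypothetical approx_oracle (not from the paper) that, for any nonnegative cost vector, returns a combinatorial object whose cost is within a factor alpha of optimal; the loop then builds a convex combination of such objects approaching a scaled target point. This is a minimal sketch of the general idea, not the paper's algorithm.

```python
import numpy as np

def fw_metarounding(x, approx_oracle, alpha, iters=200):
    """Minimal sketch of Frank-Wolfe-style metarounding (hypothetical API).

    x:             a point in the relaxed polytope to be rounded
    approx_oracle: assumed to return a binary vector c in the class with
                   cost(c) <= alpha * min cost, for nonnegative costs
    Returns a convex combination of class elements approximating alpha*x.
    """
    target = alpha * np.asarray(x, dtype=float)
    y = approx_oracle(np.ones_like(target)).astype(float)  # initial vertex
    combo = [(1.0, y.copy())]
    for t in range(1, iters + 1):
        grad = y - target                      # gradient of (1/2)||y - target||^2
        cost = grad - grad.min()               # shift to keep costs nonnegative
        c = approx_oracle(cost).astype(float)  # Frank-Wolfe linear step
        eta = 2.0 / (t + 2)                    # standard FW step size
        y = (1.0 - eta) * y + eta * c
        combo = [(w * (1.0 - eta), v) for w, v in combo] + [(eta, c)]
    return y, combo
```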

    Boosting as Frank-Wolfe

    Some boosting algorithms, such as LPBoost, ERLPBoost, and C-ERLPBoost, aim to solve the soft margin optimization problem with $\ell_1$-norm regularization. LPBoost rapidly converges to an $\epsilon$-approximate solution in practice, but it is known to take $\Omega(m)$ iterations in the worst case, where $m$ is the sample size. On the other hand, ERLPBoost and C-ERLPBoost are guaranteed to converge to an $\epsilon$-approximate solution in $O(\frac{1}{\epsilon^2} \ln \frac{m}{\nu})$ iterations, but their per-iteration computation cost is much higher than that of LPBoost. To address this issue, we propose a generic boosting scheme that combines the Frank-Wolfe algorithm with any secondary algorithm, switching between them iteratively. We show that the scheme retains the same convergence guarantee as ERLPBoost and C-ERLPBoost, and any secondary algorithm can be incorporated to improve practical performance. The scheme arises from a unified view of boosting algorithms for soft margin optimization: we show that LPBoost, ERLPBoost, and C-ERLPBoost are all instances of the Frank-Wolfe algorithm. In experiments on real datasets, one instance of our scheme exploits the better updates of the secondary algorithm and performs comparably with LPBoost.
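    A hedged sketch of the alternating scheme described above, with all arguments hypothetical placeholders rather than the paper's API: at each round, take both a Frank-Wolfe step (via a linear minimization oracle, which in boosting corresponds to a weak learner call) and a step of an arbitrary secondary algorithm, and keep whichever iterate has the better objective value.

```python
import numpy as np

def hybrid_boost(objective, grad, lmo, secondary_step, w0, iters=100):
    """Hedged sketch of a generic Frank-Wolfe + secondary-algorithm scheme.

    objective:      function to minimize over a convex feasible set
    grad:           its gradient
    lmo:            linear minimization oracle over the feasible set
                    (in boosting, a call to the weak learner)
    secondary_step: any update rule, e.g. an LPBoost-style LP solve
    """
    w = np.asarray(w0, dtype=float)
    for t in range(1, iters + 1):
        # Frank-Wolfe step: move toward the oracle's vertex
        s = lmo(grad(w))
        eta = 2.0 / (t + 2)
        w_fw = (1.0 - eta) * w + eta * s
        # Secondary step: arbitrary update proposed by the other algorithm
        w_sec = secondary_step(w)
        # Keep the better iterate; the FW step alone preserves the
        # O((1/eps^2) ln(m/nu)) bound, and the secondary step can only help
        w = w_fw if objective(w_fw) <= objective(w_sec) else w_sec
    return w
```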

    Decision Diagrams for Solving a Job Scheduling Problem Under Precedence Constraints

    We consider a job scheduling problem under precedence constraints, a classical problem of scheduling multiple jobs on a single processor. The goal is, given the processing times of $n$ jobs and precedence constraints over the jobs, to find a permutation of the $n$ jobs that minimizes the total flow time, i.e., the sum of the wait and processing times of all jobs, while satisfying the precedence constraints. The problem is an integer program and is NP-hard in general. We propose a decision diagram, $\pi$-MDD, for solving the scheduling problem exactly. Our diagram is suitable for solving linear optimization over permutations with precedence constraints. We demonstrate the effectiveness of our approach in experiments on large-scale artificial scheduling problems.
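    For intuition, the following is a minimal sketch of the exact subset dynamic program whose state space a permutation decision diagram compactly encodes; it is not the paper's $\pi$-MDD construction and is practical only for small $n$.

```python
def min_total_flow_time(p, prec):
    """Exact DP over subsets of scheduled jobs (illustrative only).

    p:    list of processing times
    prec: set of pairs (a, b) meaning job a must precede job b
    Returns the minimum total flow time (sum of completion times).
    """
    n = len(p)
    layer = {0: 0}                      # bitmask of done jobs -> min cost
    for _ in range(n):
        nxt = {}
        for done, cost in layer.items():
            elapsed = sum(p[j] for j in range(n) if done >> j & 1)
            for j in range(n):
                if done >> j & 1:
                    continue            # already scheduled
                # j can come next only if all its predecessors are done
                if any(b == j and not (done >> a & 1) for a, b in prec):
                    continue
                c = cost + elapsed + p[j]   # add completion time of j
                key = done | (1 << j)
                if c < nxt.get(key, float('inf')):
                    nxt[key] = c
        layer = nxt
    return layer[(1 << n) - 1]
```

    For example, min_total_flow_time([3, 1, 2], {(0, 1)}) finds the cheapest order in which job 0 precedes job 1.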

    Boosting-based Construction of BDDs for Linear Threshold Functions and Its Application to Verification of Neural Networks

    Understanding the characteristics of neural networks is important but difficult due to their complex structures and behaviors. Some previous work proposes transforming neural networks into equivalent Boolean expressions and applying verification techniques to the characteristics of interest. This approach is promising since the rich results of verification techniques for circuits and other Boolean expressions can be readily applied. The bottleneck is the time complexity of the transformation. More precisely, (i) each neuron of the network, i.e., a linear threshold function, is converted to a Binary Decision Diagram (BDD), and (ii) these are further combined into some final form, such as a Boolean circuit. For a linear threshold function with $n$ variables, an existing method takes $O(n 2^{\frac{n}{2}})$ time to construct an ordered BDD of size $O(2^{\frac{n}{2}})$ consistent with some variable ordering. However, it is non-trivial to choose a variable ordering producing a small BDD among the $n!$ candidates. We propose a method to convert a linear threshold function to a specific form of BDD based on the boosting approach in the machine learning literature. Our method takes $O(2^n \mathrm{poly}(1/\rho))$ time and outputs a BDD of size $O(\frac{n^2}{\rho^4}\ln\frac{1}{\rho})$, where $\rho$ is the margin of some consistent linear threshold function. Our method does not need to search for good variable orderings and produces a smaller expression when the margin of the linear threshold function is large. It is based on a new boosting algorithm, which is of independent interest. We also propose a method to combine the BDDs into a final Boolean expression representing the neural network.
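    To make the BDD side concrete, here is a hedged sketch of the classical dynamic-programming construction that the abstract contrasts with (not the proposed boosting-based method): nodes at level $i$ are identified by the residual threshold still to be met, which merges equivalent subproblems. It assumes nonnegative integer weights.

```python
def threshold_bdd(weights, theta):
    """Hedged sketch of the classical ordered-BDD construction for a
    linear threshold function f(x) = [sum_i w_i x_i >= theta]; assumes
    nonnegative integer weights.  Not the paper's boosting-based method.
    """
    nodes = {}  # (level, residual) -> (level, lo_child, hi_child)

    def build(i, residual):
        # 'residual' is the threshold still to be met by x_i, ..., x_{n-1}
        if residual <= 0:
            return True                      # satisfied whatever follows
        if sum(weights[i:]) < residual:
            return False                     # unreachable even if all fire
        key = (i, residual)
        if key not in nodes:
            lo = build(i + 1, residual)               # branch x_i = 0
            hi = build(i + 1, residual - weights[i])  # branch x_i = 1
            nodes[key] = (i, lo, hi)
        return key

    return build(0, theta), nodes
```

    For instance, threshold_bdd([2, 3, 5], 6) builds a BDD for the function $[2x_0 + 3x_1 + 5x_2 \geq 6]$; the memoization on (level, residual) is what keeps the diagram from exploding into a full decision tree.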

    Pure exploration in multi-armed bandits with low rank structure using oblivious sampler

    In this paper, we consider pure exploration problems in which the reward sequence has low-rank structure. First, we propose a separated setting for pure exploration, in which the exploration strategy cannot receive feedback from its explorations and must therefore sample the arms obliviously. By exploiting the kernel information of the reward vectors, we provide efficient algorithms for both the time-varying and fixed cases with regret bound $O(d\sqrt{(\ln N)/n})$. We then show a lower bound for pure exploration in multi-armed bandits with low-rank reward sequences; there is an $O(\sqrt{\ln N})$ gap between our upper bound and the lower bound.
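    As a loose illustration only (the paper's algorithm differs), an oblivious sampler fixes its sampling schedule before seeing any rewards, then exploits a known low-rank structure, modeled here by a hypothetical basis matrix, to estimate every arm's mean from few samples.

```python
import numpy as np

def oblivious_best_arm(rewards, basis, n):
    """Hypothetical sketch of oblivious sampling under low-rank structure.

    rewards: callable(arm, t) -> observed reward of `arm` at round t
    basis:   (N, d) matrix whose columns span the low-rank reward structure
    n:       number of oblivious samples
    """
    N, d = basis.shape
    rng = np.random.default_rng(0)
    arms = rng.integers(0, N, size=n)      # schedule fixed before any feedback
    ys = np.array([rewards(a, t) for t, a in enumerate(arms)])
    X = basis[arms]                        # (n, d) design matrix
    coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
    est = basis @ coef                     # estimated mean of every arm
    return int(np.argmax(est))             # recommend the best-looking arm
```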

    Extended Formulations via Decision Diagrams

    We propose a general algorithm for constructing an extended formulation for any given set of linear constraints with integer coefficients. Our algorithm consists of two phases: first, construct a decision diagram $(V, E)$ that somehow represents a given $m \times n$ constraint matrix; then, build an equivalent set of $|E|$ linear constraints over $n + |V|$ variables. That is, the size of the resultant extended formulation depends not explicitly on the number $m$ of original constraints, but on its decision diagram representation. Therefore, we may significantly reduce the computation time for optimization problems with integer constraint matrices by solving them under the extended formulations, especially when we obtain concise decision diagram representations for the matrices. We can apply our method to $1$-norm regularized hard margin optimization over the binary instance space $\{0,1\}^n$, which can be formulated as a linear programming problem with $m$ constraints with $\{-1,0,1\}$-valued coefficients over $n$ variables, where $m$ is the size of the given sample. Furthermore, by introducing slack variables over the edges of the decision diagram, we establish a variant formulation for soft margin optimization. We demonstrate the effectiveness of our extended formulations on integer programming and $1$-norm regularized soft margin optimization tasks over synthetic and real datasets.
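    The second phase can be pictured with the standard flow-based construction over a decision diagram (a common construction; the paper's details may differ): one flow variable per edge, flow conservation at the nodes, and channeling constraints tying each original variable to the edges that set it to 1.

```python
def dd_extended_formulation(dd_edges, root, sink, n):
    """Hedged sketch of a flow-based extended formulation from a decision
    diagram; a standard construction, not necessarily the paper's.

    dd_edges: list of arcs (u, v, var, val) meaning "at node u, set
              x_var = val and move to node v"
    Returns equality constraints as (coefficient dict, rhs) over edge
    variables ('y', e) and the original variables ('x', j).
    """
    cons = []
    nodes = {u for u, *_ in dd_edges} | {v for _, v, *_ in dd_edges}
    # Flow conservation: one unit enters at the root, leaves at the sink.
    for w in nodes:
        coeffs = {}
        for e, (u, v, _, _) in enumerate(dd_edges):
            if u == w:
                coeffs[('y', e)] = coeffs.get(('y', e), 0) + 1
            if v == w:
                coeffs[('y', e)] = coeffs.get(('y', e), 0) - 1
        rhs = 1 if w == root else (-1 if w == sink else 0)
        cons.append((coeffs, rhs))
    # Channeling: x_j equals the total flow on edges that set x_j = 1.
    for j in range(n):
        coeffs = {('x', j): -1}
        for e, (_, _, var, val) in enumerate(dd_edges):
            if var == j and val == 1:
                coeffs[('y', e)] = 1
        cons.append((coeffs, 0))
    return cons
```

    The point of the construction is visible in the counts: the formulation has one constraint per node plus one per original variable, so its size tracks the diagram, not the $m$ original constraints.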