
    Online Combinatorial Linear Optimization via a Frank-Wolfe-based Metarounding Algorithm

    Metarounding is an approach to convert an approximation algorithm for linear optimization over some combinatorial class into an online linear optimization algorithm for the same class. We propose a new metarounding algorithm under the natural assumption that a relaxation-based approximation algorithm exists for the combinatorial class. Our algorithm is much more efficient in both theory and practice.
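
    As a rough illustration of the idea (not the paper's algorithm itself), the sketch below runs Frank-Wolfe on the squared distance to a scaled relaxed point, calling a hypothetical `approx_oracle` for the linear-minimization step; the oracle interface, the quadratic objective, and the step size are all assumptions.

```python
import numpy as np

def fw_metarounding(x, approx_oracle, alpha, T=50):
    # Hypothetical sketch: express (roughly) alpha * x as a convex
    # combination of combinatorial vectors by running Frank-Wolfe on
    # f(mu) = ||mu - alpha * x||^2.  `approx_oracle(w)` is assumed to
    # return a combinatorial vector (as a numpy array) approximately
    # minimizing the linear cost w @ c over the class.
    target = alpha * np.asarray(x, dtype=float)
    c0 = np.asarray(approx_oracle(np.ones_like(target)), dtype=float)
    mu, atoms, weights = c0.copy(), [c0], [1.0]
    for t in range(1, T):
        grad = 2.0 * (mu - target)        # gradient of the squared gap
        c = np.asarray(approx_oracle(grad), dtype=float)
        gamma = 2.0 / (t + 2.0)           # standard Frank-Wolfe step size
        mu = (1.0 - gamma) * mu + gamma * c
        weights = [w * (1.0 - gamma) for w in weights] + [gamma]
        atoms.append(c)
    return atoms, weights   # convex combination approximating alpha * x
```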

    Fractures in Duchenne Muscular Dystrophy: chiefly about their causes

    Among 148 children with Duchenne muscular dystrophy, nineteen cases (twenty-six fractures in total) sustained bone fractures, involving twelve femurs, nine humeri, four tibiae and one metatarsal bone. Seven of the nineteen cases experienced fractures twice. The causes of the fractures were falls in fifteen cases, collision with surrounding objects in five, body-position change in four and unknown in two. Femoral fractures were dominant during the wheelchair-bound phase, while humeral fractures were dominant during the ambulatory phase. As these children lack sitting and standing balance as well as normal muscular power, care must be taken to prevent falls, and hence fractures, when they are in a sitting or standing posture. Most of these fractures could have been prevented if careful attention had been paid during rehabilitation exercise, transfer, body-position change, etc.

    Boosting as Frank-Wolfe

    Some boosting algorithms, such as LPBoost, ERLPBoost, and C-ERLPBoost, aim to solve the soft margin optimization problem with $\ell_1$-norm regularization. LPBoost rapidly converges to an $\epsilon$-approximate solution in practice, but it is known to take $\Omega(m)$ iterations in the worst case, where $m$ is the sample size. On the other hand, ERLPBoost and C-ERLPBoost are guaranteed to converge to an $\epsilon$-approximate solution in $O(\frac{1}{\epsilon^2} \ln \frac{m}{\nu})$ iterations, but their computation cost per iteration is very high compared to LPBoost. To address this issue, we propose a generic boosting scheme that combines the Frank-Wolfe algorithm with any secondary algorithm, switching from one to the other iteratively. We show that the scheme retains the same convergence guarantee as ERLPBoost and C-ERLPBoost, and one can incorporate any secondary algorithm to improve practical performance. This scheme arises from a unified view of boosting algorithms for soft margin optimization: more specifically, we show that LPBoost, ERLPBoost, and C-ERLPBoost are all instances of the Frank-Wolfe algorithm. In experiments on real datasets, one instance of our scheme exploits the better updates of the secondary algorithm and performs comparably with LPBoost.
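
    One plausible reading of such a switching scheme, as a minimal sketch (the function names and the keep-the-better-update rule are assumptions, not the paper's interface):

```python
def boost_with_switching(fw_update, secondary_update, objective, d0, T=200):
    # Illustrative sketch only: `fw_update` is a Frank-Wolfe step (the one
    # carrying the convergence guarantee), `secondary_update` is any other
    # update rule, and each round keeps whichever achieves the smaller
    # soft-margin objective value.
    d = d0
    for t in range(T):
        d_fw = fw_update(d, t)           # guaranteed Frank-Wolfe update
        d_alt = secondary_update(d, t)   # heuristic secondary update
        d = d_fw if objective(d_fw) <= objective(d_alt) else d_alt
    return d
```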

    Boosting-based Construction of BDDs for Linear Threshold Functions and Its Application to Verification of Neural Networks

    Understanding the characteristics of neural networks is important but difficult due to their complex structures and behaviors. Some previous work proposes to transform neural networks into equivalent Boolean expressions and to apply verification techniques for characteristics of interest. This approach is promising since the rich results of verification techniques for circuits and other Boolean expressions can be readily applied. The bottleneck is the time complexity of the transformation. More precisely, (i) each neuron of the network, i.e., a linear threshold function, is converted to a Binary Decision Diagram (BDD), and (ii) these are further combined into some final form, such as a Boolean circuit. For a linear threshold function with $n$ variables, an existing method takes $O(n 2^{\frac{n}{2}})$ time to construct an ordered BDD of size $O(2^{\frac{n}{2}})$ consistent with some variable ordering. However, it is non-trivial to choose a variable ordering producing a small BDD among the $n!$ candidates. We propose a method to convert a linear threshold function to a specific form of BDD based on the boosting approach from the machine learning literature. Our method takes $O(2^n \,\mathrm{poly}(1/\rho))$ time and outputs a BDD of size $O(\frac{n^2}{\rho^4} \ln \frac{1}{\rho})$, where $\rho$ is the margin of some consistent linear threshold function. Our method does not need to search for good variable orderings and produces a smaller expression when the margin of the linear threshold function is large. More precisely, our method is based on our new boosting algorithm, which is of independent interest. We also propose a method to combine the resulting BDDs into a final Boolean expression representing the neural network.
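
    To make the objects concrete, here is a minimal, hypothetical sketch of a BDD node, its evaluation, and the linear threshold function a BDD is meant to represent; it illustrates the data structures only, not the boosting-based construction itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BDDNode:
    # Internal node: test variable `var`, follow `lo` on 0 and `hi` on 1.
    # Terminal node: `value` is set and the other fields are unused.
    var: Optional[int] = None
    lo: Optional["BDDNode"] = None
    hi: Optional["BDDNode"] = None
    value: Optional[bool] = None

def evaluate(node: BDDNode, x) -> bool:
    # Walk the diagram on a 0/1 assignment x until a terminal is reached.
    while node.value is None:
        node = node.hi if x[node.var] else node.lo
    return node.value

def threshold(w, b, x) -> bool:
    # The linear threshold function to be represented: 1 iff w . x >= b.
    return sum(wi * xi for wi, xi in zip(w, x)) >= b

# Tiny example: f(x) = [x0 + x1 >= 1] as a two-internal-node BDD.
one, zero = BDDNode(value=True), BDDNode(value=False)
n1 = BDDNode(var=1, lo=zero, hi=one)
root = BDDNode(var=0, lo=n1, hi=one)
assert all(evaluate(root, x) == threshold([1, 1], 1, x)
           for x in [(0, 0), (0, 1), (1, 0), (1, 1)])
```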

    Decision Diagrams for Solving a Job Scheduling Problem Under Precedence Constraints

    We consider a job scheduling problem under precedence constraints, a classical problem with a single processor and multiple jobs to be done. The goal is, given the processing times of $n$ fixed jobs and precedence constraints over the jobs, to find a permutation of the $n$ jobs that minimizes the total flow time, i.e., the sum of the total wait time and the processing times of all jobs, while satisfying the precedence constraints. The problem is an integer program and is NP-hard in general. We propose a decision diagram, pi-MDD, for solving the scheduling problem exactly. Our diagram is suitable for solving linear optimization over permutations with precedence constraints. We show the effectiveness of our approach in experiments on large-scale artificial scheduling problems.
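
    For intuition about the objective, the brute-force sketch below computes the total flow time of a permutation and filters by precedence constraints on a tiny made-up instance; the paper's pi-MDD replaces this exhaustive enumeration, which is shown only for illustration.

```python
from itertools import permutations

def total_flow_time(order, proc):
    # Total flow time = sum of completion times over all jobs.
    t, total = 0, 0
    for j in order:
        t += proc[j]              # completion time of job j
        total += t
    return total

def respects(order, prec):
    # prec: set of pairs (a, b) meaning job a must precede job b.
    pos = {j: i for i, j in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in prec)

# Made-up three-job instance; job 0 must precede job 2.
proc = {0: 3, 1: 1, 2: 2}
prec = {(0, 2)}
best = min((p for p in permutations(proc) if respects(p, prec)),
           key=lambda p: total_flow_time(p, proc))
print(best, total_flow_time(best, proc))   # (1, 0, 2) with flow time 11
```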