
    From Cutting Planes Algorithms to Compression Schemes and Active Learning

    Cutting-plane methods are well-studied localization (and optimization) algorithms. We show that they provide a natural framework to perform machine learning, and not just to solve the optimization problems posed by machine learning, in addition to their intended optimization use. In particular, they allow one to learn sparse classifiers and provide good compression schemes. Moreover, we show that very little effort is required to turn them into effective active learning methods. This last property provides a generic way to design a whole family of active learning algorithms from existing passive methods. We present numerical simulations testifying to the relevance of cutting-plane methods for passive and active learning tasks. Comment: IJCNN 2015, Jul 2015, Killarney, Ireland. 2015, <http://www.ijcnn.org/>
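    A minimal sketch of the generic idea, not the paper's algorithm: treat the version space of linear classifiers consistent with the labels seen so far as a polytope, take a center of it (here a Chebyshev center computed by a small LP, an assumed choice), query the unlabeled point the center is most uncertain about, and add the answered label as a cut. The pool, oracle, and helper names below are illustrative.

```python
# Cutting-plane-style active learning sketch (illustrative, not the paper's method).
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(X_lab, y_lab, d):
    """Approximate center of {w : |w_j| <= 1, y_i <x_i, w> >= 0} via an LP
    that maximizes the margin r subject to y_i <x_i, w> >= r * ||x_i||."""
    if len(y_lab) == 0:
        return np.zeros(d)
    norms = np.linalg.norm(X_lab, axis=1)
    c = np.zeros(d + 1); c[-1] = -1.0                 # variables (w, r); maximize r
    A_ub = np.hstack([-(y_lab[:, None] * X_lab), norms[:, None]])
    b_ub = np.zeros(len(y_lab))
    bounds = [(-1.0, 1.0)] * d + [(0.0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]

def active_learn(X_pool, label_oracle, n_queries=20, rng=np.random.default_rng(0)):
    d = X_pool.shape[1]
    labeled_idx, X_lab, y_lab = [], np.empty((0, d)), np.empty(0)
    for _ in range(n_queries):
        w = chebyshev_center(X_lab, y_lab, d)
        unlabeled = [i for i in range(len(X_pool)) if i not in labeled_idx]
        if np.allclose(w, 0):                          # no information yet: random query
            i = int(rng.choice(unlabeled))
        else:                                          # query the most uncertain point
            i = min(unlabeled, key=lambda j: abs(X_pool[j] @ w))
        y = label_oracle(X_pool[i])                    # the answer adds a cut y <x_i, w> >= 0
        labeled_idx.append(i)
        X_lab = np.vstack([X_lab, X_pool[i]]); y_lab = np.append(y_lab, y)
    return chebyshev_center(X_lab, y_lab, d), labeled_idx

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)                        # hidden labeler, purely illustrative
    w_hat, asked = active_learn(X, lambda x: np.sign(x @ w_true), n_queries=15)
    acc = np.mean(np.sign(X @ w_hat) == np.sign(X @ w_true))
    print(f"queried {len(asked)} labels, pool accuracy {acc:.2f}")
```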

    Parallel Submodular Function Minimization

    We consider the parallel complexity of submodular function minimization (SFM). We provide a pair of methods which obtain two new query versus depth trade-offs for minimizing a submodular function defined on subsets of $n$ elements that has integer values between $-M$ and $M$. The first method has depth $2$ and query complexity $n^{O(M)}$, and the second method has depth $\widetilde{O}(n^{1/3} M^{2/3})$ and query complexity $O(\mathrm{poly}(n, M))$. Despite a line of work on improved parallel lower bounds for SFM, prior to our work the only known algorithms for parallel SFM either followed from more general methods for sequential SFM or from highly-parallel minimization of convex $\ell_2$-Lipschitz functions. Interestingly, to obtain our second result we provide the first highly-parallel algorithm for minimizing $\ell_\infty$-Lipschitz functions over the hypercube which obtains near-optimal depth for obtaining constant accuracy.
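    To make the depth versus query terminology concrete, here is a small sketch of the batched evaluation-oracle model: each round submits a batch of subsets that are answered in parallel, "depth" counts rounds, and "queries" counts answers. The cut function and the brute-force depth-1 minimizer are illustrative assumptions, not the paper's methods.

```python
# Parallel evaluation-oracle model for SFM (illustrative sketch).
from itertools import combinations

class BatchOracle:
    """Wraps a set function f and counts parallel rounds (depth) and total queries."""
    def __init__(self, f):
        self.f, self.depth, self.queries = f, 0, 0
    def ask(self, batch):
        self.depth += 1                 # one parallel round
        self.queries += len(batch)
        return [self.f(S) for S in batch]

def cut_function(edges):
    """Graph cut f(S) = number of edges with exactly one endpoint in S (submodular)."""
    return lambda S: sum((u in S) != (v in S) for u, v in edges)

def brute_force_sfm(oracle, n):
    """Depth-1 extreme of the trade-off: query every subset of {0,...,n-1} in one round."""
    subsets = [frozenset(c) for k in range(n + 1) for c in combinations(range(n), k)]
    values = oracle.ask(subsets)        # 2^n queries, a single parallel round
    best = min(range(len(subsets)), key=lambda i: values[i])
    return subsets[best], values[best]

if __name__ == "__main__":
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    oracle = BatchOracle(cut_function(edges))
    S, val = brute_force_sfm(oracle, n=4)
    print(f"minimizer {set(S)} value {val}; depth={oracle.depth}, queries={oracle.queries}")
```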

    The ellipsoid method redux

    We reconsider the ellipsoid method for linear inequalities. Using the ellipsoid representation of Burrell and Todd, we show the method can be viewed as coordinate descent on the volume of an enclosing ellipsoid, or on a potential function, or on both. The method can be enhanced by improving the lower bounds generated and by allowing the weights on inequalities to be decreased as well as increased, while still guaranteeing a decrease in volume. Three different initialization schemes are described, and preliminary computational results are given. Despite the improvements discussed, these results are not encouraging. Comment: 29 pages, 4 tables
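    For background on what the paper revisits, a minimal sketch of the classical central-cut ellipsoid method for linear inequalities follows; it does not use the Burrell-Todd representation, the improved lower bounds, or the weight updates discussed in the abstract, and the starting radius is an assumption.

```python
# Classical central-cut ellipsoid method for feasibility of Ax <= b (background sketch).
import numpy as np

def ellipsoid_feasibility(A, b, R=1e3, max_iter=10_000, tol=1e-9):
    """Search for x with A x <= b inside the ball of radius R around the origin (n >= 2)."""
    n = A.shape[1]
    c = np.zeros(n)              # ellipsoid center
    P = (R ** 2) * np.eye(n)     # ellipsoid {x : (x-c)^T P^{-1} (x-c) <= 1}
    for _ in range(max_iter):
        violations = A @ c - b
        i = int(np.argmax(violations))
        if violations[i] <= tol:
            return c             # current center satisfies all inequalities
        a = A[i]                 # violated row provides the cut a^T x <= a^T c
        Pa = P @ a
        denom = np.sqrt(a @ Pa)
        # standard central-cut update: keep the half-ellipsoid containing the feasible set
        c = c - Pa / ((n + 1) * denom)
        P = (n ** 2 / (n ** 2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pa, Pa) / (a @ Pa))
    return None                  # no feasible point found within the budget

if __name__ == "__main__":
    # small 2-D example: x >= 0.5, y >= 0.5, x + y <= 2
    A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
    b = np.array([-0.5, -0.5, 2.0])
    print(ellipsoid_feasibility(A, b))
```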

    Advances in low-memory subgradient optimization

    One of the main goals in the development of non-smooth optimization is to cope with high-dimensional problems through decomposition, duality, or Lagrangian relaxation, which greatly reduces the number of variables at the cost of worsening the differentiability of the objective or constraints. The small or medium dimensionality of the resulting non-smooth problems allows bundle-type algorithms to achieve higher rates of convergence and higher accuracy, which of course comes at the cost of additional memory requirements, typically of the order of n^2, where n is the number of variables of the non-smooth problem. However, with the rapid development of ever more sophisticated models in industry, economics, finance, and elsewhere, such memory requirements are becoming too hard to satisfy. This raised interest in subgradient-based low-memory algorithms, and later developments in this area significantly improved over the early variants while still preserving O(n) memory requirements. To review these developments, this chapter is devoted to black-box subgradient algorithms with minimal requirements for the storage of auxiliary results needed to execute them. To provide historical perspective, the survey starts with the original result of N.Z. Shor, which opened this field with an application to the classical transportation problem. Theoretical complexity bounds for smooth and non-smooth convex and quasi-convex optimization problems are then briefly presented to introduce the relevant fundamentals of non-smooth optimization. Special attention is given to the adaptive step-size policy, which aims to attain the lowest complexity bounds. Unfortunately, the non-differentiability of the objective function in convex optimization substantially worsens the theoretical lower bounds on the rate of convergence of subgradient optimization compared to the smooth case, but there are modern techniques that allow one to solve non-smooth convex optimization problems faster than the lower complexity bounds dictate. Particular attention is given to the Nesterov smoothing technique, the Nesterov universal approach, and the Legendre (saddle-point) representation approach. New results on universal Mirror Prox algorithms constitute the original part of the survey. To demonstrate the application of non-smooth convex optimization algorithms to huge-scale extremal problems, we consider convex optimization problems with non-smooth functional constraints and propose two adaptive Mirror Descent methods. The first method is of primal-dual variety and is proved optimal in terms of lower oracle bounds for the class of Lipschitz-continuous convex objectives and constraints; the advantages of applying this method to a sparse Truss Topology Design problem are discussed in some detail. The second method can be applied to convex and quasi-convex optimization problems and is optimal in the sense of complexity bounds. The concluding part of the survey contains important references characterizing recent developments in non-smooth convex optimization.
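    As a concrete illustration of the O(n)-memory, constraint-aware methods the chapter surveys, here is a sketch of a switching subgradient scheme for min f(x) subject to g(x) <= 0: steps follow a subgradient of the constraint when it is violated beyond a tolerance eps and a subgradient of the objective otherwise, and only the productive iterates are averaged. It is a generic textbook variant with an assumed fixed eps, not the chapter's adaptive Mirror Descent methods.

```python
# Switching subgradient scheme with O(n) memory (generic sketch, Euclidean prox).
import numpy as np

def switching_subgradient(f, g, subgrad_f, subgrad_g, x0, radius, eps, n_iter=5000):
    x = np.array(x0, dtype=float)
    x_sum, n_productive = np.zeros_like(x), 0
    for _ in range(n_iter):
        if g(x) > eps:                        # non-productive step: reduce infeasibility
            h = subgrad_g(x)
        else:                                 # productive step: reduce the objective
            h = subgrad_f(x)
            x_sum += x
            n_productive += 1
        step = eps / max(np.dot(h, h), 1e-12) # step size used by the switching scheme
        x = x - step * h
        norm = np.linalg.norm(x)              # Euclidean "prox": project back onto the ball
        if norm > radius:
            x *= radius / norm
    return x_sum / max(n_productive, 1)       # average of the productive iterates

if __name__ == "__main__":
    # toy problem: minimize ||x - c||_1 subject to ||x||_1 - 1 <= 0 (all choices illustrative)
    c = np.array([2.0, -1.0, 0.5])
    f = lambda x: np.sum(np.abs(x - c))
    g = lambda x: np.sum(np.abs(x)) - 1.0
    subgrad_f = lambda x: np.sign(x - c)
    subgrad_g = lambda x: np.sign(x)
    x_hat = switching_subgradient(f, g, subgrad_f, subgrad_g,
                                  x0=np.zeros(3), radius=10.0, eps=1e-2)
    print(x_hat, f(x_hat), g(x_hat))
```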

    Convex Minimization with Integer Minima in $\widetilde{O}(n^4)$ Time

    Given a convex function $f$ on $\mathbb{R}^n$ with an integer minimizer, we show how to find an exact minimizer of $f$ using $O(n^2 \log n)$ calls to a separation oracle and $O(n^4 \log n)$ time. The previous best polynomial time algorithm for this problem given in [Jiang, SODA 2021, JACM 2022] achieves $O(n^2 \log\log n / \log n)$ oracle complexity. However, the overall runtime of Jiang's algorithm is at least $\widetilde{\Omega}(n^8)$, due to expensive sub-routines such as the Lenstra-Lenstra-Lov\'asz (LLL) algorithm [Lenstra, Lenstra, Lov\'asz, Math. Ann. 1982] and the random walk based cutting plane method [Bertsimas, Vempala, JACM 2004]. Our significant speedup is obtained by a nontrivial combination of a faster version of the LLL algorithm due to [Neumaier, Stehl\'e, ISSAC 2016] that gives similar guarantees, the volumetric center cutting plane method (CPM) by [Vaidya, FOCS 1989], and its fast implementation given in [Jiang, Lee, Song, Wong, STOC 2020]. For the special case of submodular function minimization (SFM), our result implies a strongly polynomial time algorithm for this problem using $O(n^3 \log n)$ calls to an evaluation oracle and $O(n^4 \log n)$ additional arithmetic operations. Both the oracle complexity and the number of arithmetic operations of our more general algorithm are better than the previous best-known runtimes for this specific problem given in [Lee, Sidford, Wong, FOCS 2015] and [Dadush, V\'egh, Zambelli, SODA 2018, MOR 2021]. Comment: SODA 202
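    To illustrate the separation/subgradient-oracle model the complexity bounds are stated in, here is a generic Kelley-style cutting-plane loop over a box; it is not Vaidya's volumetric-center method, the LLL-based rounding, or the paper's algorithm, and the test function and box bounds are assumptions.

```python
# Kelley-style cutting-plane loop with a subgradient oracle (illustrative sketch).
import numpy as np
from scipy.optimize import linprog

def kelley_cutting_plane(f, subgrad, lo, hi, n_iter=50):
    """Minimize convex f over the box [lo, hi]^n using cuts f(x) >= f(x_k) + <g_k, x - x_k>."""
    n = len(lo)
    x = (np.asarray(lo) + np.asarray(hi)) / 2.0     # start at the box center
    cuts_A, cuts_b = [], []
    best_x, best_val = x, f(x)
    for _ in range(n_iter):
        fx, g = f(x), subgrad(x)
        if fx < best_val:
            best_x, best_val = x, fx
        # cut in LP form: g.x - t <= g.x_k - f(x_k), for LP variables z = (x, t)
        cuts_A.append(np.append(g, -1.0))
        cuts_b.append(g @ x - fx)
        c = np.zeros(n + 1); c[-1] = 1.0            # minimize t, the piecewise-linear model
        bounds = [(l, h) for l, h in zip(lo, hi)] + [(None, None)]
        res = linprog(c, A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                      bounds=bounds, method="highs")
        x, lower_bound = res.x[:n], res.x[-1]
        if best_val - lower_bound < 1e-8:           # model lower bound meets best value found
            break
    return best_x, best_val

if __name__ == "__main__":
    # toy convex function with an integer minimizer at (1, -2)
    target = np.array([1.0, -2.0])
    f = lambda x: np.max(np.abs(x - target))
    def subgrad(x):
        i = int(np.argmax(np.abs(x - target)))
        g = np.zeros(len(x)); g[i] = np.sign(x[i] - target[i])
        return g
    x_hat, val = kelley_cutting_plane(f, subgrad, lo=[-5, -5], hi=[5, 5])
    print(np.round(x_hat, 3), val)
```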

    Solutions of systems of nonlinear equations Final report

    Method and computer program for solving an arbitrary simultaneous system of nonlinear algebraic and transcendental equations
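    The entry does not say which method the report's program uses; as a generic illustration of solving a simultaneous system of nonlinear algebraic and transcendental equations F(x) = 0, here is a damped Newton iteration with a finite-difference Jacobian. All names and tolerances below are assumptions.

```python
# Damped Newton iteration for F(x) = 0 (generic illustration, not the report's method).
import numpy as np

def numerical_jacobian(F, x, h=1e-7):
    """Forward-difference approximation of the Jacobian of F at x."""
    Fx = F(x)
    J = np.empty((len(Fx), len(x)))
    for j in range(len(x)):
        xp = x.copy(); xp[j] += h
        J[:, j] = (F(xp) - Fx) / h
    return J

def newton_solve(F, x0, tol=1e-10, max_iter=100):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x
        dx = np.linalg.solve(numerical_jacobian(F, x), -Fx)   # Newton step: J dx = -F
        t = 1.0
        while t > 1e-4 and np.linalg.norm(F(x + t * dx)) >= np.linalg.norm(Fx):
            t *= 0.5                                          # simple damping / backtracking
        x = x + t * dx
    return x

if __name__ == "__main__":
    # mixed algebraic/transcendental system: x^2 + y^2 = 4, e^x + y = 1
    F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, np.exp(v[0]) + v[1] - 1.0])
    print(newton_solve(F, x0=[1.0, -1.0]))
```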