
    Efficient protocols for distributed classification and optimization

    A recent paper [1] proposes a general model for distributed learning that bounds the communication required for learning classifiers with $\epsilon$ error on linearly separable data adversarially distributed across nodes. In this work, we develop key improvements and extensions to this basic model. Our first result is a two-party multiplicative-weight-update based protocol that uses $O(d^2 \log 1/\epsilon)$ words of communication to classify distributed data in arbitrary dimension $d$, $\epsilon$-optimally. This extends to classification over $k$ nodes with $O(k d^2 \log 1/\epsilon)$ words of communication. Our proposed protocol is simple to implement and considerably more efficient than the baselines we compare against, as demonstrated by our empirical results. In addition, we show how to solve fixed-dimensional and high-dimensional linear programming with small communication in a distributed setting where constraints may be distributed across nodes. Our techniques make use of a novel connection to multipass streaming, as well as adapting the multiplicative-weight-update framework more generally to a distributed setting.
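    The core multiplicative-weight-update idea behind such protocols can be sketched in a few lines. The following is a toy, single-machine illustration (not the paper's communication protocol): weights are kept on data points, each round the candidate hyperplane with the best weighted accuracy is chosen, and misclassified points have their weight boosted. The function and parameter names are hypothetical.

    ```python
    def mwu_separator(points, labels, candidates, eps=0.1, rounds=20):
        """Toy multiplicative-weights sketch (not the paper's protocol).

        points: list of d-dimensional tuples; labels: +1/-1;
        candidates: a fixed pool of hypothetical candidate hyperplanes.
        """
        def sign(h, x):
            return 1 if sum(a * b for a, b in zip(h, x)) >= 0 else -1

        w = [1.0] * len(points)   # one weight per data point
        votes = []                # hyperplanes chosen across rounds
        for _ in range(rounds):
            # weak step: candidate with highest weighted accuracy
            best = max(candidates, key=lambda h: sum(
                wi for wi, x, y in zip(w, points, labels) if sign(h, x) == y))
            votes.append(best)
            # multiplicative update: upweight misclassified points
            for i, (x, y) in enumerate(zip(points, labels)):
                if sign(best, x) != y:
                    w[i] *= (1 + eps)

        # final classifier: majority vote over the chosen hyperplanes
        def classify(x):
            return 1 if sum(sign(h, x) for h in votes) >= 0 else -1
        return classify
    ```

    The distributed versions in the abstract communicate only summaries of these weights and candidates between nodes, which is where the $O(d^2 \log 1/\epsilon)$ word bound comes from.
    
    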

    Algorithms to Approximate Column-Sparse Packing Problems

    Column-sparse packing problems arise in several contexts in both deterministic and stochastic discrete optimization. We present two unifying ideas, (non-uniform) attenuation and multiple-chance algorithms, to obtain improved approximation algorithms for some well-known families of such problems. As three main examples, we attain the integrality gap, up to lower-order terms, for known LP relaxations for $k$-column-sparse packing integer programs (Bansal et al., Theory of Computing, 2012) and stochastic $k$-set packing (Bansal et al., Algorithmica, 2012), and go "half the remaining distance" to optimal for a major integrality-gap conjecture of Füredi, Kahn and Seymour on hypergraph matching (Combinatorica, 1993).

    Comment: Extended abstract appeared in SODA 2018. Full version in ACM Transactions on Algorithms.
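    To make the attenuation idea concrete, here is a toy sketch of classic attenuated randomized rounding for $k$-set packing (a standard technique in this area, not the paper's algorithm): given a fractional LP solution $x$, sample each set with its value scaled down by $k$, then greedily keep only sampled sets that are disjoint from those already kept. The function name and the assumption that a feasible fractional $x$ is supplied are both hypothetical.

    ```python
    import random

    def attenuated_rounding(sets, x, k, seed=0):
        """Toy attenuated rounding for k-set packing (illustrative only).

        sets: list of element sets; x: fractional LP values in [0, 1];
        k: attenuation factor (each set is sampled with prob x[j] / k).
        """
        rng = random.Random(seed)
        # attenuation step: sample set j independently with prob x[j] / k
        sampled = [j for j in range(len(sets)) if rng.random() < x[j] / k]
        # alteration step: greedily keep sampled sets that are disjoint
        # from the elements already covered by kept sets
        used, picked = set(), []
        for j in sampled:
            if all(e not in used for e in sets[j]):
                picked.append(j)
                used.update(sets[j])
        return picked
    ```

    Scaling the sampling probability down by $k$ (the attenuation) is what keeps the expected number of conflicts per element small enough for the alteration step to discard only a constant fraction of the sampled value.
    
    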

    On the discrepancy of random low degree set systems

    Motivated by the celebrated Beck-Fiala conjecture, we consider the random setting where there are $n$ elements and $m$ sets and each element lies in $t$ randomly chosen sets. In this setting, Ezra and Lovett showed an $O((t \log t)^{1/2})$ discrepancy bound in the regime when $n \leq m$ and an $O(1)$ bound when $n \gg m^t$. In this paper, we give a tight $O(\sqrt{t})$ bound for the entire range of $n$ and $m$, under the mild assumption that $t = \Omega((\log \log m)^2)$. The result is based on two steps. First, applying the partial coloring method to the case when $n = m \log^{O(1)} m$ and using the properties of the random set system, we show that the overall discrepancy incurred is at most $O(\sqrt{t})$. Second, we reduce the general case to that of $n \leq m \log^{O(1)} m$ using LP duality and a careful counting argument.
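    For readers new to discrepancy, the quantity being bounded is easy to state in code. The sketch below defines the discrepancy of a $\pm 1$ coloring and a naive random-search baseline (not the partial-coloring method of the paper); all names are hypothetical.

    ```python
    import random

    def discrepancy(sets, coloring):
        """Discrepancy of a +/-1 coloring: the maximum absolute
        sum of colors over any set in the system."""
        return max(abs(sum(coloring[e] for e in s)) for s in sets)

    def random_coloring_search(n, sets, trials=2000, seed=0):
        """Naive baseline: try random +/-1 colorings of n elements and
        return the best discrepancy found. The partial coloring method
        in the abstract does far better than this brute-force search."""
        rng = random.Random(seed)
        best = None
        for _ in range(trials):
            c = [rng.choice((-1, 1)) for _ in range(n)]
            d = discrepancy(sets, c)
            best = d if best is None else min(best, d)
        return best
    ```

    The theorem in the abstract says that for random $t$-sparse systems a coloring of discrepancy $O(\sqrt{t})$ always exists, far below the roughly $\sqrt{t \log}$-scale deviation a uniformly random coloring gives on a single set.
    
    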