1,640 research outputs found
Efficient protocols for distributed classification and optimization
pre-print: A recent paper [1] proposes a general model for distributed learning that bounds the communication required for learning classifiers with ε error on linearly separable data adversarially distributed across nodes. In this work, we develop key improvements and extensions to this basic model. Our first result is a two-party multiplicative-weight-update-based protocol that uses O(d^2 log(1/ε)) words of communication to classify distributed data in arbitrary dimension d, ε-optimally. This extends to classification over k nodes with O(kd^2 log(1/ε)) words of communication. Our proposed protocol is simple to implement and is considerably more efficient than the baselines we compare against, as demonstrated by our empirical results. In addition, we show how to solve fixed-dimensional and high-dimensional linear programming with small communication in a distributed setting where constraints may be distributed across nodes. Our techniques make use of a novel connection to multipass streaming, as well as adapting the multiplicative-weight-update framework more generally to the distributed setting.
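To give a feel for the multiplicative-weight-update idea the protocol builds on, here is a minimal single-machine Hedge-style sketch in Python. It is not the paper's two-party protocol; the experts, labels, and learning rate eta are all hypothetical.

```python
import math

def mwu_predict(expert_preds, labels, eta=0.5):
    """Multiplicative-weights (Hedge) prediction over a fixed expert set.

    expert_preds: one list of +1/-1 predictions per expert
    labels:       the true +1/-1 labels, revealed round by round
    Returns the algorithm's predictions and the final expert weights.
    """
    n_experts = len(expert_preds)
    w = [1.0] * n_experts
    out = []
    for t, y in enumerate(labels):
        # predict by weighted-majority vote of the experts
        score = sum(w[i] * expert_preds[i][t] for i in range(n_experts))
        out.append(1 if score >= 0 else -1)
        # multiplicatively penalize every expert that erred this round
        for i in range(n_experts):
            if expert_preds[i][t] != y:
                w[i] *= math.exp(-eta)
    return out, w
```

The key property is that experts who err often lose weight exponentially fast, so the weighted vote tracks the best expert; the distributed protocol in the paper exchanges such reweighting information instead of raw data.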
Algorithms to Approximate Column-Sparse Packing Problems
Column-sparse packing problems arise in several contexts in both
deterministic and stochastic discrete optimization. We present two unifying
ideas, (non-uniform) attenuation and multiple-chance algorithms, to obtain
improved approximation algorithms for some well-known families of such
problems. As three main examples, we attain the integrality gap, up to
lower-order terms, for known LP relaxations for k-column sparse packing integer
programs (Bansal et al., Theory of Computing, 2012) and stochastic k-set
packing (Bansal et al., Algorithmica, 2012), and go "half the remaining
distance" to optimal for a major integrality-gap conjecture of Füredi, Kahn and
Seymour on hypergraph matching (Combinatorica, 1993).
Comment: Extended abstract appeared in SODA 2018. Full version in ACM
Transactions on Algorithms.
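As a rough, hypothetical sketch of the attenuation idea (scale a fractional solution down before randomized rounding, then alter to restore feasibility), the snippet below rounds a given fractional solution x of a packing program Ax <= b. It is an illustration of the general technique, not the paper's algorithm, and the attenuation factor alpha and instance are made up.

```python
import random

def attenuated_rounding(x, A, b, alpha=0.5, seed=0):
    """Attenuation-based randomized rounding for a 0/1 packing program.

    x:     fractional solution (e.g., from an LP relaxation, supplied externally)
    A, b:  packing constraints, A[i][j] >= 0 and sum_j A[i][j]*x[j] <= b[i]
    Keep item j with probability alpha * x[j] (the attenuation step),
    then greedily drop items from any constraint that ended up violated.
    """
    rng = random.Random(seed)
    S = [j for j in range(len(x)) if rng.random() < alpha * x[j]]
    for i, row in enumerate(A):
        # alteration step: restore feasibility of constraint i
        while sum(row[j] for j in S) > b[i]:
            # drop the item contributing most to the violated constraint
            S.remove(max(S, key=lambda j: row[j]))
    return sorted(S)
```

Attenuating by alpha < 1 leaves slack that makes the alteration step cheap in expectation; the paper's contribution is choosing the attenuation non-uniformly (and allowing multiple chances) to match the LP integrality gap.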
On the discrepancy of random low degree set systems
Motivated by the celebrated Beck-Fiala conjecture, we consider the random
setting where there are n elements and m sets and each element lies in t
randomly chosen sets. In this setting, Ezra and Lovett showed an O(√(t log t))
discrepancy bound in the regime when n ≤ m and an O(1) bound when n ≫ m^t.
In this paper, we give a tight O(√t) bound for the entire range of n
and m, under a mild assumption that t = Ω((log log m)^2). The
result is based on two steps. First, applying the partial coloring method to
the case when n ≤ m log^{O(1)} m and using the properties of the random set
system we show that the overall discrepancy incurred is at most O(√t).
Second, we reduce the general case to that of n ≤ m log^{O(1)} m using LP
duality and a careful counting argument.
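For intuition about the quantities involved, the discrepancy of a small random t-sparse set system can be checked by brute force in a few lines of Python. The instance sizes below are made up and far too small to exhibit the asymptotics; this only illustrates the definitions.

```python
import itertools
import random

def random_sparse_system(n, m, t, seed=0):
    """Each of the n elements joins t of the m sets, chosen uniformly
    at random (the random regime discussed above)."""
    rng = random.Random(seed)
    sets = [[] for _ in range(m)]
    for e in range(n):
        for s in rng.sample(range(m), t):
            sets[s].append(e)
    return sets

def discrepancy(sets, n):
    """Brute-force discrepancy: minimize over all +/-1 colorings chi
    the maximum imbalance |sum_{e in S} chi(e)| over the sets S."""
    best = float('inf')
    for chi in itertools.product((-1, 1), repeat=n):
        imbalance = max(abs(sum(chi[e] for e in S)) for S in sets)
        best = min(best, imbalance)
    return best
```

Since every element lies in t sets, the Beck-Fiala theorem already guarantees a discrepancy of at most 2t - 1 for any such instance; the result above sharpens this to O(√t) in the random setting.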