
    Nearly Linear-Work Algorithms for Mixed Packing/Covering and Facility-Location Linear Programs

    We describe the first nearly linear-time approximation algorithms for explicitly given mixed packing/covering linear programs, and for (non-metric) fractional facility location. We also describe the first parallel algorithms requiring only near-linear total work and finishing in polylog time. The algorithms compute (1+ϵ)-approximate solutions in time (and work) O*(N/ϵ^2), where N is the number of non-zeros in the constraint matrix. For facility location, N is the number of eligible client/facility pairs.
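
    To make the facility-location problem concrete, the following sketch states the fractional (non-metric) facility-location LP and solves a tiny instance with a general-purpose solver. It is only an illustrative baseline with an assumed variable layout, using scipy.optimize.linprog; the paper's point is a specialized (1+ϵ)-approximation that avoids general LP solvers and runs in nearly linear time in the number of eligible client/facility pairs.

        import numpy as np
        from scipy.optimize import linprog

        def fractional_facility_location(f, c):
            """Exact LP baseline (illustrative, not the paper's algorithm).

            f[i] is the opening cost of facility i; c[i][j] is the cost of
            assigning client j to facility i (all pairs assumed eligible).

                minimize    sum_i f_i*y_i + sum_{i,j} c_ij*x_ij
                subject to  sum_i x_ij >= 1   for every client j
                            x_ij <= y_i       for every pair (i, j)
                            x, y >= 0
            """
            F, C = len(f), len(c[0])
            ny, nx = F, F * C
            obj = np.concatenate([np.asarray(f, float), np.asarray(c, float).ravel()])
            rows, rhs = [], []
            # Covering constraints:  -sum_i x_ij <= -1  for each client j.
            for j in range(C):
                row = np.zeros(ny + nx)
                for i in range(F):
                    row[ny + i * C + j] = -1.0
                rows.append(row)
                rhs.append(-1.0)
            # Linking constraints:  x_ij - y_i <= 0  for each pair (i, j).
            for i in range(F):
                for j in range(C):
                    row = np.zeros(ny + nx)
                    row[ny + i * C + j] = 1.0
                    row[i] = -1.0
                    rows.append(row)
                    rhs.append(0.0)
            return linprog(obj, A_ub=np.array(rows), b_ub=np.array(rhs),
                           bounds=[(0, None)] * (ny + nx), method="highs")

        # Example: 2 facilities, 3 clients.
        # res = fractional_facility_location([3.0, 2.0], [[1, 2, 4], [3, 1, 1]])
        # print(res.fun, res.x)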

    Sequential and Parallel Algorithms for Mixed Packing and Covering

    Mixed packing and covering problems are problems that can be formulated as linear programs using only non-negative coefficients. Examples include multicommodity network flow, the Held-Karp lower bound on TSP, fractional relaxations of set cover, bin-packing, knapsack, scheduling problems, minimum-weight triangulation, etc. This paper gives approximation algorithms for this general class of problems. The sequential algorithm is a simple greedy algorithm that can be implemented to find an epsilon-approximate solution in O(epsilon^-2 log m) linear-time iterations. The parallel algorithm does comparable work but finishes in polylogarithmic time. The results generalize previous work on pure packing and covering (the special case when the constraints are all "less-than" or all "greater-than") by Michael Luby and Noam Nisan (1993) and by Naveen Garg and Jochen Konemann (1998).
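
    As a rough illustration of the kind of greedy, potential-driven method the abstract describes, the sketch below (an assumption-laden simplification, not the paper's exact algorithm or guarantee) repeatedly increments the coordinate whose covering gain per unit of packing cost is best under exponential penalty weights; the step size and stopping rule are placeholders.

        import math

        def mixed_packing_covering(P, C, eps, max_iters=100_000):
            """Greedy sketch: seek x >= 0 with C x >= 1 while keeping P x small.

            P and C are lists of rows with non-negative coefficients
            (packing rows P x <= 1, covering rows C x >= 1). Illustrative only;
            no overflow guard or formal approximation guarantee here.
            """
            n = len(P[0])
            x = [0.0] * n
            Px = [0.0] * len(P)   # current values of the packing constraints
            Cx = [0.0] * len(C)   # current values of the covering constraints
            step = eps            # placeholder increment size
            for _ in range(max_iters):
                if min(Cx) >= 1.0:                     # every covering row satisfied
                    break
                wp = [math.exp(v / eps) for v in Px]   # penalize large packing rows
                wc = [math.exp(-v / eps) for v in Cx]  # reward unmet covering rows

                def ratio(j):                          # covering gain per packing cost
                    gain = sum(wc[i] * C[i][j] for i in range(len(C)))
                    cost = sum(wp[i] * P[i][j] for i in range(len(P))) + 1e-12
                    return gain / cost

                j = max(range(n), key=ratio)
                x[j] += step
                for i in range(len(P)):
                    Px[i] += step * P[i][j]
                for i in range(len(C)):
                    Cx[i] += step * C[i][j]
            return x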

    On the Number of Iterations for Dantzig-Wolfe Optimization and Packing-Covering Approximation Algorithms

    We give a lower bound on the iteration complexity of a natural class of Lagrangean-relaxation algorithms for approximately solving packing/covering linear programs. We show that, given an input with m random 0/1-constraints on n variables, with high probability, any such algorithm requires Ω(ρ log(m)/ϵ^2) iterations to compute a (1+ϵ)-approximate solution, where ρ is the width of the input. The bound is tight for a range of the parameters (m, n, ρ, ϵ). The algorithms in the class include Dantzig-Wolfe decomposition, Benders' decomposition, Lagrangean relaxation as developed by Held and Karp [1971] for lower-bounding TSP, and many others (e.g. by Plotkin, Shmoys, and Tardos [1988] and by Grigoriadis and Khachiyan [1996]). To prove the bound, we use a discrepancy argument to show an analogous lower bound on the support size of (1+ϵ)-approximate mixed strategies for random two-player zero-sum 0/1-matrix games.
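
    For context on what one "iteration" means in this class, here is a hedged multiplicative-weights sketch for a 0/1 zero-sum matrix game: each round queries a best-response column oracle against the current row weights (the kind of Dantzig-Wolfe / Lagrangean-relaxation step being counted), and the number of rounds, and hence the support of the returned mixed strategy, scales as log(m)/ϵ^2. The update rule and constants are illustrative assumptions, not the paper's construction.

        import math

        def mwu_column_strategy(A, eps, T=None):
            """Approximate mixed strategy for the column (minimizing) player.

            A is a 0/1 payoff matrix (payoff to the row player). Illustrative
            constants; the returned strategy has support at most T ~ log(m)/eps^2.
            """
            m, n = len(A), len(A[0])
            if T is None:
                T = max(1, int(4 * math.log(m) / (eps * eps)))
            w = [1.0] * m      # weights over the rows (constraints)
            plays = [0] * n    # how often each column is best-responded
            for _ in range(T):
                # Oracle call: column minimizing the weighted payoff to the rows.
                j = min(range(n), key=lambda col: sum(w[i] * A[i][col] for i in range(m)))
                plays[j] += 1
                # Rows that scored well against column j gain weight, so later
                # best responses hedge against them.
                for i in range(m):
                    w[i] *= (1.0 + eps) ** A[i][j]
            total = sum(plays)
            return [p / total for p in plays]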

    On-Line End-to-End Congestion Control

    Congestion control in the current Internet is accomplished mainly by TCP/IP. To understand the macroscopic network behavior that results from TCP/IP and similar end-to-end protocols, one main analytic technique is to show that the protocol maximizes some global objective function of the network traffic. Here we analyze a particular end-to-end, MIMD (multiplicative-increase, multiplicative-decrease) protocol. We show that if all users of the network use the protocol, and all connections last for at least logarithmically many rounds, then the total weighted throughput (value of all packets received) is near the maximum possible. Our analysis includes round-trip times and, in contrast to most previous analyses, gives explicit convergence rates, allows connections to start and stop, and allows capacities to change. Comment: Proceedings IEEE Symp. Foundations of Computer Science, 200
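
    A toy illustration of an MIMD rate update (not the paper's protocol or its analysis): the sending rate is multiplied up by a fixed factor each round without a congestion signal and multiplied down on a signal; the factors alpha and beta are arbitrary placeholders.

        def mimd_rates(initial_rate, congestion_signals, alpha=0.05, beta=0.5):
            """Return the sending rate after each round of a toy MIMD update."""
            rate = initial_rate
            history = []
            for congested in congestion_signals:
                if congested:
                    rate *= (1.0 - beta)   # multiplicative decrease on congestion
                else:
                    rate *= (1.0 + alpha)  # multiplicative increase otherwise
                history.append(rate)
            return history

        # Example: one congestion signal in round 4.
        # print(mimd_rates(1.0, [False, False, False, True, False]))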

    Fast and Deterministic Approximations for k-Cut

    In an undirected graph, a k-cut is a set of edges whose removal breaks the graph into at least k connected components. The minimum weight k-cut can be computed in n^O(k) time, but when k is treated as part of the input, computing the minimum weight k-cut is NP-Hard [Goldschmidt and Hochbaum, 1994]. For poly(m,n,k)-time algorithms, the best possible approximation factor is essentially 2 under the small set expansion hypothesis [Manurangsi, 2017]. Saran and Vazirani [1995] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed via O(k) minimum cuts, which implies an O~(km) randomized running time via the nearly linear time randomized min-cut algorithm of Karger [2000]. Nagamochi and Kamidoi [2007] showed that a (2 - 2/k)-approximately minimum weight k-cut can be computed deterministically in O(mn + n^2 log n) time. These results prompt two basic questions. The first concerns the role of randomization: is there a deterministic algorithm for 2-approximate k-cuts matching the randomized running time of O~(km)? The second qualitatively compares the minimum cut to 2-approximate minimum k-cuts: can 2-approximate k-cuts be computed as fast as the minimum cut, in O~(m) randomized time? We give a deterministic approximation algorithm that computes (2 + eps)-approximate minimum k-cuts in O(m log^3 n / eps^2) time, via a (1 + eps)-approximation for an LP relaxation of k-cut.
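
    As a reference point for the Saran-Vazirani baseline cited above, here is a hedged sketch of the greedy "split by minimum cuts" scheme: repeatedly find the cheapest minimum cut among the current components and remove it, until there are at least k components. It uses networkx's Stoer-Wagner minimum cut, performs O(k) global min-cut computations, and is not the paper's faster (2 + eps)-approximation.

        import networkx as nx

        def greedy_k_cut(G, k):
            """Return a list of edges whose removal leaves >= k components.

            Greedy split sketch: at each step, split one existing component with
            its minimum cut (edge attribute 'weight', defaulting to 1), always
            choosing the cheapest available split.
            """
            G = G.copy()
            cut_edges = []
            while nx.number_connected_components(G) < k:
                best = None   # (cut_value, side_a, side_b)
                for comp in nx.connected_components(G):
                    if len(comp) < 2:
                        continue
                    value, (side_a, side_b) = nx.stoer_wagner(G.subgraph(comp), weight="weight")
                    if best is None or value < best[0]:
                        best = (value, side_a, side_b)
                if best is None:
                    break      # nothing left to split
                _, side_a, side_b = best
                other = set(side_b)
                crossing = [(u, v) for u, v in G.edges(side_a) if v in other]
                G.remove_edges_from(crossing)
                cut_edges.extend(crossing)
            return cut_edges

        # Example on an unweighted graph (each edge counts as 1):
        # G = nx.karate_club_graph()
        # print(greedy_k_cut(G, 4))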