
    On a generalization of iterated and randomized rounding

    We give a general method for rounding linear programs that combines the commonly used iterated rounding and randomized rounding techniques. In particular, we show that whenever iterated rounding can be applied to a problem with some slack, there is a randomized procedure that returns an integral solution that satisfies the guarantees of iterated rounding and also has concentration properties. We use this to give new results for several classic problems where iterated rounding has been useful.
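    As background for how the randomized half of this combination behaves, the minimal sketch below (my own illustration, not the paper's method) rounds a fractional LP solution independently coordinate by coordinate; it is this independence that yields the concentration properties the abstract refers to, while iterated rounding alone gives no such guarantee. The names and the sample input are hypothetical.

```python
import random

def randomized_round(x_frac):
    """Round a fractional LP solution (entries in [0, 1]) to a 0/1 vector.

    Each coordinate is set to 1 independently with probability equal to its
    fractional value, so every linear expression keeps its value in expectation
    and sums of rounded variables concentrate around their means.
    """
    return [1 if random.random() < xi else 0 for xi in x_frac]

# Hypothetical fractional solution to some covering LP.
x_frac = [0.7, 0.3, 0.5, 1.0, 0.0]
print(randomized_round(x_frac))
```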

    Improved Analysis of Deterministic Load-Balancing Schemes

    We consider the problem of deterministic load balancing of tokens in the discrete model. A set of $n$ processors is connected into a $d$-regular undirected network. In every time step, each processor exchanges some of its tokens with each of its neighbors in the network. The goal is to minimize the discrepancy between the number of tokens on the most-loaded and the least-loaded processor as quickly as possible. Rabani et al. (1998) present a general technique for the analysis of a wide class of discrete load balancing algorithms. Their approach is to characterize the deviation between the actual loads of a discrete balancing algorithm and the distribution generated by a related Markov chain. The Markov chain can also be regarded as the underlying model of a continuous diffusion algorithm. Rabani et al. showed that after time $T = O(\log(Kn)/\mu)$, any algorithm of their class achieves a discrepancy of $O(d\log n/\mu)$, where $\mu$ is the spectral gap of the transition matrix of the graph and $K$ is the initial load discrepancy in the system. In this work we identify some natural additional conditions on deterministic balancing algorithms, resulting in a class of algorithms that reaches a smaller discrepancy. This class contains well-known algorithms, e.g., the Rotor-Router. Specifically, we introduce the notion of cumulatively fair load-balancing algorithms, where in any interval of consecutive time steps the total number of tokens sent out over an edge by a node is the same (up to constants) for all adjacent edges. We prove that algorithms which are cumulatively fair, and where every node retains a sufficient part of its load in each step, achieve a discrepancy of $O(\min\{d\sqrt{\log n/\mu}, d\sqrt{n}\})$ in time $O(T)$. We also show that in general neither of these assumptions may be omitted without increasing the discrepancy. We then show by a combinatorial potential reduction argument that any cumulatively fair scheme satisfying some additional assumptions achieves a discrepancy of $O(d)$ almost as quickly as the continuous diffusion process. This positive result applies to some of the simplest and most natural discrete load balancing schemes.
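    The Rotor-Router mentioned above is the canonical example of a cumulatively fair scheme: each node forwards its outgoing tokens to its neighbors in round-robin order, so over any time window every incident edge carries nearly the same number of tokens. The sketch below is only a minimal illustration of that idea under assumed simplifications (synchronous rounds, each node keeping roughly half of its load); the function and variable names are hypothetical and this is not the exact class of algorithms analyzed in the paper.

```python
def rotor_router_round(loads, adj, rotors, keep=0.5):
    """One synchronous round of a rotor-router style balancing scheme.

    Each node keeps roughly a `keep` fraction of its tokens and sends the rest
    one by one to its neighbors in round-robin order, advancing its rotor
    pointer after every token, which makes the scheme cumulatively fair.
    """
    n = len(loads)
    incoming = [0] * n
    kept = [int(keep * load) for load in loads]
    for v in range(n):
        for _ in range(loads[v] - kept[v]):           # tokens leaving node v
            u = adj[v][rotors[v]]                     # next neighbor under the rotor
            incoming[u] += 1
            rotors[v] = (rotors[v] + 1) % len(adj[v])
    return [kept[v] + incoming[v] for v in range(n)]


# Toy example on a 4-cycle (a 2-regular graph) with hypothetical initial loads.
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
loads, rotors = [10, 0, 2, 4], [0, 0, 0, 0]
for _ in range(10):
    loads = rotor_router_round(loads, adj, rotors)
print(loads)   # loads end up within an additive constant of the average
```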

    The parallel approximability of a subclass of quadratic programming

    In this paper we deal with the parallel approximability of a special class of Quadratic Programming (QP), called Smooth Positive Quadratic Programming. This subclass of QP is obtained by imposing restrictions on the coefficients of the QP instance. The Smoothness condition restricts the magnitudes of the coefficients while the positiveness requires that all the coefficients be non-negative. Interestingly, even with these restrictions several combinatorial problems can be modeled by Smooth QP. We show NC Approximation Schemes for the instances of Smooth Positive QP. This is done by reducing the instance of QP to an instance of Positive Linear Programming, finding in NC an approximate fractional solution to the obtained program, and then rounding the fractional solution to an integer approximate solution for the original problem. Then we show how to extend the result for positive instances of bounded degree to Smooth Integer Programming problems. Finally, we formulate several important combinatorial problems as Positive Quadratic Programs (or Positive Integer Programs) in packing/covering form and show that the techniques presented can be used to obtain NC Approximation Schemes for "dense" instances of such problems.
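    To make the formulation step concrete, here is a small illustrative sketch of the general shape of a positive quadratic program: densest-$k$-subgraph maximizes the positive quadratic form $x^T A x$ over 0/1 vectors with exactly $k$ ones, where $A$ is a non-negative adjacency matrix. This toy instance is my own choice, not taken from the paper, and the brute-force search below only demonstrates the objective; it is not the NC approximation scheme.

```python
import itertools

def qp_objective(A, x):
    """Evaluate the positive quadratic form x^T A x for a 0/1 vector x."""
    n = len(x)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Densest-k-subgraph as a positive QP on a hypothetical 4-node graph:
# maximize x^T A x subject to sum(x) == k, x in {0, 1}^n.
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]
k = 2
best = max((x for x in itertools.product([0, 1], repeat=len(A)) if sum(x) == k),
           key=lambda x: qp_objective(A, x))
print(best, qp_objective(A, best))   # the densest pair of vertices
```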

    Tight Bounds for Randomized Load Balancing on Arbitrary Network Topologies

    We consider the problem of balancing load items (tokens) in networks. Starting with an arbitrary load distribution, we allow nodes to exchange tokens with their neighbors in each round. The goal is to achieve a distribution where all nodes have nearly the same number of tokens. For the continuous case where tokens are arbitrarily divisible, most load balancing schemes correspond to Markov chains, whose convergence is fairly well understood in terms of their spectral gap. However, in many applications, load items cannot be divided arbitrarily, and we need to deal with the discrete case where the load is composed of indivisible tokens. This discretization entails a non-linear behavior due to its rounding errors, which makes the analysis much harder than in the continuous case. We investigate several randomized protocols for different communication models in the discrete case. As our main result, we prove that for any regular network in the matching model, all nodes have the same load up to an additive constant in (asymptotically) the same number of rounds as required in the continuous case. This generalizes and tightens the previous best result, which only holds for expander graphs, and demonstrates that there is almost no difference between the discrete and continuous cases. Our results also provide a positive answer to the question of how well discrete load balancing can be approximated by (continuous) Markov chains, which has been posed by many researchers.
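    In the matching model referred to above, each round is specified by a matching of the network's edges, and every matched pair balances its tokens; randomization enters only when a pair holds an odd total, in which case the leftover token goes to one endpoint chosen uniformly at random. The sketch of a single round below reflects that standard protocol under assumed simplifications (a fixed sequence of matchings on a toy graph); the names are hypothetical.

```python
import random

def matching_round(loads, matching):
    """One round of randomized load balancing in the matching model.

    Each matched pair (u, v) averages its tokens; if the pair's total is odd,
    the leftover token is assigned to one endpoint uniformly at random.
    """
    loads = list(loads)
    for u, v in matching:
        total = loads[u] + loads[v]
        loads[u] = loads[v] = total // 2
        if total % 2:
            loads[random.choice((u, v))] += 1
    return loads

# Toy example: a 4-node cycle balanced by its two alternating perfect matchings.
loads = [12, 0, 3, 1]
for matching in [[(0, 1), (2, 3)], [(1, 2), (3, 0)]] * 10:
    loads = matching_round(loads, matching)
print(loads)   # all entries end up within an additive constant of each other
```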

    Escaping the Local Minima via Simulated Annealing: Optimization of Approximately Convex Functions

    We consider the problem of optimizing an approximately convex function over a bounded convex set in $\mathbb{R}^n$ using only function evaluations. The problem is reduced to sampling from an approximately log-concave distribution using the Hit-and-Run method, which is shown to have the same $\mathcal{O}^*$ complexity as sampling from log-concave distributions. In addition to extending the analysis from log-concave distributions to approximately log-concave distributions, the implementation of the one-dimensional sampler of the Hit-and-Run walk requires new methods and analysis. The algorithm is then based on simulated annealing, which does not rely on first-order conditions and is therefore essentially immune to local minima. We then apply the method to different motivating problems. In the context of zeroth-order stochastic convex optimization, the proposed method produces an $\epsilon$-minimizer after $\mathcal{O}^*(n^{7.5}\epsilon^{-2})$ noisy function evaluations by inducing an $\mathcal{O}(\epsilon/n)$-approximately log-concave distribution. We also consider in detail the case when the "amount of non-convexity" decays towards the optimum of the function. Other applications of the method discussed in this work include private computation of empirical risk minimizers, two-stage stochastic programming, and approximate dynamic programming for online learning.
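    The simulated-annealing component can be pictured with a very small zeroth-order sketch: a random proposal, a Metropolis acceptance test at a decreasing temperature, and projection back onto the feasible set. This is a generic sketch under my own assumptions (Gaussian proposals, a Euclidean-ball constraint, geometric cooling), not the Hit-and-Run sampler or the schedule analyzed in the paper; all names below are hypothetical.

```python
import math
import random

def annealing_minimize(f, x0, radius, steps=20000, step_size=0.1):
    """Zeroth-order simulated annealing over the Euclidean ball of given radius.

    Proposes a Gaussian perturbation, accepts it with Metropolis probability
    exp(-(f(y) - f(x)) / T), and cools T geometrically, so the walk can escape
    the shallow local minima of an approximately convex objective.
    """
    x, temp = list(x0), 1.0
    for _ in range(steps):
        y = [xi + step_size * random.gauss(0.0, 1.0) for xi in x]
        norm = math.sqrt(sum(yi * yi for yi in y))
        if norm > radius:                               # project back into the ball
            y = [yi * radius / norm for yi in y]
        if random.random() < math.exp(min(0.0, -(f(y) - f(x)) / temp)):
            x = y
        temp = max(1e-4, temp * 0.9995)                 # geometric cooling
    return x

# Approximately convex toy objective: a quadratic plus a small oscillation.
f = lambda x: sum(xi * xi for xi in x) + 0.05 * math.sin(10 * x[0])
print(annealing_minimize(f, [2.0, -1.5], radius=3.0))   # lands near the origin
```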