
    Algorithmic Chernoff-Hoeffding Inequalities in Integer Programming

    Proofs of classical Chernoff-Hoeffding bounds have been used to obtain polynomial-time implementations of Spencer's derandomization method of conditional probabilities on standard finite machine models: given m events whose complements are large deviations corresponding to weighted sums of n mutually independent Bernoulli trials, Raghavan's lattice approximation algorithm constructs, for 0-1 weights and integer deviation terms, a point for which all events hold in O(mn) time. For rational weighted sums of Bernoulli trials, the lattice approximation algorithm and Spencer's hyperbolic cosine algorithm are deterministic procedures, but no polynomial-time implementation was known. We resolve this problem with an O(mn^2 log(mn/epsilon))-time algorithm, whenever the probability that all events hold is at least epsilon > 0. Since such an algorithm simulates the proof of the underlying large deviation inequality in a constructive way, we call it an algorithmic version of the inequality. Applications to general packing integer programs and resource-constrained scheduling result in tight, polynomial-time approximation algorithms.
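    To illustrate the derandomization idea behind both Raghavan's lattice approximation and Spencer's hyperbolic cosine algorithm, the sketch below rounds a fractional point by the method of conditional probabilities, driven by an exponential-moment (pessimistic) estimator of the two-sided deviations. It is a minimal sketch assuming 0-1 weights; the function lattice_round and the fixed parameter lam are illustrative choices, not taken from the paper.

    import math

    def lattice_round(A, p, lam=0.5):
        # A   : m x n matrix with entries in {0, 1} (one row per event)
        # p   : fractional point in [0, 1]^n to be rounded to a 0-1 point
        # lam : exponential-moment parameter, kept fixed here for simplicity
        m, n = len(A), len(p)
        x = [0] * n

        def estimator(num_fixed):
            # Pessimistic estimator: sum over events and both deviation signs of
            # the product of realized exponential factors (fixed coordinates) and
            # expected exponential factors (still-random coordinates).
            total = 0.0
            for row in A:
                for sign in (1.0, -1.0):
                    prod = 1.0
                    for j in range(n):
                        if row[j] == 0:
                            continue
                        if j < num_fixed:
                            prod *= math.exp(sign * lam * (x[j] - p[j]))
                        else:
                            prod *= (p[j] * math.exp(sign * lam * (1 - p[j]))
                                     + (1 - p[j]) * math.exp(-sign * lam * p[j]))
                    total += prod
            return total

        for j in range(n):
            # Fix x[j] to whichever value keeps the pessimistic estimator smaller,
            # so the estimator never increases as coordinates get fixed.
            scores = {}
            for val in (0, 1):
                x[j] = val
                scores[val] = estimator(j + 1)
            x[j] = min(scores, key=scores.get)
        return x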

    Algorithms for Constructing Overlay Networks For Live Streaming

    We present a polynomial-time approximation algorithm for constructing an overlay multicast network for streaming live media events over the Internet. The class of overlay networks constructed by our algorithm includes the networks used by Akamai Technologies to deliver live media events to a global audience with high fidelity. We construct networks consisting of three stages of nodes. The nodes in the first stage are the entry points that act as sources for the live streams. Each source forwards each of its streams to one or more nodes in the second stage, called reflectors. A reflector can split an incoming stream into multiple identical outgoing streams, which are then sent on to nodes in the third and final stage; these act as sinks and are located in edge networks near end-users. As the packets in a stream travel from one stage to the next, some of them may be lost. A sink combines the packets from multiple instances of the same stream (by reordering packets and discarding duplicates) to form a single instance of the stream with minimal loss. Our primary contribution is an algorithm that constructs an overlay network that provably satisfies capacity and reliability constraints to within a constant factor of optimal and minimizes cost to within a logarithmic factor of optimal. Further, in the common case where only transmission costs are minimized, we show that our algorithm produces a solution whose cost is within a factor of 2 of optimal. We also implement our algorithm and evaluate it on realistic traces derived from Akamai's live streaming network. Our empirical results show that our algorithm can be used to efficiently construct large-scale overlay networks in practice with near-optimal cost.
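    The reliability requirement can be made concrete with a small model: if packet losses on the source-to-reflector and reflector-to-sink links are independent, a sink that merges the copies arriving through several reflectors misses a packet only if every copy is lost. The sketch below computes that delivery probability; it is an illustrative model under assumed independence, and the function name and loss parameters are assumptions rather than the paper's exact formulation.

    def delivery_probability(source_to_reflector_loss, reflector_to_sink_loss, chosen_reflectors):
        # source_to_reflector_loss[r] : loss probability on the source -> reflector r link
        # reflector_to_sink_loss[r]   : loss probability on the reflector r -> sink link
        # chosen_reflectors           : reflectors through which this stream is sent
        p_all_copies_lost = 1.0
        for r in chosen_reflectors:
            p_copy_arrives = (1.0 - source_to_reflector_loss[r]) * (1.0 - reflector_to_sink_loss[r])
            p_all_copies_lost *= (1.0 - p_copy_arrives)
        return 1.0 - p_all_copies_lost

    # Example: two reflectors with link losses of a few percent each.
    # delivery_probability({0: 0.05, 1: 0.10}, {0: 0.02, 1: 0.03}, [0, 1])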

    Derandomizing Concentration Inequalities with dependencies and their combinatorial applications

    Both in combinatorics and in the design and analysis of randomized algorithms for combinatorial optimization problems, we often use the famous bounded differences inequality of C. McDiarmid (1989), which is based on the martingale inequality of K. Azuma (1967), to show a positive probability of success. In the case of sums of independent random variables, the inequalities of Chernoff (1952) and Hoeffding (1964) can be used and can be efficiently derandomized, i.e., we can construct the required event in deterministic polynomial time (Srivastav and Stangier 1996). With such an algorithm one can construct the sought combinatorial structure, or turn a probabilistic existence result or a randomized algorithm into an efficient deterministic algorithm. The derandomization of C. McDiarmid's bounded differences inequality was an open problem. The main result of Chapter 3 is an efficient derandomization of the bounded differences inequality, with the time required to compute the conditional expectation of the objective function being part of the complexity. Chapters 4 through 7 demonstrate the generality and power of the derandomization framework developed in Chapter 3. In Chapter 5, we derandomize Maker's random strategy in the Maker-Breaker subgraph game given by Bednarska and Luczak (2000), which is fundamental for the field and is analyzed with the concentration inequality of Janson, Luczak and Rucinski. Since we use the bounded differences inequality instead, it is necessary to give a new proof of the existence of subgraphs in G(n,M) random graphs (Chapter 4). In Chapter 6, we derandomize the two-stage randomized algorithm for the set-multicover problem by El Ouali, Munstermann and Srivastav (2014). In Chapter 7, we show that the algorithm of Bansal, Caprara and Sviridenko (2009) for the multidimensional bin packing problem can be elegantly derandomized with our framework based on the bounded differences inequality, whereas the authors use a potential-function approach that leads to a rather complex analysis. In Chapter 8, we analyze the constrained hypergraph coloring problem given in Ahuja and Srivastav (2002), which generalizes both the property B problem of non-monochromatic 2-coloring of hypergraphs and the multidimensional bin packing problem, using the bounded differences inequality instead of the Lovasz local lemma; we also derandomize the algorithm with our framework. In Chapter 9, we turn to the generalization by Janson (1994) of the well-known concentration inequality of Hoeffding (1964) to sums of random variables that are not independent but only partially dependent, i.e., independent within certain groups. Assuming the same dependency structure as Janson (1994), we generalize the well-known concentration inequality of Alon and Spencer (1991). In Chapter 10, we derandomize the inequality of Alon and Spencer. The derandomization of our generalized Alon-Spencer inequality under partial dependencies remains an interesting open problem.
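    The common thread of these derandomizations is the method of conditional expectations: fix the random variables one at a time so that the conditional expectation of the objective never moves to the wrong side of its unconditional value. Below is a minimal sketch, assuming a maximization objective and an oracle cond_exp that evaluates conditional expectations exactly; the cost of that oracle is precisely the quantity entering the running time in the framework described above, and both names are illustrative.

    def derandomize_by_conditional_expectation(n, values, cond_exp):
        # n        : number of random variables X_1, ..., X_n
        # values   : the finitely many values each X_j may take, e.g. [0, 1]
        # cond_exp : oracle taking the list of already-fixed values and returning
        #            E[f(X) | X_1, ..., X_k fixed]; assumed computable exactly
        fixed = []
        for _ in range(n):
            # Choose the next value so the conditional expectation does not drop.
            best = max(values, key=lambda v: cond_exp(fixed + [v]))
            fixed.append(best)
        return fixed

    # Example oracle for f(X) = X_1 + ... + X_n with X_j ~ Bernoulli(p[j]):
    # cond_exp(fixed) = sum(fixed) + sum(p[len(fixed):])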

    Family-Personalized Dietary Planning with Temporal Dynamics

    Poor diet and nutrition in the United States have immense financial and health costs, and new tools for diet planning could help families better balance their financial and temporal constraints against the quality of their diet and meals. This paper formulates a novel model for dietary planning that incorporates two types of temporal constraints (dynamics on the perishability of raw ingredients over time, and constraints on the time required to prepare meals) by explicitly modeling the relationship between raw ingredients and selected food recipes. Our formulation is a diet planning model with integer-valued decision variables, and so we study the problem of designing approximation algorithms (i.e., algorithms with polynomial-time computation and guarantees on the quality of the computed solution) for our dietary model. We develop a deterministic approximation algorithm based on a deterministic variant of randomized rounding, and then evaluate it with numerical experiments on dietary planning using a database of about 2000 food recipes and 150 raw ingredients.
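    The rounding step referred to above has a simple randomized baseline: keep the integer part of each fractional decision variable and round the fractional remainder up with exactly that probability, so every linear function of the plan (cost, preparation time, nutrient totals) is preserved in expectation. The paper's algorithm is a deterministic variant of this step; the sketch below shows only the randomized baseline, with assumed names.

    import math, random

    def randomized_round(x_frac, rng=random):
        # x_frac : fractional values of the integer decision variables
        #          (e.g. how many times each recipe is cooked in the plan)
        rounded = []
        for xj in x_frac:
            base = math.floor(xj)
            frac = xj - base
            # Round up with probability equal to the fractional part.
            rounded.append(base + (1 if rng.random() < frac else 0))
        return rounded

    # Example: randomized_round([1.3, 0.0, 2.8]) equals [1.3, 0.0, 2.8] in expectation.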

    Chromatic PAC-Bayes Bounds for Non-IID Data: Applications to Ranking and Stationary β-Mixing Processes

    PAC-Bayes bounds are among the most accurate generalization bounds for classifiers learned from independently and identically distributed (IID) data, particularly for margin classifiers: recent contributions have shown how practical these bounds can be, either for model selection (Ambroladze et al., 2007) or even to directly guide the learning of linear classifiers (Germain et al., 2009). However, there are many practical situations where the training data exhibit dependencies and the traditional IID assumption does not hold. Stating generalization bounds for such frameworks is therefore of the utmost interest, from both theoretical and practical standpoints. In this work, we propose the first, to the best of our knowledge, PAC-Bayes generalization bounds for classifiers trained on data exhibiting interdependencies. Our approach is based on decomposing a so-called dependency graph, which encodes the dependencies within the data, into sets of independent data points, using graph fractional covers. Our bounds are very general: being able to upper-bound the fractional chromatic number of the dependency graph is sufficient to obtain new PAC-Bayes bounds for specific settings. We show how our results can be used to derive bounds for ranking statistics (such as the AUC) and for classifiers trained on data distributed according to a stationary β-mixing process. Along the way, we show how our approach seamlessly allows us to deal with U-processes. As a side note, we also provide a PAC-Bayes generalization bound for classifiers learned on data from stationary φ-mixing distributions. Comment: Long version of the AISTATS 09 paper: http://jmlr.csail.mit.edu/proceedings/papers/v5/ralaivola09a/ralaivola09a.pd
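    The decomposition idea can be sketched directly: build the dependency graph, color it so that dependent points never share a color, and treat each color class as an IID-like sample. A proper coloring is a coarser object than the fractional covers used in the paper, and its size upper-bounds the (fractional) chromatic number entering the bounds; the sketch below uses a simple greedy coloring and is illustrative only.

    def split_by_dependency_graph(num_points, edges):
        # edges : pairs (u, v) of indices of training points that are dependent
        neighbors = [set() for _ in range(num_points)]
        for u, v in edges:
            neighbors[u].add(v)
            neighbors[v].add(u)
        color = [None] * num_points
        for v in range(num_points):
            # Greedily assign the smallest color not used by any dependent neighbor.
            used = {color[u] for u in neighbors[v] if color[u] is not None}
            c = 0
            while c in used:
                c += 1
            color[v] = c
        groups = {}
        for v, c in enumerate(color):
            groups.setdefault(c, []).append(v)
        # Each returned group is an independent set of the dependency graph.
        return list(groups.values())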

    Dependent randomized rounding for clustering and partition systems with knapsack constraints

    Clustering problems are fundamental to unsupervised learning. There is an increased emphasis on fairness in machine learning and AI; one representative notion of fairness is that no single demographic group should be over-represented among the cluster centers. This notion, and much more general clustering problems, can be formulated with "knapsack" and "partition" constraints. We develop new randomized algorithms targeting such problems, and study two in particular: multi-knapsack median and multi-knapsack center. Our rounding algorithms give new approximation and pseudo-approximation algorithms for these problems. One key technical tool, which may be of independent interest, is a new tail bound analogous to that of Feige (2006) for sums of random variables with unbounded variances. Such bounds are very useful in inferring properties of large networks using few samples.
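    A standard building block behind such rounding schemes is pairwise dependent rounding: repeatedly pick two fractional coordinates and shift mass between them, with probabilities chosen so each coordinate stays a martingale, until at most one coordinate remains fractional. Every step conserves the total mass, which is what makes the primitive compatible with knapsack and partition constraints. The sketch below is this textbook primitive under assumed names; the rounding actually used for multi-knapsack median and center in the paper is more involved.

    import random

    def dependent_round(x, rng=random, eps=1e-12):
        # x : fractional vector in [0, 1]^n whose coordinate sum must be preserved
        y = list(x)
        is_frac = lambda v: eps < v < 1.0 - eps
        frac = [i for i, v in enumerate(y) if is_frac(v)]
        while len(frac) >= 2:
            i, j = frac[0], frac[1]
            up_i = min(1.0 - y[i], y[j])   # raise y[i] / lower y[j] by this much
            up_j = min(1.0 - y[j], y[i])   # raise y[j] / lower y[i] by this much
            # Branch probabilities chosen so E[y[i]] and E[y[j]] are unchanged.
            if rng.random() < up_j / (up_i + up_j):
                y[i] += up_i
                y[j] -= up_i
            else:
                y[i] -= up_j
                y[j] += up_j
            # After each move at least one of the two coordinates is integral,
            # and the total sum of y is preserved exactly.
            frac = [k for k in (i, j) if is_frac(y[k])] + frac[2:]
        return y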