
    Hoeffding's inequality for supermartingales

    We give an extension of Hoeffding's inequality to the case of supermartingales with differences bounded from above. Our inequality strengthens or extends the inequalities of Freedman, Bernstein, Prohorov, Bennett and Nagaev.
    Comment: 20 pages, accepted; Stochastic Processes and their Applications (2012), Vol. 122, pages 3545-355
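    For background, the classical one-sided Hoeffding bound (the standard 1963 form for independent summands X_k with a_k \le X_k \le b_k) that supermartingale extensions of this type generalize:

    \[
    \Pr\{S_n - \mathbb{E}S_n \ge t\} \;\le\; \exp\!\Big(-\frac{2t^2}{\sum_{k=1}^{n}(b_k - a_k)^2}\Big),
    \qquad S_n = X_1 + \dots + X_n,\ t > 0.
    \]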

    Derandomizing Concentration Inequalities with dependencies and their combinatorial applications

    Both in combinatorics and in the design and analysis of randomized algorithms for combinatorial optimization problems, we often use the famous bounded differences inequality of C. McDiarmid (1989), which is based on the martingale inequality of K. Azuma (1967), to show a positive probability of success. In the case of sums of independent random variables, the inequalities of Chernoff (1952) and Hoeffding (1964) can be used and can be efficiently derandomized, i.e. we can construct the required event in deterministic polynomial time (Srivastav and Stangier 1996). With such an algorithm one can construct the sought combinatorial structure or derive an efficient deterministic algorithm from the probabilistic existence result or the randomized algorithm. The derandomization of C. McDiarmid's bounded differences inequality was an open problem.
    The main result in Chapter 3 is an efficient derandomization of the bounded differences inequality, with the time required to compute the conditional expectation of the objective function being part of the complexity. Chapters 4 through 7 demonstrate the generality and power of the derandomization framework developed in Chapter 3. In Chapter 5, we derandomize the Maker's random strategy in the Maker-Breaker subgraph game given by Bednarska and Luczak (2000), which is fundamental for the field and was analyzed with the concentration inequality of Janson, Luczak and Rucinski. Since we instead use the bounded differences inequality, it is necessary to give a new proof of the existence of subgraphs in G(n,M)-random graphs (Chapter 4). In Chapter 6, we derandomize the two-stage randomized algorithm for the set-multicover problem by El Ouali, Munstermann and Srivastav (2014). In Chapter 7, we show that the algorithm of Bansal, Caprara and Sviridenko (2009) for the multidimensional bin packing problem can be elegantly derandomized with our framework, whereas the authors use a potential-function-based approach leading to a rather complex analysis. In Chapter 8, we analyze the constrained hypergraph coloring problem given in Ahuja and Srivastav (2002), which generalizes both the property B problem for the non-monochromatic 2-coloring of hypergraphs and the multidimensional bin packing problem, using the bounded differences inequality instead of the Lovász local lemma; we also derandomize this algorithm with our framework.
    In Chapter 9, we turn to Janson's (1994) generalization of the well-known concentration inequality of Hoeffding (1964) to sums of random variables that are not independent but only partially dependent, in other words independent within certain groups. Assuming the same dependency structure as in Janson (1994), we generalize the well-known concentration inequality of Alon and Spencer (1991). In Chapter 10, we derandomize the inequality of Alon and Spencer. The derandomization of our generalized Alon-Spencer inequality under partial dependencies remains an interesting open problem.
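    The derandomization framework described above is built on conditional expectations. As a minimal, self-contained illustration, here is a Python sketch of the method of conditional expectations with a pessimistic exponential-moment estimator, the mechanism behind Chernoff-Hoeffding-type derandomizations in the spirit of Srivastav and Stangier (1996); the fair-coin objective and the function name fix_coins_below_threshold are assumptions made for the example, not taken from the thesis.

    import math

    def fix_coins_below_threshold(n, t):
        # Greedy derandomization by the method of conditional expectations.
        # Pessimistic estimator: E[exp(lam*(S - n/2 - t)) | bits fixed so far],
        # where S is the sum of n fair 0/1 coins.  By Hoeffding's lemma it starts
        # at most exp(-2*t*t/n) < 1, and it never increases when we always pick
        # the bit minimizing it, so the final deterministic assignment certifies
        # sum(bits) <= n/2 + t.
        lam = 4.0 * t / n                        # lambda optimizing the Hoeffding bound
        bits = []
        for k in range(n):
            best_bit, best_val = 0, float("inf")
            for b in (0, 1):
                fixed_sum = sum(bits) + b        # contribution of the fixed bits
                free = n - k - 1                 # coins that are still random
                # conditional expectation of the estimator given the first k+1 bits
                val = (math.exp(lam * (fixed_sum - n / 2 - t))
                       * ((1.0 + math.exp(lam)) / 2.0) ** free)
                if val < best_val:
                    best_bit, best_val = b, val
            bits.append(best_bit)
        return bits

    # Example: fix_coins_below_threshold(100, 10) returns 100 bits summing to at most 60.

    The same greedy scheme underlies derandomizations of the bounded differences inequality, except that the estimator involves conditional exponential moments of f(X); evaluating it requires computing conditional expectations of the objective function, which is exactly the computational requirement mentioned in the abstract.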

    On the method of typical bounded differences

    Concentration inequalities are fundamental tools in probabilistic combinatorics and theoretical computer science for proving that random functions are near their means. Of particular importance is the case where f(X) is a function of independent random variables X=(X_1, ..., X_n). Here the well-known bounded differences inequality (also called McDiarmid's or Hoeffding-Azuma inequality) establishes sharp concentration if the function f does not depend too much on any of the variables. One attractive feature is that it relies on a very simple Lipschitz condition (L): it suffices to show that |f(X)-f(X')| \leq c_k whenever X,X' differ only in X_k. While this is easy to check, the main disadvantage is that it considers worst-case changes c_k, which often makes the resulting bounds too weak to be useful.
    In this paper we prove a variant of the bounded differences inequality which can be used to establish concentration of functions f(X) where (i) the typical changes are small although (ii) the worst-case changes might be very large. One key aspect of this inequality is that it relies on a simple condition that (a) is easy to check and (b) coincides with the heuristic considerations for why concentration should hold. Indeed, given an event \Gamma that holds with very high probability, we essentially relax the Lipschitz condition (L) to situations where \Gamma occurs. The point is that the resulting typical changes c_k are often much smaller than the worst-case ones. To illustrate its application we consider the reverse H-free process, where H is 2-balanced. We prove that the final number of edges in this process is concentrated, and also determine its likely value up to constant factors. This answers a question of Bollobás and Erdős.
    Comment: 25 pages
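    For reference, the standard bounded differences (McDiarmid) inequality whose Lipschitz condition (L) the paper relaxes, in the notation of the abstract:

    \[
    \Pr\{|f(X) - \mathbb{E}f(X)| \ge t\} \;\le\; 2\exp\!\Big(-\frac{2t^2}{\sum_{k=1}^{n} c_k^2}\Big),
    \]

    whenever |f(x) - f(x')| \le c_k for all x, x' that differ only in the k-th coordinate.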

    Logahedra: A new weakly relational domain

    Weakly relational numeric domains express restricted classes of linear inequalities that strike a balance between what can be described and what can be efficiently computed. Popular weakly relational domains such as bounded differences and octagons have found application in model checking and abstract interpretation. This paper introduces logahedra, which are more expressive than octagons, but less expressive than arbitrary systems of two-variables-per-inequality constraints. Logahedra allow the coefficients of inequalities to be powers of two whilst retaining many of the desirable algorithmic properties of octagons.
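    A rough sketch of the constraint shapes involved (the precise definition of logahedra, including exactly which coefficient combinations are admitted, is given in the paper; the display below is only schematic):

    \[
    \text{bounded differences: } x_i - x_j \le c, \qquad
    \text{octagons: } \pm x_i \pm x_j \le c, \qquad
    \text{logahedra: } \pm 2^{a} x_i \pm 2^{b} x_j \le c,\ a, b \in \mathbb{Z}_{\ge 0}.
    \]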

    On Hoeffding's inequalities

    In a celebrated work by Hoeffding [J. Amer. Statist. Assoc. 58 (1963) 13-30], several inequalities for tail probabilities of sums M_n = X_1 + ... + X_n of bounded independent random variables X_j were proved. These inequalities had a considerable impact on the development of probability and statistics, and remained unimproved until 1995 when Talagrand [Inst. Hautes Etudes Sci. Publ. Math. 81 (1995a) 73-205] inserted certain missing factors in the bounds of two theorems. By similar factors, a third theorem was refined by Pinelis [Progress in Probability 43 (1998) 257-314] and further refined (and extended) by me.
    In this article, I introduce a new type of inequality. Namely, I show that P{M_n\geq x}\leq cP{S_n\geq x}, where c is an absolute constant and S_n=\epsilon_1+...+\epsilon_n is a sum of independent identically distributed Bernoulli random variables (a random variable is called Bernoulli if it assumes at most two values). The inequality holds for those x\in R where the survival function x\mapsto P{S_n\geq x} has a jump down. For the remaining x the inequality still holds provided that the function between the adjacent jump points is interpolated linearly or \log-linearly. If necessary, special bounds for binomial probabilities can be used to estimate P{S_n\geq x}. The results extend to martingales with bounded differences. It is apparent that Theorem 1.1 of this article is the most important.
    Comment: Published by the Institute of Mathematical Statistics (http://www.imstat.org) in the Annals of Probability (http://www.imstat.org/aop/) at http://dx.doi.org/10.1214/00911790400000036
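    Restated in display form, the comparison inequality announced above (M_n a sum of bounded independent random variables, or a martingale with bounded differences; S_n a sum of i.i.d. Bernoulli random variables; c an absolute constant):

    \[
    \Pr\{M_n \ge x\} \;\le\; c\,\Pr\{S_n \ge x\},
    \]

    valid at the jump points of x \mapsto \Pr\{S_n \ge x\}, and between jumps after linear or log-linear interpolation of that survival function.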

    Deriving Matrix Concentration Inequalities from Kernel Couplings

    This paper derives exponential tail bounds and polynomial moment inequalities for the spectral norm deviation of a random matrix from its mean value. The argument depends on a matrix extension of Stein's method of exchangeable pairs for concentration of measure, as introduced by Chatterjee. Recent work of Mackey et al. uses these techniques to analyze random matrices with additive structure, while the enhancements in this paper cover a wider class of matrix-valued random elements. In particular, these ideas lead to a bounded differences inequality that applies to random matrices constructed from weakly dependent random variables. The proofs require novel trace inequalities that may be of independent interest.
    Comment: 29 pages
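    Schematically, a matrix bounded differences inequality of this kind has the shape below, where H(X) is a d-dimensional Hermitian matrix built from the coordinates of X, the matrices A_k bound the change in H caused by the k-th coordinate in the semidefinite order, and C is an absolute constant whose value depends on the particular proof (the exchangeable-pairs and kernel-coupling arguments give different constants):

    \[
    \Pr\Big\{\lambda_{\max}\big(H(X) - \mathbb{E}H(X)\big) \ge t\Big\} \;\le\; d\cdot\exp\!\Big(-\frac{t^2}{C\,\sigma^2}\Big),
    \qquad \sigma^2 = \Big\|\sum_{k} A_k^2\Big\|.
    \]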

    An Analog of the Neumann Problem for the 1-Laplace Equation in the Metric Setting: Existence, Boundary Regularity, and Stability

    We study an inhomogeneous Neumann boundary value problem for functions of least gradient on bounded domains in metric spaces that are equipped with a doubling measure and support a Poincaré inequality. We show that solutions exist under certain regularity assumptions on the domain, but are generally nonunique. We also show that solutions can be taken to be differences of two characteristic functions, and that they are regular up to the boundary when the boundary is of positive mean curvature. By regular up to the boundary we mean that if the boundary data is 1 in a neighborhood of a point on the boundary of the domain, then the solution is -1 in the intersection of the domain with a possibly smaller neighborhood of that point. Finally, we consider the stability of solutions with respect to boundary data.
    Comment: 8 figures