    (Total) Vector Domination for Graphs with Bounded Branchwidth

    Given a graph $G=(V,E)$ of order $n$ and an $n$-dimensional non-negative vector $d=(d(1),d(2),\ldots,d(n))$, called the demand vector, the vector domination (resp., total vector domination) problem asks for a minimum $S\subseteq V$ such that every vertex $v$ in $V\setminus S$ (resp., in $V$) has at least $d(v)$ neighbors in $S$. (Total) vector domination generalizes many dominating-set-type problems, e.g., the dominating set problem and the $k$-tuple dominating set problem (this $k$ is different from the solution size), and its approximability and inapproximability have been studied under this general framework. In this paper, we show that (total) vector domination on graphs with bounded branchwidth can be solved in polynomial time. This implies that the problem is also polynomially solvable for graphs with bounded treewidth. Consequently, the (total) vector domination problem for a planar graph is subexponential fixed-parameter tractable with respect to $k$, where $k$ is the size of the solution. Comment: 16 pages
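    To make the definition concrete, the following is a minimal Python sketch that checks whether a candidate set $S$ satisfies the (total) vector domination condition for a given demand vector; it illustrates only the problem statement, not the branch-decomposition algorithm of the paper, and the dictionary-based graph representation and function name are assumptions made for the example.

    # Illustrative check of the (total) vector domination condition (not the paper's algorithm).
    # graph: dict mapping each vertex to the set of its neighbors.
    # demand: dict mapping each vertex v to the non-negative integer d(v).
    # S: candidate solution set of vertices.
    def is_vector_dominating(graph, demand, S, total=False):
        for v in graph:
            if not total and v in S:
                continue  # plain vector domination only constrains vertices outside S
            if len(graph[v] & S) < demand[v]:
                return False
        return True

    # Tiny example: path a-b-c with demand 1 at every vertex.
    graph = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
    demand = {'a': 1, 'b': 1, 'c': 1}
    print(is_vector_dominating(graph, demand, {'b'}))               # True: a and c each see b
    print(is_vector_dominating(graph, demand, {'b'}, total=True))   # False: b has no neighbor in S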

    The number of generalized balanced lines

    Let $S$ be a set of $r$ red points and $b=r+2d$ blue points in general position in the plane, with $d\geq 0$. A line $\ell$ determined by them is said to be balanced if in each open half-plane bounded by $\ell$ the difference between the number of blue points and the number of red points is $d$. We show that every set $S$ as above has at least $r$ balanced lines. The main techniques in the proof are rotations and a generalization, sliding rotations, introduced here. Comment: 6 pages, 3 figures, several typos fixed, reference added
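    For small instances the definition can be checked directly. Below is a minimal brute-force Python sketch (illustrative only; the paper's proof uses rotations and sliding rotations, not enumeration) that counts balanced lines, taking the difference as blue minus red in each open half-plane, consistent with $b=r+2d$; the function names are assumptions made for the example.

    from itertools import combinations

    # Brute-force count of balanced lines for a small point set in general position.
    # red, blue: lists of (x, y) coordinates; d: non-negative offset with len(blue) = len(red) + 2*d.
    def side(p, q, z):
        # Sign of the cross product: tells which open half-plane of line pq contains z.
        return (q[0] - p[0]) * (z[1] - p[1]) - (q[1] - p[1]) * (z[0] - p[0])

    def count_balanced_lines(red, blue, d):
        pts = [(p, 'b') for p in blue] + [(p, 'r') for p in red]
        count = 0
        for (p, _), (q, _) in combinations(pts, 2):
            diff = {1: 0, -1: 0}            # (blue - red) per open half-plane
            for z, color in pts:
                s = side(p, q, z)
                if s != 0:                  # points on the line itself are not counted
                    diff[1 if s > 0 else -1] += 1 if color == 'b' else -1
            if diff[1] == d and diff[-1] == d:
                count += 1
        return count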

    The mixing time of the switch Markov chains: a unified approach

    Since 1997, considerable effort has been spent on studying the mixing time of switch Markov chains on the realizations of graphic degree sequences of simple graphs. Several results proving rapid mixing on unconstrained, bipartite, and directed sequences were obtained using different mechanisms. The aim of this paper is to unify these approaches. We illustrate the strength of the unified method by showing that the switch Markov chain is rapidly mixing on any $P$-stable family of unconstrained/bipartite/directed degree sequences. This is a common generalization of every known result showing rapid mixing of the switch Markov chain on a region of degree sequences. Two applications of this general result are presented. One is an almost uniform sampler for power-law degree sequences with exponent $\gamma>1+\sqrt{3}$. The other shows that the switch Markov chain on the degree sequence of an Erdős-Rényi random graph $G(n,p)$ is asymptotically almost surely rapidly mixing if $p$ is bounded away from 0 and 1 by at least $\frac{5\log n}{n-1}$. Comment: Clarification
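    As background on the chain itself (the paper's contribution is the mixing-time analysis, not the move), here is a minimal Python sketch of a single switch step on a simple undirected graph; the edge-set representation and the lazy rejection of invalid moves are assumptions chosen for the example.

    import random

    # One step of the switch chain on a simple undirected graph stored as a set of
    # frozenset edges. A switch replaces edges {a,b},{c,d} by {a,c},{b,d} (or {a,d},{b,c}),
    # which keeps every vertex degree unchanged; the move is rejected (the chain stays put)
    # if it would create a loop or a multi-edge.
    def switch_step(edges):
        e1, e2 = random.sample(list(edges), 2)
        a, b = tuple(e1)
        c, d = tuple(e2)
        if random.random() < 0.5:          # pick one of the two possible re-pairings
            c, d = d, c
        new1, new2 = frozenset((a, c)), frozenset((b, d))
        if len(new1) == 2 and len(new2) == 2 and new1 not in edges and new2 not in edges:
            edges = (edges - {e1, e2}) | {new1, new2}
        return edges

    # Example: one step on a 4-cycle; the degree sequence (2, 2, 2, 2) is preserved.
    cycle = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
    print(sorted(tuple(sorted(e)) for e in switch_step(cycle)))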

    Generalization Bounds via Information Density and Conditional Information Density

    We present a general approach, based on an exponential inequality, to derive bounds on the generalization error of randomized learning algorithms. Using this approach, we provide bounds on the average generalization error as well as bounds on its tail probability, for both the PAC-Bayesian and single-draw scenarios. Specifically, for the case of subgaussian loss functions, we obtain novel bounds that depend on the information density between the training data and the output hypothesis. When suitably weakened, these bounds recover many of the available information-theoretic bounds in the literature. We also extend the proposed exponential-inequality approach to the setting recently introduced by Steinke and Zakynthinou (2020), where the learning algorithm depends on a randomly selected subset of the available training data. For this setup, we present bounds for bounded loss functions in terms of the conditional information density between the output hypothesis and the random variable determining the subset choice, given all training data. Through our approach, we recover the average generalization bound presented by Steinke and Zakynthinou (2020) and extend it to the PAC-Bayesian and single-draw scenarios. For the single-draw scenario, we also obtain novel bounds in terms of the conditional $\alpha$-mutual information and the conditional maximal leakage. Comment: Published in Journal on Selected Areas in Information Theory (JSAIT). Important note: the proof of the data-dependent bounds provided in the paper contains an error, which is rectified in the following document: https://gdurisi.github.io/files/2021/jsait-correction.pd
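    For reference, a representative average-generalization-error bound of this information-theoretic type from the literature (Xu and Raginsky, 2017) is shown below for a $\sigma$-subgaussian loss and $n$ i.i.d. training samples; the notation is chosen for this illustration and is not taken from the paper above.

    % W: output hypothesis, S = (Z_1, ..., Z_n): training set drawn i.i.d. from \mu,
    % \ell(w, z): loss, assumed \sigma-subgaussian under Z \sim \mu for every fixed w.
    \left|\,\mathbb{E}\bigl[L_\mu(W)-L_S(W)\bigr]\,\right|
      \;\le\; \sqrt{\frac{2\sigma^2\, I(W;S)}{n}},
    \qquad
    L_\mu(w)=\mathbb{E}_{Z\sim\mu}\bigl[\ell(w,Z)\bigr],
    \quad
    L_S(w)=\frac{1}{n}\sum_{i=1}^{n}\ell(w,Z_i).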

    The Limits of Post-Selection Generalization

    While statistics and machine learning offer numerous methods for ensuring generalization, these methods often fail in the presence of adaptivity, the common practice in which the choice of analysis depends on previous interactions with the same dataset. A recent line of work has introduced powerful, general-purpose algorithms that ensure post hoc generalization (also called robust or post-selection generalization), which says that, given the output of the algorithm, it is hard to find any statistic for which the data differs significantly from the population it came from. In this work we show several limitations on the power of algorithms satisfying post hoc generalization. First, we show a tight lower bound on the error of any algorithm that satisfies post hoc generalization and answers adaptively chosen statistical queries, establishing a strong barrier to progress in post-selection data analysis. Second, we show that post hoc generalization is not closed under composition, despite many examples of such algorithms exhibiting strong composition properties.
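    To make the notion of adaptively chosen statistical queries concrete, here is a small Python sketch of a naive empirical-mean mechanism and an analyst whose final query depends on the answers to earlier ones; it illustrates the setting only, and the toy construction and names are assumptions, not taken from the paper.

    import numpy as np

    # Toy illustration of adaptively chosen statistical queries: each query asks for the
    # population mean of a [0, 1]-valued function, and the naive mechanism below answers
    # with the empirical mean on the sample. The analyst picks later queries after seeing
    # earlier answers, which is exactly the adaptivity discussed above.
    rng = np.random.default_rng(0)
    data = rng.standard_normal((200, 20))       # n = 200 samples, 20 attributes, true means 0

    def answer(query):
        # query: function mapping one sample (row) to a value in [0, 1]
        return float(np.mean([query(x) for x in data]))

    # Adaptive analyst: probe each attribute, then assemble a sign pattern from the answers
    # and query it; on the sample this tends to exceed the true population value of 0.5.
    signs = [1.0 if answer(lambda x, j=j: float(x[j] > 0)) > 0.5 else -1.0
             for j in range(data.shape[1])]

    def final_query(x):
        return float(np.dot(signs, np.sign(x)) > 0)

    print(answer(final_query))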