33,843 research outputs found

    The ε-t-Net Problem

    We study a natural generalization of the classical ε-net problem (Haussler–Welzl 1987), which we call the ε-t-net problem: Given a hypergraph on n vertices and parameters t and ε ≥ t/n, find a minimum-sized family S of t-element subsets of vertices such that each hyperedge of size at least εn contains a set in S. When t = 1, this corresponds to the ε-net problem. We prove that any sufficiently large hypergraph with VC-dimension d admits an ε-t-net of size O(((1 + log t)d/ε) log(1/ε)). For some families of geometrically-defined hypergraphs (such as the dual hypergraph of regions with linear union complexity), we prove the existence of O(1/ε)-sized ε-t-nets. We also present an explicit construction of ε-t-nets (including ε-nets) for hypergraphs with bounded VC-dimension. In comparison to previous constructions for the special case of ε-nets (i.e., for t = 1), it does not rely on advanced derandomization techniques. To this end we introduce a variant of the notion of VC-dimension which is of independent interest.
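    To make the definition concrete, here is a minimal Python sketch (not from the paper; the function name and the toy hypergraph are my own illustrative assumptions) that checks whether a candidate family S of t-element vertex subsets is an ε-t-net for a hypergraph given as a list of hyperedges.

    def is_eps_t_net(n, hyperedges, S, eps, t):
        """Check the epsilon-t-net property: every hyperedge of size >= eps*n
        must fully contain at least one t-element set from the family S."""
        assert all(len(s) == t for s in S)
        heavy = [set(e) for e in hyperedges if len(e) >= eps * n]
        return all(any(set(s) <= e for s in S) for e in heavy)

    # Toy example: n = 6 vertices, eps*n = 3, so only the two 4-element
    # hyperedges must contain a pair from S; the pair {2, 3} lies in both.
    hyperedges = [{0, 1, 2, 3}, {2, 3, 4, 5}, {0, 5}]
    print(is_eps_t_net(6, hyperedges, S=[(2, 3)], eps=0.5, t=2))  # True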

    On interference among moving sensors and related problems

    We show that for any set of n points moving along "simple" trajectories (i.e., each coordinate is described by a polynomial of bounded degree) in ℝ^d and any parameter 2 ≤ k ≤ n, one can select a fixed non-empty subset of the points of size O(k log k), such that the Voronoi diagram of this subset is "balanced" at any given time (i.e., it contains O(n/k) points per cell). We also show that the bound O(k log k) is near optimal, even for the one-dimensional case in which points move linearly in time. As applications, we show that one can assign communication radii to the sensors of a network of n moving sensors so that at any given time their interference is O(√(n log n)). We also show some results in kinetic approximate range counting and kinetic discrepancy. In order to obtain these results, we extend well-known results from ε-net theory to kinetic environments.
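    The following Python sketch (my own illustration, not code from the paper) spot-checks what a "balanced" Voronoi diagram means in the simplest setting the abstract mentions: points moving linearly in one dimension. It assigns each point to its nearest chosen point at a given time and verifies that no cell exceeds a load bound; a genuine certificate would have to hold for all times, whereas this sketch only samples a few.

    import numpy as np

    def cell_loads(points, subset):
        """1-D Voronoi load: assign every point to its nearest chosen point
        and return the number of points falling in each cell."""
        points = np.asarray(points, dtype=float)
        subset = np.asarray(subset, dtype=float)
        nearest = np.abs(points[:, None] - subset[None, :]).argmin(axis=1)
        return np.bincount(nearest, minlength=len(subset))

    def is_balanced_at(a, b, chosen, t, bound):
        """Points move linearly in 1-D, x_i(t) = a_i + b_i * t.  Check that
        every Voronoi cell of the chosen subset holds at most `bound` points."""
        pos = a + b * t
        return cell_loads(pos, pos[chosen]).max() <= bound

    # Spot-check a random subset of size ~ k log k at a few sampled times.
    rng = np.random.default_rng(0)
    n, k = 200, 10
    a, b = rng.normal(size=n), rng.normal(size=n)
    chosen = rng.choice(n, size=int(k * np.log(k)) + 1, replace=False)
    ok = all(is_balanced_at(a, b, chosen, t, bound=8 * n // k)
             for t in np.linspace(-5.0, 5.0, 50))
    print("balanced at sampled times:", ok)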

    A Statistical Learning Theory Approach for Uncertain Linear and Bilinear Matrix Inequalities

    In this paper, we consider the problem of minimizing a linear functional subject to uncertain linear and bilinear matrix inequalities, which depend in a possibly nonlinear way on a vector of uncertain parameters. Motivated by recent results in statistical learning theory, we show that probabilistically guaranteed solutions can be obtained by means of randomized algorithms. In particular, we show that the Vapnik-Chervonenkis dimension (VC-dimension) of the two problems is finite, and we compute upper bounds on it. In turn, these bounds allow us to derive explicitly the sample complexity of these problems. Using these bounds, in the second part of the paper, we derive a sequential scheme based on a sequence of optimization and validation steps. The algorithm is along the same lines as recent schemes proposed for similar problems, but improves on them in terms of both complexity and generality. The effectiveness of this approach is shown using a linear model of a robot manipulator subject to uncertain parameters.
    Comment: 19 pages, 2 figures, accepted for publication in Automatica
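    As a rough illustration of the optimization/validation loop described above (my own toy sketch, not the paper's algorithm): the matrix inequalities are replaced here by scalar uncertain linear constraints so that an off-the-shelf LP solver suffices, and the sample-size schedule is a simple doubling rule rather than the bound derived from the VC-dimension.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)

    def constraint_row(delta):
        """Uncertain constraint a(delta)^T x <= 1; a depends nonlinearly
        on the uncertainty delta (purely illustrative)."""
        return np.array([1.0 + 0.2 * np.sin(delta[0]),
                         1.0 + 0.2 * delta[1] ** 2])

    def optimization_step(N):
        """Optimization step: solve the LP built from N sampled scenarios."""
        deltas = rng.uniform(-1.0, 1.0, size=(N, 2))
        A = np.vstack([constraint_row(d) for d in deltas])
        res = linprog(c=[-1.0, -1.0], A_ub=A, b_ub=np.ones(N),
                      bounds=[(0, None), (0, None)])
        return res.x

    def validation_step(x, M=20000):
        """Validation step: empirical probability that a fresh sample is violated."""
        deltas = rng.uniform(-1.0, 1.0, size=(M, 2))
        return np.mean([constraint_row(d) @ x > 1.0 for d in deltas])

    # Sequential scheme: alternate optimization and validation, growing the
    # scenario count until the empirical violation level drops below eps.
    eps, N = 0.05, 50
    while True:
        x = optimization_step(N)
        if validation_step(x) <= eps:
            break
        N *= 2
    print("accepted design:", x, "using", N, "scenarios")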