
    Error-Tolerant Non-Adaptive Learning of a Hidden Hypergraph

    We consider the problem of learning a hidden hypergraph using edge-detecting queries. In this model, the learner may query whether a set of vertices includes an edge of the hidden hypergraph. With few exceptions, all previous algorithms assume that a query's result is always correct. In this paper we study the problem of learning a hypergraph when an alpha-fraction of the queries are answered incorrectly. The main contribution of this paper is a generalization of the well-known Cover Free Family (CFF) structure to a dense variant, which we call DCFF (Dense Cover Free Family), together with three different constructions of DCFFs. We then use these constructions to give a polynomial-time non-adaptive learning algorithm for the hypergraph problem with at most an alpha-fraction of incorrect queries. The hypergraph problem is also known as the monotone DNF learning problem and the complex group testing problem.
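    To make the edge-detecting query model concrete, the following is a minimal sketch (Python) of a noisy query oracle, assuming the hidden hypergraph is given as a list of edges and each answer is flipped with probability alpha; this illustrates only the query model, not the paper's DCFF-based algorithm, and all names here are hypothetical.

    # Illustrative sketch of the edge-detecting query model with noisy answers.
    # The hidden hypergraph, noise model, and example vertex sets are assumptions.
    import random

    def make_oracle(hidden_edges, alpha=0.0, rng=None):
        """Return an oracle: query(S) is True iff S contains some hidden edge,
        with each individual answer flipped with probability alpha."""
        rng = rng or random.Random(0)
        def query(vertex_set):
            answer = any(edge <= vertex_set for edge in hidden_edges)
            if rng.random() < alpha:   # simulate an incorrect query result
                answer = not answer
            return answer
        return query

    # Example: hidden hypergraph on vertices {0,...,5} with two edges.
    hidden = [frozenset({0, 1}), frozenset({2, 3, 4})]
    query = make_oracle(hidden, alpha=0.1)
    print(query({0, 1, 5}))   # usually True: the set contains edge {0, 1}
    print(query({1, 2, 5}))   # usually False: the set contains no edge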

    Local Rademacher Complexity-based Learning Guarantees for Multi-Task Learning

    We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), using which we establish sharp excess risk bounds for MTL in terms of distribution- and data-dependent versions of the Local Rademacher Complexity (LRC). We also give a new bound on the LRC for norm-regularized as well as strongly convex hypothesis classes, which applies not only to MTL but also to the standard i.i.d. setting. Combining both results, one can now easily derive fast-rate bounds on the excess risk for many prominent MTL methods, including, as we demonstrate, Schatten-norm, group-norm, and graph-regularized MTL. The derived bounds reflect a relationship akin to a conservation law of asymptotic convergence rates. This relationship allows for trading off slower rates with respect to the number of tasks for faster rates with respect to the number of available samples per task, when compared to the rates obtained via a traditional, global Rademacher analysis. (Comment: in this version, some arguments and results of the previous version have been corrected or modified.)
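    For orientation, a standard data-dependent form of the local Rademacher complexity is sketched below in LaTeX; this is the usual single-task definition, and the paper's multi-task variant, normalization, and constants may differ.

    % Empirical local Rademacher complexity of a class F at radius r
    % (standard single-task form; the multi-task version averages over tasks).
    \[
      \hat{\mathfrak{R}}_n\bigl(\{ f \in \mathcal{F} : \hat{P}_n f^2 \le r \}\bigr)
      = \mathbb{E}_{\sigma}\!\left[
          \sup_{f \in \mathcal{F},\; \hat{P}_n f^2 \le r}
          \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i)
        \right],
    \]
    where $\sigma_1,\dots,\sigma_n$ are i.i.d. Rademacher signs and
    $\hat{P}_n f^2 = \tfrac{1}{n}\sum_{i=1}^{n} f(x_i)^2$.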

    Differentially private sampling from distributions

    We initiate an investigation of private sampling from distributions. Given a dataset with n independent observations from an unknown distribution P, a sampling algorithm must output a single observation from a distribution that is close in total variation distance to P while satisfying differential privacy. Sampling abstracts the goal of generating small amounts of realistic-looking data. We provide tight upper and lower bounds on the dataset size needed for this task for three natural families of distributions: arbitrary distributions on {1,…,k}, arbitrary product distributions on {0,1}^d, and product distributions on {0,1}^d with bias in each coordinate bounded away from 0 and 1. We demonstrate that, in some parameter regimes, private sampling requires asymptotically fewer observations than learning a description of P nonprivately; in other regimes, however, private sampling proves to be as difficult as private learning. Notably, for some classes of distributions, the overhead in the number of observations needed for private learning compared to non-private learning is completely captured by the number of observations needed for private sampling.
    https://proceedings.neurips.cc/paper/2021/hash/f2b5e92f61b6de923b063588ee6e7c48-Abstract.htm
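    As a toy illustration of the private-sampling task (not the paper's sample-optimal mechanism), the sketch below draws one observation over {1,…,k} by adding Laplace noise to the empirical histogram and sampling from the clipped, renormalized result; the epsilon parameter, noise scale, and fallback to uniform are assumptions made here for the example.

    # Toy epsilon-DP sampler over {0,...,k-1} via the Laplace mechanism plus
    # post-processing (illustrative sketch only; not the paper's construction).
    import numpy as np

    def private_sample(data, k, epsilon, rng=None):
        """Output one draw from a privatized empirical distribution of `data`,
        where `data` contains integer values in {0, ..., k-1}."""
        rng = rng or np.random.default_rng(0)
        counts = np.bincount(data, minlength=k).astype(float)
        # Swapping one observation changes the histogram by at most 2 in L1,
        # so Laplace noise of scale 2/epsilon gives epsilon-DP for the counts;
        # clipping, normalizing, and sampling are post-processing.
        noisy = counts + rng.laplace(scale=2.0 / epsilon, size=k)
        noisy = np.clip(noisy, 0.0, None)
        probs = noisy / noisy.sum() if noisy.sum() > 0 else np.full(k, 1.0 / k)
        return rng.choice(k, p=probs)

    # Example: n = 1000 observations from a skewed distribution on {0,...,4}.
    rng = np.random.default_rng(1)
    data = rng.choice(5, size=1000, p=[0.5, 0.2, 0.15, 0.1, 0.05])
    print(private_sample(data, k=5, epsilon=1.0))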