
    Error-Tolerant Non-Adaptive Learning of a Hidden Hypergraph

    We consider the problem of learning a hidden hypergraph using edge-detecting queries. In this model, the learner may query whether a set of vertices contains an edge of the hidden hypergraph. With few exceptions, previous algorithms assume that every query result is correct. In this paper we study the problem of learning a hypergraph when an $\alpha$-fraction of the queries are answered incorrectly. The main contribution of this paper is a generalization of the well-known cover-free family (CFF) structure to a dense cover-free family (DCFF), together with three different constructions of DCFFs. We then use these constructions to give a polynomial-time non-adaptive learning algorithm for the hypergraph problem with at most an $\alpha$-fraction of incorrect queries. The hypergraph problem is also known as the monotone DNF learning problem and as the complexes group testing problem.
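
    As a toy illustration of the query model in this abstract, the sketch below implements an edge-detecting oracle and a noisy variant; it is not the paper's algorithm, and the independent random flips are only a stand-in for the paper's bounded fraction of incorrect answers (names such as noisy_edge_query are made up for this example).

        import random

        def edge_detecting_query(S, edges):
            """Exact edge-detecting query: does the vertex set S contain a hyperedge?"""
            S = set(S)
            return any(e <= S for e in edges)

        def noisy_edge_query(S, edges, alpha, rng=random.Random(0)):
            """Illustrative noisy oracle: each answer is flipped with probability alpha.
            The paper instead allows an alpha-fraction of the answers to be wrong."""
            answer = edge_detecting_query(S, edges)
            return (not answer) if rng.random() < alpha else answer

        # Hidden hypergraph with edges {1,2} and {3,4,5}
        hidden = [frozenset({1, 2}), frozenset({3, 4, 5})]
        print(edge_detecting_query({1, 2, 5}, hidden))  # True: {1,2} lies inside the query set
        print(edge_detecting_query({2, 3, 4}, hidden))  # False: no edge is fully contained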

    On Multistage Learning a Hidden Hypergraph

    Learning a hidden hypergraph is a natural generalization of the classical group testing problem: it consists in detecting an unknown hypergraph $H_{un}=H(V,E)$ by carrying out edge-detecting tests. In this paper we focus our attention on a specific family $F(t,s,\ell)$ of localized hypergraphs for which the total number of vertices is $|V| = t$, the number of edges satisfies $|E|\le s$, $s\ll t$, and the cardinality of any edge satisfies $|e|\le\ell$, $\ell\ll t$. Our goal is to identify all edges of $H_{un}\in F(t,s,\ell)$ using the minimal number of tests. We develop an adaptive algorithm that matches the information-theoretic bound, i.e., the total number of tests of the algorithm in the worst case is at most $s\ell\log_2 t\,(1+o(1))$. We also discuss a probabilistic generalization of the problem. Comment: 5 pages, IEEE conference.
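
    For intuition on the information-theoretic bound that the adaptive algorithm matches, a rough counting argument (a sketch under the assumption that $s$ and $\ell$ grow slowly relative to $t$, not the paper's derivation) goes as follows:

        % Each edge-detecting test returns one bit, so any strategy that identifies
        % every hypergraph in F(t,s,\ell) needs at least \log_2|F(t,s,\ell)| tests.
        \log_2 |F(t,s,\ell)|
            \;\ge\; \log_2 \binom{\binom{t}{\ell}}{s}
            \;\ge\; s \left( \ell \log_2 \tfrac{t}{\ell} - \log_2 s \right)
            \;=\; s \ell \log_2 t \, (1 - o(1)),

    which the algorithm's worst-case test count of $s\ell\log_2 t\,(1+o(1))$ matches.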

    Optimal Query Complexity for Reconstructing Hypergraphs

    In this paper we consider the problem of reconstructing a hidden weighted hypergraph of constant rank using additive queries. We prove the following: Let $G$ be a hidden weighted hypergraph of constant rank with $n$ vertices and $m$ hyperedges. For any $m$ there exists a non-adaptive algorithm that finds the edges of the hypergraph and their weights using $O(\frac{m\log n}{\log m})$ additive queries. This solves the open problem in [S. Choi, J. H. Kim. Optimal Query Complexity Bounds for Finding Graphs. STOC, 749--758, 2008]. When the weights of the hypergraph are integers less than $O(\mathrm{poly}(n^d/m))$, where $d$ is the rank of the hypergraph (and therefore for unweighted hypergraphs), there exists a non-adaptive algorithm that finds the edges of the hypergraph and their weights using $O(\frac{m\log \frac{n^d}{m}}{\log m})$ additive queries. By the information-theoretic bound, the above query complexities are tight.
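
    For concreteness, the sketch below shows the additive query model this abstract assumes (the oracle only, not the reconstruction algorithm): a query names a vertex subset and receives the total weight of the hyperedges fully contained in it. The helper name additive_query is made up for this illustration.

        def additive_query(S, weighted_edges):
            """Return the sum of weights of hyperedges entirely inside S.
            weighted_edges maps frozenset hyperedges to their weights."""
            S = set(S)
            return sum(w for e, w in weighted_edges.items() if e <= S)

        # A rank-3 weighted hypergraph on vertices 0..4
        H = {frozenset({0, 1, 2}): 2.5, frozenset({1, 3}): 1.0, frozenset({2, 3, 4}): -4.0}
        print(additive_query({0, 1, 2, 3}, H))  # 2.5 + 1.0 = 3.5
        print(additive_query({2, 3, 4}, H))     # -4.0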

    Learning and Testing Variable Partitions

    Let $F$ be a multivariate function from a product set $\Sigma^n$ to an Abelian group $G$. A $k$-partition of $F$ with cost $\delta$ is a partition of the set of variables $\mathbf{V}$ into $k$ non-empty subsets $(\mathbf{X}_1, \dots, \mathbf{X}_k)$ such that $F(\mathbf{V})$ is $\delta$-close to $F_1(\mathbf{X}_1)+\dots+F_k(\mathbf{X}_k)$ for some $F_1, \dots, F_k$ with respect to a given error metric. We study algorithms for agnostically learning $k$-partitions and testing $k$-partitionability over various groups and error metrics given query access to $F$. In particular we show that: 1. Given a function that has a $k$-partition of cost $\delta$, a partition of cost $\mathcal{O}(k n^2)(\delta + \epsilon)$ can be learned in time $\tilde{\mathcal{O}}(n^2 \mathrm{poly}(1/\epsilon))$ for any $\epsilon > 0$; in contrast, for $k = 2$ and $n = 3$ learning a partition of cost $\delta + \epsilon$ is NP-hard. 2. When $F$ is real-valued and the error metric is the 2-norm, a 2-partition of cost $\sqrt{\delta^2 + \epsilon}$ can be learned in time $\tilde{\mathcal{O}}(n^5/\epsilon^2)$. 3. When $F$ is $\mathbb{Z}_q$-valued and the error metric is Hamming weight, $k$-partitionability is testable with one-sided error and $\mathcal{O}(kn^3/\epsilon)$ non-adaptive queries; we also show that even two-sided testers require $\Omega(n)$ queries when $k = 2$. This work was motivated by reinforcement learning control tasks in which the set of control variables can be partitioned; the partitioning reduces the task into multiple lower-dimensional ones that are relatively easier to learn. Our second algorithm empirically improves on the scores attained by previous heuristic partitioning methods applied in this context. Comment: Innovations in Theoretical Computer Science (ITCS) 202
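
    As a small illustration of what it means for $F(\mathbf{V})$ to be close to $F_1(\mathbf{X}_1)+F_2(\mathbf{X}_2)$, the sketch below estimates a rectangle-style residual from random queries; the residual is identically zero exactly when $F$ splits across the two blocks. This is only a plausibility check under squared error, not any of the three algorithms in the abstract, and the name partition_residual is made up.

        import random

        def partition_residual(F, X1, X2, domain, trials=1000, rng=random.Random(0)):
            """Empirical mean of (F(x) + F(y) - F(x1,y2) - F(y1,x2))^2 over random x, y.
            For an exactly 2-partitionable F this quantity is 0."""
            total = 0.0
            variables = X1 | X2
            for _ in range(trials):
                x = {v: rng.choice(domain) for v in variables}
                y = {v: rng.choice(domain) for v in variables}
                xy = {v: (x[v] if v in X1 else y[v]) for v in variables}  # X1 from x, X2 from y
                yx = {v: (y[v] if v in X1 else x[v]) for v in variables}  # X1 from y, X2 from x
                total += (F(x) + F(y) - F(xy) - F(yx)) ** 2
            return total / trials

        # A perfectly 2-partitionable function over variable blocks {a, b} and {c}
        F = lambda z: z['a'] * z['b'] + 3 * z['c']
        print(partition_residual(F, {'a', 'b'}, {'c'}, domain=[-1, 0, 1, 2]))  # 0.0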

    Spectral Detection on Sparse Hypergraphs

    We consider the problem of assigning nodes to communities from a set of hyperedges, where every hyperedge is a noisy observation of the community assignment of the adjacent nodes. We focus in particular on the sparse regime, where the number of edges is of the same order as the number of vertices. We propose a spectral method based on a generalization of the non-backtracking Hashimoto matrix to hypergraphs. We analyze its performance on a planted generative model and compare it with other spectral methods and with Bayesian belief propagation (which was conjectured to be asymptotically optimal for this model). We conclude that the proposed spectral method detects communities whenever belief propagation does, while having the important advantages of being simpler, entirely nonparametric, and able to learn the rule according to which the hyperedges were generated without prior information. Comment: 8 pages, 5 figures.
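
    A rough sketch of the kind of spectral pipeline the abstract describes, assuming the non-backtracking operator acts on directed vertex-hyperedge incidences (the paper's exact operator, normalization, and thresholds may differ; the function names here are illustrative):

        import numpy as np
        from scipy.sparse import lil_matrix
        from scipy.sparse.linalg import eigs

        def hypergraph_nonbacktracking(edges, n_vertices):
            """Operator on incidences (u, e) with u in e: a step (u, e) -> (v, f)
            is allowed when v is in e, v != u, v is in f, and f != e. For 2-uniform
            hypergraphs this reduces to the usual Hashimoto matrix."""
            states = [(u, ei) for ei, e in enumerate(edges) for u in e]
            index = {s: i for i, s in enumerate(states)}
            incident = [[] for _ in range(n_vertices)]   # hyperedges touching each vertex
            for ei, e in enumerate(edges):
                for v in e:
                    incident[v].append(ei)
            B = lil_matrix((len(states), len(states)))
            for (u, ei), i in index.items():
                for v in edges[ei]:
                    if v == u:
                        continue
                    for fi in incident[v]:
                        if fi != ei:
                            B[i, index[(v, fi)]] = 1.0
            return B.tocsr(), states

        def spectral_two_communities(edges, n_vertices):
            """Toy two-community version: project the second leading eigenvector
            of B onto vertices and split the vertices by sign."""
            B, states = hypergraph_nonbacktracking(edges, n_vertices)
            vals, vecs = eigs(B, k=2, which='LR')        # two eigenpairs of largest real part
            second = np.real(vecs[:, np.argsort(-np.real(vals))[1]])
            score = np.zeros(n_vertices)
            for (u, _), entry in zip(states, second):
                score[u] += entry
            return (score > 0).astype(int)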