
    Optimal Query Complexity for Reconstructing Hypergraphs

    In this paper we consider the problem of reconstructing a hidden weighted hypergraph of constant rank using additive queries. We prove the following: Let $G$ be a weighted hidden hypergraph of constant rank with $n$ vertices and $m$ hyperedges. For any $m$ there exists a non-adaptive algorithm that finds the edges of the graph and their weights using $O\left(\frac{m\log n}{\log m}\right)$ additive queries. This solves the open problem in [S. Choi, J. H. Kim. Optimal Query Complexity Bounds for Finding Graphs. {\em STOC}, 749--758, 2008]. When the weights of the hypergraph are integers that are less than $O(\mathrm{poly}(n^d/m))$, where $d$ is the rank of the hypergraph (and therefore for unweighted hypergraphs), there exists a non-adaptive algorithm that finds the edges of the graph and their weights using $O\left(\frac{m\log(n^d/m)}{\log m}\right)$ additive queries. By the information-theoretic lower bound, the above query complexities are tight.
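
    To make the query model concrete, here is a minimal Python sketch of an additive query for the rank-2 (ordinary graph) case: the algorithm names a set of vertices and receives the total weight of the hidden edges inside it. The function and variable names are illustrative, not from the paper, and the pair-by-pair reconstruction shown is deliberately naive; the paper's point is that far fewer, cleverly chosen subset queries suffice.

```python
import itertools

def additive_query(weights, subset):
    """Return the total weight of hidden edges lying inside `subset`.

    `weights` maps frozenset({u, v}) -> weight; a reconstruction
    algorithm may only access the hidden graph through calls like this.
    """
    s = set(subset)
    return sum(w for e, w in weights.items() if e <= s)

# Hidden weighted graph on 4 vertices with two edges.
hidden = {frozenset({0, 1}): 2.5, frozenset({2, 3}): 1.0}

# Deliberately naive reconstruction: query every pair (~n^2 queries).
# The paper shows O(m log n / log m) well-chosen subset queries suffice.
recovered = {}
for u, v in itertools.combinations(range(4), 2):
    w = additive_query(hidden, {u, v})
    if w != 0:
        recovered[frozenset({u, v})] = w

assert recovered == hidden
print(recovered)
```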

    Spectral Detection on Sparse Hypergraphs

    We consider the problem of the assignment of nodes into communities from a set of hyperedges, where every hyperedge is a noisy observation of the community assignment of the adjacent nodes. We focus in particular on the sparse regime, where the number of edges is of the same order as the number of vertices. We propose a spectral method based on a generalization of the non-backtracking Hashimoto matrix to hypergraphs. We analyze its performance on a planted generative model and compare it with other spectral methods and with Bayesian belief propagation (which was conjectured to be asymptotically optimal for this model). We conclude that the proposed spectral method detects communities whenever belief propagation does, while having the important advantages of being simpler, entirely nonparametric, and able to learn the rule according to which the hyperedges were generated without prior information. Comment: 8 pages, 5 figures
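
    As a rough illustration of the ingredients, the sketch below builds the non-backtracking (Hashimoto) matrix for an ordinary graph and reads a bipartition off one of its eigenvectors. The paper's actual contribution, the generalization of this operator to hypergraphs, is not reproduced here, and all names and the toy graph are illustrative.

```python
import numpy as np

def hashimoto_matrix(edges):
    """Non-backtracking matrix B on directed edges:
    B[(u -> v), (v -> w)] = 1 whenever w != u."""
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    index = {e: i for i, e in enumerate(directed)}
    B = np.zeros((len(directed), len(directed)))
    for (u, v) in directed:
        for (x, w) in directed:
            if x == v and w != u:
                B[index[(u, v)], index[(x, w)]] = 1.0
    return B, directed

# Toy two-community graph: two triangles joined by a bridge edge.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
B, directed = hashimoto_matrix(edges)

# Community scores from the eigenvector of the second-largest real
# eigenvalue, aggregated over the directed edges entering each vertex.
vals, vecs = np.linalg.eig(B)
order = np.argsort(-vals.real)
v = vecs[:, order[1]].real
score = np.zeros(6)
for i, (u, w) in enumerate(directed):
    score[w] += v[i]
print((score > 0).astype(int))  # the triangles should get opposite labels
```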

    Provable Bounds for Learning Some Deep Representations

    We give algorithms with provable guarantees that learn a class of deep nets in the generative model view popularized by Hinton and others. Our generative model is an $n$-node multilayer neural net that has degree at most $n^{\gamma}$ for some $\gamma < 1$, where each edge has a random edge weight in $[-1,1]$. Our algorithm learns {\em almost all} networks in this class with polynomial running time. The sample complexity is quadratic or cubic depending upon the details of the model. The algorithm uses layerwise learning. It is based upon a novel idea of observing correlations among features and using these to infer the underlying edge structure via a global graph recovery procedure. The analysis of the algorithm reveals interesting structure of neural networks with random edge weights. Comment: The first 18 pages serve as an extended abstract and a 36-page technical appendix follows
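
    The correlation idea in the abstract can be illustrated in a few lines under strong simplifying assumptions (a single linear layer, sparse binary hidden causes): visible units that share a hidden parent are correlated, so thresholded pairwise correlations recover part of the bipartite edge structure. This is only a cartoon of the idea with made-up parameters, not the paper's algorithm or its guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_visible, n_samples = 20, 40, 20000

# Sparse bipartite layer: each visible unit gets 3 random parents with
# weights in [-1, 1] (the degree and density are illustrative choices).
W = np.zeros((n_visible, n_hidden))
for i in range(n_visible):
    parents = rng.choice(n_hidden, size=3, replace=False)
    W[i, parents] = rng.uniform(-1, 1, size=3)

# Sparse binary hidden causes; linear observations (no nonlinearity).
H = (rng.random((n_samples, n_hidden)) < 0.2).astype(float)
Y = H @ W.T

# Visible units correlate (roughly) iff they share a hidden parent.
C = np.corrcoef(Y, rowvar=False)
guessed = np.abs(C) > 0.05
truth = ((W != 0).astype(int) @ (W != 0).astype(int).T) > 0
np.fill_diagonal(guessed, False)
np.fill_diagonal(truth, False)
print("pairwise agreement:", (guessed == truth).mean())
```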

    Consistency of Spectral Hypergraph Partitioning under Planted Partition Model

    Hypergraph partitioning lies at the heart of a number of problems in machine learning and network sciences. Many algorithms for hypergraph partitioning have been proposed that extend standard approaches for graph partitioning to the case of hypergraphs. However, theoretical aspects of such methods have seldom received attention in the literature, as compared to the extensive studies on the guarantees of graph partitioning. For instance, consistency results of spectral graph partitioning under the stochastic block model are well known. In this paper, we present a planted partition model for sparse random non-uniform hypergraphs that generalizes the stochastic block model. We derive an error bound for a spectral hypergraph partitioning algorithm under this model using matrix concentration inequalities. To the best of our knowledge, this is the first consistency result related to partitioning non-uniform hypergraphs. Comment: 35 pages, 2 figures, 1 table
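
    For concreteness, the sketch below runs one common spectral pipeline on a tiny non-uniform hypergraph, using the normalized hypergraph Laplacian popularized by Zhou, Huang, and Schölkopf. It is a plausible stand-in for the kind of algorithm the paper analyzes, not code from the paper itself.

```python
import numpy as np

def hypergraph_laplacian(n, hyperedges):
    """L = I - Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} for unit edge weights."""
    H = np.zeros((n, len(hyperedges)))
    for j, e in enumerate(hyperedges):
        H[list(e), j] = 1.0
    dv = H.sum(axis=1)            # vertex degrees
    de = H.sum(axis=0)            # hyperedge sizes
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    A = Dv_inv_sqrt @ H @ np.diag(1.0 / de) @ H.T @ Dv_inv_sqrt
    return np.eye(n) - A

# Non-uniform toy hypergraph: planted groups {0..3} and {4..7}
# with one noisy cross hyperedge.
edges = [(0, 1, 2), (1, 2, 3), (0, 3), (4, 5, 6), (5, 6, 7), (4, 7), (3, 4)]
L = hypergraph_laplacian(8, edges)

# Bipartition from the sign pattern of the second-smallest eigenvector.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
print(labels)  # vertices 0-3 and 4-7 should land in different groups
```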

    Learning and Testing Variable Partitions

    Let $F$ be a multivariate function from a product set $\Sigma^n$ to an Abelian group $G$. A $k$-partition of $F$ with cost $\delta$ is a partition of the set of variables $\mathbf{V}$ into $k$ non-empty subsets $(\mathbf{X}_1, \dots, \mathbf{X}_k)$ such that $F(\mathbf{V})$ is $\delta$-close to $F_1(\mathbf{X}_1)+\dots+F_k(\mathbf{X}_k)$ for some $F_1, \dots, F_k$ with respect to a given error metric. We study algorithms for agnostically learning $k$-partitions and testing $k$-partitionability over various groups and error metrics given query access to $F$. In particular we show that: 1. Given a function that has a $k$-partition of cost $\delta$, a partition of cost $\mathcal{O}(kn^2)(\delta + \epsilon)$ can be learned in time $\tilde{\mathcal{O}}(n^2\,\mathrm{poly}(1/\epsilon))$ for any $\epsilon > 0$. In contrast, for $k = 2$ and $n = 3$, learning a partition of cost $\delta + \epsilon$ is NP-hard. 2. When $F$ is real-valued and the error metric is the 2-norm, a 2-partition of cost $\sqrt{\delta^2 + \epsilon}$ can be learned in time $\tilde{\mathcal{O}}(n^5/\epsilon^2)$. 3. When $F$ is $\mathbb{Z}_q$-valued and the error metric is Hamming weight, $k$-partitionability is testable with one-sided error and $\mathcal{O}(kn^3/\epsilon)$ non-adaptive queries. We also show that even two-sided testers require $\Omega(n)$ queries when $k = 2$. This work was motivated by reinforcement learning control tasks in which the set of control variables can be partitioned. The partitioning reduces the task into multiple lower-dimensional ones that are relatively easier to learn. Our second algorithm empirically increases the scores attained over previous heuristic partitioning methods applied in this context. Comment: Innovations in Theoretical Computer Science (ITCS) 2020
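
    The algebra behind testing 2-partitionability can be seen in a small sketch: if $F$ splits additively across a candidate partition $(\mathbf{X}_1, \mathbf{X}_2)$ with cost 0, then swapping the $\mathbf{X}_2$-block between any two inputs leaves $F(x) + F(y)$ unchanged, so a violation of that identity witnesses non-partitionability. The code below is an illustrative one-sided check built on this observation, with made-up names; the paper's testers and learners are more refined and carry the query bounds quoted above.

```python
import random

def swap_discrepancy(F, n, A, x, y):
    """F(x) + F(y) - F(x_A|y_B) - F(y_A|x_B); this is zero for every
    x, y iff F splits additively across (A, complement of A)."""
    xa_yb = [x[i] if i in A else y[i] for i in range(n)]
    ya_xb = [y[i] if i in A else x[i] for i in range(n)]
    return F(x) + F(y) - F(xa_yb) - F(ya_xb)

def test_partition(F, n, A, trials=100, domain=(0, 1)):
    for _ in range(trials):
        x = [random.choice(domain) for _ in range(n)]
        y = [random.choice(domain) for _ in range(n)]
        if swap_discrepancy(F, n, A, x, y) != 0:
            return False   # a witness: F is not (A, V \ A)-partitionable
    return True            # consistent with a cost-0 partition so far

# F splits across {0,1} vs {2,3}; G does not.
F = lambda z: z[0] * z[1] + z[2] + z[3]
G = lambda z: z[0] * z[2] + z[1] + z[3]
print(test_partition(F, 4, {0, 1}))  # True
print(test_partition(G, 4, {0, 1}))  # False (with high probability)
```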

    Duality of Graphical Models and Tensor Networks

    In this article we show the duality between tensor networks and undirected graphical models with discrete variables. We study tensor networks on hypergraphs, which we call tensor hypernetworks. We show that the tensor hypernetwork on a hypergraph exactly corresponds to the graphical model given by the dual hypergraph. We translate various notions under duality. For example, marginalization in a graphical model is dual to contraction in the tensor network. Algorithms also translate under duality. We show that belief propagation corresponds to a known algorithm for tensor network contraction. This article is a reminder that the research areas of graphical models and tensor networks can benefit from interaction.
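
    A tiny example makes the dictionary concrete: the factors of a discrete graphical model are tensors whose indices are the model's variables, and computing the partition function or a marginal is exactly a contraction of the corresponding tensor network. The sketch below does the contraction directly with einsum (belief propagation would organize the same computation differently); everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# A chain-shaped model on binary variables a - b - c with two pairwise
# potentials; its unnormalized distribution is phi_ab[a,b] * phi_bc[b,c].
phi_ab = rng.random((2, 2))
phi_bc = rng.random((2, 2))

# Partition function = full contraction of the tensor network.
Z = np.einsum('ab,bc->', phi_ab, phi_bc)

# Marginal of b = contracting out (summing over) the dual indices a, c.
marg_b = np.einsum('ab,bc->b', phi_ab, phi_bc) / Z
print(Z, marg_b, marg_b.sum())  # marg_b sums to 1
```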