
    Optimal Query Complexity for Reconstructing Hypergraphs

    In this paper we consider the problem of reconstructing a hidden weighted hypergraph of constant rank using additive queries. We prove the following: let $G$ be a weighted hidden hypergraph of constant rank with $n$ vertices and $m$ hyperedges. For any $m$ there exists a non-adaptive algorithm that finds the edges of the graph and their weights using $O(\frac{m\log n}{\log m})$ additive queries. This solves the open problem in [S. Choi, J. H. Kim. Optimal Query Complexity Bounds for Finding Graphs. STOC, 749--758, 2008]. When the weights of the hypergraph are integers that are less than $O(\mathrm{poly}(n^d/m))$, where $d$ is the rank of the hypergraph (and therefore for unweighted hypergraphs), there exists a non-adaptive algorithm that finds the edges of the graph and their weights using $O(\frac{m\log\frac{n^d}{m}}{\log m})$ additive queries. By the information-theoretic bound, these query complexities are tight.
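    A quick way to see why such a bound cannot be beaten (a counting sketch for the unweighted case, not necessarily the paper's exact argument): a rank-$d$ hypergraph with $m$ hyperedges on $n$ vertices must be identified among roughly $\binom{\binom{n}{d}}{m}$ candidates, while an additive query on an unweighted hypergraph returns an integer between $0$ and $m$ and thus conveys only $O(\log m)$ bits. Assuming $m \le n^d/2$ and constant $d$,
    \[
      \#\text{queries} \;\ge\; \frac{\log \binom{\binom{n}{d}}{m}}{O(\log m)}
      \;=\; \Omega\!\left(\frac{m \log \frac{n^d}{m}}{\log m}\right),
    \]
    which matches the non-adaptive upper bound stated above.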

    Strong Products of Hypergraphs: Unique Prime Factorization Theorems and Algorithms

    It is well known that all finite connected graphs have a unique prime factor decomposition (PFD) with respect to the strong graph product, which can be computed in polynomial time. Essential for the PFD computation is the construction of the so-called Cartesian skeleton of the graphs under investigation. In this contribution, we show that every connected thin hypergraph $H$ has a unique prime factorization with respect to the normal and the strong (hypergraph) product. Both products coincide with the usual strong graph product whenever $H$ is a graph. We introduce the notion of the Cartesian skeleton of hypergraphs as a natural generalization of the Cartesian skeleton of graphs and prove that it is uniquely defined for thin hypergraphs. Moreover, we show that the Cartesian skeleton of hypergraphs can be determined in $O(|E|^2)$ time and that the PFD can be computed in $O(|V|^2|E|)$ time, for hypergraphs $H = (V,E)$ with bounded degree and bounded rank.
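    For orientation, here is the graph case that both hypergraph products reduce to, per the abstract (a minimal sketch under our own representation of graphs as adjacency dictionaries; it is not the paper's PFD or Cartesian-skeleton algorithm):

    from itertools import product

    def strong_product(g1, g2):
        """Strong product of two simple graphs given as dicts vertex -> set of neighbours.

        (u1, u2) ~ (v1, v2) iff the pairs differ and, in each coordinate,
        the entries are equal or adjacent.
        """
        verts = list(product(g1, g2))
        adj = {v: set() for v in verts}
        for (u1, u2), (v1, v2) in product(verts, verts):
            if (u1, u2) == (v1, v2):
                continue
            if (u1 == v1 or v1 in g1[u1]) and (u2 == v2 or v2 in g2[u2]):
                adj[(u1, u2)].add((v1, v2))
        return adj

    # Example: K2 strong-product K2 is K4, so every vertex has 3 neighbours.
    k2 = {0: {1}, 1: {0}}
    assert all(len(nbrs) == 3 for nbrs in strong_product(k2, k2).values())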

    Risk-Averse Matchings over Uncertain Graph Databases

    A large number of applications, such as querying sensor networks and analyzing protein-protein interaction (PPI) networks, rely on mining uncertain graph and hypergraph databases. In this work we study the following problem: given an uncertain, weighted (hyper)graph, how can we efficiently find a (hyper)matching with high expected reward and low risk? This problem naturally arises in the context of several important applications, such as online dating, kidney exchanges, and team formation. We introduce a novel formulation for finding matchings with maximum expected reward and bounded risk under a general model of uncertain weighted (hyper)graphs that we introduce in this work. Our model generalizes probabilistic models used in prior work and captures both continuous and discrete probability distributions, thus allowing us to handle privacy-related applications that inject appropriately distributed noise into (hyper)edge weights. Given that our optimization problem is NP-hard, we turn our attention to designing efficient approximation algorithms. For the case of uncertain weighted graphs, we provide a $\frac{1}{3}$-approximation algorithm, and a $\frac{1}{5}$-approximation algorithm with near-optimal running time. For the case of uncertain weighted hypergraphs, we provide an $\Omega(\frac{1}{k})$-approximation algorithm, where $k$ is the rank of the hypergraph (i.e., any hyperedge includes at most $k$ nodes), that runs in almost (modulo log factors) linear time. We complement our theoretical results by testing our approximation algorithms on a wide variety of synthetic experiments, where we observe, in a controlled setting, interesting findings on the trade-off between reward and risk. We also provide an application of our formulation for recommending teams that are likely to collaborate and have high impact. Comment: 25 pages
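    The abstract does not spell out the approximation algorithms themselves. As a point of comparison only, the risk-oblivious baseline they improve upon is a plain greedy matching on expected edge weights (an illustrative sketch with names of our choosing, not the paper's bounded-risk method):

    def greedy_matching_by_expected_weight(edges):
        """Greedy baseline: scan (hyper)edges in decreasing expected weight and
        keep an edge when it shares no vertex with edges already chosen.

        `edges` is a list of (frozenset_of_vertices, expected_weight) pairs.
        For graphs this classic rule is a 1/2-approximation to the maximum-weight
        matching; it ignores risk entirely, which is the gap the paper's
        formulation is designed to close.
        """
        used, matching = set(), []
        for verts, w in sorted(edges, key=lambda e: e[1], reverse=True):
            if used.isdisjoint(verts):
                matching.append((verts, w))
                used |= verts
        return matching

    # Tiny example on four vertices: the middle edge is skipped.
    edges = [(frozenset({1, 2}), 3.0), (frozenset({2, 3}), 2.5), (frozenset({3, 4}), 2.0)]
    print(greedy_matching_by_expected_weight(edges))  # keeps {1,2} and {3,4}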

    Boxicity and separation dimension

    A family $\mathcal{F}$ of permutations of the vertices of a hypergraph $H$ is called 'pairwise suitable' for $H$ if, for every pair of disjoint edges in $H$, there exists a permutation in $\mathcal{F}$ in which all the vertices in one edge precede those in the other. The cardinality of a smallest such family of permutations for $H$ is called the 'separation dimension' of $H$ and is denoted by $\pi(H)$. Equivalently, $\pi(H)$ is the smallest natural number $k$ such that the vertices of $H$ can be embedded in $\mathbb{R}^k$ so that any two disjoint edges of $H$ can be separated by a hyperplane normal to one of the axes. We show that the separation dimension of a hypergraph $H$ is equal to the 'boxicity' of the line graph of $H$. This connection helps us in borrowing results and techniques from the extensive literature on boxicity to study the concept of separation dimension. Comment: This is the full version of a paper by the same name submitted to WG-2014. Some results proved in this paper are also present in arXiv:1212.6756. arXiv admin note: substantial text overlap with arXiv:1212.6756
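    The definition of pairwise suitability translates directly into a brute-force check (a small sketch that only implements the definition quoted above; the function and variable names are ours):

    from itertools import combinations

    def is_pairwise_suitable(permutations, edges):
        """Return True if, for every pair of disjoint edges, some permutation
        places all vertices of one edge before all vertices of the other.

        `permutations` is a list of vertex sequences; `edges` a list of vertex sets.
        The separation dimension pi(H) is the size of a smallest family that passes.
        """
        positions = [{v: i for i, v in enumerate(p)} for p in permutations]
        for e, f in combinations(edges, 2):
            if set(e) & set(f):
                continue  # only disjoint edge pairs need to be separated
            separated = any(
                max(pos[v] for v in e) < min(pos[v] for v in f)
                or max(pos[v] for v in f) < min(pos[v] for v in e)
                for pos in positions
            )
            if not separated:
                return False
        return True

    # One ordering suffices for the path a-b, c-d: its two edges are disjoint
    # and already separated in the order (a, b, c, d).
    print(is_pairwise_suitable([("a", "b", "c", "d")], [{"a", "b"}, {"c", "d"}]))  # True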