
    The Complexity of Approximately Counting Tree Homomorphisms

    We study two computational problems, parameterised by a fixed tree H. #HomsTo(H) is the problem of counting homomorphisms from an input graph G to H. #WHomsTo(H) is the problem of counting weighted homomorphisms to H, given an input graph G and a weight function for each vertex v of G. Even though H is a tree, these problems turn out to be sufficiently rich to capture all of the known approximation behaviour in #P. We give a complete trichotomy for #WHomsTo(H). If H is a star then #WHomsTo(H) is in FP. If H is not a star but does not contain a certain induced subgraph J_3, then #WHomsTo(H) is equivalent under approximation-preserving (AP) reductions to #BIS, the problem of counting independent sets in a bipartite graph. This problem is complete for the class #RH$\Pi_1$ under AP-reductions. Finally, if H contains an induced J_3 then #WHomsTo(H) is equivalent under AP-reductions to #SAT, the problem of counting satisfying assignments to a CNF Boolean formula. Thus, #WHomsTo(H) is complete for #P under AP-reductions. The results are similar for #HomsTo(H), except that a rich structure emerges if H contains an induced J_3. We show that there are trees H for which #HomsTo(H) is #SAT-equivalent (disproving a plausible conjecture of Kelk). There is an interesting connection between these homomorphism-counting problems and the problem of approximating the partition function of the ferromagnetic Potts model. In particular, we show that for a family of graphs J_q, parameterised by a positive integer q, the problem #HomsTo(J_q) is AP-interreducible with the problem of approximating the partition function of the q-state Potts model. It was not previously known that the Potts model had a homomorphism-counting interpretation. We use this connection to obtain some additional upper bounds for the approximation complexity of #HomsTo(J_q).
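
    To make the objects concrete: a homomorphism from G to H is a vertex map that sends every edge of G to an edge of H, and #HomsTo(H) asks for the number of such maps. Below is a minimal brute-force sketch of that count, exponential in |V(G)| and purely illustrative; the adjacency-set representation and the example graphs are assumptions, not from the paper, whose subject is the approximation complexity of the count rather than exact algorithms.

```python
from itertools import product

def count_homomorphisms(G, H):
    """Count maps f: V(G) -> V(H) sending every edge of G to an edge of H.
    G and H are adjacency-set dicts; brute force over all |V(H)|^|V(G)| maps."""
    vg, vh = sorted(G), sorted(H)
    total = 0
    for image in product(vh, repeat=len(vg)):
        f = dict(zip(vg, image))
        # Check that every (directed) edge of G lands on an edge of H.
        if all(f[v] in H[f[u]] for u in G for v in G[u]):
            total += 1
    return total

H = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}  # the path P_3, a tree
G = {0: {1}, 1: {0}}                           # a single edge
print(count_homomorphisms(G, H))               # 2 * |E(H)| = 4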

    Hardness of Submodular Cost Allocation: Lattice Matching and a Simplex Coloring Conjecture

    We consider the Minimum Submodular Cost Allocation (MSCA) problem. In this problem, we are given $k$ submodular cost functions $f_1, \ldots, f_k : 2^V \to \mathbb{R}_+$ and the goal is to partition $V$ into $k$ sets $A_1, \ldots, A_k$ so as to minimize the total cost $\sum_{i=1}^{k} f_i(A_i)$. We show that MSCA is inapproximable within any multiplicative factor even in very restricted settings; prior to our work, only Set Cover hardness was known. In light of this negative result, we turn our attention to special cases of the problem. We consider the setting in which each function $f_i$ satisfies $f_i = g_i + h$, where each $g_i$ is monotone submodular and $h$ is (possibly non-monotone) submodular. We give an $O(k \log |V|)$ approximation for this problem. We provide some evidence that a factor of $k$ may be necessary, even in the special case of HyperLabel. In particular, we formulate a simplex-coloring conjecture that implies a Unique-Games hardness of $(k - 1 - \epsilon)$ for $k$-uniform HyperLabel and label set $[k]$. We provide a proof of the simplex-coloring conjecture for $k = 3$.
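
    As a concrete toy instance of the objective: the sketch below brute-forces MSCA over all $k^{|V|}$ assignments. The concave-of-cardinality cost functions stand in for oracle access to monotone submodular $f_i$; the instance and names are illustrative assumptions, not from the paper.

```python
from itertools import product

def msca_brute_force(V, costs):
    """Brute-force MSCA: assign each element of V to one of the k parts
    A_1, ..., A_k so as to minimize sum_i f_i(A_i). Exponential in |V|."""
    k = len(costs)
    best_value, best_parts = float('inf'), None
    for labels in product(range(k), repeat=len(V)):
        parts = [frozenset(v for v, lab in zip(V, labels) if lab == i)
                 for i in range(k)]
        value = sum(f(A) for f, A in zip(costs, parts))
        if value < best_value:
            best_value, best_parts = value, parts
    return best_value, best_parts

V = ['a', 'b', 'c', 'd']
# sqrt(|A|) is concave in |A|, hence monotone submodular.
costs = [lambda A: len(A) ** 0.5, lambda A: 2 * len(A) ** 0.5]
print(msca_brute_force(V, costs))  # puts everything in the cheaper part
```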

    From Sparse Signals to Sparse Residuals for Robust Sensing

    One of the key challenges in sensor networks is the extraction of information by fusing data from a multitude of distinct, but possibly unreliable, sensors. Recovering information from the maximum number of dependable sensors while identifying the unreliable ones is critical for robust sensing. This sensing task is formulated here as that of finding the maximum number of feasible subsystems of linear equations, and proved to be NP-hard. Useful links are established with compressive sampling, which aims at recovering vectors that are sparse. In contrast, the signals here are not sparse, but give rise to sparse residuals. Capitalizing on this form of sparsity, four sensing schemes with complementary strengths are developed. The first scheme is a convex relaxation of the original problem expressed as a second-order cone program (SOCP). It is shown that when the involved sensing matrices are Gaussian and the reliable measurements are sufficiently many, the SOCP can recover the optimal solution with overwhelming probability. The second scheme is obtained by replacing the initial objective function with a concave one. The third and fourth schemes are tailored for noisy sensor data. The noisy case is cast as a combinatorial problem that is subsequently surrogated by a (weighted) SOCP. Interestingly, the derived cost functions fall into the framework of robust multivariate linear regression, while an efficient block-coordinate descent algorithm is developed for their minimization. The robust sensing capabilities of all schemes are verified by simulated tests.
    Comment: Under review for publication in the IEEE Transactions on Signal Processing (revised version).
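
    A sum of per-sensor residual norms is the canonical SOCP of this group-sparse-residual flavor; here is a minimal sketch with cvxpy. The block sizes, outlier model, and detection threshold are assumptions for illustration, not the paper's exact formulation.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
M, p, n = 20, 2, 3                           # M sensors, p readings each, n unknowns
A = rng.standard_normal((M, p, n))           # per-sensor Gaussian sensing matrices
x_true = rng.standard_normal(n)
y = np.einsum('mpn,n->mp', A, x_true)
y[:3] += 5.0 * rng.standard_normal((3, p))   # three unreliable sensors

x = cp.Variable(n)
# Minimizing the sum of per-sensor l2 residual norms is an SOCP that
# drives most residuals to zero, exposing the unreliable sensors.
cp.Problem(cp.Minimize(sum(cp.norm(y[m] - A[m] @ x) for m in range(M)))).solve()

residuals = np.array([np.linalg.norm(y[m] - A[m] @ x.value) for m in range(M)])
print(np.nonzero(residuals > 1e-3)[0])       # indices flagged as unreliable
```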

    Semi-Streaming Set Cover

    This paper studies the set cover problem under the semi-streaming model. The underlying set system is formalized in terms of a hypergraph $G = (V, E)$ whose edges arrive one-by-one, and the goal is to construct an edge cover $F \subseteq E$ with the objective of minimizing the cardinality (or cost, in the weighted case) of $F$. We consider a parameterized relaxation of this problem, where given some $0 \leq \epsilon < 1$, the goal is to construct an edge $(1 - \epsilon)$-cover, namely, a subset of edges incident to all but an $\epsilon$-fraction of the vertices (or of their benefit, in the weighted case). The key limitation imposed on the algorithm is that its space is limited to (poly)logarithmically many bits per vertex. Our main result is an asymptotically tight trade-off between $\epsilon$ and the approximation ratio: we design a semi-streaming algorithm that, on input graph $G$, constructs a succinct data structure $\mathcal{D}$ such that for every $0 \leq \epsilon < 1$, an edge $(1 - \epsilon)$-cover that approximates the optimal edge $(1 - \epsilon)$-cover within a factor of $f(\epsilon, n)$ can be extracted from $\mathcal{D}$ (efficiently and with no additional space requirements), where
    $$f(\epsilon, n) = \begin{cases} O(1 / \epsilon), & \text{if } \epsilon > 1 / \sqrt{n} \\ O(\sqrt{n}), & \text{otherwise.} \end{cases}$$
    In particular, for the traditional set cover problem we obtain an $O(\sqrt{n})$-approximation. This algorithm is proved to be best possible by establishing a family (parameterized by $\epsilon$) of matching lower bounds.
    Comment: Full version of the extended abstract that will appear in Proceedings of ICALP 2014 track.
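
    The paper's contribution is the succinct structure $\mathcal{D}$ and the tight $f(\epsilon, n)$ trade-off; as a much weaker warm-up, a generic one-pass heuristic already illustrates what the semi-streaming budget permits, using one bit per vertex plus the kept edges. This sketch is not the paper's algorithm and carries no approximation guarantee.

```python
def one_pass_partial_cover(n, edge_stream, epsilon):
    """Single pass over hyperedges: keep an edge iff it covers some still-
    uncovered vertex; stop once all but an epsilon-fraction are covered."""
    covered = [False] * n               # one bit of state per vertex
    num_covered, cover = 0, []
    target = n - int(epsilon * n)       # a (1 - epsilon)-cover suffices
    for edge in edge_stream:
        fresh = [v for v in edge if not covered[v]]
        if fresh:                       # edge makes progress: keep it
            cover.append(edge)
            for v in fresh:
                covered[v] = True
            num_covered += len(fresh)
            if num_covered >= target:
                break
    return cover

stream = [{0, 1}, {1, 2}, {2, 3, 4}, {5}]
print(one_pass_partial_cover(6, stream, epsilon=1/3))
```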

    Combining Voting Rules Together

    We propose a simple method for combining voting rules that performs a run-off between the different winners of the base rules. We prove that this combinator has several good properties. For instance, even if just one of the base voting rules has a desirable property like Condorcet consistency, the combination inherits this property. In addition, we prove that combining voting rules in this way can make finding a manipulation more computationally difficult. Finally, we study the impact of this combinator on approximation methods that find close-to-optimal manipulations.
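
    The combinator is easy to state operationally: run each base rule, collect the (possibly identical) winners, and hold a run-off among them on the voters' rankings restricted to those finalists. Here is a sketch with two standard base rules; the plurality run-off among finalists and the alphabetical tie-breaking are illustrative choices, and the paper's exact run-off procedure may differ.

```python
from collections import Counter

def plurality(profile):
    """Most first-place votes; ties broken alphabetically."""
    counts = Counter(ranking[0] for ranking in profile)
    return min(c for c in counts if counts[c] == max(counts.values()))

def borda(profile):
    """Candidate with the highest total Borda score."""
    m = len(profile[0])
    scores = Counter()
    for ranking in profile:
        for pos, cand in enumerate(ranking):
            scores[cand] += m - 1 - pos
    return min(c for c in scores if scores[c] == max(scores.values()))

def runoff_combination(rules, profile):
    """Combinator: each base rule names a winner, then a plurality
    run-off is held among those finalists."""
    finalists = {rule(profile) for rule in rules}
    restricted = [[c for c in ranking if c in finalists] for ranking in profile]
    return plurality(restricted)

profile = [('a', 'b', 'c'), ('a', 'c', 'b'), ('b', 'c', 'a'),
           ('c', 'b', 'a'), ('c', 'b', 'a')]
print(runoff_combination([plurality, borda], profile))  # run-off between 'a' and 'c'
```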

    On the Hardness of Partially Dynamic Graph Problems and Connections to Diameter

    Conditional lower bounds for dynamic graph problems have received a great deal of attention in recent years. While many results are now known for the fully dynamic case, and such bounds often imply worst-case bounds for the partially dynamic setting, it seems much more difficult to prove amortized bounds for incremental and decremental algorithms. In this paper we consider partially dynamic versions of three classic problems in graph theory. Based on popular conjectures we show that:
    -- No algorithm with amortized update time $O(n^{1-\varepsilon})$ exists for incremental or decremental maximum cardinality bipartite matching. This significantly improves on the $O(m^{1/2-\varepsilon})$ bound for sparse graphs of Henzinger et al. [STOC'15] and the $O(n^{1/3-\varepsilon})$ bound of Kopelowitz, Pettie and Porat. Our linear bound also appears more natural. In addition, the result we present separates the node-addition model from the edge-insertion model, as an algorithm with total update time $O(m\sqrt{n})$ exists for the former by Bosek et al. [FOCS'14].
    -- No algorithm with amortized update time $O(m^{1-\varepsilon})$ exists for incremental or decremental maximum flow in directed and weighted sparse graphs. No such lower bound was known for partially dynamic maximum flow previously. Furthermore, no algorithm with amortized update time $O(n^{1-\varepsilon})$ exists for directed and unweighted graphs or undirected and weighted graphs.
    -- No algorithm with amortized update time $O(n^{1/2 - \varepsilon})$ exists for incremental or decremental $(4/3-\varepsilon')$-approximation of the diameter of an unweighted graph. We also show a slightly stronger bound if node additions are allowed. [...]
    Comment: To appear at ICALP'16. Abstract truncated to fit arXiv limit.
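
    For context on what the matching bound rules out: the textbook incremental baseline runs one augmenting-path search per edge insertion (the maximum matching grows by at most one per insertion), spending $O(n + m)$ worst-case per update; the conjecture-based bound above says this cannot be improved to $O(n^{1-\varepsilon})$ amortized. A minimal sketch of that baseline, illustrative only and not from the paper:

```python
from collections import defaultdict, deque

def augment_once(adj, match_l, match_r, n_left):
    """One BFS in the alternating-path digraph (left->right on unmatched
    edges, right->left on matched edges) from all free left vertices;
    flips an augmenting path if one exists."""
    parent = {}                              # right vertex -> left predecessor
    queue = deque(u for u in range(n_left) if u not in match_l)
    seen = set()
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in seen:
                continue                     # also skips u's own matched edge
            seen.add(v)
            parent[v] = u
            if v not in match_r:             # free right vertex: augment
                while v is not None:
                    u = parent[v]
                    v_next = match_l.get(u)  # u's old partner (None at the root)
                    match_l[u], match_r[v] = v, u
                    v = v_next
                return True
            queue.append(match_r[v])
    return False

def incremental_matching(n_left, edge_stream):
    """Maintain a maximum bipartite matching under edge insertions:
    one O(n + m) augmentation attempt per inserted edge."""
    adj, match_l, match_r, sizes = defaultdict(list), {}, {}, []
    for u, v in edge_stream:
        adj[u].append(v)
        augment_once(adj, match_l, match_r, n_left)
        sizes.append(len(match_l))
    return sizes

print(incremental_matching(2, [(0, 'x'), (1, 'x'), (1, 'y')]))  # [1, 1, 2]
```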
    • 

    corecore