
    The Complexity of Approximately Counting Stable Roommate Assignments

    We investigate the complexity of approximately counting stable roommate assignments in two models: (i) the $k$-attribute model, in which the preference lists are determined by dot products of "preference vectors" with "attribute vectors", and (ii) the $k$-Euclidean model, in which the preference lists are determined by the closeness of the "positions" of the people to their "preferred positions". Exactly counting the number of assignments is #P-complete, since Irving and Leather demonstrated #P-completeness for the special case of the stable marriage problem. We show that counting the number of stable roommate assignments in the $k$-attribute model ($k \geq 4$) and the $k$-Euclidean model ($k \geq 3$) is interreducible, in an approximation-preserving sense, with counting independent sets (of all sizes) (#IS) in a graph, or counting the number of satisfying assignments of a Boolean formula (#SAT). This means that there can be no FPRAS for any of these problems unless NP=RP. As a consequence, we infer that there is no FPRAS for counting stable roommate assignments (#SR) unless NP=RP. Utilizing previous results by the authors, we give an approximation-preserving reduction from counting the number of independent sets in a bipartite graph (#BIS) to counting the number of stable roommate assignments both in the 3-attribute model and in the 2-Euclidean model. #BIS is complete with respect to approximation-preserving reductions in the logically-defined complexity class #RH$\Pi_1$. Hence, our result shows that an FPRAS for counting stable roommate assignments in the 3-attribute model would give an FPRAS for all of #RH$\Pi_1$. We also show that the 1-attribute stable roommate problem always has either one or two stable roommate assignments, so the number of assignments can be determined exactly in polynomial time.
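
    To make the $k$-attribute model concrete, here is a minimal sketch (not taken from the paper; the data and helper names are illustrative) that builds preference lists from dot products of preference vectors with attribute vectors and then checks whether a given roommate assignment is stable, i.e. admits no blocking pair.

    ```python
    # Minimal sketch of the k-attribute model (illustrative assumptions only).
    import numpy as np

    def preference_lists(attr, pref):
        """attr[i], pref[i]: length-k vectors for person i.
        Person i ranks j above l iff pref[i] . attr[j] > pref[i] . attr[l]."""
        n = len(attr)
        scores = pref @ attr.T                        # scores[i, j] = pref_i . attr_j
        lists = []
        for i in range(n):
            others = [j for j in range(n) if j != i]
            others.sort(key=lambda j: -scores[i, j])  # most preferred first
            lists.append(others)
        return lists

    def is_stable(match, lists):
        """match[i] = roommate of i; stable iff no two people prefer
        each other to their assigned roommates (no blocking pair)."""
        rank = [{j: r for r, j in enumerate(lst)} for lst in lists]
        n = len(lists)
        for i in range(n):
            for j in range(i + 1, n):
                if match[i] == j:
                    continue
                if rank[i][j] < rank[i][match[i]] and rank[j][i] < rank[j][match[j]]:
                    return False
        return True

    # Example with n = 4 people and k = 2 attributes.
    rng = np.random.default_rng(0)
    attr = rng.normal(size=(4, 2))
    pref = rng.normal(size=(4, 2))
    lists = preference_lists(attr, pref)
    print(is_stable([1, 0, 3, 2], lists))             # pairs (0,1) and (2,3)
    ```

    Counting the stable assignments would mean enumerating all perfect matchings and keeping the stable ones, which is exactly the task whose approximability the paper settles.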

    Hypergraph Acyclicity and Propositional Model Counting

    We show that the propositional model counting problem #SAT for CNF-formulas with hypergraphs that allow a disjoint branches decomposition can be solved in polynomial time. We show that this class of hypergraphs is incomparable to hypergraphs of bounded incidence cliquewidth, which was previously the largest class of hypergraphs for which #SAT was known to be solvable in polynomial time. Furthermore, we present a polynomial time algorithm that computes a disjoint branches decomposition of a given hypergraph if it exists and rejects otherwise. Finally, we show that some slight extensions of the class of hypergraphs with disjoint branches decompositions lead to intractable #SAT, leaving open how to generalize the counting result of this paper.
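
    For concreteness, the hypergraph of a CNF formula has the variables as vertices and, for each clause, the set of variables occurring in that clause as a hyperedge. The sketch below (illustrative only; the disjoint-branches decomposition itself is not implemented here) constructs this hypergraph for a small formula and brute-forces the model count that #SAT asks for.

    ```python
    # Illustrative sketch: the hypergraph of a CNF formula, plus brute-force #SAT.
    from itertools import product

    cnf = [[1, -2], [2, 3], [-1, 3]]      # each clause is a list of signed literals

    hyperedges = [frozenset(abs(lit) for lit in clause) for clause in cnf]
    variables = sorted(set().union(*hyperedges))

    def count_models(cnf, variables):
        count = 0
        for bits in product([False, True], repeat=len(variables)):
            val = dict(zip(variables, bits))
            if all(any(val[abs(l)] == (l > 0) for l in c) for c in cnf):
                count += 1
        return count

    print(hyperedges)                     # the structure the decomposition works on
    print(count_models(cnf, variables))   # the quantity #SAT asks for
    ```

    The paper's algorithm exploits structural restrictions on exactly this hypergraph to avoid such exponential enumeration.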

    Class-Agnostic Counting

    Nearly all existing counting methods are designed for a specific object class. Our work, however, aims to create a counting model able to count any class of object. To achieve this goal, we formulate counting as a matching problem, enabling us to exploit the image self-similarity property that naturally exists in object counting problems. We make the following three contributions: first, a Generic Matching Network (GMN) architecture that can potentially count any object in a class-agnostic manner; second, by reformulating the counting problem as one of matching objects, we can take advantage of the abundance of video data labeled for tracking, which contains natural repetitions suitable for training a counting model. Such data enables us to train the GMN. Third, to customize the GMN to different user requirements, an adapter module is used to specialize the model with minimal effort, i.e. using a few labeled examples and adapting only a small fraction of the trained parameters. This is a form of few-shot learning, which is practical for domains where labels are limited due to requiring expert knowledge (e.g. microbiology). We demonstrate the flexibility of our method on a diverse set of existing counting benchmarks: specifically cells, cars, and human crowds. The model achieves competitive performance on cell and crowd counting datasets, and surpasses the state-of-the-art on the car dataset using only three training images. When training on the entire dataset, the proposed method outperforms all previous methods by a large margin. Comment: Asian Conference on Computer Vision (ACCV), 2018
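
    The matching formulation can be illustrated with a toy sketch; this is an assumption for illustration only, not the GMN architecture or its released code. Exemplar features are pooled into a kernel, correlated with the image feature map, and the resulting similarity map is regressed to a density map whose sum gives the count.

    ```python
    # Toy matching-based counter (illustrative; not the paper's GMN).
    import torch
    import torch.nn.functional as F

    class TinyMatcher(torch.nn.Module):
        def __init__(self, channels=16):
            super().__init__()
            self.backbone = torch.nn.Conv2d(3, channels, 3, padding=1)  # stand-in encoder
            self.head = torch.nn.Conv2d(1, 1, 3, padding=1)             # density regressor

        def forward(self, image, exemplar):
            f_img = self.backbone(image)                  # (1, C, H, W)
            f_ex = self.backbone(exemplar)                # (1, C, h, w)
            kernel = f_ex.mean(dim=(2, 3), keepdim=True)  # pool exemplar to a 1x1 kernel
            sim = F.conv2d(f_img, kernel)                 # (1, 1, H, W) similarity map
            density = F.relu(self.head(sim))              # non-negative density map
            return density.sum()                          # predicted count

    model = TinyMatcher()
    image = torch.randn(1, 3, 64, 64)
    exemplar = torch.randn(1, 3, 16, 16)
    print(model(image, exemplar))                         # untrained, so arbitrary
    ```

    Adapting only a small head of this kind while sharing the encoder is one way to picture the few-shot adapter idea described above.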

    Probabilistic Model Counting with Short XORs

    The idea of counting the number of satisfying truth assignments (models) of a formula by adding random parity constraints can be traced back to the seminal work of Valiant and Vazirani, showing that NP is as easy as detecting unique solutions. While theoretically sound, the random parity constraints in that construction have the following drawback: each constraint, on average, involves half of all variables. As a result, the branching factor associated with searching for models that also satisfy the parity constraints quickly gets out of hand. In this work we prove that one can work with much shorter parity constraints and still get rigorous mathematical guarantees, especially when the number of models is large so that many constraints need to be added. Our work is based on the realization that the essential feature for random systems of parity constraints to be useful in probabilistic model counting is that the geometry of their set of solutions resembles an error-correcting code. Comment: To appear in SAT 2017
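
    As a toy illustration of the parity-constraint idea (a brute-force sketch under simplified assumptions, not the paper's construction): each random XOR constraint eliminates a model with probability 1/2, so if s models survive m independent constraints, s * 2^m estimates the total count. The constraints below contain roughly half of the variables on average, which is exactly the length the paper shows can be reduced.

    ```python
    # Toy parity-constraint counting (illustrative; assumes the model set is known).
    import random

    n = 10
    rng = random.Random(0)
    models = {tuple(rng.getrandbits(1) for _ in range(n)) for _ in range(200)}

    def survives(x, xors):
        # xors: list of (variable index list, parity bit)
        return all(sum(x[i] for i in idx) % 2 == b for idx, b in xors)

    m = 5                                   # number of random XOR constraints
    xors = [([i for i in range(n) if rng.random() < 0.5], rng.getrandbits(1))
            for _ in range(m)]
    survivors = sum(survives(x, xors) for x in models)
    print(len(models), survivors * 2 ** m)  # true count vs. one-shot estimate
    ```

    In practice the surviving models are found by a SAT solver rather than by enumeration, which is why long constraints make the search expensive.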

    Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting

    Weighted model counting (WMC) has emerged as a prevalent approach for probabilistic inference. In its most general form, WMC is #P-hard. Weighted DNF counting (weighted #DNF) is a special case, where approximations with probabilistic guarantees are obtained in O(nm) time, where n denotes the number of variables and m the number of clauses of the input DNF, but this is not scalable in practice. In this paper, we propose a neural model counting approach for weighted #DNF that combines approximate model counting with deep learning, and accurately approximates model counts in linear time when width is bounded. We conduct experiments to validate our method, and show that our model learns and generalizes very well to large-scale #DNF instances. Comment: To appear in Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20). Code and data available at: https://github.com/ralphabb/NeuralDNF
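
    A classical way to obtain such probabilistic guarantees for #DNF is Karp-Luby-style coverage sampling; the unweighted sketch below (an illustration of that baseline, not the paper's neural model) makes it concrete. Each clause is represented as a partial assignment that forces the clause true.

    ```python
    # Karp-Luby-style coverage sampling for unweighted #DNF (illustrative sketch).
    import random

    def approx_dnf_count(dnf, n_vars, samples=20000):
        # dnf: list of clauses; each clause is a dict {variable: required truth value}
        sizes = [2 ** (n_vars - len(c)) for c in dnf]   # assignments satisfying each clause
        total = sum(sizes)
        hits = 0
        for _ in range(samples):
            i = random.choices(range(len(dnf)), weights=sizes)[0]
            x = {v: bool(random.getrandbits(1)) for v in range(n_vars)}
            x.update(dnf[i])                            # force clause i to be satisfied
            first = next(j for j, c in enumerate(dnf)
                         if all(x[v] == b for v, b in c.items()))
            hits += (first == i)                        # count each model exactly once
        return total * hits / samples

    dnf = [{0: True, 1: False}, {1: True, 2: True}, {0: False}]
    print(approx_dnf_count(dnf, n_vars=3))              # estimate; exact count is 7
    ```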