
    Distribution-Aware Sampling and Weighted Model Counting for SAT

    Given a CNF formula and a weight for each assignment of values to variables, two natural problems are weighted model counting and distribution-aware sampling of satisfying assignments. Both problems have a wide variety of important applications. Due to the inherent complexity of the exact versions of the problems, interest has focused on solving them approximately. Prior work in this area scaled only to small problems in practice, failed to provide strong theoretical guarantees, or employed a computationally expensive maximum a posteriori probability (MAP) oracle that assumes prior knowledge of a factored representation of the weight distribution. We present a novel approach that works with a black-box oracle for weights of assignments and requires only an NP-oracle (in practice, a SAT solver) to solve both the counting and sampling problems. Our approach works under mild assumptions on the distribution of weights of satisfying assignments, provides strong theoretical guarantees, and scales to problems involving several thousand variables. We also show that the assumptions can be significantly relaxed while improving computational efficiency if a factored representation of the weights is known. Comment: This is a full version of the AAAI 2014 paper.
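    The two problems defined above are easy to state exactly, even though the paper's contribution is approximating them at scale. The brute-force sketch below (plain Python with hypothetical helper names, exhaustive enumeration standing in for the paper's SAT-solver machinery) only pins down what is being computed: the weighted count is the sum of black-box weights over satisfying assignments, and distribution-aware sampling draws a satisfying assignment with probability proportional to its weight.

```python
import itertools
import random

def weighted_count_and_sample(clauses, n_vars, weight, rng=random.Random(0)):
    """Exact weighted model counting and sampling by enumeration.

    clauses: CNF as a list of clauses; each clause is a list of ints
             (DIMACS-style: literal i means variable i is True,
              literal -i means variable i is False).
    weight:  black-box oracle mapping an assignment (tuple of bools,
             indexed from variable 1) to a non-negative weight.

    Enumerates all 2^n assignments, so it only illustrates the problem
    definitions; the paper approximates both quantities with an
    NP-oracle instead of enumeration.
    """
    models, total = [], 0.0
    for bits in itertools.product([False, True], repeat=n_vars):
        assign = (None,) + bits  # 1-indexed assignment
        if all(any((lit > 0) == assign[abs(lit)] for lit in cl)
               for cl in clauses):
            w = weight(assign)
            models.append((assign, w))
            total += w
    # Distribution-aware sampling: pick a model with probability w / total.
    sample = rng.choices([m for m, _ in models],
                         weights=[w for _, w in models], k=1)[0]
    return total, sample

# (x1 OR x2) AND (NOT x1 OR x3), with weights favouring x3 = True.
clauses = [[1, 2], [-1, 3]]
count, model = weighted_count_and_sample(
    clauses, 3, lambda a: 2.0 if a[3] else 1.0)
print(count, model[1:])  # weighted count 7.0, one weighted sample
```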

    Flexible constrained sampling with guarantees for pattern mining

    Pattern sampling has been proposed as a potential solution to the infamous pattern explosion. Instead of enumerating all patterns that satisfy the constraints, individual patterns are sampled proportional to a given quality measure. Several sampling algorithms have been proposed, but each of them has its limitations when it comes to 1) flexibility in terms of quality measures and constraints that can be used, and/or 2) guarantees with respect to sampling accuracy. We therefore present Flexics, the first flexible pattern sampler that supports a broad class of quality measures and constraints, while providing strong guarantees regarding sampling accuracy. To achieve this, we leverage the perspective on pattern mining as a constraint satisfaction problem and build upon the latest advances in sampling solutions in SAT as well as existing pattern mining algorithms. Furthermore, the proposed algorithm is applicable to a variety of pattern languages, which allows us to introduce and tackle the novel task of sampling sets of patterns. We introduce and empirically evaluate two variants of Flexics: 1) a generic variant that addresses the well-known itemset sampling task and the novel pattern set sampling task, as well as a wide range of expressive constraints within these tasks, and 2) a specialized variant that exploits existing frequent itemset techniques to achieve substantial speed-ups. Experiments show that Flexics is both accurate and efficient, making it a useful tool for pattern-based data exploration. Comment: Accepted for publication in the Data Mining & Knowledge Discovery journal (ECML/PKDD 2017 journal track).
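    To make the sampling task concrete, here is a naive sketch under stated assumptions: it enumerates all itemsets meeting a minimum-support constraint and draws one proportional to its support. This is the kind of target distribution Flexics aims at, but Flexics reaches it via hashing-based sampling over a constraint-based encoding rather than enumeration; the helper below is hypothetical and only feasible for toy data.

```python
import itertools
import random

def sample_pattern(transactions, min_freq, rng=random.Random(1)):
    """Naive pattern sampler: draw an itemset proportional to its
    support, among itemsets meeting a minimum-frequency constraint.
    Exhaustive enumeration, for illustration only."""
    items = sorted({i for t in transactions for i in t})
    patterns, weights = [], []
    for r in range(1, len(items) + 1):
        for itemset in itertools.combinations(items, r):
            support = sum(1 for t in transactions if set(itemset) <= t)
            if support >= min_freq:          # the constraint
                patterns.append(itemset)
                weights.append(support)      # the quality measure
    return rng.choices(patterns, weights=weights, k=1)[0]

db = [{'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}, {'b', 'c'}]
print(sample_pattern(db, min_freq=2))
```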

    Learning what matters - Sampling interesting patterns

    In the field of exploratory data mining, local structure in data can be described by patterns and discovered by mining algorithms. Although many solutions have been proposed to address the redundancy problems in pattern mining, most of them either provide succinct pattern sets or take the interests of the user into account, but not both. Consequently, the analyst has to invest substantial effort in identifying those patterns that are relevant to her specific interests and goals. To address this problem, we propose a novel approach that combines pattern sampling with interactive data mining. In particular, we introduce the LetSIP algorithm, which builds upon recent advances in 1) weighted sampling in SAT and 2) learning to rank in interactive pattern mining. Specifically, it exploits user feedback to directly learn the parameters of the sampling distribution that represents the user's interests. We compare the performance of the proposed algorithm to the state-of-the-art in interactive pattern mining by emulating the interests of a user. The resulting system allows efficient and interleaved learning and sampling, thus enabling user-specific, anytime data exploration. Finally, LetSIP demonstrates favourable trade-offs concerning both quality-diversity and exploitation-exploration when compared to existing methods. Comment: PAKDD 2017, extended version.
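    As a rough illustration of feedback-driven reweighting, the sketch below applies a multiplicative-weights update to a factored sampling distribution: items occurring in the pattern the user preferred are boosted, items in the rejected pattern are dampened. This is a hypothetical stand-in for LetSIP's actual learning-to-rank update, shown only to convey how pairwise feedback can reshape a sampling distribution.

```python
import math

def update_weights(weights, preferred, rejected, eta=0.5):
    """One multiplicative-weights step from a pairwise preference.

    weights: dict item -> positive weight; a pattern's sampling weight
             is taken to be the product of its items' weights.
    preferred / rejected: the two patterns (sets of items) the user
             compared. Hypothetical update rule, not the paper's.
    """
    for item in preferred - rejected:
        weights[item] *= math.exp(eta)   # boost items in the liked pattern
    for item in rejected - preferred:
        weights[item] *= math.exp(-eta)  # dampen items in the other one
    # Normalise so weights stay bounded across many feedback rounds.
    z = sum(weights.values()) / len(weights)
    for item in weights:
        weights[item] /= z
    return weights

w = {i: 1.0 for i in 'abcd'}
update_weights(w, preferred={'a', 'b'}, rejected={'c'})
print({k: round(v, 3) for k, v in w.items()})
```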

    Induction of Interpretable Possibilistic Logic Theories from Relational Data

    The field of Statistical Relational Learning (SRL) is concerned with learning probabilistic models from relational data. Learned SRL models are typically represented using some kind of weighted logical formulas, which makes them considerably more interpretable than those obtained by e.g. neural networks. In practice, however, these models are often still difficult to interpret correctly, as they can contain many formulas that interact in non-trivial ways and weights do not always have an intuitive meaning. To address this, we propose a new SRL method which uses possibilistic logic to encode relational models. Learned models are then essentially stratified classical theories, which explicitly encode what can be derived with a given level of certainty. Compared to Markov Logic Networks (MLNs), our method is faster and produces considerably more interpretable models. Comment: Longer version of a paper appearing in IJCAI 2017.
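    A stratified possibilistic theory supports a simple notion of graded entailment: a query follows with certainty lambda if the classical theory formed by all formulas of weight at least lambda entails it. The brute-force sketch below (hypothetical helper, propositional case, exhaustive model checking, assuming each weight cut is classically consistent) illustrates that reading of "what can be derived with a given level of certainty".

```python
import itertools

def certainty(theory, query_cnf, n_vars):
    """Highest weight level lambda such that the classical theory
    {formula : weight >= lambda} entails the query.

    theory: list of (weight, cnf) pairs; each cnf is a list of clauses
            of DIMACS-style integer literals. Brute force over all 2^n
            assignments, for illustration only."""
    def entails(strata, goal):
        for bits in itertools.product([False, True], repeat=n_vars):
            a = (None,) + bits  # 1-indexed assignment
            def sat(cnf):
                return all(any((l > 0) == a[abs(l)] for l in cl)
                           for cl in cnf)
            if all(sat(cnf) for cnf in strata) and not sat(goal):
                return False  # counter-model found
        return True

    for lam in sorted({w for w, _ in theory}, reverse=True):
        stratum = [cnf for w, cnf in theory if w >= lam]
        if entails(stratum, query_cnf):
            return lam
    return 0.0

# Vars: 1 = bird, 2 = penguin, 3 = flies. "Birds fly" is only held
# with certainty 0.8; the penguin rules and the evidence are certain.
theory = [(1.0, [[1]]),        # evidence: it is a bird
          (0.8, [[-1, 3]]),    # birds fly
          (1.0, [[-2, 1]]),    # penguins are birds
          (1.0, [[-2, -3]])]   # penguins don't fly
print(certainty(theory, [[3]], 3))  # "it flies" follows at level 0.8
```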

    The Complexity of Approximately Counting Tree Homomorphisms

    We study two computational problems, parameterised by a fixed tree H. #HomsTo(H) is the problem of counting homomorphisms from an input graph G to H. #WHomsTo(H) is the problem of counting weighted homomorphisms to H, given an input graph G and a weight function for each vertex v of G. Even though H is a tree, these problems turn out to be sufficiently rich to capture all of the known approximation behaviour in #P. We give a complete trichotomy for #WHomsTo(H). If H is a star then #WHomsTo(H) is in FP. If H is not a star but does not contain a certain induced subgraph J_3, then #WHomsTo(H) is equivalent under approximation-preserving (AP) reductions to #BIS, the problem of counting independent sets in a bipartite graph. This problem is complete for the class #RHΠ_1 under AP-reductions. Finally, if H contains an induced J_3, then #WHomsTo(H) is equivalent under AP-reductions to #SAT, the problem of counting satisfying assignments to a CNF Boolean formula. Thus, #WHomsTo(H) is complete for #P under AP-reductions. The results are similar for #HomsTo(H), except that a rich structure emerges if H contains an induced J_3. We show that there are trees H for which #HomsTo(H) is #SAT-equivalent (disproving a plausible conjecture of Kelk). There is an interesting connection between these homomorphism-counting problems and the problem of approximating the partition function of the ferromagnetic Potts model. In particular, we show that for a family of graphs J_q, parameterised by a positive integer q, the problem #HomsTo(J_q) is AP-interreducible with the problem of approximating the partition function of the q-state Potts model. It was not previously known that the Potts model had a homomorphism-counting interpretation. We use this connection to obtain some additional upper bounds for the approximation complexity of #HomsTo(J_q).
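    For reference, the quantity whose approximability the paper classifies is elementary to define. The sketch below counts homomorphisms from G to H by checking every map from V(G) to V(H); it is exponential in |V(G)| and purely illustrative (hypothetical helper names).

```python
import itertools

def count_homs(g_verts, g_edges, h_verts, h_edges):
    """#HomsTo(H) by brute force: count maps f: V(G) -> V(H) that send
    every edge of G to an edge of H. Exponential in |V(G)|; the paper
    studies how hard this count is to approximate for a fixed tree H."""
    h_edge_set = {frozenset(e) for e in h_edges}
    count = 0
    for image in itertools.product(h_verts, repeat=len(g_verts)):
        f = dict(zip(g_verts, image))
        # A loop f(u) == f(v) gives a singleton frozenset, which is
        # never in h_edge_set, so such maps are correctly rejected.
        if all(frozenset((f[u], f[v])) in h_edge_set for u, v in g_edges):
            count += 1
    return count

# H = path a-b-c (a tree); G = a single edge. Each edge of H can be
# hit in two orientations, so the answer is 2 * |E(H)| = 4.
print(count_homs([1, 2], [(1, 2)],
                 ['a', 'b', 'c'], [('a', 'b'), ('b', 'c')]))
```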

    Probabilistic Model Counting with Short XORs

    The idea of counting the number of satisfying truth assignments (models) of a formula by adding random parity constraints can be traced back to the seminal work of Valiant and Vazirani, showing that NP is as easy as detecting unique solutions. While theoretically sound, the random parity constraints in that construction have the following drawback: each constraint, on average, involves half of all variables. As a result, the branching factor associated with searching for models that also satisfy the parity constraints quickly gets out of hand. In this work we prove that one can work with much shorter parity constraints and still get rigorous mathematical guarantees, especially when the number of models is large, so that many constraints need to be added. Our work is based on the realization that the essential feature for random systems of parity constraints to be useful in probabilistic model counting is that the geometry of their set of solutions resembles an error-correcting code. Comment: To appear in SAT 17.
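    The following toy sketch (pure Python, hypothetical names, brute force in place of a SAT solver) illustrates the hashing idea: each random parity constraint cuts the solution set roughly in half, so the number of constraints needed to eliminate all models yields a rough estimate of the model count. The paper's contribution is showing that short XORs, far below the n/2 average length of the Valiant-Vazirani construction, still support rigorous guarantees; this sketch is not that calibrated estimator.

```python
import itertools
import random

def estimate_count(clauses, n_vars, xor_len, trials=30,
                   rng=random.Random(0)):
    """Crude hashing-based model-count estimate with short XORs.

    Repeatedly adds random parity constraints over `xor_len` variables
    until no satisfying assignment survives; if i constraints kill the
    last model, the trial contributes 2^(i-1). Solutions are found by
    brute force here; in practice a SAT solver plays that role."""
    models = [bits for bits in itertools.product([0, 1], repeat=n_vars)
              if all(any((l > 0) == bool(bits[abs(l) - 1]) for l in cl)
                     for cl in clauses)]
    estimates = []
    for _ in range(trials):
        survivors, i = models, 0
        while survivors:
            vs = rng.sample(range(n_vars), xor_len)  # short XOR support
            parity = rng.randrange(2)
            survivors = [m for m in survivors
                         if sum(m[v] for v in vs) % 2 == parity]
            i += 1
        estimates.append(2 ** (i - 1))
    estimates.sort()
    return estimates[len(estimates) // 2]  # median over trials

clauses = [[1, 2], [-1, 3]]  # 14 models over 4 variables (x4 is free)
print(estimate_count(clauses, 4, xor_len=2))
```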