On Computing Maximal Independent Sets of Hypergraphs in Parallel
Whether or not the problem of finding maximal independent sets (MIS) in
hypergraphs is in (R)NC is one of the fundamental problems in the theory of
parallel computing. Unlike the well-understood case of MIS in graphs, for the
hypergraph problem our knowledge is quite limited despite considerable work.
It is known that the problem is in RNC when the edges of the hypergraph
have constant size. For general hypergraphs with n vertices and m edges, we
improve on the fastest previously known algorithm, giving an EREW PRAM
algorithm that works on general hypergraphs whose edge sizes satisfy a mild
growth condition. Our algorithm is based on a sampling idea that reduces the
dimension of the hypergraph and employs the algorithm for constant-dimension
hypergraphs as a subroutine.
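For reference, the object being computed can be pinned down with a short sequential baseline: a vertex set is independent if it contains no hyperedge entirely, and maximal if adding any further vertex would complete one. A minimal sketch (the function name and encoding are illustrative, not from the paper):

```python
def greedy_mis(num_vertices, edges):
    """Sequential greedy maximal independent set of a hypergraph.

    A vertex set S is independent if no hyperedge lies entirely
    inside S, and maximal if adding any remaining vertex would
    complete some hyperedge.  The paper computes such a set in
    parallel (EREW PRAM); this baseline makes no attempt at that.
    """
    edges = [frozenset(e) for e in edges]
    s = set()
    for v in range(num_vertices):
        # v may join S only if doing so completes no hyperedge.
        if all(not e <= (s | {v}) for e in edges):
            s.add(v)
    return s

# Two overlapping 3-edges on vertices {0, 1, 2, 3}.
mis = greedy_mis(4, [{0, 1, 2}, {1, 2, 3}])  # mis == {0, 1, 3}
```

Note that in a hypergraph an independent set may contain two vertices of the same edge, which is why the MIS here ({0, 1, 3}) can intersect both edges without containing either.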
Importance sampling the union of rare events with an application to power systems analysis
We consider importance sampling to estimate the probability of a union
of rare events defined by a random variable. The
sampler we study has been used in spatial statistics, genomics and
combinatorics, going back at least to Karp and Luby (1983). It works by
sampling one event at random, then sampling the random variable conditionally
on that event happening, and it constructs an unbiased estimate of the union
probability by multiplying an inverse moment of the number of occurring events
by the union bound. We prove variance bounds for this sampler: for a given
sample size, its variance is controlled by the union bound, and its
coefficient of variation is bounded regardless of the overlap pattern among
the events. Our motivating problem comes from power system reliability, where
the phase differences between connected nodes have a joint Gaussian
distribution and the rare events arise from unacceptably large phase
differences. In these grid reliability problems, even events defined by many
constraints in high dimension, with very small probability, are estimated
with a small coefficient of variation from a modest number of sample values.
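The estimator described above is easy to state concretely. In this sketch (a toy setting assumed for illustration: events are intervals over a uniform discrete domain, not the paper's Gaussian phase-difference exceedances), one event is drawn with probability proportional to its probability, a point is drawn inside it, and the union bound is multiplied by the mean of 1/(number of events containing the point):

```python
import random

def union_prob_estimate(intervals, domain_size, n_samples, rng=None):
    """Karp-Luby-style sampler for P(x in union of rare events).

    Toy assumption: each event is an interval [a, b) over a uniform
    integer domain.  Pick an event with probability proportional to
    its probability, sample a point conditionally inside it, and
    average union_bound / (#events containing the point).
    """
    rng = rng or random.Random()
    probs = [(b - a) / domain_size for a, b in intervals]
    union_bound = sum(probs)  # the union (Boole) bound
    total = 0.0
    for _ in range(n_samples):
        # Event j chosen with probability probs[j] / union_bound.
        j = rng.choices(range(len(intervals)), weights=probs)[0]
        a, b = intervals[j]
        x = rng.randrange(a, b)  # x uniform, conditioned on event j
        covering = sum(lo <= x < hi for lo, hi in intervals)
        total += 1.0 / covering
    return union_bound * total / n_samples

# Union of [0, 30) and [20, 50) over {0, ..., 99}: true probability 0.5.
est = union_prob_estimate([(0, 30), (20, 50)], 100, 20000, random.Random(0))
```

Dividing by the coverage count exactly cancels the multiple counting of points that lie in several events, which is why the estimate is unbiased whatever the overlap pattern.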
DNF Sparsification and a Faster Deterministic Counting Algorithm
Given a DNF formula on n variables, the two natural size measures are the
number of terms or size s(f), and the maximum width of a term w(f). It is
folklore that short DNF formulas can be made narrow. We prove a converse,
showing that narrow formulas can be sparsified. More precisely, any width w DNF
irrespective of its size can be -approximated by a width DNF with
at most terms.
We combine our sparsification result with the work of Luby and Velikovic to
give a faster deterministic algorithm for approximately counting the number of
satisfying solutions to a DNF. Given a formula on n variables with poly(n)
terms, we give a deterministic time algorithm
that computes an additive approximation to the fraction of
satisfying assignments of f for \epsilon = 1/\poly(\log n). The previous best
result due to Luby and Velickovic from nearly two decades ago had a run-time of
.Comment: To appear in the IEEE Conference on Computational Complexity, 201
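The folklore direction mentioned above (short DNF formulas can be made narrow) is simple enough to sketch: with s terms, every term wider than log2(s/ε) accepts at most an ε/s fraction of assignments, so deleting all such terms shifts the accepted fraction by at most ε. A toy illustration (the term encoding is an assumption, not from the paper):

```python
import math
from itertools import product

def accepted_fraction(terms, n_vars):
    """Fraction of assignments satisfying a DNF.  Each term is a dict
    mapping a variable index to its required bit."""
    hits = sum(
        any(all(a[v] == b for v, b in t.items()) for t in terms)
        for a in product((0, 1), repeat=n_vars)
    )
    return hits / 2 ** n_vars

def make_narrow(terms, eps):
    """Drop every term wider than log2(s/eps): a width-k term accepts
    a 2**-k fraction of assignments, so each dropped term accepts at
    most eps/s, and the s of them together shift the fraction <= eps."""
    width = math.ceil(math.log2(len(terms) / eps))
    return [t for t in terms if len(t) <= width]

# f = x0  OR  (a width-7 conjunction); eps = 0.05 allows width <= 6.
f = [{0: 1}, {1: 1, 2: 1, 3: 0, 4: 1, 5: 1, 6: 0, 7: 1}]
g = make_narrow(f, eps=0.05)
gap = accepted_fraction(f, 8) - accepted_fraction(g, 8)  # 1/256 <= 0.05
```

The paper's converse (sparsifying a narrow DNF down to (w log(1/ε))^{O(w)} terms) is the hard direction and is not captured by this truncation trick.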
DNF Sampling for ProbLog Inference
Inference in probabilistic logic languages such as ProbLog, an extension of
Prolog with probabilistic facts, is often based on a reduction to a
propositional formula in DNF. Calculating the probability of such a formula
involves the disjoint-sum-problem, which is computationally hard. In this work
we introduce a new approximation method for ProbLog inference which exploits
the DNF to focus sampling. While this DNF sampling technique has been applied
to a variety of tasks before, to the best of our knowledge it has not been used
for inference in probabilistic logic systems. The paper also presents an
experimental comparison with another sampling-based inference method previously
introduced for ProbLog.
Comment: Online proceedings of the Joint Workshop on Implementation of
Constraint Logic Programming Systems and Logic-based Methods in Programming
Environments (CICLOPS-WLPE 2010), Edinburgh, Scotland, U.K., July 15, 201
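The DNF-focused sampling idea (Karp-Luby-style, which is the family of techniques such DNF sampling belongs to) sidesteps the disjoint-sum-problem by only sampling worlds that satisfy some term, then down-weighting each world by how many terms it satisfies. A sketch over independent probabilistic facts (this encoding is an assumption for illustration, not ProbLog's actual data structures):

```python
import random

def dnf_prob_estimate(terms, fact_probs, n_samples, rng=None):
    """Estimate P(DNF is true) over independent probabilistic facts.

    terms: list of dicts {fact_index: required truth value (0/1)}.
    fact_probs: fact_probs[i] = P(fact i is true), independently.
    Sample a term proportionally to its probability, sample a world
    conditioned on that term, and average union_bound / (#terms the
    world satisfies), an unbiased Karp-Luby-style estimator.
    """
    rng = rng or random.Random()

    def term_prob(t):
        p = 1.0
        for i, b in t.items():
            p *= fact_probs[i] if b else 1.0 - fact_probs[i]
        return p

    weights = [term_prob(t) for t in terms]
    union_bound = sum(weights)
    total = 0.0
    for _ in range(n_samples):
        j = rng.choices(range(len(terms)), weights=weights)[0]
        world = {}
        for i, p in enumerate(fact_probs):
            if i in terms[j]:
                world[i] = terms[j][i]  # forced by the chosen term
            else:
                world[i] = 1 if rng.random() < p else 0
        satisfied = sum(
            all(world[i] == b for i, b in t.items()) for t in terms
        )
        total += 1.0 / satisfied
    return union_bound * total / n_samples

# P(x0 or x1) with both facts true w.p. 0.5: exact value 0.75.
est = dnf_prob_estimate([{0: 1}, {1: 1}], [0.5, 0.5], 20000,
                        random.Random(0))
```

Every sampled world is a model of the DNF by construction, so no samples are wasted on the rare-event complement, which is what makes this focusing effective.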
Learning to Reason: Leveraging Neural Networks for Approximate DNF Counting
Weighted model counting (WMC) has emerged as a prevalent approach for
probabilistic inference. In its most general form, WMC is #P-hard. Weighted DNF
counting (weighted #DNF) is a special case, where approximations with
probabilistic guarantees are obtained in O(nm), where n denotes the number of
variables, and m the number of clauses of the input DNF, but this is not
scalable in practice. In this paper, we propose a neural model counting
approach for weighted #DNF that combines approximate model counting with deep
learning, and accurately approximates model counts in linear time when width is
bounded. We conduct experiments to validate our method, and show that our model
learns and generalizes very well to large-scale #DNF instances.
Comment: To appear in Proceedings of the Thirty-Fourth AAAI Conference on
Artificial Intelligence (AAAI-20). Code and data available at:
https://github.com/ralphabb/NeuralDNF
On Hashing-Based Approaches to Approximate DNF-Counting
Propositional model counting is a fundamental problem in artificial intelligence with a wide variety of applications, such as probabilistic inference, decision making under uncertainty, and
probabilistic databases. Consequently, the problem is of theoretical as well as practical interest. When the constraints are expressed as DNF formulas, Monte Carlo-based techniques have been shown to provide a fully polynomial randomized approximation scheme (FPRAS). For CNF constraints, hashing-based approximation techniques have been demonstrated to be highly successful. Furthermore, it was shown that hashing-based techniques also yield an FPRAS for DNF counting without the use of Monte Carlo sampling. Our analysis, however, shows that the proposed hashing-based approach to DNF counting has poor time complexity compared to the Monte Carlo-based DNF counting techniques. Given the success of hashing-based techniques for CNF constraints, it is natural to ask: can hashing-based techniques provide an efficient FPRAS for DNF counting? In this paper, we provide a positive answer to this question. To this end, we introduce two novel algorithmic techniques, Symbolic Hashing and Stochastic Cell Counting, along
with a new hash family of Row-Echelon hash functions. These innovations allow us to design a hashing-based FPRAS for DNF counting with complexity similar (up to polylog factors) to that
of prior work. Furthermore, we expect these techniques to have potential applications beyond DNF counting.
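The hashing idea behind such approaches can be sketched in miniature: draw random XOR (parity) constraints from a pairwise-independent hash family, count the solutions that land in the resulting cell, and scale up by the number of cells. The sketch below brute-forces the cell count; the paper's actual contribution (Symbolic Hashing, Stochastic Cell Counting, Row-Echelon hash functions) is precisely about avoiding that brute force, so treat this only as an illustration of the estimator's shape:

```python
import random
from itertools import product

def xor_hash_estimate(satisfies, n_vars, n_constraints, rng):
    """One hashing-based estimate of the model count of `satisfies`.

    Draw random parity constraints (a pairwise-independent hash),
    count models landing in the chosen cell by brute force, and
    scale by the number of cells.  Each estimate is unbiased;
    averaging repetitions concentrates it around the true count.
    """
    constraints = [
        ([rng.randrange(2) for _ in range(n_vars)], rng.randrange(2))
        for _ in range(n_constraints)
    ]
    in_cell = 0
    for bits in product((0, 1), repeat=n_vars):
        if satisfies(bits) and all(
            sum(c * b for c, b in zip(coeffs, bits)) % 2 == parity
            for coeffs, parity in constraints
        ):
            in_cell += 1
    return in_cell * 2 ** n_constraints

# DNF f = (x0 AND x1) OR x2 on 4 variables has exactly 10 models.
f = lambda b: (b[0] and b[1]) or b[2]
rng = random.Random(1)
avg = sum(xor_hash_estimate(f, 4, 2, rng) for _ in range(1000)) / 1000
```

Each model falls in the chosen cell with probability 2^{-m} for m constraints, so the scaled cell count has the true count as its expectation; pairwise independence of the hash keeps its variance small.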