16 research outputs found
Trimmed Moebius Inversion and Graphs of Bounded Degree
We study ways to expedite Yates's algorithm for computing the zeta and
Moebius transforms of a function defined on the subset lattice. We develop a
trimmed variant of Moebius inversion that proceeds point by point, finishing
the calculation at a subset before considering its supersets. For an
n-element universe and a family \scr F of its subsets, trimmed Moebius
inversion allows us to compute the number of packings, coverings, and
partitions of the universe with sets from \scr F in time within a polynomial
factor (in n) of the number of supersets of the members of \scr F. Relying
on an intersection theorem of Chung et al. (1986) to bound the sizes of set
families, we apply these ideas to well-studied combinatorial optimisation
problems on graphs of maximum degree \Delta. In particular, we show how to
compute the Domatic Number in time within a polynomial factor of
(2^{\Delta+1}-2)^{n/(\Delta+1)} and the Chromatic Number in time within a
polynomial factor of (2^{\Delta+1}-\Delta-1)^{n/(\Delta+1)}. For any constant
\Delta, these bounds are O((2-\eps)^n) for some constant \eps > 0
independent of the number of vertices n.
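The zeta and Moebius transforms on the subset lattice that Yates's algorithm computes can be sketched directly. The following is the standard textbook version over the full lattice (not the paper's trimmed, point-by-point variant), running in O(n 2^n) time:

```python
def zeta_transform(f, n):
    """In-place fast zeta transform (Yates's algorithm):
    f[S] becomes sum(f[T] for all subsets T of S).
    f is a list of length 2**n indexed by bitmask over an n-element universe."""
    for i in range(n):
        for S in range(1 << n):
            if S & (1 << i):
                f[S] += f[S ^ (1 << i)]
    return f

def moebius_transform(f, n):
    """In-place Moebius inversion: the exact inverse of zeta_transform,
    recovering the original function from its subset sums."""
    for i in range(n):
        for S in range(1 << n):
            if S & (1 << i):
                f[S] -= f[S ^ (1 << i)]
    return f
```

For example, after `zeta_transform(f, 3)` the entry at the full set `0b111` holds the sum of all eight original values, and applying `moebius_transform` restores the input exactly.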
Faster exponential-time algorithms in graphs of bounded average degree
We first show that the Traveling Salesman Problem in an n-vertex graph with
average degree bounded by d can be solved in O*(2^{(1-\eps_d)n}) time and
exponential space for a constant \eps_d depending only on d, where the
O*-notation suppresses factors polynomial in the input size. Thus, we
generalize the recent results of Bjorklund et al. [TALG 2012] on graphs of
bounded degree.
Then, we move to the problem of counting perfect matchings in a graph. We
first present a simple algorithm for counting perfect matchings in an n-vertex
graph in O*(2^{n/2}) time and polynomial space; our algorithm matches the
complexity bounds of the algorithm of Bjorklund [SODA 2012], but relies on
the inclusion-exclusion principle instead of algebraic transformations. Building
upon this result, we show that the number of perfect matchings in an n-vertex
graph with average degree bounded by d can be computed in
O*(2^{(1-\eps_{2d})n/2}) time and exponential space, where \eps_{2d} is the
constant obtained by us for the Traveling Salesman Problem in graphs of average
degree at most 2d.
Moreover we obtain a simple algorithm that counts the number of perfect
matchings in an n-vertex bipartite graph of average degree at most d in
O*(2^{(1-1/(3.55d))n/2}) time, improving and simplifying the recent result of
Izumi and Wadayama [FOCS 2012].
Comment: 10 pages
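As a small, concrete illustration of counting perfect matchings by inclusion-exclusion, here is Ryser-style permanent computation for the bipartite case (an illustration of the principle only, not the paper's general-graph algorithm). With n vertices per side, i.e. N = 2n vertices in total, it runs in O*(2^n) = O*(2^{N/2}) time and polynomial space:

```python
def count_perfect_matchings_bipartite(adj):
    """Count perfect matchings of a bipartite graph via Ryser's
    inclusion-exclusion formula for the permanent.
    adj is an n x n 0/1 matrix: adj[i][j] = 1 iff left vertex i
    is adjacent to right vertex j. O(2^n * n^2) time, polynomial space."""
    n = len(adj)
    total = 0
    for S in range(1, 1 << n):  # nonempty subsets of right-side vertices
        prod = 1
        for i in range(n):  # row sum of adj[i] restricted to columns in S
            s = 0
            for j in range(n):
                if S >> j & 1:
                    s += adj[i][j]
            prod *= s
        # inclusion-exclusion sign: parity of the number of excluded columns
        total += (-1) ** (n - bin(S).count("1")) * prod
    return total
```

For instance, the complete bipartite graph K_{2,2} (all-ones 2 x 2 matrix) has exactly 2 perfect matchings.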
Families with infants: a general approach to solve hard partition problems
We introduce a general approach for solving partition problems where the goal
is to represent a given set as a union (either disjoint or not) of subsets
satisfying certain properties. Many NP-hard problems can be naturally stated as
such partition problems. We show that if one can find a large enough system of
so-called families with infants for a given problem, then this problem can be
solved faster than by a straightforward algorithm. We use this approach to
improve known bounds for several NP-hard problems as well as to simplify the
proofs of several known results.
For the chromatic number problem we present an algorithm with running time
O*(2^{(1-\eps_d)n}) and exponential space for graphs of average
degree d, where \eps_d > 0 depends only on d. This improves the algorithm by
Bj\"{o}rklund et al. [Theory Comput. Syst. 2010] that works for graphs of
bounded maximum (as opposed to average) degree and closes an open problem
stated by Cygan and Pilipczuk [ICALP 2013].
For the traveling salesman problem we give an algorithm working in
O*(2^{(1-\eps_d)n}) time and polynomial space for graphs of average
degree d. The previously known results of this kind are a polynomial-space
algorithm by Bj\"{o}rklund et al. [ICALP 2008] for graphs of bounded maximum
degree and an exponential-space algorithm for bounded average degree by Cygan
and Pilipczuk [ICALP 2013].
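For context, the classical O*(2^n) dynamic program for the traveling salesman problem that these bounds improve on is the Held-Karp algorithm, which uses exponential space (unlike the polynomial-space algorithm above). A minimal sketch:

```python
def held_karp(dist):
    """Held-Karp dynamic program for TSP: O*(2^n) time, O*(2^n) space.
    dist is an n x n matrix of edge weights; the tour starts and ends at
    vertex 0. dp[S][v] = cheapest path that starts at 0, visits exactly
    the vertex set S (as a bitmask, with 0 and v in S), and ends at v."""
    n = len(dist)
    full = (1 << n) - 1
    INF = float("inf")
    dp = [[INF] * n for _ in range(1 << n)]
    dp[1][0] = 0  # path consisting of the start vertex alone
    for S in range(1 << n):
        if not S & 1:  # only consider sets containing the start vertex
            continue
        for v in range(n):
            if dp[S][v] == INF:
                continue
            for w in range(n):  # extend the path by an unvisited vertex w
                if S >> w & 1:
                    continue
                T = S | (1 << w)
                c = dp[S][v] + dist[v][w]
                if c < dp[T][w]:
                    dp[T][w] = c
    # close the tour by returning to vertex 0
    return min(dp[full][v] + dist[v][0] for v in range(1, n))
```

On a 4-vertex instance where the cycle 0-1-2-3-0 has all edges of weight 1 and the diagonals cost 9, the optimum tour has cost 4.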
For counting perfect matchings in graphs of average degree d we present an
algorithm with running time O*(2^{(1-\eps_d)n/2}) and polynomial
space. Recent algorithms of this kind due to Cygan and Pilipczuk [ICALP 2013]
and Izumi and Wadayama [FOCS 2012] (for bipartite graphs only) use exponential
space.
Comment: 18 pages, a revised version of this paper is available at
http://arxiv.org/abs/1410.220
Efficient Möbius Transformations and their applications to D-S Theory
Dempster-Shafer Theory (DST) generalizes Bayesian probability theory, offering useful additional information, but suffers from a high computational burden. A lot of work has been done to reduce the complexity of computations used in information fusion with Dempster's rule. The main approaches exploit either the structure of Boolean lattices or the information contained in belief sources. Each has its merits depending on the situation. In this paper, we propose sequences of graphs for the computation of the zeta and Möbius transformations that optimally exploit both the structure of distributive lattices and the information contained in belief sources. We call them the Efficient Möbius Transformations (EMT). We show that the complexity of the EMT is always lower than the complexity of algorithms that consider the whole lattice, such as the Fast Möbius Transform (FMT), for all DST transformations. We then explain how to use them to fuse two belief sources. More generally, our EMTs apply to any function in any finite distributive lattice, focusing on a meet-closed or join-closed subset.
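For readers unfamiliar with DST, the fusion operation whose cost is at stake is Dempster's rule of combination. A naive implementation, quadratic in the number of focal sets and with focal sets encoded as bitmasks over the frame of discernment, looks like this (a plain baseline for context, not the paper's EMT-based method):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.
    m1, m2: dicts mapping focal sets (bitmasks over the frame of
    discernment) to their mass. Returns the normalized combined masses."""
    combined = {}
    conflict = 0.0
    for B, p in m1.items():
        for C, q in m2.items():
            A = B & C  # set intersection on bitmasks
            if A == 0:
                conflict += p * q  # mass assigned to the empty set
            else:
                combined[A] = combined.get(A, 0.0) + p * q
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources cannot be combined")
    norm = 1.0 - conflict  # renormalize away the conflicting mass
    return {A: v / norm for A, v in combined.items()}
```

For a frame {a, b} with a = bit 0 and b = bit 1, combining m1 = {a: 0.6, {a,b}: 0.4} with m2 = {b: 0.5, {a,b}: 0.5} yields conflict 0.3 and combined masses 3/7, 2/7, 2/7 on a, b, and {a, b} respectively.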
A Parallel Algorithm for Exact Bayesian Structure Discovery in Bayesian Networks
Exact Bayesian structure discovery in Bayesian networks requires exponential
time and space. Using dynamic programming (DP), the fastest known sequential
algorithm computes the exact posterior probabilities of structural features in
O(n 2^n) time and O(2^n) space, if the number of nodes (variables) in the
Bayesian network is n and the in-degree (the number of parents) per node is
bounded by a constant d. Here we present a parallel algorithm capable of
computing the exact posterior probabilities for all edges with optimal
parallel space efficiency and nearly optimal parallel time efficiency. That is,
if p = 2^k processors are used, the run-time reduces to O(n 2^{n-k}) and the
space usage becomes O(2^{n-k}) per processor. Our algorithm is based on the
observation that the subproblems in the sequential DP algorithm constitute an
n-dimensional hypercube. We carefully coordinate the computation of correlated
DP procedures so that large amounts of data exchange are suppressed. Further,
we develop parallel techniques
for two variants of the well-known \emph{zeta transform}, which have
applications outside the context of Bayesian networks. We demonstrate the
capability of our algorithm on datasets with up to 33 variables and its
scalability on up to 2048 processors. We apply our algorithm to a biological
data set for discovering the yeast pheromone response pathways.
Comment: 32 pages, 12 figures
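One standard variant of the zeta transform relevant here sums over supersets rather than subsets (a sequential sketch; the hypercube observation above suggests parallelizing by fixing k bits of the mask and distributing the 2^k subcubes across processors):

```python
def superset_zeta(f, n):
    """In-place 'upward' zeta transform on the subset lattice:
    f[S] becomes sum(f[T] for all supersets T of S).
    f is a list of length 2**n indexed by bitmask; O(n * 2**n) time."""
    for i in range(n):
        for S in range(1 << n):
            if not S & (1 << i):
                f[S] += f[S | (1 << i)]
    return f
```

After the transform, the entry at the empty set holds the sum of all original values, while the entry at the full set is unchanged.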