Potential Maximal Clique Algorithms for Perfect Phylogeny Problems
Kloks, Kratsch, and Spinrad showed how treewidth and minimum fill-in, NP-hard
combinatorial optimization problems related to minimal triangulations, are
broken into subproblems by block subgraphs defined by minimal separators. These
ideas were expanded on by Bouchitt\'e and Todinca, who used potential maximal
cliques to solve these problems using a dynamic programming approach in time
polynomial in the number of minimal separators of a graph. It is known that
solutions to the perfect phylogeny problem, maximum compatibility problem, and
unique perfect phylogeny problem are characterized by minimal triangulations of
the partition intersection graph. In this paper, we show that techniques
similar to those proposed by Bouchitt\'e and Todinca can be used to solve the
perfect phylogeny problem with missing data, the two-state maximum
compatibility problem with missing data, and the unique perfect phylogeny
problem with missing data in time polynomial in the number of minimal
separators of the partition intersection graph.
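The partition intersection graph at the center of this abstract is simple to construct: its vertices are (character, state) pairs, and two vertices are adjacent when some taxon exhibits both. A minimal sketch (the function name and the use of `None` for missing data are my own conventions, not from the paper):

```python
from itertools import combinations

def partition_intersection_graph(matrix):
    """Build the partition intersection graph of a character matrix.

    matrix[t][c] is the state of taxon t for character c; None marks
    missing data. Vertices are (character, state) pairs; two vertices
    are adjacent when some taxon exhibits both of them.
    """
    vertices = {(c, row[c]) for row in matrix for c in range(len(row))
                if row[c] is not None}
    edges = set()
    for row in matrix:
        # All (character, state) pairs this taxon exhibits are
        # pairwise adjacent in the partition intersection graph.
        present = [(c, row[c]) for c in range(len(row)) if row[c] is not None]
        for u, v in combinations(present, 2):
            edges.add(frozenset((u, v)))
    return vertices, edges
```

For instance, three taxa over two characters, with one missing entry, yield a four-vertex graph whose edges record state co-occurrence; the perfect-phylogeny characterizations cited above then concern chordal (triangulated) completions of this graph.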
Polynomial-time algorithm for Maximum Weight Independent Set on $P_6$-free graphs
In the classic Maximum Weight Independent Set problem we are given a graph
$G$ with a nonnegative weight function on vertices, and the goal is to find an
independent set in $G$ of maximum possible weight. While the problem is NP-hard
in general, we give a polynomial-time algorithm working on any $P_6$-free
graph, that is, a graph that has no path on six vertices as an induced
subgraph. This improves the polynomial-time algorithm on $P_5$-free graphs of
Lokshtanov et al. (SODA 2014) and the quasipolynomial-time algorithm on
$P_6$-free graphs of Lokshtanov et al. (SODA 2016). The main technical
contribution leading to our main result is the enumeration of a polynomial-size
family $\mathcal{F}$ of vertex subsets with the following property: for every
maximal independent set $I$ in the graph, $\mathcal{F}$ contains all maximal
cliques of some minimal chordal completion of $G$ that does not add any edge
incident to a vertex of $I$.
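The $P_6$-freeness condition itself is easy to test by brute force on small graphs: grow a path one vertex at a time, insisting that each new endpoint is non-adjacent to every earlier path vertex, so that only induced paths are enumerated. A sketch (exponential time; the paper's polynomial algorithm works very differently):

```python
def has_induced_path(adj, t):
    """Check whether the graph contains an induced path on t vertices.

    adj: dict mapping vertex -> set of neighbours.
    Brute-force DFS, intended only for small graphs.
    """
    def extend(path):
        if len(path) == t:
            return True
        for w in adj[path[-1]]:
            if w in path:
                continue
            # w must be non-adjacent to every earlier path vertex,
            # otherwise the path is not induced.
            if all(w not in adj[u] for u in path[:-1]):
                if extend(path + [w]):
                    return True
        return False
    return any(extend([v]) for v in adj)

def is_p6_free(adj):
    return not has_induced_path(adj, 6)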
Fixed-Parameter Tractability of Directed Multiway Cut Parameterized by the Size of the Cutset
Given a directed graph $G$, a set of terminals $T$, and an integer $k$, the
\textsc{Directed Vertex Multiway Cut} problem asks if there is a set $S$ of at
most $k$ (nonterminal) vertices whose removal disconnects each terminal from
all other terminals. \textsc{Directed Edge Multiway Cut} is the analogous
problem where $S$ is a set of at most $k$ edges. These two problems are known
to be equivalent. A natural generalization of the multiway cut is the
\emph{multicut} problem, in which we want to disconnect only a set of given
pairs instead of all pairs. Marx (Theor. Comp. Sci. 2006) showed that in
undirected graphs multiway cut is fixed-parameter tractable (FPT) parameterized
by $k$. Marx and Razgon (STOC 2011) showed that undirected multicut is FPT and
directed multicut is W[1]-hard parameterized by $k$. We complete the picture
here with our main result: both \textsc{Directed Vertex Multiway
Cut} and \textsc{Directed Edge Multiway Cut} can be solved in time
$2^{2^{O(k)}} n^{O(1)}$, i.e., FPT parameterized by the size $k$ of the cutset of
the solution. This answers an open question raised by Marx (Theor. Comp. Sci.
2006) and Marx and Razgon (STOC 2011). It follows from our result that
\textsc{Directed Multicut} is FPT for the case of two terminal pairs, which
answers another open problem raised in Marx and Razgon (STOC 2011).
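To make the problem statement concrete, here is a naive exact solver for Directed Vertex Multiway Cut: try every set of at most $k$ nonterminal vertices and verify by graph search that no terminal reaches another. This runs in time roughly $n^k$, so it illustrates the problem, not the paper's FPT algorithm (the function name and interface are my own):

```python
from itertools import combinations

def directed_multiway_cut(adj, terminals, k):
    """Brute-force Directed Vertex Multiway Cut.

    adj: dict vertex -> set of out-neighbours.
    Returns a set of at most k nonterminal vertices whose removal
    leaves no directed path between distinct terminals, or None.
    """
    nonterminals = [v for v in adj if v not in terminals]

    def reaches(src, dst, removed):
        stack, seen = [src], {src}
        while stack:
            v = stack.pop()
            if v == dst:
                return True
            for w in adj[v]:
                if w not in removed and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return False

    for size in range(k + 1):          # prefer smaller cuts
        for cut in combinations(nonterminals, size):
            removed = set(cut)
            if not any(reaches(s, t, removed)
                       for s in terminals for t in terminals if s != t):
                return frozenset(cut)
    return None
```

On a graph with two vertex-disjoint $s \to t$ paths, no single-vertex cut exists, but removing both internal vertices works.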
Fast Algorithms for Parameterized Problems with Relaxed Disjointness Constraints
In parameterized complexity, it is a natural idea to consider different
generalizations of classic problems. Usually, such generalizations are obtained
by introducing a "relaxation" variable, where the original problem corresponds
to setting this variable to a constant value. For instance, the problem of
packing disjoint sets of size at most $r$ into a given universe generalizes the
Maximum Matching problem, which is recovered by taking $r = 2$. Most often, the
complexity of the problem increases with the relaxation variable, but very
recently Abasi et al. have given a surprising example of a problem ---
$r$-Simple $k$-Path --- that can be solved by a randomized algorithm with
running time $O^\star(2^{O(k \log r / r)})$. That is, the complexity of the
problem decreases with $r$. In this paper we pursue further the direction
sketched by Abasi et al. Our main contribution is a derandomization tool that
provides a deterministic counterpart of the main technical result of Abasi et
al.: the $O^\star(2^{O(k \log r / r)})$-time algorithm for $(r,k)$-Monomial
Detection, which is the problem of finding a monomial of total degree $k$ and
individual degrees at most $r$ in a polynomial given as an arithmetic circuit.
Our technique works for a large class of circuits, and in particular it can be
used to derandomize the result of Abasi et al. for $r$-Simple $k$-Path. On our
way to this result we introduce the notion of representative sets for
multisets, which may be of independent interest. Finally, we give two more
examples of problems that were already studied in the literature, where the
same relaxation phenomenon happens. The first one is a natural relaxation of
the Set Packing problem, where we allow the packed sets to overlap at each
element at most $r$ times. The second one is Degree Bounded Spanning Tree,
where we seek a spanning tree of the graph with a small maximum degree.
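The object behind $r$-Simple $k$-Path is a walk that visits $k$ vertices in total but uses no vertex more than $r$ times; setting $r = 1$ recovers the ordinary $k$-Path problem. A brute-force recognizer makes the definition precise (illustration only; the fast algorithms above are far more clever):

```python
from collections import Counter

def has_r_simple_k_path(adj, k, r):
    """Check for an r-simple k-path: a walk on k vertices (counted with
    repetition) that uses every vertex at most r times. Brute-force DFS.

    adj: dict vertex -> set of (out-)neighbours.
    """
    def extend(walk, counts):
        if len(walk) == k:
            return True
        for w in adj[walk[-1]]:
            if counts[w] < r:          # relaxed disjointness constraint
                counts[w] += 1
                if extend(walk + [w], counts):
                    return True
                counts[w] -= 1
        return False

    return any(extend([v], Counter([v])) for v in adj)
```

On a triangle, a walk of length 6 exists with each vertex used twice, while an ordinary simple path on 4 vertices ($r = 1$) does not.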
A Backtracking-Based Algorithm for Computing Hypertree-Decompositions
Hypertree decompositions of hypergraphs are a generalization of tree
decompositions of graphs. The corresponding hypertree-width is a measure for
the cyclicity and therefore tractability of the encoded computation problem.
Many NP-hard decision and computation problems are known to be tractable on
instances whose structure corresponds to hypergraphs of bounded
hypertree-width. Intuitively, the smaller the hypertree-width, the faster the
computation problem can be solved. In this paper, we present the new
backtracking-based algorithm det-k-decomp for computing hypertree
decompositions of small width. Our benchmark evaluations have shown that
det-k-decomp significantly outperforms opt-k-decomp, the only exact hypertree
decomposition algorithm so far. Even compared to the best heuristic algorithm,
we obtained competitive results as long as the hypergraphs are not too large.
Comment: 19 pages, 6 figures, 3 tables
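The width notion being minimized here is easy to state in code: each decomposition node carries a bag of vertices and a set of hyperedges covering it, and the width is the largest cover used. A minimal checker for this one condition (the tree-shape and connectedness conditions of a hypertree decomposition are assumed to hold; the data layout is my own):

```python
def decomposition_width(nodes):
    """Width of a candidate (generalized) hypertree decomposition.

    nodes: list of (bag, cover) pairs, where bag is a set of vertices
    and cover is a list of hyperedges (sets of vertices). Verifies only
    that each bag is covered by the union of its hyperedges; returns
    the maximum number of covering hyperedges used at any node.
    """
    width = 0
    for bag, cover in nodes:
        union = set().union(*cover) if cover else set()
        if not bag <= union:
            raise ValueError("bag not covered by its hyperedges")
        width = max(width, len(cover))
    return width
```

A node whose bag needs two hyperedges already forces width 2; det-k-decomp searches for decompositions keeping this maximum at most a given k.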
The succinctness of first-order logic on linear orders
Succinctness is a natural measure for comparing the strength of different logics. Intuitively, a logic L_1 is more succinct than another logic L_2 if all properties that can be expressed in L_2 can be expressed in L_1 by formulas of (approximately) the same size, but some properties can be expressed in L_1 by (significantly) smaller formulas.
We study the succinctness of logics on linear orders. Our first theorem is concerned with the finite variable fragments of first-order logic. We prove that:
(i) Up to a polynomial factor, the 2- and the 3-variable fragments of first-order logic on linear orders have the same succinctness. (ii) The 4-variable fragment is exponentially more succinct than the 3-variable fragment. Our second main result compares the succinctness of first-order logic on linear orders with that of monadic second-order logic. We prove that the fragment of monadic second-order logic that has the same expressiveness as first-order logic on linear orders is non-elementarily more succinct than first-order logic.