Fast branching algorithm for Cluster Vertex Deletion
In the family of clustering problems, we are given a set of objects (vertices
of the graph), together with some observed pairwise similarities (edges). The
goal is to identify clusters of similar objects by slightly modifying the graph
to obtain a cluster graph (disjoint union of cliques). Hüffner et al. [Theory
Comput. Syst. 2010] initiated the parameterized study of Cluster Vertex
Deletion, where the allowed modification is vertex deletion, and presented an
elegant O(2^k * k^9 + n * m)-time fixed-parameter algorithm, parameterized by
the solution size. In our work, we pick up this line of research and present an
O(1.9102^k * (n + m))-time branching algorithm.
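The 1.9102^k bound above comes from a refined case analysis; as a point of reference, the following is a minimal Python sketch of the standard 3^k branching for Cluster Vertex Deletion (a graph is a cluster graph exactly when it contains no induced path on three vertices, so some vertex of any such path must be deleted). It illustrates only the basic branching idea, not the algorithm of the paper; the adjacency-set encoding of the input graph is an assumption.

    from itertools import combinations

    def find_p3(adj):
        # Return an induced path (u, v, w): edges uv and vw present, edge uw absent.
        for v in adj:
            for u, w in combinations(adj[v], 2):
                if w not in adj[u]:
                    return (u, v, w)
        return None

    def cvd(adj, k):
        # Standard 3^k branching: a graph is a cluster graph iff it has no induced P3,
        # so at least one vertex of any induced P3 must be deleted.
        p3 = find_p3(adj)
        if p3 is None:
            return True
        if k == 0:
            return False
        for x in p3:
            rest = {v: nbrs - {x} for v, nbrs in adj.items() if v != x}
            if cvd(rest, k - 1):
                return True
        return False

    # Toy example (assumed adjacency-set encoding): the path a-b-c-d
    g = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
    print(cvd(g, 1))  # True: deleting one inner vertex leaves a cluster graph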
Normal, Abby Normal, Prefix Normal
A prefix normal word is a binary word with the property that no substring has
more 1s than the prefix of the same length. This class of words is important in
the context of binary jumbled pattern matching. In this paper we present
results about the number of prefix normal words of length n, establishing
lower and upper bounds on this number. We introduce efficient
algorithms for testing the prefix normal property and a "mechanical algorithm"
for computing prefix normal forms. We also include games which can be played
with prefix normal words. In these games Alice wishes to stay normal but Bob
wants to drive her "abnormal" -- we discuss which parameter settings allow
Alice to succeed. Comment: Accepted at FUN '14
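As an illustration of the prefix normal property itself (not of the paper's efficient algorithms), here is a minimal quadratic-time Python check that no factor has more 1s than the prefix of the same length; the function name and input encoding are assumptions.

    def is_prefix_normal(w):
        # Naive O(n^2) test of the prefix normal property: for every length l, the
        # maximum number of 1s in a factor of length l must not exceed the number
        # of 1s in the prefix of length l.
        n = len(w)
        ones = [0] * (n + 1)            # ones[i] = number of 1s in w[:i]
        for i, c in enumerate(w, 1):
            ones[i] = ones[i - 1] + (c == "1")
        for l in range(1, n + 1):
            best = max(ones[i + l] - ones[i] for i in range(n - l + 1))
            if best > ones[l]:
                return False
        return True

    print(is_prefix_normal("11010"))  # True
    print(is_prefix_normal("10011"))  # False: factor "11" beats the prefix "10"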
Going weighted: Parameterized algorithms for cluster editing
The goal of the Cluster Editing problem is to make the fewest changes to the edge set of an input graph such that the resulting graph is a disjoint union of cliques. This problem is NP-complete but recently, several parameterized algorithms have been proposed. In this paper, we present a number of surprisingly simple search tree algorithms for Weighted Cluster Editing assuming that edge insertion and deletion costs are positive integers. We show that the smallest search tree has size O(1.82^k) for edit cost k, resulting in the currently fastest parameterized algorithm, both for this problem and its unweighted counterpart. We have implemented and compared our algorithms, and achieved promising results. This is an extended version of two articles published in: Proc. of the 6th Asia Pacific Bioinformatics Conference, APBC 2008, in: Series on Advances in Bioinformatics and Computational Biology, vol. 5, Imperial College Press, pp. 211–220; and in: Proc. of the 2nd Conference on Combinatorial Optimization and Applications, COCOA 2008, in: LNCS, vol. 5038, Springer, pp. 289–302.
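For comparison with the O(1.82^k) search tree above, the following Python sketch shows the textbook 3^k branching for the unweighted problem: a graph is a disjoint union of cliques exactly when it has no conflict triple (two edges uv, uw with vw missing), so one of three edits must be applied to any conflict triple. It is only meant to illustrate the search tree idea, not the weighted algorithms of the paper; the edge-set encoding is an assumption.

    from itertools import combinations

    def find_conflict(vertices, edges):
        # A graph is a disjoint union of cliques iff it has no conflict triple:
        # a vertex u with neighbors v, w such that vw is not an edge.
        for u in vertices:
            nbrs = [v for v in vertices if frozenset((u, v)) in edges]
            for v, w in combinations(nbrs, 2):
                if frozenset((v, w)) not in edges:
                    return u, v, w
        return None

    def cluster_edit(vertices, edges, k):
        # Textbook 3^k branching: resolve a conflict triple by deleting uv,
        # deleting uw, or inserting vw.
        conflict = find_conflict(vertices, edges)
        if conflict is None:
            return True
        if k == 0:
            return False
        u, v, w = conflict
        return (cluster_edit(vertices, edges - {frozenset((u, v))}, k - 1)
                or cluster_edit(vertices, edges - {frozenset((u, w))}, k - 1)
                or cluster_edit(vertices, edges | {frozenset((v, w))}, k - 1))

    # Toy example (assumed edge-set encoding): the path a-b-c needs exactly one edit.
    V = {"a", "b", "c"}
    E = {frozenset(("a", "b")), frozenset(("b", "c"))}
    print(cluster_edit(V, E, 1))  # True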
Exploring the limits of the geometric copolymerization model
The geometric copolymerization model is a recently introduced statistical Markov chain model. Here, we investigate its practicality. First, several approaches to identify the optimal model parameters from observed copolymer fingerprints are evaluated using Monte Carlo simulated data. Directly optimizing the parameters is robust against noise but has impractically long running times. A compromise between robustness and running time is found by exploiting the relationship between monomer concentrations calculated by ordinary differential equations and the geometric model. Second, we investigate the applicability of the model to copolymerizations beyond living polymerization and show that the model is useful for copolymerizations involving termination and depropagation reactions
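The abstract does not spell out the model's parameterization, so the following Python sketch is only a generic illustration of how copolymer "fingerprints" can be Monte Carlo simulated from a two-state Markov chain (terminal model) with a termination probability; all parameter names and values are assumptions, not the geometric model from the paper.

    import random
    from collections import Counter

    def simulate_chain(p_aa, p_bb, p_stop, rng):
        # Grow one chain with a two-state Markov chain (terminal model):
        # p_aa = P(next is A | last was A), p_bb = P(next is B | last was B),
        # p_stop = termination probability after each addition. Illustrative only.
        chain = [rng.choice("AB")]
        while rng.random() >= p_stop:
            last = chain[-1]
            stay = p_aa if last == "A" else p_bb
            if rng.random() < stay:
                chain.append(last)
            else:
                chain.append("B" if last == "A" else "A")
        return "".join(chain)

    def fingerprint(n_chains, p_aa, p_bb, p_stop, seed=0):
        # Monte Carlo "fingerprint": frequency of (#A, #B) compositions over chains.
        rng = random.Random(seed)
        counts = Counter()
        for _ in range(n_chains):
            c = simulate_chain(p_aa, p_bb, p_stop, rng)
            counts[(c.count("A"), c.count("B"))] += 1
        return counts

    fp = fingerprint(10_000, p_aa=0.7, p_bb=0.5, p_stop=0.1)
    print(sorted(fp.items())[:5])  # a few (nA, nB) -> frequency entries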
On Symbolic Ultrametrics, Cotree Representations, and Cograph Edge Decompositions and Partitions
Symbolic ultrametrics define edge-colored complete graphs K_n and yield a
simple tree representation of K_n. We discuss under which conditions this idea
can be generalized to find a symbolic ultrametric that, in addition,
distinguishes between edges and non-edges of arbitrary graphs G=(V,E), thus
yielding a simple tree representation of G. We prove that such a symbolic
ultrametric can be defined for G if and only if G is a so-called cograph.
A cograph is uniquely determined by a so-called cotree. As not all graphs are
cographs, we furthermore ask for the minimum number of cotrees needed to
represent the topology of G. The latter problem is equivalent to finding an
optimal cograph edge k-decomposition {E_1,...,E_k} of E so that each subgraph
(V,E_i) of G is a cograph. An upper bound for the integer k is derived, and it
is shown that determining whether a graph has a cograph 2-decomposition, resp.
2-partition, is NP-complete.
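A useful fact behind cotree representations is that cographs are exactly the P4-free graphs (no induced path on four vertices). The following brute-force Python check illustrates that characterization on small graphs; it is not the decomposition or partition machinery of the paper, and the edge-set encoding is an assumption.

    from itertools import combinations, permutations

    def is_cograph(vertices, edges):
        # Brute-force test: a graph is a cograph iff it has no induced P4
        # (path on four vertices). Only practical for small graphs.
        def adjacent(x, y):
            return frozenset((x, y)) in edges
        for quad in combinations(vertices, 4):
            for a, b, c, d in permutations(quad):
                # induced P4 a-b-c-d: edges ab, bc, cd; non-edges ac, ad, bd
                if (adjacent(a, b) and adjacent(b, c) and adjacent(c, d)
                        and not adjacent(a, c) and not adjacent(a, d)
                        and not adjacent(b, d)):
                    return False
        return True

    # Toy example (assumed edge-set encoding): P4 is not a cograph, the 4-cycle is.
    P4 = {frozenset(e) for e in [("a", "b"), ("b", "c"), ("c", "d")]}
    C4 = P4 | {frozenset(("d", "a"))}
    print(is_cograph({"a", "b", "c", "d"}, P4))  # False
    print(is_cograph({"a", "b", "c", "d"}, C4))  # True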
Finding and counting vertex-colored subtrees
The problems studied in this article originate from the Graph Motif problem
introduced by Lacroix et al. in the context of biological networks. The problem
is to decide if a vertex-colored graph has a connected subgraph whose colors
equal a given multiset of colors M. It is a graph pattern-matching problem
variant, where the structure of the occurrence of the pattern is not of
interest but the only requirement is the connectedness. Using an algebraic
framework recently introduced by Koutis et al., we obtain new FPT algorithms
for Graph Motif and variants, with improved running times. We also obtain
results on the counting versions of this problem, proving that the counting
problem is FPT if M is a set, but becomes W[1]-hard if M is a multiset with two
colors. Finally, we present an experimental evaluation of this approach on real
datasets, showing that its performance compares favorably with existing
software. Comment: Conference version in International Symposium on Mathematical
Foundations of Computer Science (MFCS), Brno, Czech Republic (2010); journal
version in Algorithmica.
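To make the problem statement concrete (not the algebraic method of Koutis et al. used in the paper), here is a brute-force Python sketch of the Graph Motif decision question: does some connected subgraph carry exactly the multiset of colors M? All identifiers and the toy data are assumptions.

    from itertools import combinations
    from collections import Counter

    def has_motif(adj, color, motif):
        # Brute force over all vertex subsets of size |motif|: keep those whose
        # multiset of colors equals the motif and check connectivity by a search
        # restricted to the chosen vertices. Exponential; illustration only.
        target = Counter(motif)
        size = sum(target.values())
        for subset in combinations(adj, size):
            if Counter(color[v] for v in subset) != target:
                continue
            chosen = set(subset)
            seen, stack = {subset[0]}, [subset[0]]
            while stack:
                v = stack.pop()
                for u in adj[v] & chosen:
                    if u not in seen:
                        seen.add(u)
                        stack.append(u)
            if seen == chosen:
                return True
        return False

    # Hypothetical toy data: triangle 1-2-3 with a pendant vertex 4.
    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    color = {1: "r", 2: "g", 3: "b", 4: "g"}
    print(has_motif(adj, color, ["r", "g", "b"]))  # True  (vertices 1, 2, 3)
    print(has_motif(adj, color, ["r", "g", "g"]))  # False (1, 2, 4 is disconnected)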
Swiftly Computing Center Strings
Hufsky F, Kuchenbecker L, Jahn K, Stoye J, Böcker S. Swiftly Computing Center Strings. BMC Bioinformatics. 2011;12(1): 106
Cluster Editing: Kernelization based on Edge Cuts
Kernelization algorithms for the {\sc cluster editing} problem have been a
popular topic in the recent research in parameterized computation. Thus far
most kernelization algorithms for this problem are based on the concept of {\it
critical cliques}. In this paper, we present new observations and new
techniques for the study of kernelization algorithms for the {\sc cluster
editing} problem. Our techniques are based on the study of the relationship
between {\sc cluster editing} and graph edge-cuts. As an application, we
present an O(n^2)-time algorithm that constructs a 2k kernel for the
{\it weighted} version of the {\sc cluster editing} problem. Our result matches
the best known kernel size for the unweighted version of the {\sc cluster
editing} problem, and significantly improves the previous best kernel of
quadratic size for the weighted version of the problem.
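Since the kernelizations discussed above build on critical cliques, a short illustration may help: a critical clique is a maximal set of vertices with the same closed neighborhood (such vertices are pairwise adjacent and can be treated as a unit). The Python sketch below merely groups vertices by closed neighborhood; it is not the edge-cut-based kernelization of the paper, and the adjacency-set encoding is an assumption.

    from collections import defaultdict

    def critical_cliques(adj):
        # Group vertices by closed neighborhood N[v] = N(v) + {v}; vertices with the
        # same closed neighborhood are pairwise adjacent and form a critical clique.
        groups = defaultdict(list)
        for v, nbrs in adj.items():
            groups[frozenset(nbrs | {v})].append(v)
        return list(groups.values())

    # Toy example (assumed adjacency-set encoding): a and b are true twins.
    g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
    print(critical_cliques(g))  # [['a', 'b'], ['c'], ['d']]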