Fast Biclustering by Dual Parameterization
We study two clustering problems: Starforest Editing, the problem of adding and deleting edges to obtain a disjoint union of stars, and its generalization, Bicluster Editing. We show that, in addition to being NP-hard, neither problem can be solved in subexponential time unless the Exponential Time Hypothesis fails.
Misra, Panolan, and Saurabh (MFCS 2013) argue that introducing a bound on the number of connected components in the solution should not make the problem easier. In particular, they argue that the subexponential-time algorithm for editing to a fixed number of clusters (p-Cluster Editing) by Fomin et al. (J. Comput. Syst. Sci., 80(7), 2014) is an exception rather than the rule. Here, p is a secondary parameter bounding the number of components in the solution.
However, upon bounding the number of stars or bicliques in the solution, we obtain algorithms that run in time O(2^{3*sqrt(pk)} + n + m) for p-Starforest Editing and O(2^{O(p * sqrt(k) * log(pk))} + n + m) for p-Bicluster Editing. We obtain a similar result for the more general case of t-Partite p-Cluster Editing. This is subexponential in k for a fixed number of clusters, since p is then considered a constant.
Our results even out the number of multivariate subexponential-time algorithms and give reasons to believe that this area warrants further study.
Comment: Accepted for presentation at IPEC 201
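The target class in Starforest Editing, a disjoint union of stars, is easy to recognize: each connected component must be a tree in which at most one vertex has degree greater than one. The following sketch (not from the paper; the function name and representation are illustrative) checks this property:

```python
from collections import defaultdict

def is_starforest(n, edges):
    """Return True iff the graph on vertices 0..n-1 is a disjoint
    union of stars: every component is a tree with at most one
    vertex of degree > 1. (Illustrative helper, not the paper's code.)"""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen = set()
    for s in range(n):
        if s in seen:
            continue
        # DFS to collect one connected component
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        m = sum(len(adj[u]) for u in comp) // 2
        high_degree = sum(1 for u in comp if len(adj[u]) > 1)
        # A star on c vertices has c-1 edges and at most one center
        if m != len(comp) - 1 or high_degree > 1:
            return False
    return True
```

The editing problem then asks for the minimum number of edge additions and deletions (the parameter k) that makes this check succeed, with p bounding the number of stars.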
Biclustering analysis of transcriptome big data identifies condition-specific microRNA targets
We present a novel approach to identify human microRNA (miRNA) regulatory modules (mRNA targets and relevant cell conditions) by biclustering a large collection of mRNA fold-change data for sequence-specific targets. Bicluster targets were assessed using validated messenger RNA (mRNA) targets and exhibited an average 17.0% (median 19.4%) gain in certainty (sensitivity + specificity). The net gain was further increased, up to 32.0% (median 33.4%), by incorporating functional networks of targets. We analyzed cancer-specific biclusters and found that the PI3K/Akt signaling pathway is strongly enriched with targets of a few miRNAs in breast cancer and diffuse large B-cell lymphoma. Indeed, five independent prognostic miRNAs were identified, and repression of bicluster targets and pathway activity by miR-29 was experimentally validated. In total, 29,898 biclusters for 459 human miRNAs were collected in the BiMIR database, where biclusters are searchable by miRNA, tissue, disease, keyword, and target gene.
A survey on algorithmic aspects of modular decomposition
Modular decomposition is a technique that applies to, but is not restricted to, graphs. The notion of a module appears naturally in the proofs of many graph-theoretic theorems. Computing the modular decomposition tree is an important preprocessing step for solving a large number of combinatorial optimization problems. Since the first polynomial-time algorithm in the early 1970s, the algorithmics of modular decomposition have developed considerably. This paper surveys the ideas and techniques that arose from this line of research.
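The central notion is easy to state: a set M of vertices is a module if every vertex outside M is adjacent either to all of M or to none of it. A minimal sketch of that test (the function name and adjacency-set representation are assumptions for illustration):

```python
def is_module(adj, module, vertices):
    """Return True iff `module` is a module of the graph given as a
    dict of adjacency sets: every vertex outside it sees either all
    of it or none of it. (Illustrative helper.)"""
    M = set(module)
    for v in vertices:
        if v in M:
            continue
        neighbors_in_M = sum(1 for u in M if u in adj[v])
        # A splitter sees some, but not all, of M
        if neighbors_in_M not in (0, len(M)):
            return False
    return True
```

Singletons and the whole vertex set are always (trivial) modules; the decomposition tree organizes all the others.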
Polynomial kernels for 3-leaf power graph modification problems
A graph G=(V,E) is a 3-leaf power iff there exists a tree T whose leaf set is V and such that (u,v) is an edge iff u and v are at distance at most 3 in T. The 3-leaf power graph edge modification problems, i.e. edition (also known as the closest 3-leaf power), completion, and edge-deletion, are FPT when parameterized by the size of the edge modification set. However, no polynomial kernel was known for any of these three problems. For each of them, we provide a cubic kernel that can be computed in linear time. We thereby answer an open problem first mentioned by Dom, Guo, Hüffner, and Niedermeier (2005).
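The definition above goes in one direction mechanically: given a tree, its 3-leaf power is obtained by connecting every pair of leaves at tree distance at most 3. A small sketch of that construction (function name and representation are illustrative, not from the paper):

```python
import itertools
from collections import deque

def leaf_power_graph(tree_adj, leaves, k=3):
    """Return the edge set of the k-leaf power of a tree, given as a
    dict of adjacency sets: two leaves are joined iff their distance
    in the tree is at most k. (Illustrative helper; k=3 gives the
    3-leaf powers discussed above.)"""
    def dist(s, t):
        # BFS distance from s to t in the tree
        q, d = deque([s]), {s: 0}
        while q:
            u = q.popleft()
            if u == t:
                return d[u]
            for w in tree_adj[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        return float("inf")
    return {(u, v) for u, v in itertools.combinations(sorted(leaves), 2)
            if dist(u, v) <= k}
```

The modification problems go in the harder direction: given an arbitrary graph, find the fewest edge edits that turn it into some tree's 3-leaf power.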