Computing the blocks of a quasi-median graph
Quasi-median graphs are a tool commonly used by evolutionary biologists to
visualise the evolution of molecular sequences. As with any graph, a
quasi-median graph can contain cut vertices, that is, vertices whose removal
disconnects the graph. These vertices induce a decomposition of the graph into
blocks, that is, maximal subgraphs that do not contain any cut vertices. Here
we show that the special structure of quasi-median graphs can be used to
compute their blocks without having to compute the whole graph. In particular,
we present an algorithm that, for a collection of aligned sequences of
length , can compute the blocks of the associated quasi-median graph,
together with the information required to correctly connect these blocks
together, in run time , independent of the size of the
sequence alphabet. Our primary motivation for presenting this algorithm is the
fact that the quasi-median graph associated to a sequence alignment must
contain all most parsimonious trees for the alignment, and therefore
precomputing the blocks of the graph has the potential to help speed up any
method for computing such trees.
Comment: 17 pages, 2 figures
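The block decomposition the abstract relies on can be made concrete on an explicit graph. The sketch below is not the paper's algorithm (which exploits quasi-median structure to avoid building the whole graph); it is the classical DFS-based computation of cut vertices on a graph given, as an assumption, as an adjacency dictionary.

```python
def cut_vertices(adj):
    """Return the articulation points (cut vertices) of an undirected
    graph given as {vertex: set(neighbours)} (Hopcroft-Tarjan DFS)."""
    disc, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # a non-root u is a cut vertex if some DFS child cannot
                # reach an ancestor of u without going through u
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)
        # the DFS root is a cut vertex iff it has two or more children
        if parent is None and children >= 2:
            cuts.add(u)

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return cuts

# Two triangles sharing vertex 2: removing 2 disconnects the graph,
# so the blocks are the two triangles, glued at the cut vertex 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3, 4}, 3: {2, 4}, 4: {2, 3}}
print(cut_vertices(adj))  # {2}
```

Once the cut vertices are known, each block is a maximal connected subgraph containing no cut vertex in its interior, which is the decomposition the paper computes directly from the sequence data.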
Lagrangian Relaxation and Partial Cover
Lagrangian relaxation has been used extensively in the design of
approximation algorithms. This paper studies its strengths and limitations when
applied to Partial Cover.
Comment: 20 pages, extended abstract appeared in STACS 200
Approximate Clustering via Metric Partitioning
In this paper we consider two metric covering/clustering problems:
the \textit{Minimum Cost Covering Problem} (MCC) and k-clustering. In the MCC
problem, we are given two point sets of clients and servers, and a metric on
these points. We would like to cover the clients by balls centered at the
servers. The objective function to minimize is the sum of a fixed power of
the radii of the balls; the exponent is a parameter of the
problem (but not of a problem instance). MCC is closely related to the
k-clustering problem. The main difference between k-clustering and MCC is
that in k-clustering one needs to select k balls to cover the clients.
For any \eps > 0, we describe quasi-polynomial time (1 + \eps)
approximation algorithms for both problems. However, in the case of
k-clustering the algorithm uses (1 + \eps)k balls. Prior to our work,
constant-factor approximations were achieved by polynomial-time algorithms
for MCC and k-clustering. These two problems are thus interesting examples of
metric covering/clustering problems that admit a (1 + \eps)-approximation
(using (1+\eps)k balls in the case of k-clustering), if one is willing to
settle for quasi-polynomial time. In contrast, for the variant of MCC in
which the exponent is part of the input, we show under standard assumptions
that no polynomial-time algorithm can achieve an approximation factor better
than for .
Comment: 19 pages
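To make the MCC objective concrete, here is a toy brute-force solver on a tiny one-dimensional instance: every assignment of clients to servers is tried, each server's radius is the farthest assigned client, and the cost is the sum of radii raised to the exponent (written `alpha` here). The helper name and instance are hypothetical; the paper's quasi-polynomial algorithms are metric-partitioning schemes, not this exhaustive search.

```python
from itertools import product

def mcc_brute(clients, servers, alpha):
    """Exhaustively minimize sum of (ball radius)**alpha over all
    assignments of clients to servers (1-D points, tiny instances only)."""
    best_cost, best_assign = float("inf"), None
    for assign in product(range(len(servers)), repeat=len(clients)):
        radii = [0.0] * len(servers)
        for c, s in zip(clients, assign):
            # server s must reach its farthest assigned client
            radii[s] = max(radii[s], abs(c - servers[s]))
        cost = sum(r ** alpha for r in radii)
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_cost, best_assign

# Two well-separated client pairs, one server near each pair.
clients = [0.0, 1.0, 9.0, 10.0]
servers = [0.5, 9.5]
cost, assign = mcc_brute(clients, servers, alpha=2)
print(cost, assign)  # 0.5 (0, 0, 1, 1): each server covers its pair with radius 0.5
```

The example also shows why the exponent matters: with a larger exponent, one big ball covering everything becomes relatively even more expensive than several small ones.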
Model-based clustering for populations of networks
Until recently, data on populations of networks were rare. However, with the
advancement of automatic monitoring devices and the growing social and
scientific interest in networks, such data have become more widely available.
From sociological experiments involving cognitive social structures to fMRI
scans revealing large-scale brain networks of groups of patients, there is a
growing awareness that we urgently need tools to analyse populations of
networks, and particularly to model the variation between networks due to
covariates. We propose a model-based clustering method based on mixtures of
generalized linear (mixed) models that can be employed to describe the joint
distribution of a population of networks in a parsimonious manner and to
identify subpopulations of networks that share certain topological properties
of interest (degree distribution, community structure, effect of covariates
on the presence of an edge, etc.). Maximum likelihood estimation for the
proposed model can be carried out efficiently with an implementation of the
EM algorithm. We assess the performance of this method on simulated data and
conclude with an example application on advice networks in a small business.
Comment: The final (published) version of the article can be downloaded for
free (Open Access) from the editor's website (click on the DOI link below).
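As a minimal illustration of the EM idea behind such model-based clustering, the sketch below fits a two-component mixture of Erdős–Rényi models to networks summarized by their edge counts. This is a drastically simplified stand-in for the paper's mixtures of generalized linear (mixed) models, and all names and the data are hypothetical.

```python
import math
import random

def em_er_mixture(counts, K=2, iters=100, seed=0):
    """EM for a mixture of K Erdos-Renyi models.

    counts: list of (edges_present, possible_edges) per network.
    Returns per-cluster edge probabilities, mixing weights, and
    per-network cluster responsibilities."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.2, 0.8) for _ in range(K)]  # edge probabilities
    pi = [1.0 / K] * K                                 # mixing weights
    for _ in range(iters):
        # E-step: posterior responsibility of each cluster for each network
        resp = []
        for m, n in counts:
            logw = [math.log(pi[c]) + m * math.log(theta[c])
                    + (n - m) * math.log(1 - theta[c]) for c in range(K)]
            mx = max(logw)                 # log-sum-exp for stability
            w = [math.exp(l - mx) for l in logw]
            s = sum(w)
            resp.append([x / s for x in w])
        # M-step: re-estimate mixing weights and edge probabilities
        for c in range(K):
            rc = [r[c] for r in resp]
            pi[c] = sum(rc) / len(counts)
            num = sum(r * m for r, (m, n) in zip(rc, counts))
            den = sum(r * n for r, (m, n) in zip(rc, counts))
            theta[c] = min(max(num / den, 1e-6), 1 - 1e-6)
    return theta, pi, resp

# Four networks of 100 possible edges each: two dense, two sparse.
counts = [(78, 100), (82, 100), (18, 100), (22, 100)]
theta, pi, resp = em_er_mixture(counts)
print(sorted(round(t, 2) for t in theta))  # recovers the two densities
```

The paper's model replaces the single edge probability per cluster with a generalized linear (mixed) model, so the same E/M alternation additionally accounts for covariates and random effects.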
Truss Decomposition in Massive Networks
The k-truss is a type of cohesive subgraph proposed recently for the study
of networks. While the problem of computing most cohesive subgraphs is
NP-hard, there exists a polynomial-time algorithm for computing the k-truss.
Compared with the k-core, which is also efficient to compute, the k-truss
represents the "core" of a k-core: it keeps the k-core's key information
while filtering out its less important parts. However, existing algorithms
for computing k-truss are inefficient for handling today's massive networks.
We first improve the existing in-memory algorithm for computing k-truss in
networks of moderate size. Then, we propose two I/O-efficient algorithms to
handle massive networks that cannot fit in main memory. Our experiments on
real datasets verify the efficiency of our algorithms and the value of
k-truss.
Comment: VLDB201
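The in-memory baseline such work builds on can be sketched from the definition: in a k-truss, every edge must participate in at least k-2 triangles, so edges below that support are peeled away until the remainder stabilizes. The sketch below is an illustrative, unoptimized version of that support-and-peel idea, not the paper's improved in-memory or I/O-efficient algorithms.

```python
from itertools import combinations

def k_truss(edges, k):
    """Return the edge set of the k-truss of an undirected simple graph,
    i.e. the maximal subgraph where every edge lies in >= k-2 triangles."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    alive = {frozenset(e) for e in edges}

    def support(e):
        u, v = tuple(e)
        # common neighbours of u and v = triangles containing edge (u, v)
        return len(adj[u] & adj[v])

    changed = True
    while changed:
        changed = False
        for e in list(alive):
            if support(e) < k - 2:
                u, v = tuple(e)
                adj[u].discard(v)
                adj[v].discard(u)
                alive.discard(e)
                changed = True  # removal may drop neighbours' support
    return alive

# K4 plus a pendant edge: the pendant edge lies in no triangle, so the
# 4-truss is exactly the K4 (each K4 edge lies in 2 triangles = k-2).
edges = list(combinations([0, 1, 2, 3], 2)) + [(3, 4)]
truss = k_truss(edges, 4)
print(sorted(tuple(sorted(e)) for e in truss))
```

Recomputing support from scratch each pass is what makes this a sketch; efficient implementations maintain supports incrementally and process edges in support order, and the I/O-efficient variants the abstract describes additionally avoid keeping the whole graph in memory.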