128 research outputs found
2-Vertex Connectivity in Directed Graphs
We complement our study of 2-connectivity in directed graphs by considering
the computation of the following 2-vertex-connectivity relations: We say that
two vertices v and w are 2-vertex-connected if there are two internally
vertex-disjoint paths from v to w and two internally vertex-disjoint paths from
w to v. We also say that v and w are vertex-resilient if the removal of any
vertex different from v and w leaves v and w in the same strongly connected
component. We show how to compute the above relations in linear time so that we
can report in constant time if two vertices are 2-vertex-connected or if they
are vertex-resilient. We also show how to compute in linear time a sparse
certificate for these relations, i.e., a subgraph of the input graph that has
O(n) edges and maintains the same 2-vertex-connectivity and vertex-resilience
relations as the input graph, where n is the number of vertices.
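As an illustration of the 2-vertex-connectivity relation defined above, the following sketch checks it for a single pair of vertices via the standard Menger-style reduction (vertex splitting plus maximum flow). It assumes the networkx library and is not the paper's linear-time algorithm; the function names are made up.

    import networkx as nx

    def internally_disjoint_paths(G, s, t):
        # Menger-style check: split every vertex into v_in -> v_out with capacity 1,
        # keep the original edges with capacity 1, and take a max flow from s_out to t_in.
        H = nx.DiGraph()
        for v in G.nodes:
            H.add_edge((v, "in"), (v, "out"), capacity=1)
        for u, v in G.edges:
            H.add_edge((u, "out"), (v, "in"), capacity=1)
        value, _ = nx.maximum_flow(H, (s, "out"), (t, "in"))
        return value

    def two_vertex_connected(G, v, w):
        # two internally vertex-disjoint paths in both directions
        return (internally_disjoint_paths(G, v, w) >= 2 and
                internally_disjoint_paths(G, w, v) >= 2)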
Totally balanced combinatorial optimization games
Combinatorial optimization games deal with cooperative games for which the value of every subset of players is obtained by solving a combinatorial optimization problem on the resources collectively owned by this subset. A solution of the game is in the core if no subset of players is able to gain an advantage by breaking away from the collective decision of all players. The game is totally balanced if and only if the core is non-empty for every induced subgame. We study the total balancedness of several combinatorial optimization games in this paper. For a class of partition games [5], we give a complete characterization of total balancedness. For packing and covering games [3], we completely clarify the relationship between the associated primal/dual linear programs and the total balancedness of the corresponding games. Our work opens up the question of fully characterizing the combinatorial structures of totally balanced packing and covering games, for which we present some interesting examples: the totally balanced matching, vertex cover, and minimum coloring games.
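To make the core and total balancedness concrete, here is a small sketch (assuming scipy) that tests core non-emptiness of an explicitly given game by linear programming; this brute-force check only makes sense for a handful of players and is not taken from the paper. The matching game on a triangle, used below, is a standard example with an empty core.

    from itertools import combinations
    import numpy as np
    from scipy.optimize import linprog

    def core_is_nonempty(n, v):
        # v maps frozensets of players {0,...,n-1} to coalition values.
        # Core nonempty  <=>  min{ x(N) : x(S) >= v(S) for all proper nonempty S } <= v(N),
        # since any slack can be handed to one player without violating the constraints.
        players = range(n)
        A_ub, b_ub = [], []
        for k in range(1, n):
            for S in combinations(players, k):
                A_ub.append([-1.0 if i in S else 0.0 for i in players])  # -x(S) <= -v(S)
                b_ub.append(-v[frozenset(S)])
        res = linprog(np.ones(n), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * n, method="highs")
        return res.success and res.fun <= v[frozenset(players)] + 1e-9

    # Matching game on a triangle: every edge coalition is worth 1, and so is the
    # grand coalition, which forces x(N) >= 1.5 > v(N); hence the core is empty.
    v = {frozenset(S): 0 for k in range(1, 4) for S in combinations(range(3), k)}
    v[frozenset({0, 1})] = v[frozenset({1, 2})] = v[frozenset({0, 2})] = 1
    v[frozenset({0, 1, 2})] = 1
    print(core_is_nonempty(3, v))   # False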
An Exact Algorithm for TSP in Degree-3 Graphs via Circuit Procedure and Amortization on Connectivity Structure
The paper presents an O^*(1.2312^n)-time and polynomial-space algorithm for
the traveling salesman problem in an n-vertex graph with maximum degree 3. This
improves the previous time bounds of O^*(1.251^n) by Iwama and Nakashima and
O^*(1.260^n) by Eppstein. Our algorithm is a simple branch-and-search
algorithm. The only branching rule is based on a cut-circuit structure of the
graph induced by the unprocessed edges. To improve the time bound obtained by a
simple measure-and-conquer analysis, we introduce an amortization scheme over
the cut-circuit structure, defining the measure of an instance as the sum of
not only the weights of vertices but also the weights of connected components
of the induced graph.
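For context on the measure-and-conquer analysis mentioned above: a branching rule that decreases the measure μ of an instance by amounts s_1, ..., s_r in its branches yields a bound of O^*(c^μ), where c is the unique root greater than 1 of sum_i c^{-s_i} = 1. The sketch below computes c by bisection; the branching vector (3, 5) is purely hypothetical and not taken from the paper.

    def branching_factor(*drops, tol=1e-12):
        # unique c > 1 with sum(c**(-s) for s in drops) == 1; a rule whose branches
        # reduce the measure by the amounts in `drops` yields an O*(c^measure) bound
        f = lambda c: sum(c ** (-s) for s in drops) - 1.0
        lo, hi = 1.0, 2.0
        while f(hi) > 0:          # enlarge the bracket if the factor exceeds 2
            hi *= 2
        while hi - lo > tol:
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
        return hi

    print(round(branching_factor(3, 5), 4))   # ~1.1938 for the hypothetical vector (3, 5)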
Finding 2-Edge and 2-Vertex Strongly Connected Components in Quadratic Time
We present faster algorithms for computing the 2-edge and 2-vertex strongly
connected components of a directed graph, which are straightforward
generalizations of strongly connected components. While in undirected graphs
the 2-edge and 2-vertex connected components can be found in linear time, in
directed graphs only rather simple O(mn)-time algorithms were known. We use
a hierarchical sparsification technique to obtain algorithms that run in time
O(n^2). For 2-edge strongly connected components our algorithm gives the
first running time improvement in 20 years. Additionally we present an
O(m^2 / log n)-time algorithm for 2-edge strongly connected components, and thus
improve over the O(n^2) running time also when the graph is sufficiently sparse.
Our approach extends to k-edge and k-vertex strongly connected components for
any constant k, with running-time bounds that remain polynomial in n.
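For comparison, the following sketch (assuming networkx) implements the kind of straightforward baseline the abstract contrasts with, not the paper's quadratic-time algorithm: it uses the fact that two vertices are 2-edge strongly connected exactly when they lie in the same strongly connected component of G and of G - e for every single edge e, and refines the SCC partition accordingly.

    import networkx as nx

    def two_edge_scc_partition(G):
        def labels(H):
            return {v: i for i, comp in enumerate(nx.strongly_connected_components(H))
                    for v in comp}
        base = labels(G)
        signature = {v: [base[v]] for v in G}
        for e in list(G.edges):          # recompute SCCs after every single-edge deletion
            H = G.copy()
            H.remove_edge(*e)
            lab = labels(H)
            for v in G:
                signature[v].append(lab[v])
        groups = {}                      # vertices with identical signatures form one component
        for v, sig in signature.items():
            groups.setdefault(tuple(sig), set()).add(v)
        return list(groups.values())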
Hierarchies of Predominantly Connected Communities
We consider communities whose vertices are predominantly connected, i.e., the
vertices in each community are more strongly connected to other members of
the same community than to vertices outside the community. Flake et al.
introduced a hierarchical clustering algorithm that finds such predominantly
connected communities of different coarseness depending on an input parameter.
We present a simple and efficient method for constructing a clustering
hierarchy in the sense of Flake et al. that eliminates the need to choose
feasible parameter values and guarantees the completeness of the resulting
hierarchy, i.e., the hierarchy contains all clusterings that can be constructed
by the original algorithm for any parameter value. However, predominantly
connected communities are not organized in a single hierarchy. Thus, we develop
a framework that, after precomputing a collection of maximum flows, admits a
linear-time construction of a clustering \C(S) of predominantly connected
communities that contains a given community S and is maximum in the sense
that any further clustering of predominantly connected communities that also
contains S is hierarchically nested in \C(S). We further generalize this
construction, yielding a clustering with similar properties for several given
communities. This admits the analysis of a network's structure with respect to
various communities in different hierarchies.
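The parameterized algorithm of Flake et al. mentioned above is commonly described as cut clustering: attach an artificial sink to every vertex with capacity equal to the input parameter and take source sides of minimum cuts as communities. The sketch below (assuming networkx, an undirected and optionally weighted graph, and a sink name that does not clash with existing nodes) computes the community of one seed vertex for one parameter value; it is only an illustration of that building block.

    import networkx as nx

    def cut_community(G, seed, alpha):
        # Directed flow network: each undirected edge in both directions, plus an
        # artificial sink attached to every vertex with capacity alpha.
        H = nx.DiGraph()
        for u, v, data in G.edges(data=True):
            c = data.get("weight", 1.0)
            H.add_edge(u, v, capacity=c)
            H.add_edge(v, u, capacity=c)
        sink = "__artificial_sink__"      # assumed not to clash with G's node names
        for v in G.nodes:
            H.add_edge(v, sink, capacity=alpha)
            H.add_edge(sink, v, capacity=alpha)
        # The seed's side of a minimum seed-sink cut is its community for this alpha.
        _, (community, _) = nx.minimum_cut(H, seed, sink)
        return community - {sink}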
Cubic Augmentation of Planar Graphs
In this paper we study the problem of augmenting a planar graph such that it
becomes 3-regular and remains planar. We show that it is NP-hard to decide
whether such an augmentation exists. On the other hand, we give an efficient
algorithm for the variant of the problem where the input graph has a fixed
planar (topological) embedding that has to be preserved by the augmentation. We
further generalize this algorithm to test efficiently whether a 3-regular
planar augmentation exists that additionally makes the input graph connected or
biconnected. If the augmented graph is additionally required to be
triconnected, we show that the existence of a 3-regular planar augmentation is
again NP-hard to decide.
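As a small illustration of the decision problem studied above, the sketch below (assuming networkx) merely verifies a proposed augmentation, i.e., checks that adding a given edge set yields a simple, planar, 3-regular graph; deciding whether any such edge set exists at all is the NP-hard question addressed by the paper.

    import networkx as nx

    def is_valid_cubic_planar_augmentation(G, new_edges):
        H = G.copy()
        for u, v in new_edges:
            if u == v or H.has_edge(u, v):   # the augmented graph must stay simple
                return False
            H.add_edge(u, v)
        planar, _ = nx.check_planarity(H)
        return planar and all(d == 3 for _, d in H.degree)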
Distributed Minimum Cut Approximation
We study the problem of computing approximate minimum edge cuts by
distributed algorithms. We use a standard synchronous message passing model
where in each round, O(log n) bits can be transmitted over each edge (a.k.a.
the CONGEST model). We present a distributed algorithm that, for any weighted
graph and any ε ∈ (0, 1), with high probability finds a cut of size
at most O(λ/ε) in O(D) + Õ(n^{1/2+ε}) rounds, where λ is the size of the
minimum cut, D is the network diameter, and Õ hides polylogarithmic factors
in n. This algorithm is based
on a simple approach for analyzing random edge sampling, which we call the
random layering technique. In addition, we also present another distributed
algorithm, which is based on a centralized algorithm due to Matula [SODA '93],
that with high probability computes a cut of size at most (2+ε)λ
in Õ((D + √n) · poly(1/ε)) rounds for any ε ∈ (0, 1).
The time complexities of both of these algorithms almost match the
Ω̃(D + √n) lower bound of Das Sarma et al. [STOC '11], thus
leading to an answer to an open question raised by Elkin [SIGACT-News '04] and
Das Sarma et al. [STOC '11].
Furthermore, we also strengthen the lower bound of Das Sarma et al. by
extending it to unweighted graphs. We show that the same lower bound also holds
for unweighted multigraphs (or equivalently for weighted graphs in which the
number of bits that can be transmitted in each round over an edge grows
proportionally with the edge's weight), even if the diameter is small. For
unweighted simple graphs, we show
that, even in networks of small diameter, finding an α-approximate minimum cut
in networks of edge connectivity λ or computing an α-approximation of the
edge connectivity requires Ω̃(√(n/(αλ))) rounds.
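To give a feel for the random edge sampling idea mentioned above, here is a centralized toy (assuming networkx; it is not the paper's distributed algorithm): keep each edge independently with probability p, compute the sampled graph's minimum cut with Stoer-Wagner, and rescale by 1/p. By Karger's sampling bounds the estimate concentrates around the true minimum cut λ once pλ is large enough.

    import random
    import networkx as nx

    def sampled_min_cut_estimate(G, p, seed=0):
        rng = random.Random(seed)
        H = nx.Graph()
        H.add_nodes_from(G.nodes)
        for u, v in G.edges:
            if rng.random() < p:             # keep each edge independently with prob. p
                H.add_edge(u, v, weight=1)
        if H.number_of_nodes() < 2 or not nx.is_connected(H):
            return 0.0                       # sample too sparse: the estimate degenerates
        cut_value, _ = nx.stoer_wagner(H)    # exact min cut of the sampled graph
        return cut_value / p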
Almost-Tight Distributed Minimum Cut Algorithms
We study the problem of computing the minimum cut in a weighted distributed
message-passing network (the CONGEST model). Let λ be the minimum cut,
n be the number of nodes in the network, and D be the network diameter. Our
algorithm can compute λ exactly in Õ((D + √n) · poly(λ)) time, where Õ hides
polylogarithmic factors in n. To the best of our knowledge, this is the first paper that
explicitly studies computing the exact minimum cut in the distributed setting.
Previously, non-trivial sublinear-time algorithms for this problem were known
only for unweighted graphs when λ ≤ 3, due to Pritchard and
Thurimella's O(D)-time and Õ(D + √n)-time algorithms for
computing 2-edge-connected and 3-edge-connected components.
By using Karger's edge sampling technique, we can convert this
algorithm into a (1+ε)-approximation Õ((D + √n) · poly(1/ε))-time algorithm
for any ε > 0. This improves over the previous (2+ε)-approximation
Õ((D + √n) · poly(1/ε))-time algorithm and O(1/ε)-approximation
(O(D) + Õ(n^{1/2+ε}))-time algorithm of Ghaffari and Kuhn. Due to the lower
bound of Ω̃(D + √n) by Das Sarma et al., which holds for any
approximation algorithm, this running time is tight up to a polylogarithmic factor.
To get the stated running time, we developed an approximation algorithm which
combines the ideas of Thorup's algorithm and Matula's contraction algorithm. It
saves a substantial factor as compared to applying Thorup's tree
packing theorem directly. Then, we combine Kutten and Peleg's tree partitioning
algorithm and Karger's dynamic programming to achieve an efficient distributed
algorithm that finds the minimum cut when we are given a spanning tree that
crosses the minimum cut exactly once.
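The final step described above is easy to state centrally: if some spanning tree T crosses the minimum cut exactly once, that cut is obtained by removing a single tree edge. The sketch below (assuming networkx; a centralized stand-in for the distributed Kutten-Peleg/Karger-based routine, with made-up function names) scans all tree edges and returns the lightest such cut.

    import networkx as nx

    def best_one_respecting_cut(G, T):
        # smallest cut of G that the spanning tree T crosses exactly once
        best_value, best_side = float("inf"), None
        for u, v in list(T.edges):
            T.remove_edge(u, v)
            side = nx.node_connected_component(T, u)   # one side of the candidate cut
            T.add_edge(u, v)
            value = sum(d.get("weight", 1) for a, b, d in G.edges(data=True)
                        if (a in side) != (b in side))
            if value < best_value:
                best_value, best_side = value, side
        return best_value, best_side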
On the Maximum Crossing Number
Research about crossings is typically about minimization. In this paper, we
consider \emph{maximizing} the number of crossings over all possible ways to
draw a given graph in the plane. Alpert et al. [Electron. J. Combin., 2009]
conjectured that any graph has a \emph{convex} straight-line drawing, i.e., a
drawing with vertices in convex position, that maximizes the number of edge
crossings. We disprove this conjecture by constructing a planar graph on twelve
vertices that allows a non-convex drawing with more crossings than any convex
one. Bald et al. [Proc. COCOON, 2016] showed that it is NP-hard to compute the
maximum number of crossings of a geometric graph and that the weighted
geometric case is NP-hard to approximate. We strengthen these results by
showing hardness of approximation even for the unweighted geometric case and
prove that the unweighted topological case is NP-hard.
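A convex straight-line drawing is determined by the circular order of the vertices, and two chords of a circle cross exactly when their endpoints interleave. The sketch below counts the crossings of a given circular order; brute-forcing over all orders then gives the maximum convex crossing number of a small graph. It is only an illustration of the notion, not a method from the paper.

    from itertools import combinations

    def convex_crossings(order, edges):
        pos = {v: i for i, v in enumerate(order)}
        def interleave(e, f):
            # chords cross iff exactly one endpoint of f lies between the endpoints of e
            a, b = sorted((pos[e[0]], pos[e[1]]))
            return (a < pos[f[0]] < b) != (a < pos[f[1]] < b)
        return sum(1 for e, f in combinations(edges, 2)
                   if len({*e, *f}) == 4 and interleave(e, f))

    # K4 in convex position always has exactly one crossing (its two diagonals):
    print(convex_crossings([0, 1, 2, 3],
                           [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))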
A New Integer Linear Programming Formulation to the Inverse QSAR/QSPR for Acyclic Chemical Compounds Using Skeleton Trees
33rd International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2020, Kitakyushu, Japan, September 22-25, 2020.
Computer-aided drug design is one of the important application areas of intelligent systems. Recently a novel method has been proposed for inverse QSAR/QSPR using both artificial neural networks (ANN) and mixed integer linear programming (MILP), where inverse QSAR/QSPR is a major approach for drug design. This method consists of two phases: In the first phase, a feature function f is defined so that each chemical compound G is converted into a vector f(G) of several descriptors of G, and a prediction function ψ is constructed with an ANN so that ψ(f(G)) takes a value nearly equal to a given chemical property π for many chemical compounds G in a data set. In the second phase, given a target value y∗ of the chemical property π, a chemical structure G∗ is inferred in the following way. An MILP M is formulated so that M admits a feasible solution if and only if there exist a vector x∗ and a chemical compound G∗ such that ψ(x∗) = y∗ and f(G∗) = x∗. The method has been implemented for inferring acyclic chemical compounds. In this paper, we propose a new MILP for inferring acyclic chemical compounds by introducing a novel concept, the skeleton tree, and conduct computational experiments. The results suggest that the proposed method outperforms the existing method when the diameter of the graphs is up to around 6 to 8. For an instance asking for acyclic chemical compounds with 38 non-hydrogen atoms from C, O and S and diameter 6, our method was 5×10^4 times faster.
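As a toy version of the second (inverse) phase described above, suppose the trained prediction function ψ were simply linear in the descriptor vector x; then "find x with ψ(x) = y∗" becomes a tiny MILP. Everything below (weights, bias, target value, descriptor bounds) is made up for illustration and assumes scipy 1.9 or newer; the paper's actual formulation instead encodes an ANN and the skeleton-tree structure of the compound.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    w = np.array([0.8, -0.3, 1.1])     # hypothetical descriptor weights of a linear psi
    b, y_star, tol = 0.5, 4.2, 1e-3    # hypothetical bias, target property value, tolerance

    res = milp(
        c=np.zeros_like(w),                            # pure feasibility problem
        constraints=LinearConstraint(w.reshape(1, -1),
                                     y_star - b - tol, y_star - b + tol),
        integrality=np.ones_like(w),                   # descriptors are integer counts
        bounds=Bounds(0, 20),                          # hypothetical descriptor ranges
    )
    print(res.x if res.success else "no descriptor vector attains the target")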
- …