Structure of conflict graphs in constraint alignment problems and algorithms
We consider the constrained graph alignment problem, which has applications in
biological network analysis. Given two input graphs, a pair of vertex mappings
induces an {\it edge conservation} if the vertex pairs are adjacent in their
respective graphs. The goal is to provide a one-to-one mapping between the
vertices of the input graphs that maximizes edge conservation. However, the
allowed mappings are restricted: each vertex of one input graph may be mapped
only to a bounded number of specified vertices in the other. Most of the
results in this paper deal with the case that has attracted the most attention
in the related literature. We formulate the problem as a maximum
independent set problem in a related {\em conflict graph} and investigate
structural properties of this graph in terms of forbidden subgraphs. We are
interested, in particular, in excluding certain wheels, fans, cliques, or claws
(all terms are defined in the paper), which corresponds to excluding certain
cycles, paths, cliques, or independent sets in the neighborhood of each vertex.
Then, we investigate algorithmic consequences of some of these properties,
which illustrates the potential of this approach and opens new directions for
further work. In particular, this approach allows us to reinterpret a known
polynomial case in terms of the conflict graph and to improve known
approximation and fixed-parameter tractability results by efficiently solving
the maximum independent set problem in conflict graphs. Some of our new
approximation results involve approximation ratios that are functions of the
optimal value, in particular its square root; such results cannot be achieved
for maximum independent set in general graphs.
Comment: 22 pages, 6 figures
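The reduction can be illustrated with a toy sketch (the names, the conflict rule, and the minimum-degree greedy heuristic below are illustrative assumptions, not the paper's construction): candidate vertex mappings become vertices of a conflict graph, two mappings conflict when no one-to-one alignment can contain both, and any independent set in the conflict graph yields a feasible alignment.

```python
from itertools import combinations

def greedy_independent_set(vertices, conflicts):
    """Greedy heuristic for maximum independent set:
    repeatedly take a vertex of minimum remaining conflict degree."""
    chosen, remaining = [], set(vertices)
    while remaining:
        v = min(remaining, key=lambda u: len(conflicts.get(u, set()) & remaining))
        chosen.append(v)
        remaining -= conflicts.get(v, set()) | {v}
    return chosen

# Hypothetical candidate mappings (u, v): vertex u of one graph to v of the other.
candidates = [("a", 1), ("a", 2), ("b", 1), ("c", 3)]

# Two candidates conflict when they share an endpoint, i.e. a one-to-one
# mapping cannot contain both of them.
conflicts = {}
for p, q in combinations(candidates, 2):
    if p[0] == q[0] or p[1] == q[1]:
        conflicts.setdefault(p, set()).add(q)
        conflicts.setdefault(q, set()).add(p)

sol = greedy_independent_set(candidates, conflicts)  # a feasible alignment
```

The greedy rule is only a heuristic; the paper's point is that the structural properties of the conflict graph allow the independent set problem to be solved much better than in general graphs.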
Transiently Consistent SDN Updates: Being Greedy is Hard
The software-defined networking paradigm introduces interesting opportunities
to operate networks in a more flexible, optimized, yet formally verifiable
manner. Despite the logically centralized control, however, a Software-Defined
Network (SDN) is still a distributed system, with inherent delays between the
switches and the controller. In particular, the problem of changing network
configurations in a consistent manner, also known as the consistent network
update problem, has received much attention in recent years. Notably,
it has been shown that there exists an inherent tradeoff between update
consistency and speed. This paper revisits the problem of updating an SDN in a
transiently consistent, loop-free manner. First, we rigorously prove that
computing a maximum (greedy) loop-free network update is generally NP-hard;
this result has implications for the classic maximum acyclic subgraph problem
(the dual feedback arc set problem) as well. Second, we show that for special
problem instances, fast and good approximation algorithms exist.
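As an illustration of the underlying problem (a toy sketch under simplifying assumptions, not the paper's full model): suppose each switch forwards to a single next hop toward a fixed destination, an asynchronous round may leave any subset of the sent updates applied, and a round is safe iff every such intermediate state is loop-free. A brute-force search for the largest safe round, consistent with the NP-hardness shown in the paper, could look like:

```python
from itertools import combinations

def has_cycle(next_hop):
    """Detect a forwarding loop in a successor map: node -> next node (or None)."""
    for start in next_hop:
        seen, v = set(), start
        while v is not None and v not in seen:
            seen.add(v)
            v = next_hop.get(v)
        if v is not None:       # walk revisited a node: loop
            return True
    return False

def safe_round(old, new, update_set):
    """A round is safe iff every intermediate state (any subset of the
    sent updates already applied) is loop-free."""
    update_set = list(update_set)
    for r in range(len(update_set) + 1):
        for applied in combinations(update_set, r):
            mixed = {s: (new[s] if s in applied else old[s]) for s in old}
            if has_cycle(mixed):
                return False
    return True

def max_loopfree_round(old, new):
    """Brute force over all subsets: largest set of switches that can be
    updated in one asynchronous round without a transient loop."""
    switches = list(old)
    for r in range(len(switches), -1, -1):
        for subset in combinations(switches, r):
            if safe_round(old, new, subset):
                return set(subset)
    return set()

# Toy instance: destination "d"; updating both switches at once is unsafe,
# because the intermediate state {a updated, b not} forwards a -> b -> a.
old = {"a": "d", "b": "a"}
new = {"a": "b", "b": "d"}
round1 = max_loopfree_round(old, new)
```

On this instance only switch b can be updated in the first round, even though both old and new configurations are individually loop-free.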
Hardness of Vertex Deletion and Project Scheduling
Assuming the Unique Games Conjecture, we show strong inapproximability
results for two natural vertex deletion problems on directed graphs: for any
integer k >= 2 and arbitrarily small epsilon > 0, the Feedback Vertex Set
problem and the DAG Vertex Deletion problem are inapproximable within a factor
k - epsilon, even on graphs where the vertices can be almost partitioned into k
solutions. This gives a more structured, and therefore stronger, UGC-based
hardness result for the Feedback Vertex Set problem that is also simpler
(albeit using the "It Ain't Over Till It's Over" theorem) than the previous
hardness result.
In comparison to the classical Feedback Vertex Set problem, the DAG Vertex
Deletion problem has received little attention and, although we think it is a
natural and interesting problem, the main motivation for our inapproximability
result stems from its relationship with the classical Discrete Time-Cost
Tradeoff Problem. More specifically, our results imply that the deadline
version is NP-hard to approximate within any constant assuming the Unique Games
Conjecture. This explains the difficulty in obtaining good approximation
algorithms for that problem and further motivates previous alternative
approaches such as bicriteria approximations.
Comment: 18 pages, 1 figure
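To make the first of the two problems concrete: a directed feedback vertex set is a set of vertices whose deletion leaves the graph acyclic, and the hardness result above concerns approximating its minimum size. An exhaustive reference implementation (fine only for tiny instances, which is consistent with the problem being hard even to approximate) might look like:

```python
from itertools import combinations

def is_acyclic(nodes, edges, removed):
    """DFS three-color cycle check on the digraph induced by nodes \\ removed."""
    adj = {v: [] for v in nodes if v not in removed}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].append(v)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in adj}

    def dfs(u):
        color[u] = GRAY
        for w in adj[u]:
            if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                return True          # back edge: cycle found
        color[u] = BLACK
        return False

    return not any(color[v] == WHITE and dfs(v) for v in adj)

def min_feedback_vertex_set(nodes, edges):
    """Exhaustive search for a minimum directed feedback vertex set."""
    for r in range(len(nodes) + 1):
        for subset in combinations(nodes, r):
            if is_acyclic(nodes, edges, set(subset)):
                return set(subset)
```

For example, in a graph with cycles a-b and b-c-d-b, the single vertex b hits both cycles and is an optimal solution.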
Approximating the Minimum Equivalent Digraph
The MEG (minimum equivalent graph) problem is, given a directed graph, to
find a small subset of the edges that maintains all reachability relations
between nodes. The problem is NP-hard. This paper gives an approximation
algorithm with performance guarantee of pi^2/6 ~ 1.64. The algorithm and its
analysis are based on the simple idea of contracting long cycles. (This result
is strengthened slightly in ``On strongly connected digraphs with bounded cycle
length'' (1996).) The analysis applies directly to 2-Exchange, a simple ``local
improvement'' algorithm, showing that its performance guarantee is 1.75.
Comment: conference version in ACM-SIAM Symposium on Discrete Algorithms (1994)
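The feasibility side of the problem is easy to state in code: a candidate edge subset is a valid equivalent graph iff it preserves every reachability relation of the input. A small sketch (naive all-pairs DFS for illustration only; this is the feasibility check, not the paper's pi^2/6-approximation algorithm):

```python
def reach(nodes, edges):
    """All-pairs reachability: node -> set of nodes reachable by a nonempty path."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
    closure = {}
    for s in nodes:
        seen, stack = set(), [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        closure[s] = seen
    return closure

def is_equivalent(nodes, edges, sub_edges):
    """A subgraph solves the MEG instance iff it preserves all reachability."""
    return reach(nodes, edges) == reach(nodes, sub_edges)
```

On a directed triangle with an extra chord, dropping the chord preserves reachability (the cycle already connects everything), which is exactly the intuition behind contracting long cycles: a k-cycle provides k-vertex strong connectivity with only k edges.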
Streaming Hardness of Unique Games
We study the problem of approximating the value of a Unique Game instance in the streaming model. A simple count of the number of constraints divided by p, the alphabet size of the Unique Game, gives a trivial p-approximation that can be computed in O(log n) space. Meanwhile, with high probability, a sample of O~(n) constraints suffices to estimate the optimal value to (1+epsilon) accuracy. We prove that any single-pass streaming algorithm that achieves a (p-epsilon)-approximation requires Omega_epsilon(sqrt n) space. Our proof is via a reduction from lower bounds for a communication problem that is a p-ary variant of the Boolean Hidden Matching problem studied in the literature. Given the utility of Unique Games as a starting point for reductions to other optimization problems, our strong hardness for approximating Unique Games could lead to downstream hardness results for streaming approximability of other CSP-like problems.
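The trivial algorithm mentioned in the abstract fits in a few lines (the tuple shape of a constraint below is an illustrative assumption; only the counter matters): a uniformly random assignment satisfies each permutation constraint with probability 1/p, so the number of satisfiable constraints lies between m/p and m, and reporting m is within a factor p of the optimum.

```python
def trivial_streaming_approx(constraint_stream, p):
    """Single-pass p-approximation for the Unique Games value:
    store only a counter m (O(log n) bits of space).
    Returns (upper bound m, guaranteed-achievable lower bound m/p)."""
    m = 0
    for _constraint in constraint_stream:   # e.g. (u, v, permutation); contents ignored
        m += 1
    return m, m / p
```

The paper's lower bound says that doing essentially any better than this, i.e. achieving a (p - epsilon)-approximation in a single pass, already requires Omega(sqrt n) space.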
The Lazy Flipper: MAP Inference in Higher-Order Graphical Models by Depth-limited Exhaustive Search
This article presents a new search algorithm for the NP-hard problem of
optimizing functions of binary variables that decompose according to a
graphical model. It can be applied to models of any order and structure. The
main novelty is a technique to constrain the search space based on the topology
of the model. When pursued to the full search depth, the algorithm is
guaranteed to converge to a global optimum, passing through a series of
monotonically improving local optima that are guaranteed to be optimal within a
given and increasing Hamming distance. For a search depth of 1, it specializes
to Iterated Conditional Modes. Between these extremes, a useful tradeoff
between approximation quality and runtime is established. Experiments on models
derived from both illustrative and real problems show that approximations found
with limited search depth match or improve on those obtained by state-of-the-art
methods based on message passing and linear programming.
Comment: C++ source code available from http://hci.iwr.uni-heidelberg.de/software.ph
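The depth-1 special case is easy to sketch. Below is a minimal Iterated Conditional Modes loop for a pairwise binary model (the data layout and cost tables are illustrative assumptions, not the article's C++ API): flip one variable at a time, keep a flip only if it strictly lowers the energy, and stop at a depth-1 local optimum.

```python
def icm_binary(n, unary, pairwise, x=None):
    """Iterated Conditional Modes = the Lazy Flipper at search depth 1.
    unary:    {i: (cost_if_x_i_is_0, cost_if_x_i_is_1)}
    pairwise: {(i, j): 2x2 cost table indexed [x_i][x_j]}"""
    if x is None:
        x = [0] * n

    def energy(x):
        e = sum(unary[i][x[i]] for i in range(n))
        return e + sum(t[x[i]][x[j]] for (i, j), t in pairwise.items())

    best = energy(x)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            x[i] ^= 1                 # tentative single-variable flip
            e = energy(x)
            if e < best:
                best, improved = e, True
            else:
                x[i] ^= 1             # revert: no strict improvement
    return x, best

# Two variables that prefer to agree: from (0, 0), no single flip helps,
# so ICM is stuck at energy 1.5 although (1, 1) has energy 0.0.
unary = {0: (1.0, 0.0), 1: (0.5, 0.0)}
pairwise = {(0, 1): [[0.0, 1.0], [1.0, 0.0]]}
stuck = icm_binary(2, unary, pairwise)
```

The toy instance shows exactly the weakness the article addresses: a search depth of 2, flipping connected pairs of variables jointly, would escape this local optimum.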