
    A Message Passing Algorithm for the Minimum Cost Multicut Problem

    We propose a dual decomposition and linear program relaxation of the NP-hard minimum cost multicut problem. Unlike other polyhedral relaxations of the multicut polytope, it is amenable to efficient optimization by message passing. Like other polyhedral relaxations, it can be tightened efficiently by cutting planes. We define an algorithm that alternates between message passing and efficient separation of cycle and odd-wheel inequalities. This algorithm is more efficient than state-of-the-art algorithms based on linear programming, including algorithms written in the framework of leading commercial software, as we show in experiments with large instances of the problem from applications in computer vision, biomedical image analysis and data mining.
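    As a rough illustration of the cutting-plane component, the sketch below separates violated cycle inequalities of the multicut relaxation by a shortest-path search: for each edge, a path between its endpoints whose total fractional weight is smaller than the edge's own label yields a violated cycle. It is a minimal sketch assuming a networkx graph and a dict of fractional edge labels x in [0,1], not the authors' solver.

```python
# Minimal sketch: cycle-inequality separation for the multicut LP relaxation.
# Assumes an undirected networkx graph G and a dict x mapping one orientation
# of each edge to its fractional label (1 = "cut"). Illustrative names only.
import networkx as nx

def edge_label(x, a, b):
    return x[(a, b)] if (a, b) in x else x[(b, a)]

def separate_cycle_inequalities(G, x, eps=1e-6):
    """Return violated cycle inequalities x_e <= sum_{e' in C\\{e}} x_{e'}."""
    violated = []
    for u, v in list(G.edges()):
        w_uv = edge_label(x, u, v)
        G.remove_edge(u, v)                      # the path must avoid the edge itself
        try:
            path = nx.shortest_path(
                G, u, v, weight=lambda a, b, d: edge_label(x, a, b))
            length = sum(edge_label(x, a, b) for a, b in zip(path, path[1:]))
            if length + eps < w_uv:              # inequality violated by (path, edge)
                violated.append((path, (u, v)))
        except nx.NetworkXNoPath:
            pass
        finally:
            G.add_edge(u, v)                     # restore the graph
    return violated
```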

    Optimal Distributed Covering Algorithms

    We present a time-optimal deterministic distributed algorithm for approximating a minimum weight vertex cover in hypergraphs of rank f. This problem is equivalent to the Minimum Weight Set Cover problem in which the frequency of every element is bounded by f. The approximation factor of our algorithm is (f+epsilon). Let Delta denote the maximum degree in the hypergraph. Our algorithm runs in the CONGEST model and requires O(log Delta / log log Delta) rounds, for constants epsilon in (0,1] and f in N^+. This is the first distributed algorithm for this problem whose running time depends neither on the vertex weights nor on the number of vertices, thus adding another member to the exclusive family of provably optimal distributed algorithms. For constant values of f and epsilon, our algorithm improves over the (f+epsilon)-approximation algorithm of [Fabian Kuhn et al., 2006], whose running time is O(log Delta + log W), where W is the ratio between the largest and smallest vertex weights in the graph. Our algorithm also achieves an f-approximation for the problem in O(f log n) rounds, improving over the classical result of [Samir Khuller et al., 1994], which achieves a running time of O(f log^2 n). Finally, for weighted vertex cover (f=2), our algorithm achieves a deterministic running time of O(log n), matching the previously best randomized result of [Koufogiannakis and Young, 2011]. We also show that integer covering programs can be reduced to the Minimum Weight Set Cover problem in the distributed setting. This allows us to achieve an (f+epsilon)-approximate integral solution in O((1+f/log n) * ((log Delta)/(log log Delta) + (f * log M)^{1.01} * log epsilon^{-1} * (log Delta)^{0.01})) rounds, where f bounds the number of variables in a constraint, Delta bounds the number of constraints a variable appears in, and M = max{1, ceil(1/a_{min})}, where a_{min} is the smallest normalized constraint coefficient. This improves over the results of [Fabian Kuhn et al., 2006] for the integral case, which, combined with rounding, achieve the same guarantees in O(epsilon^{-4} * f^4 * log f * log(M * Delta)) rounds.
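    For intuition on the LP-duality mechanism behind such covering algorithms, here is a minimal sequential sketch of the classical primal-dual f-approximation for set cover with element frequency at most f (equivalently, weighted vertex cover in rank-f hypergraphs). It only illustrates the underlying idea, not the distributed CONGEST procedure of the paper, and all names and the example data are made up.

```python
# Minimal sketch: sequential primal-dual f-approximation for Minimum Weight
# Set Cover with element frequency at most f. For each uncovered element we
# raise its dual variable until some containing set becomes tight; tight sets
# enter the cover. Illustrative only.

def primal_dual_set_cover(weights, sets, elements):
    """weights: dict set_id -> weight; sets: dict set_id -> set of elements."""
    residual = dict(weights)          # remaining slack of each set's weight
    cover = set()
    covers_of = {e: [s for s, elems in sets.items() if e in elems] for e in elements}
    for e in elements:
        if any(s in cover for s in covers_of[e]):
            continue                  # element already covered
        # raise the dual of e until some containing set becomes tight
        delta = min(residual[s] for s in covers_of[e])
        for s in covers_of[e]:
            residual[s] -= delta
            if residual[s] == 0:
                cover.add(s)          # tight sets enter the cover
    return cover

# Example with frequency f = 2 (weighted vertex cover): the returned cover
# has weight at most 2x the optimum.
w = {"A": 3.0, "B": 2.0, "C": 4.0}
S = {"A": {1, 2}, "B": {2, 3}, "C": {1, 3}}
print(primal_dual_set_cover(w, S, elements=[1, 2, 3]))
```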

    Spare capacity allocation using shared backup path protection for dual link failures

    This paper extends the spare capacity allocation (SCA) problem from single link failures [1] to dual link failures on mesh-like IP or WDM networks. The SCA problem pre-plans each traffic flow with one working path and two backup paths, all mutually disjoint, using the shared backup path protection (SBPP) scheme. The aggregated spare provision matrix (SPM) is used to capture the spare capacity sharing for dual link failures. Compared to previous work by He and Somani [2], this method has better scalability and flexibility. The SCA problem is formulated as a non-linear integer programming model and partitioned into two sequential linear sub-models: one finds all primary backup paths first, and the other finds all secondary backup paths next. The results on five networks show that the network redundancy using dedicated 1+1+1 protection is in the range of 313-400%. It drops to 96-181% under 1:1:1 without loss of dual-link resiliency, at the cost of the more complicated spare capacity sharing among backup paths. The hybrid 1+1:1 scheme provides an intermediate redundancy ratio of 187-310% with moderate complexity. We also compare passive and active approaches, which consider spare capacity sharing after or during the backup path routing process, respectively. The active sharing approaches always achieve lower redundancy than the passive ones; the reductions are about 12% for 1+1:1 and 25% for 1:1:1.
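    The following is a minimal sketch of how spare capacity can be aggregated over all dual-link failure scenarios in the spirit of the spare provision matrix: for each failure pair, rerouted backup traffic is accumulated per surviving link, and the per-link maximum over scenarios gives the shared spare capacity required. The flow and path representations are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: per-link shared spare capacity under dual-link failures with
# shared backup path protection. Each flow has a demand, a working path and two
# backup paths, each represented as a set of link identifiers. Illustrative only.
from itertools import combinations
from collections import defaultdict

def spare_capacity_per_link(flows, links):
    """flows: list of dicts with keys 'demand', 'work', 'backup1', 'backup2'."""
    spare = defaultdict(float)
    for f1, f2 in combinations(links, 2):          # every dual-link failure scenario
        load = defaultdict(float)
        for fl in flows:
            if f1 in fl["work"] or f2 in fl["work"]:
                # working path hit: use the first backup path that survives the failure
                for bp in (fl["backup1"], fl["backup2"]):
                    if f1 not in bp and f2 not in bp:
                        for link in bp:
                            load[link] += fl["demand"]
                        break
        for link, v in load.items():
            spare[link] = max(spare[link], v)      # spare capacity is shared across scenarios
    return dict(spare)
```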