6 research outputs found

    Distributed Consistent Network Updates in SDNs: Local Verification for Global Guarantees

    Full text link
    While SDNs enable more flexible and adaptive network operations, (logically) centralized reconfigurations introduce overheads and delays, which can limit network reactivity. This paper initiates the study of a more distributed approach, in which consistent network updates are implemented by the switches and routers directly in the data plane. In particular, our approach leverages concepts from local proof labeling systems, which allow the data plane elements to locally check network properties, and we show that this is sufficient to obtain global network guarantees. We demonstrate our approach on three fundamental use cases and analyze its benefits in terms of performance and fault-tolerance. Comment: Appears in IEEE NCA 201
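    Loop-freedom is one natural property such a local check could target. Below is a minimal Python sketch of a proof-labeling-style check under that assumption: the hop-count labels and the names check_local / network_is_loop_free are illustrative, not the paper's construction, and the paper's actual use cases may differ.

```python
# Minimal sketch of a proof-labeling-style local check (illustrative only):
# each switch stores a hop-count label toward the destination and verifies that
# its next hop's label is strictly smaller. If every switch passes this local
# test, the forwarding graph cannot contain a loop -- a global guarantee
# obtained purely from local verification.

def check_local(switch, labels, next_hop):
    """Return True if `switch`'s label is locally consistent with its next hop."""
    nh = next_hop.get(switch)
    if nh is None:                      # destination (or no route): nothing to check
        return True
    return labels[nh] < labels[switch]  # label must strictly decrease along the path


def network_is_loop_free(switches, labels, next_hop):
    """The global guarantee follows if every switch passes its local check."""
    return all(check_local(s, labels, next_hop) for s in switches)


# Toy example: path d <- a <- b <- c with hop-count labels.
labels = {"d": 0, "a": 1, "b": 2, "c": 3}
next_hop = {"a": "d", "b": "a", "c": "b"}
print(network_is_loop_free(labels.keys(), labels, next_hop))  # True
```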

    Fast edge splitting and Edmonds' arborescence construction for unweighted graphs

    No full text
    Given an unweighted undirected or directed graph with n vertices, m edges and edge connectivity c, we present a new deterministic algorithm for edge splitting. Our algorithm splits off any specified subset S of vertices satisfying standard conditions (even degree for the undirected case and in-degree ≥ out-degree for the directed case) while maintaining connectivity c for vertices outside S in Õ(m + nc^2) time for an undirected graph and Õ(mc) time for a directed graph. This improves the current best deterministic time bounds due to Gabow [8], who splits off a single vertex in Õ(nc^2 + m) time for an undirected graph and Õ(mc) time for a directed graph. Further, for appropriate ranges of n, c, and |S|, it improves the current best randomized bounds due to Benczúr and Karger [2], who split off a single vertex in an undirected graph in Õ(n^2) Monte Carlo time. We give two applications of our edge splitting algorithms. Our first application is a sub-quadratic (in n) algorithm to construct Edmonds' arborescences. A classical result of Edmonds [5] shows that an unweighted directed graph with c edge-disjoint paths from any particular vertex r to every other vertex has exactly c edge-disjoint arborescences rooted at r. For a c-edge-connected unweighted undirected graph, the same theorem holds on the digraph obtained by replacing each undirected edge by two directed edges, one in each direction. The current fastest construction of these arborescences by Gabow [7] takes Õ(n^2c^2) time. Our algorithm takes Õ(nc^3 + m) time for the undirected case and Õ(nc^4 + mc) time for the directed case. The second application of our splitting algorithm is a new Steiner edge connectivity algorithm for undirected graphs which matches the best known bound of Õ(nc^2 + m) time due to Bhalgat et al. [3]. Finally, our algorithm can also be viewed as an alternative proof for existential edge splitting theorems due to Lovász [9] and Mader [11].
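    To make the splitting-off operation itself concrete, here is a minimal Python sketch of a single undirected split on a multigraph; the Counter representation and the helper split_off_pair are illustrative assumptions, and the sketch deliberately does not attempt the paper's contribution, which is choosing the pairs so that connectivity outside S is preserved.

```python
# Illustrative sketch of one undirected splitting-off step (not the paper's
# algorithm): edges (s, u) and (s, v) are removed and replaced by (u, v).
# The hard part, which the paper's algorithm addresses efficiently, is choosing
# the pairs so that edge connectivity c is preserved for vertices outside s.

from collections import Counter

def split_off_pair(edges, s, u, v):
    """Replace edges (s, u) and (s, v) by (u, v) in a multigraph edge Counter."""
    e_su, e_sv = frozenset((s, u)), frozenset((s, v))
    if edges[e_su] == 0 or edges[e_sv] == 0:
        raise ValueError("both (s,u) and (s,v) must be present")
    edges[e_su] -= 1
    edges[e_sv] -= 1
    edges[frozenset((u, v))] += 1
    return edges

# Toy multigraph: s (even degree 4) attached to every vertex of a 4-cycle a-b-c-d.
edges = Counter({frozenset(e): 1 for e in
                 [("s", "a"), ("s", "b"), ("s", "c"), ("s", "d"),
                  ("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]})
# Pairing opposite neighbours adds the two diagonals, leaving K4 on {a, b, c, d},
# so the pairwise edge connectivity (3) among these vertices is preserved.
# A careless pairing, e.g. (s,a) with (s,b), would drop it to 2.
split_off_pair(edges, "s", "a", "c")
split_off_pair(edges, "s", "b", "d")
```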

    When Stuck, Flip a Coin: New Algorithms for Large-Scale Tasks

    Get PDF
    Many modern services need to routinely perform tasks on a large scale. This prompts us to consider the following question: How can we design efficient algorithms for large-scale computation? In this thesis, we focus on devising a general strategy to address the above question. Our approaches use tools from graph theory and convex optimization, and prove to be very effective on a number of problems that exhibit locality. A recurring theme in our work is to use randomization to obtain simple and practical algorithms. The techniques we developed enabled us to make progress on the following questions:
    - Parallel Computation of Approximately Maximum Matchings. We put forth a new approach to computing O(1)-approximate maximum matchings in the Massively Parallel Computation (MPC) model. In the regime in which the memory per machine is Θ(n), i.e., linear in the size of the vertex set, our algorithm requires only O((log log n)^2) rounds of computation. This is an almost exponential improvement over the barrier of Ω(log n) rounds that all the previous results required in this regime.
    - Parallel Computation of Maximal Independent Sets. We propose a simple randomized algorithm that constructs maximal independent sets in the MPC model (see the sketch after this list). If the memory per machine is Θ(n), our algorithm runs in O(log log n) MPC rounds. In the same regime, all the previously known algorithms required O(log n) rounds of computation.
    - Network Routing under Link Failures. We design a new protocol for stateless message routing in k-connected graphs. Our routing scheme has two important features: (1) each router performs its routing decisions based only on the local information available to it; and (2) a message is delivered successfully even if arbitrary k - 1 links have failed. This significantly improves upon the previous work, whose routing schemes tolerate only up to k/2 - 1 failed links in k-connected graphs.
    - Streaming Submodular Maximization under Element Removals. We study the problem of maximizing submodular functions subject to a cardinality constraint k, in the context of streaming algorithms. In a regime in which up to m elements can be removed from the stream, we design an algorithm that provides a constant-factor approximation for this problem. At the same time, the algorithm stores only O(k log^2 k + m log^3 k) elements. Our algorithm improves quadratically upon the prior work, which requires storing O(k · m) elements to solve the same problem.
    - Fast Recovery for the Separated Sparsity Model. In the context of compressed sensing, we put forth two nearly linear-time recovery algorithms for separated sparsity signals (which naturally model neural spikes). This improves upon the previous algorithm, which had a quadratic running time. We also derive a refined version of the natural dynamic programming (DP) approach to the recovery of separated sparsity signals. This DP approach leads to a recovery algorithm that runs in linear time for an important class of separated sparsity signals. Finally, we consider a generalization of these signals to two dimensions, and we show that computing an exact projection for the two-dimensional model is NP-hard.
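    The "flip a coin" theme is already visible in the classic sequential setting. Below is a minimal Python sketch of a Luby-style randomized maximal independent set computation; it is a sequential simulation under illustrative names (luby_mis, adj), not the thesis's MPC algorithm, and is included only to make the coin-flipping idea concrete.

```python
import random

def luby_mis(adj, seed=0):
    """Sequential simulation of Luby-style randomized MIS rounds (illustrative,
    not the thesis's MPC algorithm). `adj` maps each vertex to a set of neighbours."""
    rng = random.Random(seed)
    alive = set(adj)          # vertices still undecided
    mis = set()
    while alive:
        # Each undecided vertex "flips a coin": draw a random priority.
        priority = {v: rng.random() for v in alive}
        # A vertex joins the MIS if it beats all of its undecided neighbours.
        winners = {v for v in alive
                   if all(priority[v] < priority[u]
                          for u in adj[v] if u in alive)}
        mis |= winners
        # Winners and their neighbours are settled and leave the game.
        removed = set(winners)
        for v in winners:
            removed |= adj[v]
        alive -= removed
    return mis

# Toy example on a 5-cycle; the output is some maximal independent set,
# e.g. {0, 2} or {1, 3}, depending on the random coin flips.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(luby_mis(adj))
```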