
    Fast ReRoute on Programmable Switches

    Highly dependable communication networks usually rely on some kind of Fast Re-Route (FRR) mechanism that allows traffic to be quickly re-routed upon failures, entirely in the data plane. This paper studies the design of FRR mechanisms for emerging reconfigurable switches. Our main contribution is an FRR primitive for programmable data planes, PURR, which provides low failover latency and high switch throughput by avoiding packet recirculation. PURR tolerates multiple concurrent failures and comes with minimal memory requirements, ensuring compact forwarding tables, by unveiling an intriguing connection to classic "string theory" (i.e., stringology), and in particular the shortest common supersequence problem. PURR is well-suited for high-speed match-action forwarding architectures (e.g., PISA) and supports the implementation of a broad variety of FRR mechanisms. Our simulations and prototype implementation (on an FPGA and a Tofino switch) show that PURR improves TCAM memory occupancy by a factor of 1.5x-10.8x compared to a naïve encoding when implementing state-of-the-art FRR mechanisms. PURR also improves the latency and throughput of datacenter traffic by up to a factor of 2.8x-5.5x and 1.2x-2x, respectively, compared to approaches based on recirculating packets.
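
    The compact-table claim above rests on the shortest common supersequence idea: the per-destination sequences of ports to try after a failure can all be embedded as subsequences of one shared port sequence, so each destination only needs a bitmap over that shared sequence. The Python sketch below illustrates that intuition with a simple pairwise-DP heuristic and made-up port sequences; it is not PURR's actual encoding algorithm or data-plane layout.

```python
# Illustrative sketch (not the paper's algorithm): embed several per-destination
# failover port sequences as subsequences of one shared "supersequence" of ports.
# Each destination then only needs a bitmap over the shared sequence instead of
# its own list of entries, which is the intuition behind PURR's compact encoding.

from functools import lru_cache

def scs(a, b):
    """Shortest common supersequence of two sequences (classic DP)."""
    @lru_cache(maxsize=None)
    def solve(i, j):
        if i == len(a):
            return list(b[j:])
        if j == len(b):
            return list(a[i:])
        if a[i] == b[j]:
            return [a[i]] + solve(i + 1, j + 1)
        left = [a[i]] + solve(i + 1, j)
        right = [b[j]] + solve(i, j + 1)
        return left if len(left) <= len(right) else right
    return solve(0, 0)

def greedy_supersequence(sequences):
    """Fold the sequences together with pairwise SCS (a simple heuristic)."""
    merged = list(sequences[0])
    for seq in sequences[1:]:
        merged = scs(tuple(merged), tuple(seq))
    return merged

def bitmap_for(seq, supersequence):
    """Mark which positions of the supersequence realize seq as a subsequence."""
    bitmap, i = [0] * len(supersequence), 0
    for pos, port in enumerate(supersequence):
        if i < len(seq) and port == seq[i]:
            bitmap[pos], i = 1, i + 1
    assert i == len(seq)
    return bitmap

# Hypothetical failover sequences: "try port 1, then 3, then 2", and so on.
fallbacks = [(1, 3, 2), (3, 2, 4), (1, 2, 4)]
shared = greedy_supersequence(fallbacks)
print(shared)                                    # one shared sequence, e.g. [1, 3, 2, 4]
print([bitmap_for(s, shared) for s in fallbacks])
```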

    Exploiting the power of multiplicity: a holistic survey of network-layer multipath

    The Internet is inherently a multipath network: an underlying network with only a single path connecting its nodes would be debilitatingly fragile. Unfortunately, traditional Internet technologies have been designed around the restrictive assumption of a single working path between a source and a destination. The lack of native multipath support constrains network performance even when the underlying network is richly connected and has multiple redundant paths. Computer networks can exploit the power of multiplicity, through which a diverse collection of paths is pooled as a single resource, to unlock the inherent redundancy of the Internet. This opens up a new vista of opportunities, promising increased throughput (through concurrent usage of multiple paths) and increased reliability and fault tolerance (through the use of multiple paths in backup/redundant arrangements). Many emerging trends in networking signify that the Internet's future will be multipath: the use of multipath technology in data center computing; the ready availability of multiple heterogeneous radio interfaces (such as Wi-Fi and cellular) in wireless devices; the ubiquity of mobile devices that are multihomed with heterogeneous access networks; and the development and standardization of multipath transport protocols such as Multipath TCP. The aim of this paper is to provide a comprehensive survey of the literature on network-layer multipath solutions. We present a detailed investigation of two important design issues, namely the control-plane problem of how to compute and select routes and the data-plane problem of how to split a flow across the computed paths. The main contribution of this paper is a systematic articulation of the main design issues in network-layer multipath routing, along with a broad-ranging survey of the vast literature on network-layer multipathing. We also highlight open issues and identify directions for future work.
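
    As an illustration of the data-plane question raised above (how to split a flow over the computed paths), the sketch below shows one common answer: hash a flow's five-tuple so that all packets of a flow stay on one path while different flows spread over the path set, ECMP-style. The path names and header fields are hypothetical, and this is the generic technique rather than a scheme proposed by the survey itself.

```python
# Minimal sketch of per-flow splitting over multiple precomputed paths, in the
# spirit of ECMP-style hashing: all packets of one flow share one path (avoiding
# reordering), while distinct flows spread across the path set. The five-tuple
# values and path names below are purely illustrative.

import hashlib

def pick_path(paths, src_ip, dst_ip, src_port, dst_port, proto):
    """Map a flow's five-tuple to one of the available paths."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(paths)
    return paths[index]

paths = ["path-A", "path-B", "path-C"]              # computed by the control plane
flow = ("10.0.0.1", "10.0.1.7", 40512, 443, "tcp")
print(pick_path(paths, *flow))                      # same flow -> same path every time
```

    Weighted or flowlet-based splitting follows the same pattern; only the mapping from the hash value to a path changes.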

    Software-defined datacenter network debugging

    Software-Defined Networking (SDN) enables flexible network management, but as networks evolve toward large numbers of end-points with diverse network policies, higher speeds, and higher utilization, the abstraction introduced by SDN makes monitoring and debugging network problems increasingly difficult. Some problems impact packet processing in the data plane (e.g., congestion), while others cause policy deployment failures (e.g., hardware bugs); both create inconsistency between operator intent and actual network behavior. Existing debugging tools are not sufficient to accurately detect, localize, and understand the root cause of problems observed in large-scale networks: they either lack in-network resources (compute, memory, and/or network bandwidth) or take a long time to debug network problems. This thesis presents three debugging tools, PathDump, SwitchPointer, and Scout, together with a technique for tracing packet trajectories called CherryPick. We call for a different approach to network monitoring and debugging: rather than implementing debugging functionality entirely in-network, we should carefully partition the debugging tasks between end-hosts and network elements. Toward this goal, we present CherryPick, PathDump, and SwitchPointer. The core idea of CherryPick is to cherry-pick the links that are key to representing an end-to-end path of a packet and to embed the picked link IDs into the packet header on its way to the destination. PathDump is an end-host-based network debugger that traces packet trajectories and exploits resources at the end-hosts to implement various monitoring and debugging functionalities. PathDump currently runs over a real network comprising only commodity hardware and yet supports a surprisingly large class of network debugging problems with minimal in-network functionality. The key contribution of SwitchPointer is to efficiently provide network visibility to end-host-based network debuggers like PathDump by using switch memory as a "directory service": each switch, rather than storing the telemetry data necessary for debugging functionalities, stores pointers to the end hosts where the relevant telemetry data is stored. The key design choice of treating switch memory as a directory service makes it possible to solve performance problems that were hard or infeasible with existing designs. Finally, we present and solve a network policy fault localization problem that arises when operating policy management frameworks for a production network. We develop Scout, a fully automated system that localizes faults in a large-scale policy deployment and further pinpoints the physical-level failures that are the most likely cause of the observed faults.
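
    To make the "switch memory as a directory service" idea more concrete, the toy Python model below keeps only per-epoch pointers (host IDs) at a switch while the full telemetry stays at the end hosts; a debugger first resolves pointers at the switch and then pulls records from those hosts. All class names and fields are invented for illustration and do not reflect SwitchPointer's actual interfaces.

```python
# Toy model of the "memory as a directory service" idea described above: a switch
# records only pointers (end-host IDs) per time epoch, while the actual per-packet
# telemetry lives at the end hosts. A debugger resolves a (switch, epoch) of
# interest to a set of hosts and then fetches the detailed records from them.

from collections import defaultdict

class Switch:
    def __init__(self, switch_id):
        self.switch_id = switch_id
        self.directory = defaultdict(set)      # epoch -> set of host IDs (pointers)

    def on_packet(self, epoch, src_host):
        self.directory[epoch].add(src_host)    # constant-size pointer, not telemetry

    def hosts_for(self, epoch):
        return self.directory.get(epoch, set())

class Host:
    def __init__(self, host_id):
        self.host_id = host_id
        self.telemetry = []                    # full per-packet records stay here

    def record(self, entry):
        self.telemetry.append(entry)

def debug_epoch(switch, hosts_by_id, epoch):
    """Resolve pointers at the switch, then fetch telemetry from the end hosts."""
    return {h: hosts_by_id[h].telemetry for h in switch.hosts_for(epoch)}

# Example: host h1 sends a packet through switch s1 during epoch 7.
s1, h1 = Switch("s1"), Host("h1")
h1.record({"epoch": 7, "path": ["h1", "s1", "h2"], "latency_us": 113})
s1.on_packet(epoch=7, src_host="h1")
print(debug_epoch(s1, {"h1": h1}, epoch=7))
```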

    When Stuck, Flip a Coin: New Algorithms for Large-Scale Tasks

    Many modern services need to routinely perform tasks on a large scale. This prompts us to consider the following question: how can we design efficient algorithms for large-scale computation? In this thesis, we focus on devising a general strategy to address this question. Our approaches use tools from graph theory and convex optimization, and prove to be very effective on a number of problems that exhibit locality. A recurring theme in our work is the use of randomization to obtain simple and practical algorithms. The techniques we developed enabled us to make progress on the following questions:
    - Parallel Computation of Approximately Maximum Matchings. We put forth a new approach to computing O(1)-approximate maximum matchings in the Massively Parallel Computation (MPC) model. In the regime in which the memory per machine is Θ(n), i.e., linear in the size of the vertex set, our algorithm requires only O((log log n)^2) rounds of computation. This is an almost exponential improvement over the barrier of Ω(log n) rounds that all previous results required in this regime.
    - Parallel Computation of Maximal Independent Sets. We propose a simple randomized algorithm that constructs maximal independent sets in the MPC model. If the memory per machine is Θ(n), our algorithm runs in O(log log n) MPC rounds. In the same regime, all previously known algorithms required O(log n) rounds of computation.
    - Network Routing under Link Failures. We design a new protocol for stateless message routing in k-connected graphs. Our routing scheme has two important features: (1) each router makes its routing decisions based only on the local information available to it; and (2) a message is delivered successfully even if arbitrary k-1 links have failed. This significantly improves upon previous work, whose routing schemes tolerate only up to k/2 - 1 failed links in k-connected graphs.
    - Streaming Submodular Maximization under Element Removals. We study the problem of maximizing submodular functions subject to a cardinality constraint k in the context of streaming algorithms. In a regime in which up to m elements can be removed from the stream, we design an algorithm that provides a constant-factor approximation for this problem while storing only O(k log^2 k + m log^3 k) elements. Our algorithm improves quadratically upon prior work, which requires storing O(k·m) elements to solve the same problem.
    - Fast Recovery for the Separated Sparsity Model. In the context of compressed sensing, we put forth two nearly linear-time recovery algorithms for separated sparsity signals (which naturally model neural spikes). This improves upon the previous algorithm, which had a quadratic running time. We also derive a refined version of the natural dynamic programming (DP) approach to the recovery of separated sparsity signals. This DP approach leads to a recovery algorithm that runs in linear time for an important class of separated sparsity signals. Finally, we consider a generalization of these signals to two dimensions and show that computing an exact projection for the two-dimensional model is NP-hard.
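
    For intuition about the "flip a coin" theme in the maximal independent set result above, the sketch below shows the classical Luby-style randomized MIS procedure in its simple round-by-round sequential form. It only conveys the random-priority idea; it is not the thesis's MPC algorithm, whose contribution is the O(log log n) round complexity in the linear-memory regime.

```python
# Classical Luby-style randomized maximal independent set (MIS), shown only as
# intuition for the "when stuck, flip a coin" theme: each round, every surviving
# vertex draws a random priority and joins the MIS if it beats all of its
# surviving neighbours. This is NOT the MPC algorithm from the thesis.

import random

def luby_mis(adj):
    """adj: dict mapping vertex -> set of neighbours. Returns a maximal independent set."""
    alive = set(adj)
    mis = set()
    while alive:
        # Every surviving vertex flips a coin: draw a random priority.
        rank = {v: random.random() for v in alive}
        # A vertex enters the MIS if its priority beats all surviving neighbours.
        winners = {v for v in alive
                   if all(rank[v] < rank[u] for u in adj[v] if u in alive)}
        mis |= winners
        # Remove the winners and their neighbours; the rest go to the next round.
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & alive
        alive -= removed
    return mis

# Small example: a 5-cycle.
graph = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(luby_mis(graph))   # e.g. {0, 2} or {1, 3}
```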