357 research outputs found

    Dataplane Specialization for High-performance OpenFlow Software Switching

    OpenFlow is an amazingly expressive dataplane programming language, but this expressiveness comes at a severe performance price as switches must do excessive packet classification in the fast path. The prevalent OpenFlow software switch architecture is therefore built on flow caching, but this imposes intricate limitations on the workloads that can be supported efficiently and may even open the door to malicious cache overflow attacks. In this paper we argue that instead of enforcing the same universal flow cache semantics on all OpenFlow applications and optimizing for the common case, a switch should rather automatically specialize its dataplane piecemeal with respect to the configured workload. We introduce ESwitch, a novel switch architecture that uses on-the-fly template-based code generation to compile any OpenFlow pipeline into efficient machine code, which can then be readily used as the fast path. We present a proof-of-concept prototype and we demonstrate on illustrative use cases that ESwitch yields a simpler architecture, superior packet processing speed, improved latency and CPU scalability, and predictable performance. Our prototype can easily scale beyond 100 Gbps on a single Intel blade even with complex OpenFlow pipelines.
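
    To make the idea concrete, here is a minimal Python sketch of template-based fast-path specialization in the spirit described above. It is purely illustrative and assumes a toy flow table that matches only on a destination IP field; the names (TEMPLATE, specialize, fast_path) are not from the paper, and ESwitch itself generates machine code, not Python.

        # Hypothetical sketch of template-based dataplane specialization (not the
        # ESwitch implementation): for a flow table that matches only on dst_ip,
        # emit a dedicated lookup function instead of running a generic OpenFlow
        # classifier on every packet.

        TEMPLATE = """
        def fast_path(pkt, _table={table!r}):
            # Specialized for exact-match on dst_ip; regenerate whenever the
            # controller installs rules that use other match fields.
            action = _table.get(pkt['dst_ip'])
            return action if action is not None else 'send_to_slow_path'
        """

        def specialize(flow_table):
            """Compile the currently configured workload into a dedicated fast path."""
            src = TEMPLATE.format(table=flow_table)
            namespace = {}
            exec(src, namespace)                     # on-the-fly code generation
            return namespace['fast_path']

        fast_path = specialize({'10.0.0.1': 'output:1', '10.0.0.2': 'output:2'})
        print(fast_path({'dst_ip': '10.0.0.1'}))     # -> output:1
        print(fast_path({'dst_ip': '8.8.8.8'}))      # -> send_to_slow_path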

    IP routing lookup: hardware and software approach

    The work presented in this thesis is motivated by the dual goal of developing scalable and efficient approaches to IP lookup in both hardware and software. The work involved designing algorithms and techniques to increase the capacity and flexibility of the Internet. The Internet is comprised of routers that forward packets toward their destination addresses and the physical links that transfer data from one router to another. Optical technologies have improved significantly over the years and hence data link capacities have increased. However, packet forwarding rates at the router have failed to keep up with the link capacities. Every router performs a packet-forwarding decision on each incoming packet to determine the packet's next-hop router. This is achieved by looking up the destination address of the incoming packet in the forwarding table. Besides increased inter-packet arrival rates, the increasing routing table sizes and the complexity of forwarding algorithms have made routers a bottleneck in packet transmission across the Internet. A number of solutions have been proposed to address this problem, and they can be categorized into hardware and software solutions. Various lookup algorithms have been proposed to tackle the problem using software approaches. These approaches have proved more scalable and practicable; however, they have not been able to keep up with the link rates. The first part of my thesis discusses one such software solution for routing lookup. The hardware approaches available today have been able to match the link speeds; however, they are unable to keep up with the increasing number of routing table entries and the power consumed. The second part of my thesis describes a hardware-based solution that bounds the power consumption and reduces the number of entries that must be stored in the routing table.
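
    The lookups discussed here are longest-prefix-match (LPM) lookups. The following self-contained Python sketch shows the textbook binary-trie formulation of LPM; it is an illustration of the general problem only, not of the hardware or software designs proposed in the thesis.

        # Minimal binary-trie longest-prefix-match sketch (illustrative only).

        class TrieNode:
            __slots__ = ('children', 'next_hop')
            def __init__(self):
                self.children = [None, None]    # one child per bit value
                self.next_hop = None            # set if a prefix ends at this node

        def ip(s):
            a, b, c, d = map(int, s.split('.'))
            return (a << 24) | (b << 16) | (c << 8) | d

        def insert(root, prefix, length, next_hop):
            """Install a route such as 10.0.0.0/8 by walking its first `length` bits."""
            node = root
            for i in range(length):
                bit = (prefix >> (31 - i)) & 1
                if node.children[bit] is None:
                    node.children[bit] = TrieNode()
                node = node.children[bit]
            node.next_hop = next_hop

        def lookup(root, addr):
            """Return the next hop of the longest matching prefix, or None."""
            node, best = root, None
            for i in range(32):
                if node.next_hop is not None:
                    best = node.next_hop
                node = node.children[(addr >> (31 - i)) & 1]
                if node is None:
                    return best
            return node.next_hop or best

        root = TrieNode()
        insert(root, ip('10.0.0.0'), 8, 'eth1')
        insert(root, ip('10.1.0.0'), 16, 'eth2')
        print(lookup(root, ip('10.1.2.3')))   # -> eth2 (the longer prefix wins)
        print(lookup(root, ip('10.9.9.9')))   # -> eth1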

    Online Aggregation of the Forwarding Information Base: Accounting for Locality and Churn


    An algorithm for fast route lookup and update

    Increases in routing table sizes, update rates, traffic, and link speeds, together with the migration to IPv6, have made IP address lookup, based on longest prefix matching, a major bottleneck for high-performance routers. Several schemes are evaluated and compared based on complexity analysis and simulation results. A trie-based scheme, called Linked List Cascade Addressable Trie (LLCAT), is presented. The strength of LLCAT comes from the fact that it is easy to implement in hardware, and routing table update operations are performed incrementally, requiring very few memory operations even in the worst case, which satisfies the requirements of dynamic routing tables in high-speed routers. The application of compression schemes to this algorithm is also considered to improve memory consumption and search time. The algorithm is implemented in the C language, and simulation results with real-life data are presented along with a detailed description of the algorithm.
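
    To illustrate the incremental-update property targeted here, the following self-contained Python sketch (the names announce and withdraw, and the dict-based trie, are hypothetical and not LLCAT) shows that every route change walks only the prefix's own path, so the worst-case number of memory operations is bounded by the address length rather than by the table size.

        # Hypothetical sketch of incremental route updates on a dict-based trie.

        def _walk(root, prefix, length, create=False):
            """Follow (or create) the path of the prefix's first `length` bits."""
            node = root
            for i in range(length):
                bit = (prefix >> (31 - i)) & 1
                child = node.setdefault(bit, {}) if create else node.get(bit)
                if child is None:
                    return None
                node = child
            return node

        def announce(table, prefix, length, next_hop):
            """Insert or update a route: at most `length` node visits, one write."""
            _walk(table, prefix, length, create=True)['hop'] = next_hop

        def withdraw(table, prefix, length):
            """Remove a route lazily, without restructuring the trie."""
            node = _walk(table, prefix, length)
            if node is not None:
                node.pop('hop', None)

        def ip(s):
            a, b, c, d = map(int, s.split('.'))
            return (a << 24) | (b << 16) | (c << 8) | d

        table = {}                                   # root of a dict-based binary trie
        announce(table, ip('10.0.0.0'), 8, 'eth1')   # insert 10.0.0.0/8
        announce(table, ip('10.0.0.0'), 8, 'eth2')   # update: a single node is rewritten
        withdraw(table, ip('10.0.0.0'), 8)           # withdrawal: one field is cleared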

    Rules Placement Problem in OpenFlow Networks: a Survey

    Software-Defined Networking (SDN) abstracts low-level network functionalities to simplify network management and reduce costs. The OpenFlow protocol implements the SDN concept by abstracting network communications as flows to be processed by network elements. In OpenFlow, high-level policies are translated into network primitives called rules that are distributed over the network. While the abstraction offered by OpenFlow makes it possible, in principle, to implement any policy, it raises the new question of how to define the rules and where to place them in the network while respecting all technical and administrative requirements. In this paper, we propose a comprehensive study of the so-called OpenFlow rules placement problem, with a survey of the various proposals intended to solve it. Our study is multi-fold. First, we define the problem and its challenges. Second, we overview the large number of solutions proposed, with a clear distinction between solutions focusing on memory management and those proposing to reduce signaling traffic to ensure scalability. Finally, we discuss potential research directions around the OpenFlow rules placement problem.
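
    As a toy illustration of what a rules placement instance can look like, the following Python sketch greedily places each rule on the first switch along its flow's path that still has free flow-table capacity. All names and the greedy heuristic are assumptions made for illustration; the survey covers far richer models (wildcard rules, endpoint policies, update cost, signaling overhead).

        # Toy, hypothetical rules placement instance: one entry per rule, placed
        # on some switch along the flow's path that still has free capacity.

        def place_rules(rules, paths, capacity):
            """
            rules:    {rule_id: flow_id}
            paths:    {flow_id: [switch, switch, ...]}  path the flow traverses
            capacity: {switch: free_entries}
            Returns {rule_id: switch} or raises if some rule cannot be placed.
            """
            placement = {}
            for rule, flow in rules.items():
                for switch in paths[flow]:
                    if capacity[switch] > 0:       # greedy: first switch with room
                        capacity[switch] -= 1
                        placement[rule] = switch
                        break
                else:
                    raise RuntimeError(f'no capacity left on path for {rule}')
            return placement

        print(place_rules(
            rules={'r1': 'f1', 'r2': 'f1', 'r3': 'f2'},
            paths={'f1': ['s1', 's2'], 'f2': ['s2', 's3']},
            capacity={'s1': 1, 's2': 1, 's3': 1},
        ))   # -> {'r1': 's1', 'r2': 's2', 'r3': 's3'}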

    A Computational Approach to Packet Classification

    Multi-field packet classification is a crucial component in modern software-defined data center networks. To achieve high throughput and low latency, state-of-the-art algorithms strive to fit the rule lookup data structures into on-die caches; however, they do not scale well with the number of rules. We present a novel approach, NuevoMatch, which improves the memory scaling of existing methods. A new data structure, Range Query Recursive Model Index (RQ-RMI), is the key component that enables NuevoMatch to replace most of the accesses to main memory with model inference computations. We describe an efficient training algorithm that guarantees the correctness of the RQ-RMI-based classification. The use of RQ-RMI allows the rules to be compressed into model weights that fit into the hardware cache. Further, it takes advantage of the growing support for fast neural network processing in modern CPUs, such as wide vector instructions, achieving a rate of tens of nanoseconds per lookup. Our evaluation using 500K multi-field rules from the standard ClassBench benchmark shows a geometric mean compression factor of 4.9x, 8x, and 82x, and average performance improvement of 2.4x, 2.6x, and 1.6x in throughput compared to CutSplit, NeuroCuts, and TupleMerge, all state-of-the-art algorithms.
    Comment: To appear in SIGCOMM 202
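
    The following self-contained Python sketch illustrates, in a much simplified form, the learned-range-index idea that RQ-RMI builds on: fit a tiny model that predicts which range a key falls into, record the model's worst-case error at build time, and at lookup time search only a small window around the prediction. The class LearnedRangeIndex and its single linear model are hypothetical simplifications, not the RQ-RMI structure or training algorithm from the paper.

        # Toy one-level learned range index over sorted range lower bounds.
        import bisect

        class LearnedRangeIndex:
            def __init__(self, range_starts):
                self.starts = sorted(range_starts)
                n, lo, hi = len(self.starts), self.starts[0], self.starts[-1]
                self.base = lo
                self.scale = (n - 1) / (hi - lo) if hi > lo else 0.0
                # Worst-case prediction error, measured once when the index is built.
                self.err = max(abs(self._predict(s) - i)
                               for i, s in enumerate(self.starts))

            def _predict(self, key):
                p = int(round((key - self.base) * self.scale))
                return min(max(p, 0), len(self.starts) - 1)

            def lookup(self, key):
                """Index of the range containing `key` (assumes key >= smallest start)."""
                p = self._predict(key)
                lo = max(0, p - self.err)
                hi = min(len(self.starts), p + self.err + 2)
                # Search only the small window that the error bound guarantees.
                return bisect.bisect_right(self.starts, key, lo, hi) - 1

        idx = LearnedRangeIndex([0, 10, 20, 80, 90, 100])
        print(idx.lookup(85))    # -> 3 (the range starting at 80)
        print(idx.lookup(15))    # -> 1 (the range starting at 10)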