
    Leveraging Programmable Data Plane For Compressing Forwarding Tables

    The Forwarding Information Base (FIB) resides in the data plane of a routing device and is used to forward packets to a next hop based on their destination IP addresses. The constant growth of FIBs forces network operators to spend more resources on memory capable of line-rate Longest Prefix Match (LPM) lookup, namely expensive and energy-hungry Ternary Content-Addressable Memory (TCAM) chips. In this work, we review two approaches to mitigating the FIB overflow problem. First, we investigate FIB aggregation, i.e., merging adjacent or overlapping routes with the same next hop while preserving the forwarding behavior of the FIB. We propose a near-optimal algorithm, FIB Aggregation with Quick Selections (FAQS), that minimizes FIB churn and speeds up BGP update processing more than twofold, while preserving a high compression ratio (at most 73%). FAQS handles BGP updates incrementally, without re-aggregating the entire FIB table. Second, we investigate FIB (or route) caching, in which the TCAM holds only the portion of the FIB that carries most of the traffic. We leverage the emerging concept of the programmable data plane to propose a Programmable FIB Caching Architecture (PFCA) that selects cache victims at line rate and significantly reduces FIB churn compared to FIB aggregation. PFCA achieves a 99.8% cache-hit ratio with only 3.3% of the FIB placed in the FIB cache. Finally, we extend PFCA's design with a novel approach that integrates incremental FIB aggregation and FIB caching. Such integration must overcome the cache-hiding challenge, where a less specific prefix in the cache hides a more specific prefix in the secondary FIB table, leading to incorrect LPM matching at the cache. In Combined FIB Caching and Aggregation (CFCA), the cache-hit ratio reaches 99.94% with only 2.5% of the FIB entries, while the total number of route changes in the TCAM is reduced by more than 40% compared to low-churn FIB aggregation techniques.
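
    To make the cache-hiding challenge concrete, here is a minimal sketch of how a cached, less specific prefix can yield a wrong longest-prefix match. The prefixes and next hops are hypothetical, and the linear scan merely stands in for a TCAM lookup; this illustrates the problem CFCA must solve, not its solution.

        import ipaddress

        full_fib = {                    # complete FIB in the secondary table
            "10.0.0.0/8": "A",
            "10.1.2.0/24": "B",         # more specific route, different next hop
        }
        cache = {"10.0.0.0/8": "A"}     # only the covering /8 was cached

        def lpm(table, ip):
            # Longest-prefix match by linear scan; a TCAM does this at line rate.
            best = None
            for prefix, next_hop in table.items():
                net = ipaddress.ip_network(prefix)
                if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
                    best = (net, next_hop)
            return best[1] if best else None

        dst = ipaddress.ip_address("10.1.2.3")
        print(lpm(full_fib, dst))   # "B": the correct next hop
        print(lpm(cache, dst))      # "A": the cached /8 hides 10.1.2.0/24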

    Toward incremental FIB aggregation with quick selections (FAQS)

    Several approaches to mitigating the Forwarding Information Base (FIB) overflow problem have been developed, and software solutions using FIB aggregation are of particular interest. One of the greatest obstacles to deploying these algorithms in real networks is their long running time and heavy computational overhead when handling thousands of FIB updates every second. In this work, we use a single tree traversal to implement a faster aggregation and update-handling algorithm with a much lower memory footprint than existing work. We use six years of real IPv4 and IPv6 routing tables, from 2011 to 2016, to evaluate the performance of our algorithm on various metrics. To the best of our knowledge, this is the first time that IPv6 FIB aggregation has been performed. Our new solution is 2.53 and 1.75 times as fast as the state-of-the-art FIB aggregation algorithm for IPv4 and IPv6 FIBs, respectively, while achieving a near-optimal FIB aggregation ratio.
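
    As a rough illustration of trie-based aggregation in a single traversal, the sketch below deletes routes that are redundant under longest-prefix match because they inherit the same next hop from their nearest ancestor route. It is a simplified toy with assumed names and sample routes, not the FAQS algorithm itself, which additionally merges siblings and handles updates incrementally.

        class Node:
            def __init__(self):
                self.children = [None, None]   # 0-bit and 1-bit subtries
                self.next_hop = None           # None: no route ends here

        def insert(root, prefix, length, next_hop):
            # Insert a route; `prefix` holds the high bits of a 32-bit address.
            node = root
            for i in range(length):
                bit = (prefix >> (31 - i)) & 1
                if node.children[bit] is None:
                    node.children[bit] = Node()
                node = node.children[bit]
            node.next_hop = next_hop

        def aggregate(node, inherited):
            # Single pre-order traversal: a route whose next hop equals the
            # one inherited from its nearest ancestor is redundant under LPM.
            if node is None:
                return 0
            removed = 0
            if node.next_hop is not None and node.next_hop == inherited:
                node.next_hop = None
                removed = 1
            effective = node.next_hop if node.next_hop is not None else inherited
            return removed + sum(aggregate(c, effective) for c in node.children)

        root = Node()
        insert(root, 0x0A000000, 8, "A")    # 10.0.0.0/8   -> A
        insert(root, 0x0A800000, 9, "A")    # 10.128.0.0/9 -> A (redundant)
        insert(root, 0x0A400000, 10, "B")   # 10.64.0.0/10 -> B (kept)
        print(aggregate(root, None), "route(s) removed")   # 1 route(s) removed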

    Comparison of energy-wood and pulpwood thinning systems in young birch stands

    In early thinnings, a profitable alternative to pulpwood could be to harvest whole trees as energy-wood. In theoretical analyses, we compared the extractable volumes of energy-wood and pulpwood, and their respective gross values, in differently aged stands of early birch thinnings at varying removal intensities. In a parallel field experiment, we compared harvesting productivity for pulpwood and energy-wood, and their profitability once the costs of harvesting and forwarding were included. The theoretical analyses showed that the proportion of total tree biomass removed as pulpwood increased with increasing thinning intensity and stem size. The biomass volume was 1.5–1.7 times larger than the pulpwood volume for a stand of 13.9 cm diameter at breast height and 2.0–3.5 times larger for a stand of 10.4 cm diameter at breast height. In the field experiment, the harvested volume of energy-wood per hectare was almost twice that of pulpwood. Harvesting productivity was 205 trees per productive work-time hour in the energy-wood treatment and 120 in the pulpwood treatment. The pulpwood treatment generated a net loss, whereas the energy-wood treatment generated a net income, the average difference being €595 ha−1. We conclude that in birch-dominated early thinning stands, at current market prices, harvesting energy-wood is more profitable than harvesting pulpwood.

    A Fast Compiler for NetKAT

    High-level programming languages play a key role in a growing number of networking platforms, streamlining application development and enabling precise formal reasoning about network behavior. Unfortunately, current compilers only handle "local" programs that specify behavior in terms of hop-by-hop forwarding, or modest extensions such as simple paths. To encode richer "global" behaviors, programmers must add extra state -- something that is tricky to get right and makes programs harder to write and maintain. Making matters worse, existing compilers can take tens of minutes to generate the forwarding state for the network, even on relatively small inputs. This forces programmers to waste time working around performance issues or even revert to using hardware-level APIs. This paper presents a new compiler for the NetKAT language that handles rich features including regular paths and virtual networks, yet is several orders of magnitude faster than previous compilers. The compiler uses symbolic automata to calculate the extra state needed to implement "global" programs, and an intermediate representation based on binary decision diagrams to dramatically improve performance. We describe the design and implementation of three essential compiler stages: from virtual programs (which specify behavior in terms of virtual topologies) to global programs (which specify network-wide behavior in terms of physical topologies), from global programs to local programs (which specify behavior in terms of single-switch behavior), and from local programs to hardware-level forwarding tables. We present results from experiments on real-world benchmarks that quantify performance in terms of compilation time and forwarding table size.
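
    To convey the flavor of a decision-diagram intermediate representation, the toy below orders tests over packet header fields, places forwarding actions at the leaves, and flattens the diagram into prioritized match-action rules. The field names, the example policy, and the flattening routine are simplifying assumptions, not the paper's actual data structures; a real implementation would hash-cons nodes so that identical subdiagrams are shared.

        from dataclasses import dataclass
        from typing import Union

        @dataclass(frozen=True)
        class Leaf:
            action: str                 # e.g. "fwd(2)" or "drop"

        @dataclass(frozen=True)
        class Test:
            field: str                  # header field under test
            value: int                  # constant compared against
            hi: "Node"                  # branch taken when field == value
            lo: "Node"                  # branch taken otherwise

        Node = Union[Leaf, Test]

        def evaluate(node, packet):
            # Walk the diagram once per packet; each field is tested at most
            # once along any path from the root to a leaf.
            while isinstance(node, Test):
                node = node.hi if packet.get(node.field) == node.value else node.lo
            return node.action

        def to_rules(node, matches=()):
            # Flatten into prioritized (match, action) rules, mirroring the
            # final compilation stage down to forwarding tables. The lo branch
            # becomes lower-priority rules; priority order encodes the negation.
            if isinstance(node, Leaf):
                return [(dict(matches), node.action)]
            hi_rules = to_rules(node.hi, matches + ((node.field, node.value),))
            return hi_rules + to_rules(node.lo, matches)

        policy = Test("ip_dst", 0x0A000001,
                      Leaf("fwd(1)"),
                      Test("tcp_dport", 80, Leaf("fwd(2)"), Leaf("drop")))

        print(evaluate(policy, {"ip_dst": 0x0A000002, "tcp_dport": 80}))  # fwd(2)
        for match, action in to_rules(policy):
            print(match, "->", action)

    Because each field is tested at most once along any root-to-leaf path, the flattening preserves semantics: rules derived from a hi branch always cover every packet that satisfies the accumulated tests, so a lower-priority rule fires only when the corresponding test fails.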

    Dataplane Specialization for High-performance OpenFlow Software Switching

    OpenFlow is an amazingly expressive dataplane programming language, but this expressiveness comes at a severe performance price, as switches must do excessive packet classification in the fast path. The prevalent OpenFlow software switch architecture is therefore built on flow caching, but this imposes intricate limitations on the workloads that can be supported efficiently and may even open the door to malicious cache overflow attacks. In this paper we argue that instead of enforcing the same universal flow cache semantics on all OpenFlow applications and optimizing for the common case, a switch should rather automatically specialize its dataplane piecemeal with respect to the configured workload. We introduce ESwitch, a novel switch architecture that uses on-the-fly template-based code generation to compile any OpenFlow pipeline into efficient machine code, which can then be readily used as the fast path. We present a proof-of-concept prototype and demonstrate on illustrative use cases that ESwitch yields a simpler architecture, superior packet processing speed, improved latency and CPU scalability, and predictable performance. Our prototype can easily scale beyond 100 Gbps on a single Intel blade even with complex OpenFlow pipelines.
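
    To give a feel for template-based dataplane specialization, the sketch below compiles a configured flow table once into a specialized straight-line matcher, so that the per-packet fast path performs no generic classification. ESwitch emits machine code from templates; this toy emits Python source as a stand-in, and the flow table shown is hypothetical.

        flow_table = [                  # hypothetical OpenFlow-style pipeline
            ({"ip_dst": "10.0.0.1"}, "output:1"),
            ({"ip_dst": "10.0.0.2", "tcp_dport": 80}, "output:2"),
        ]

        def compile_fast_path(table):
            # Generate a matcher specialized to the configured flow table, so
            # the fast path avoids generic per-field packet classification.
            lines = ["def fast_path(pkt):"]
            for match, action in table:
                cond = " and ".join(
                    f"pkt.get({field!r}) == {value!r}" for field, value in match.items()
                ) or "True"
                lines.append(f"    if {cond}:")
                lines.append(f"        return {action!r}")
            lines.append("    return 'controller'  # table miss")
            namespace = {}
            exec("\n".join(lines), namespace)   # stand-in for emitting machine code
            return namespace["fast_path"]

        fast_path = compile_fast_path(flow_table)
        print(fast_path({"ip_dst": "10.0.0.2", "tcp_dport": 80}))   # output:2
        print(fast_path({"ip_dst": "192.0.2.1"}))                   # controller

    Recompiling fast_path whenever the configured pipeline changes mirrors the on-the-fly specialization described above, trading a one-time compilation cost for a leaner per-packet path.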