3 research outputs found

    On the Complexity of Compressing Two Dimensional Routing Tables with Order

    Motivated by routing in telecommunication networks using Software Defined Networking (SDN) technologies, we consider the following problem of finding short routing lists using aggregation rules. We are given a set of communications X, which are distinct pairs (s, t) ∈ S × T (typically S is the set of sources and T the set of destinations), and a port function π : X → P, where P is the set of ports. A routing list R is an ordered list of triples of the form (s, t, p); it routes a communication (s, t) to the port r(s, t) of the first triple in the list matching (s, t). If r(s, t) = π(s, t), then we say that (s, t) is properly routed by R, and if all communications of X are properly routed, we say that R emulates (X, π). The aim is to find a shortest routing list emulating (X, π). In this paper, we carry out a study of the complexity of the two dual decision problems associated with it. Given a set of communications X, a port function π, and an integer k, the first one, called Routing List (resp. the second one, called List Reduction), consists in deciding whether there is a routing list emulating (X, π) of size at most k (resp. |X| − k). We prove that both problems are NP-complete. We then give a 3-approximation for List Reduction, which can be generalized to higher dimensions. We also give a 4-approximation for Routing List in the fundamental case when there are only two ports (i.e. |P| = 2), X = S × T, and |S| = |T|. A preliminary short version of this work appeared in [7].
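
    The first-match semantics above lend themselves to a short illustration. The Python sketch below checks whether an ordered routing list emulates a port function; the wildcard entries ('*') stand in for the aggregation rules mentioned in the abstract, and the names and toy instance are illustrative assumptions, not the authors' construction.

    WILDCARD = "*"

    def route(routing_list, s, t):
        """Return the port of the first triple matching (s, t), or None."""
        for rs, rt, p in routing_list:
            if rs in (WILDCARD, s) and rt in (WILDCARD, t):
                return p
        return None

    def emulates(routing_list, X, pi):
        """R emulates (X, pi) iff every communication is properly routed."""
        return all(route(routing_list, s, t) == pi[(s, t)] for (s, t) in X)

    # Toy instance: |X| = 4 communications, two ports.
    X = {("s1", "t1"), ("s1", "t2"), ("s2", "t1"), ("s2", "t2")}
    pi = {("s1", "t1"): 1, ("s1", "t2"): 2, ("s2", "t1"): 2, ("s2", "t2"): 2}

    # A routing list of length 2 (instead of |X| = 4): one specific rule
    # followed by a default rule, relying on the order of the list.
    R = [("s1", "t1", 1), (WILDCARD, WILDCARD, 2)]
    assert emulates(R, X, pi)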

    Leveraging Programmable Data Plane For Compressing Forwarding Tables

    The Forwarding Information Base (FIB) resides in the data plane of a routing device and is used to forward packets to a next-hop based on the packets' destination IP addresses. The constant growth of a FIB forces network operators to spend more resources on maintaining memory with line-rate Longest Prefix Match (LPM) lookup in a FIB, namely, expensive and energy-hungry Ternary Content-Addressable Memory (TCAM) chips. In this work, we review two different approaches used to mitigate the FIB overflow problem. First, we investigate FIB aggregation, i.e., merging adjacent or overlapping routes with the same next-hop while preserving the forwarding behavior of a FIB. We propose a near-optimal algorithm, FIB Aggregation with Quick Selections (FAQS), that minimizes the FIB churn and speeds up BGP update processing more than twofold. In the meantime, FAQS preserves a high compression ratio (at most 73%). FAQS handles BGP updates incrementally, without the need to re-aggregate the entire FIB table. Second, we investigate FIB (or route) caching, where TCAM holds only a portion of the FIB that carries most of the traffic. We leverage the emerging concept of the programmable data plane to propose a Programmable FIB Caching Architecture (PFCA) that allows cache-victim selection at line rate and significantly reduces the FIB churn compared to FIB aggregation. PFCA achieves a 99.8% cache-hit ratio with only 3.3% of the FIB placed in a FIB cache. Finally, we extend PFCA's design with a novel approach that integrates incremental FIB aggregation and FIB caching. Such integration needs to overcome the cache-hiding challenge, where a less specific prefix in a cache hides a more specific prefix in a secondary FIB table, leading to incorrect LPM matching at the cache. In Combined FIB Caching and Aggregation (CFCA), the cache-hit ratio is maximized up to 99.94% with only 2.5% of the FIB entries, while the total number of route changes in TCAM is reduced by more than 40% compared to low-churn FIB aggregation techniques.
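
    To make the FIB aggregation idea concrete, here is a minimal Python sketch (not FAQS itself, and much simplified): a longest-prefix-match lookup and a single aggregation pass that merges sibling prefixes sharing a next-hop into their parent, which preserves forwarding behavior when no conflicting parent entry exists. The prefix strings, next-hop names, and one-pass merge are illustrative assumptions.

    import ipaddress

    def lpm(fib, ip):
        """Longest Prefix Match: return the next-hop of the most specific
        prefix covering ip, or None."""
        addr = ipaddress.ip_address(ip)
        best, best_len = None, -1
        for prefix, nexthop in fib.items():
            net = ipaddress.ip_network(prefix)
            if addr in net and net.prefixlen > best_len:
                best, best_len = nexthop, net.prefixlen
        return best

    def merge_siblings(fib):
        """One aggregation pass: replace two sibling /n prefixes that share
        a next-hop with their /(n-1) parent (assumes the parent prefix is
        not already present with a different next-hop)."""
        out = dict(fib)
        for prefix, nexthop in list(fib.items()):
            net = ipaddress.ip_network(prefix)
            if net.prefixlen == 0:
                continue
            parent = net.supernet()
            sibling = next(s for s in parent.subnets() if s != net)
            if out.get(str(sibling)) == nexthop:
                out.pop(str(net), None)
                out.pop(str(sibling), None)
                out[str(parent)] = nexthop
        return out

    fib = {"10.0.0.0/25": "A", "10.0.0.128/25": "A", "10.0.1.0/24": "B"}
    small = merge_siblings(fib)   # {"10.0.0.0/24": "A", "10.0.1.0/24": "B"}
    assert lpm(small, "10.0.0.200") == lpm(fib, "10.0.0.200") == "A"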

    Load Balance and Resource Efficiency in Communication Networks

    Network management is critical for today's networks. This study investigates both load balancing and resource efficiency in network management. For load balancing, one unfavorable situation is that the active traffic uses only a portion of the equal-cost paths instead of all of them. The root causes of load imbalance are not easily identified and located by network operators. Most related research in this area concerns the design of load-balancing mechanisms or network-wide troubleshooting that does not pinpoint the causes of load imbalance. In this study, we describe a computational framework based on network measurements to identify the correlation mechanism causing the load imbalance. We also describe a novel framework based on Coprime to mitigate the load imbalance caused by hash correlations. In evaluations based on real network traces and topologies, we show that our approach reduces the error (CV or K-S statistic) by at least one order of magnitude. For resource efficiency, today's networks demand increasing switch memory to support essential functions such as forwarding and monitoring. However, cache memory is restricted when processing data streams in which the input is presented as a sequence of items and can be examined in only a few passes (typically just one). This study introduces a new single-pass reservoir weighted-sampling stream aggregation algorithm, Priority-Based Aggregation (PBA). A naive approach that samples items regardless of key and then aggregates the results is hopelessly inefficient. In contrast, our proposed algorithm uses a single persistent random variable across the lifetime of each key in the cache and maintains unbiased estimates of the key aggregates that can be queried at any point in the stream. Concerning statistical properties, we prove that PBA provides unbiased estimates of the true aggregates. We analyze the computational complexity of PBA and its variants and provide a detailed evaluation of its accuracy on synthetic and trace data. In addition to sampling, this study also considers placing classification rules from various network functions into switches. While much work has focused on compressing the rules, most of it proposes solutions operating in the memory of a single switch. Instead, this study proposes a collaborative approach encompassing switches and network functions. This architecture enables a trade-off between the usage of (expensive) switch memory and (cheaper) downstream network bandwidth and network-function resources. Our system reduces memory usage significantly compared to strawman approaches, as demonstrated with extensive simulations and a prototype evaluation with real traffic traces and rules.
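
    As a rough illustration of the per-key persistent random variable behind PBA, the Python sketch below implements classic priority sampling over key aggregates: each key draws one fixed Uniform(0, 1] value (here derived by hashing the key, as a stand-in for a stored variable), the m keys with the largest priorities total/u are kept, and each kept key's total is estimated by max(total, threshold). This is an offline simplification of the underlying idea, not the single-pass PBA algorithm; all names are assumptions.

    import hashlib
    from collections import defaultdict

    def persistent_u(key):
        """A fixed Uniform(0, 1] value per key (stand-in for a stored
        per-key random variable)."""
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return (h % (2**53) + 1) / 2**53

    def priority_sample(stream, m):
        """stream: iterable of (key, weight) items. Keep m keys and return
        estimates of each kept key's total weight; the max(total, z)
        estimator is unbiased for the true totals (priority sampling)."""
        totals = defaultdict(float)
        for key, w in stream:                # exact per-key aggregation (offline)
            totals[key] += w
        prio = {k: totals[k] / persistent_u(k) for k in totals}
        ranked = sorted(prio, key=prio.get, reverse=True)
        kept, rest = ranked[:m], ranked[m:]
        z = prio[rest[0]] if rest else 0.0   # threshold: (m+1)-st largest priority
        return {k: max(totals[k], z) for k in kept}

    stream = [("a", 5.0), ("b", 1.0), ("a", 2.0), ("c", 0.5), ("d", 3.0)]
    print(priority_sample(stream, m=2))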