
    High performance modified bit-vector based packet classification module on low-cost FPGA

    Packet classification plays a significant role in many network systems: incoming packets must be categorized into different flows and acted upon according to functional and application requirements. As network speeds keep increasing, so does the demand on the packet classifier, and its complexity grows further because multiple header fields must be matched against a large number of rules. In this manuscript, an efficient, high-performance modified bit-vector (MBV) based packet classification (PC) module is designed and implemented on a low-cost Artix-7 FPGA. The proposed MBV-based PC employs a pipelined architecture, which offers low latency and high throughput. The MBV-based PC utilizes less than 2% of the slices, operates at 493.102 MHz, and consumes 0.1 W of total power on the Artix-7 FPGA. The proposed PC takes only 4 clock cycles to classify an incoming packet and provides 74.95 Gbps throughput. Comparative results against existing PC approaches are analyzed in terms of hardware utilization and performance efficiency, showing improvements across these constraints.
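
    The underlying bit-vector idea is easiest to see in software: each header field is looked up independently to obtain a bit-vector of candidate rules, the per-field vectors are ANDed together, and the lowest-index set bit identifies the highest-priority matching rule. The Python sketch below illustrates only that classic scheme; it is not the authors' MBV hardware design, and the rules and field names are invented for illustration.

        # Minimal sketch of bit-vector packet classification (illustrative rules).
        RULES = [
            # (src_prefix, dst_prefix, proto) -- '*' is a wildcard
            ("10.0.", "*",    6),     # rule 0: TCP from 10.0.x.x
            ("10.0.", "192.", "*"),   # rule 1: 10.0.x.x -> 192.x.x.x
            ("*",     "*",    17),    # rule 2: any UDP
        ]

        def field_vector(field_idx, value):
            """Bit-vector of rules whose field_idx-th constraint matches value."""
            vec = 0
            for i, rule in enumerate(RULES):
                cond = rule[field_idx]
                if cond == "*" or cond == value or \
                   (isinstance(cond, str) and str(value).startswith(cond)):
                    vec |= 1 << i
            return vec

        def classify(src, dst, proto):
            vec = field_vector(0, src) & field_vector(1, dst) & field_vector(2, proto)
            if vec == 0:
                return None                          # no rule matches
            return (vec & -vec).bit_length() - 1     # lowest set bit = best rule

        print(classify("10.0.1.5", "192.168.0.9", 6))   # -> 0
        print(classify("172.16.0.1", "8.8.8.8", 17))    # -> 2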

    Multi-engine packet classification hardware accelerator

    As line rates increase, designing high-performance, low-power architectures for processing router traffic remains important. In this paper, we present a multi-engine packet classification hardware accelerator that gives increased performance and reduced power consumption. It follows the basic idea of decision-tree based packet classification algorithms such as HiCuts and HyperCuts, in which the hyperspace represented by the ruleset is recursively divided into smaller subspaces according to heuristics. Each classification engine consists of a Trie Traverser, which is responsible for finding the leaf node corresponding to the incoming packet, and a Leaf Node Searcher, which reports the matching rule in that leaf node. The packet classification engine exploits the ultra-wide memory words provided by FPGA block RAM to store the decision-tree data structure, in order to reduce the number of memory accesses needed per classification. Since the clock rate of an individual engine cannot keep up with that of the internal memory, multiple classification engines are used to increase throughput. Implementations on two different FPGAs show that this architecture can reach a search speed of 169 million packets per second (Mpps) with synthesized ACL, FW, and IPC rulesets. Further analysis reveals that, compared to state-of-the-art TCAM solutions, power savings of up to 72% and a throughput increase of up to 27% can be achieved.
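
    The two-stage engine is easy to mirror in software: internal decision-tree nodes cut one field's range into equal-sized intervals (the role of the Trie Traverser), and leaves hold the few rules overlapping their subspace, searched linearly (the Leaf Node Searcher). The Python sketch below only conveys that HiCuts/HyperCuts-style lookup; the node layout and rules are hypothetical and unrelated to the FPGA implementation.

        # Illustrative HiCuts/HyperCuts-style lookup: traverse to a leaf, then
        # linearly scan the leaf's rules. The tree and rules are toy examples.
        class Node:
            def __init__(self, field=None, lo=0, hi=0, cuts=1, children=None, rules=None):
                self.field = field            # header field this node cuts on
                self.lo, self.hi = lo, hi     # range of that field covered by the node
                self.cuts = cuts              # number of equal-sized intervals
                self.children = children or []
                self.rules = rules or []      # (priority, {field: (lo, hi)}) at leaves

        def classify(node, packet):
            while node.children:              # stage 1: "Trie Traverser"
                width = (node.hi - node.lo + 1) // node.cuts
                node = node.children[(packet[node.field] - node.lo) // width]
            for prio, rule in node.rules:     # stage 2: "Leaf Node Searcher"
                if all(lo <= packet[f] <= hi for f, (lo, hi) in rule.items()):
                    return prio
            return None

        r_web = (0, {"dport": (80, 80),   "proto": (6, 6)})     # HTTP over TCP
        r_any = (9, {"dport": (0, 65535), "proto": (0, 255)})   # catch-all
        leaf_web = Node(rules=[r_web, r_any])   # both rules overlap this subspace
        leaf_any = Node(rules=[r_any])
        root = Node(field="dport", lo=0, hi=65535, cuts=4,
                    children=[leaf_web, leaf_any, leaf_any, leaf_any])

        print(classify(root, {"dport": 80, "proto": 6}))      # -> 0 (HTTP rule)
        print(classify(root, {"dport": 53000, "proto": 17}))  # -> 9 (catch-all)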

    A Tunnel-aware Language for Network Packet Filtering

    Abstract—While the number of possible protocol encapsulations in computer networks grows day after day, network administrators face ever-increasing difficulty in accurately selecting the traffic they need to inspect. This is mainly caused by the limited number of encapsulations supported by currently available tools and by the difficulty of specifying exactly which packets have to be analyzed, especially in the presence of tunneled traffic. This paper presents a novel packet processing language that, besides Boolean filtering predicates, introduces special constructs for handling the more complex situations of tunneled and stacked encapsulations, giving the user finer control over the semantics of a filtering expression. Even though this language is principally focused on packet filters, it is designed to support other advanced packet processing mechanisms such as traffic classification and field extraction.
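
    The key point is that a filter over tunneled traffic has to be evaluated against the whole encapsulation stack rather than only the outermost headers. The small Python sketch below conveys that idea with an invented predicate form; it does not reproduce the paper's actual language or syntax.

        # A packet is modeled as a stack of protocol headers; a tunnel-aware
        # predicate such as "ip carried inside gre" is checked against the
        # whole encapsulation sequence. The predicate form is invented here.
        def matches(stack, inner, outer=None):
            """True if `inner` appears in the stack, optionally below an
            enclosing `outer` protocol (i.e. tunneled inside it)."""
            for depth, proto in enumerate(stack):
                if proto == inner and (outer is None or outer in stack[:depth]):
                    return True
            return False

        plain    = ["eth", "ip", "tcp"]
        tunneled = ["eth", "ip", "gre", "ip", "tcp"]

        print(matches(plain, "ip"))            # True  -- ordinary "ip" filter
        print(matches(plain, "ip", "gre"))     # False -- no GRE tunnel present
        print(matches(tunneled, "ip", "gre"))  # True  -- inner IP inside GRE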

    Matching model of flow table for networked big data

    Networking for big data has to be intelligent, because it must adjust data transmission requirements adaptively during data splitting and merging. Software-defined networking (SDN) provides a workable and practical paradigm for designing more efficient and flexible networks. The matching strategy in the flow table of SDN switches is crucial. In this paper, we use a classification approach to analyze the structure of packets based on the tuple-space lookup mechanism, and propose a matching model for the flow table in SDN switches, called F-OpenFlow, that classifies packets based on a set of fields. The experimental results show that the proposed F-OpenFlow effectively improves the utilization rate and matching efficiency of the flow table in SDN switches for networked big data.
    Comment: 14 pages, 6 figures, 2 tables
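
    A rough picture of the tuple-space lookup mechanism the paper starts from: rules are grouped by the tuple of fields they actually specify, each group becomes one exact-match hash table, and a lookup probes one table per tuple, keeping the highest-priority hit. The Python sketch below shows only this generic mechanism with invented field names and rules; it does not model the proposed F-OpenFlow itself.

        # Generic tuple-space lookup (not the F-OpenFlow design itself).
        from collections import defaultdict

        tables = defaultdict(dict)   # tuple of field names -> {field values: (prio, action)}

        def insert(rule_fields, priority, action):
            key = tuple(sorted(rule_fields))
            tables[key][tuple(rule_fields[f] for f in key)] = (priority, action)

        def lookup(packet):
            best = None
            for key, table in tables.items():              # one probe per tuple
                hit = table.get(tuple(packet.get(f) for f in key))
                if hit and (best is None or hit[0] > best[0]):
                    best = hit                             # keep the highest priority
            return best[1] if best else "miss -> controller"

        insert({"ip_dst": "10.0.0.1", "tcp_dst": 80}, priority=10, action="output:2")
        insert({"ip_dst": "10.0.0.1"},                priority=1,  action="output:3")

        print(lookup({"ip_dst": "10.0.0.1", "tcp_dst": 80}))  # output:2
        print(lookup({"ip_dst": "10.0.0.1", "tcp_dst": 22}))  # output:3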

    Comparative Evaluation of Packet Classification Algorithms for Implementation on Resource Constrained Systems

    This paper provides a comparative evaluation of a number of known classification algorithms that have been considered for both software and hardware implementation. Unlike other comparisons, this one is carried out on implementations based on the same principles and design choices. Performance measurements are obtained by feeding the implemented classifiers with various traffic traces in the same test scenario. The comparison also takes into account the implementation feasibility of the considered algorithms on resource-constrained systems (e.g., embedded processors on special-purpose network platforms). In particular, the comparison focuses on achieving a good compromise between performance, memory usage, flexibility, and code portability to different target platforms.

    Dataplane Specialization for High-performance OpenFlow Software Switching

    OpenFlow is an amazingly expressive dataplane programming language, but this expressiveness comes at a severe performance price, as switches must do excessive packet classification in the fast path. The prevalent OpenFlow software switch architecture is therefore built on flow caching, but this imposes intricate limitations on the workloads that can be supported efficiently and may even open the door to malicious cache overflow attacks. In this paper we argue that, instead of enforcing the same universal flow cache semantics for all OpenFlow applications and optimizing for the common case, a switch should rather automatically specialize its dataplane piecemeal with respect to the configured workload. We introduce ESwitch, a novel switch architecture that uses on-the-fly template-based code generation to compile any OpenFlow pipeline into efficient machine code, which can then be readily used as the fast path. We present a proof-of-concept prototype and demonstrate on illustrative use cases that ESwitch yields a simpler architecture, superior packet processing speed, improved latency and CPU scalability, and predictable performance. Our prototype can easily scale beyond 100 Gbps on a single Intel blade even with complex OpenFlow pipelines.
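
    The flavor of this specialization can be shown in a few lines: instead of a generic classifier that re-interprets the flow table for every packet, generate source code tailored to the configured rules and compile it once. The Python sketch below only mimics the idea with generated Python code; the actual ESwitch templates emit machine code, and the flow table here is invented.

        # Toy template-based specialization: generate a straight-line match
        # function for exactly the configured flow table (illustrative rules).
        FLOW_TABLE = [
            ({"eth_type": 0x0800, "ip_proto": 6,  "tcp_dst": 80}, "output:1"),
            ({"eth_type": 0x0800, "ip_proto": 17},                "output:2"),
        ]

        def specialize(flow_table):
            lines = ["def fast_path(pkt):"]
            for fields, action in flow_table:
                cond = " and ".join(f"pkt.get({f!r}) == {v!r}" for f, v in fields.items())
                lines.append(f"    if {cond}: return {action!r}")
            lines.append("    return 'controller'   # table miss")
            namespace = {}
            exec(compile("\n".join(lines), "<generated>", "exec"), namespace)
            return namespace["fast_path"]

        fast_path = specialize(FLOW_TABLE)
        print(fast_path({"eth_type": 0x0800, "ip_proto": 6, "tcp_dst": 80}))  # output:1
        print(fast_path({"eth_type": 0x0800, "ip_proto": 1}))                 # controller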