5 research outputs found

    Design of High Performance Packet Classification Architecture for Communication Networks

    Get PDF
    Packet classification is a crucial technique for secure communication and networking. Security tools and Internet services rely on packet classification, which checks incoming packets against predefined rules stored in a classifier. The available software solutions do not deliver the performance required for wire-speed processing in high-speed networks. Ternary Content Addressable Memory (TCAM), Bit Vector (BV), Field-Split Bit Vector (FSBV) and StrideBV are hardware-based packet classification approaches. In this paper, a simple and memory-efficient approach to packet classification, called XnorBV, is proposed; it uses XNOR gates instead of lookup tables. The Internet Protocol (IP) address and protocol fields of the packet header are matched with XNOR gates against a predefined ruleset that supports the ternary bit values '1', '0' and '*', while the port-number fields support range match by comparing each port number against a lower and an upper bound. The proposed parallel pipelined architecture sustains a throughput above 100 Gbps with low latency. The proposed method is more memory efficient than existing techniques and supports prefix, range and exact match without range-to-prefix conversion. The proposed XnorBV architecture is also independent of ruleset features and supports multi-dimensional classification.
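    The following is a minimal software sketch, under stated assumptions, of the matching logic this abstract describes: XNOR-style ternary matching of the IP-address and protocol fields against '1'/'0'/'*' patterns, plus lower/upper-bound range matching for port numbers. The function and rule names are illustrative and do not come from the paper; the FPGA design evaluates this logic in parallel hardware rather than sequentially.

        def ternary_match(field_bits: int, pattern: str) -> bool:
            """Match a header field against a ternary pattern of '1', '0', '*'.

            In hardware each non-'*' bit is compared with an XNOR gate; the field
            matches when every unmasked XNOR output is 1 (the bits agree)."""
            width = len(pattern)
            for i, p in enumerate(pattern):
                if p == '*':                      # don't-care bit: always matches
                    continue
                bit = (field_bits >> (width - 1 - i)) & 1
                if bit != int(p):                 # XNOR(bit, p) == 0 -> mismatch
                    return False
            return True

        def classify(src_ip, dst_ip, proto, src_port, dst_port, rule):
            """rule = (src_pat, dst_pat, proto_pat, (sp_lo, sp_hi), (dp_lo, dp_hi))"""
            src_pat, dst_pat, proto_pat, sp_rng, dp_rng = rule
            return (ternary_match(src_ip, src_pat) and
                    ternary_match(dst_ip, dst_pat) and
                    ternary_match(proto, proto_pat) and
                    sp_rng[0] <= src_port <= sp_rng[1] and   # port range match,
                    dp_rng[0] <= dst_port <= dp_rng[1])      # no range-to-prefix conversion

        # Hypothetical rule: TCP from 192.168.0.0/24 to any host, dst port 80-443
        rule = ('110000001010100000000000********',  # source IP pattern (32 bits)
                '*' * 32,                            # destination IP: any
                '00000110',                          # protocol: TCP (6)
                (0, 65535), (80, 443))
        print(classify(0xC0A80007, 0x08080808, 6, 51000, 443, rule))  # True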

    A Proposal for an FPGA Implementation of Packet Classification in an OpenFlow 1.3 Switch

    Get PDF
    Abstract—OpenFlow is a key concept for realizing software-defined networking (SDN). It provides network flexibility by separating the control plane from the data plane, and the data plane is implemented by the OpenFlow switch. We discuss how to implement an OpenFlow switch on an FPGA (Field-Programmable Gate Array). FPGA implementations of the OpenFlow switch have been studied before, but their architectures cover only 5-tuple fields or OpenFlow 1.0. Because OpenFlow is updated rapidly, a new architecture is needed for newer versions of the specification. In this study, we propose how to implement packet classification for an OpenFlow 1.3 switch on an FPGA. Our idea is to use a packet classification algorithm that combines the StrideBV algorithm with a range bit-vector encoding algorithm, applied to the 13 header fields that OpenFlow 1.3 implementations are required to support.
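    A rough, hypothetical model of the StrideBV part of such a classifier: each header field is split into fixed-width strides, every stride indexes a small table that yields a bit vector of compatible rules, and the per-stride vectors are ANDed together. The range bit-vector encoding for port-like fields and the per-stride FPGA pipelining are not modelled here; the stride width, table layout and 64-rule limit are assumptions for illustration.

        STRIDE = 4  # bits consumed per stride (assumed)

        def build_stride_tables(patterns, width):
            """For each stride position, precompute a bit vector (one bit per rule)
            of rules whose ternary pattern is compatible with each stride value."""
            n_strides = width // STRIDE
            tables = [[0] * (1 << STRIDE) for _ in range(n_strides)]
            for s in range(n_strides):
                for value in range(1 << STRIDE):
                    bv = 0
                    for r, pat in enumerate(patterns):
                        chunk = pat[s * STRIDE:(s + 1) * STRIDE]
                        ok = all(p == '*' or int(p) == ((value >> (STRIDE - 1 - i)) & 1)
                                 for i, p in enumerate(chunk))
                        if ok:
                            bv |= 1 << r
                    tables[s][value] = bv
            return tables

        def stride_lookup(tables, field_bits, width):
            """AND the per-stride bit vectors; set bits mark rules this field matches.
            In a pipelined FPGA design each stride would be one pipeline stage."""
            bv = (1 << 64) - 1  # assume at most 64 rules for this sketch
            for s in range(width // STRIDE):
                shift = width - (s + 1) * STRIDE
                value = (field_bits >> shift) & ((1 << STRIDE) - 1)
                bv &= tables[s][value]
            return bv

        # Usage: one bit vector per header field (13 for OpenFlow 1.3) would be
        # ANDed together; the highest-priority set bit identifies the flow entry.
        rules = ['1100********', '********0011', '************']
        tables = build_stride_tables(rules, width=12)
        print(bin(stride_lookup(tables, 0b110000000011, width=12)))  # 0b111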

    FPGA-based High Throughput Regular Expression Pattern Matching for Network Intrusion Detection Systems

    Get PDF
    Network speeds and bandwidths have improved over time, but the frequency of network attacks and illegal accesses has grown along with them. Such attacks can compromise the privacy and confidentiality of network resources on even the most secure networks. Software solutions running on general-purpose processors have become inadequate for detecting attacks at current network speeds. Hardware-based platforms are designed to cope with network speeds measured in several gigabits per second (Gbps) and can detect several attacks at once; a good candidate is the Field-Programmable Gate Array (FPGA), a hardware platform that can perform deep packet inspection of packet contents at high speed. This thesis therefore focuses on designs implemented with FPGAs. All the FPGA-based designs studied in this thesis attempt to sustain steady growth in throughput and throughput efficiency, where throughput efficiency is defined as the concurrent throughput of a regular expression matching engine circuit divided by the average number of lookup tables (LUTs) used per state of the engine's automata. The implemented FPGA-based design is built on the concept of equivalence classification, which reduces the overall size of the input tables that drive the Nondeterministic Finite Automaton (NFA) matching engines. Compared with other approaches, the design sustains a throughput of up to 11.48 Gbps, reduces the number of pattern matching engines required by up to 75%, and reduces the overall memory required by about 90% when synthesised on the target FPGA platform.
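    A short sketch of the input equivalence classification idea, under the assumption that NFA transitions are labelled with sets of bytes: bytes that belong to exactly the same character sets receive the same class ID, so the shared input table driving the matching engines needs one column per class instead of one per byte. The names are illustrative and not taken from the thesis.

        def equivalence_classes(char_sets):
            """Map each of the 256 input bytes to a class ID such that two bytes
            get the same ID iff they are members of exactly the same character
            sets. The NFA transition table can then be indexed by class ID,
            shrinking it from 256 columns to one column per class."""
            signature_to_class = {}
            byte_to_class = [0] * 256
            for b in range(256):
                sig = tuple(b in cs for cs in char_sets)   # membership signature
                if sig not in signature_to_class:
                    signature_to_class[sig] = len(signature_to_class)
                byte_to_class[b] = signature_to_class[sig]
            return byte_to_class

        # Example: the patterns /[a-z]+/ and /[0-9]{3}/ induce only three classes
        # (lower-case letters, digits, everything else), so the shared input table
        # needs 3 columns instead of 256.
        classes = equivalence_classes([set(range(ord('a'), ord('z') + 1)),
                                       set(range(ord('0'), ord('9') + 1))])
        assert classes[ord('a')] == classes[ord('m')]
        assert classes[ord('0')] != classes[ord('a')]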