
    Packet Inspection on Programmable Hardware

    In network security systems, one of the open issues is how to perform fast inspection of every incoming and outgoing packet. In this paper, we design a packet inspection system on programmable hardware, specifically a Field Programmable Gate Array (FPGA). The proposed system consists of two parts: the first scans packets at very high speed, and the second verifies the results produced by the first. Based on the contents of each incoming packet, the first part reduces the number of strings that must be matched against that packet and then feeds the packet to a verifier in the second part, which performs accurate string matching. We also propose a novel multi-threading finite state machine that improves the clock frequency and allows multiple packets to be examined by a single state machine simultaneously. Design techniques for high-speed interconnect and interface circuits are also presented. Our experimental results show that system performance depends on the string matching algorithm, the FPGA design, and the number of strings to be matched. Keywords: packet inspection, string matching, Field Programmable Gate Array, traffic classification
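
    The two-stage scan-and-verify idea described above can be illustrated with a small software sketch (it is not the paper's FPGA design): a cheap first pass narrows the candidate patterns at each packet position, and a second pass performs exact matching on the survivors. The pattern set and the two-byte prefilter index below are illustrative assumptions.

        # Sketch of a two-stage packet inspector: a coarse first pass narrows the
        # candidate patterns, a second pass verifies them with exact matching.
        # The pattern set and the 2-byte prefilter index are illustrative assumptions.

        PATTERNS = [b"/etc/passwd", b"cmd.exe", b"SELECT * FROM"]

        # Stage 1 index: first two bytes of each pattern -> patterns sharing that prefix.
        prefilter = {}
        for p in PATTERNS:
            prefilter.setdefault(p[:2], []).append(p)

        def inspect(packet: bytes) -> list[tuple[int, bytes]]:
            """Return (offset, pattern) pairs for every verified match in the packet."""
            matches = []
            for i in range(len(packet) - 1):
                # Stage 1: cheap lookup that discards most positions immediately.
                candidates = prefilter.get(packet[i:i + 2])
                if not candidates:
                    continue
                # Stage 2: exact verification of the surviving candidates.
                for p in candidates:
                    if packet.startswith(p, i):
                        matches.append((i, p))
            return matches

        print(inspect(b"GET /etc/passwd HTTP/1.1"))   # [(4, b'/etc/passwd')]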

    Scalable NIDS via Negative Pattern Matching and Exclusive Pattern Matching

    In this paper, we identify the unique challenges in deploying parallelism on TCAM-based pattern matching for Network Intrusion Detection Systems (NIDSes). We resolve two critical issues when designing scalable parallelism specifically for pattern matching modules: 1) how to enable fine-grained parallelism in pursuit of effective load balancing and desirable speedup simultaneously; and 2) how to reconcile the tension between parallel processing speedup and prohibitive TCAM power consumption. To this end, we first propose the novel concept of Negative Pattern Matching to partition flows, by which the number of TCAM lookups can be significantly reduced, and the resulting (fine-grained) flow segments can be inspected in parallel without incurring false negatives. Then we propose the notion of Exclusive Pattern Matching to divide the entire pattern set into multiple subsets which can later be matched against selectively and independently without affecting the correctness. We show that Exclusive Pattern Matching enables the adoption of smaller and faster TCAM blocks and improves both the pattern matching speed and scalability. Finally, our theoretical and experimental results validate that the above two concepts are inherently complementary, enabling our integrated scheme to provide performance gain in any scenario (with either clean or dirty traffic).
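
    A rough software illustration of the Exclusive Pattern Matching notion, under assumed details: the pattern set is partitioned into disjoint subsets, and for a given flow segment only the subsets that could possibly match are consulted, so each subset could sit in a smaller, independent TCAM block. The partition key used here (a pattern's first byte) is an assumption for illustration, not the paper's construction.

        # Hedged sketch of exclusive pattern matching: partition the pattern set into
        # disjoint subsets so that, for a given flow segment, only some subsets need
        # to be consulted at all. The first-byte partition key is an illustrative
        # assumption, not the paper's construction.

        PATTERNS = [b"root", b"admin", b"passwd", b"exec", b"union select"]

        # Build disjoint subsets keyed by the first byte of each pattern.
        subsets = {}
        for p in PATTERNS:
            subsets.setdefault(p[0], []).append(p)

        def match_segment(segment: bytes) -> list[bytes]:
            """Match only the subsets whose key byte occurs in the segment.

            Correctness is preserved: a pattern whose first byte never appears in
            the segment cannot match it. In hardware, each selected subset could
            live in a smaller, independent TCAM block and be searched in parallel.
            """
            present = set(segment)
            hits = []
            for key, patterns in subsets.items():
                if key not in present:          # whole subset skipped
                    continue
                for p in patterns:
                    if p in segment:
                        hits.append(p)
            return hits

        print(match_segment(b"GET /index.html?user=admin"))   # [b'admin']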

    Techniques for efficient regular expression matching across hardware architectures

    Regular expression matching is a central task for many networking and bioinformatics applications. For example, network intrusion detection systems, which perform deep packet inspection to detect malicious network activities, often encode signatures of malicious traffic through regular expressions. Similarly, several bioinformatics applications perform regular expression matching to find common patterns, called motifs, across multiple gene or protein sequences. Hardware implementations of regular expression matching engines fall into two categories: memory-based and logic-based solutions. In both cases, the design aims to maximize the processing throughput and minimize the resource requirements, either in terms of memory or of logic cells. Graphics Processing Units (GPUs) offer a highly parallel platform for memory-based implementations, while Field Programmable Gate Arrays (FPGAs) support reconfigurable, logic-based solutions. In addition, Micron Technology has recently announced its Automata Processor, a memory-based, reprogrammable hardware device. From an algorithmic standpoint, regular expression matching engines are based on finite automata, either in their non-deterministic or in their deterministic form (NFA and DFA, respectively). Micron's Automata Processor is based on a proprietary Automata Network, which extends classical NFA with counters and boolean elements. In this work, we aim to implement highly parallel memory-based and logic-based regular expression matching solutions. Our contributions are summarized as follows. First, we implemented regular expression matching on GPU. In this process, we explored compression techniques and regular expression clustering algorithms to alleviate the memory pressure of DFA-based GPU implementations. Second, we developed a parser for Automata Networks defined through Micron's Automata Network Markup Language (ANML), an XML-based high-level language designed to program the Automata Processor. Specifically, our ANML parser first maps the Automata Networks to an
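
    As a reference point for the memory-based approach, the sketch below walks a table-driven DFA one input byte at a time, which is the access pattern a GPU or Automata Processor implementation amortizes across many input streams. The toy automaton (accepting inputs that contain the substring "ab") is an illustrative assumption; real engines compile thousands of signatures and compress the resulting transition tables.

        # Minimal sketch of table-driven (memory-based) matching: the regular
        # expression is compiled offline into a DFA whose transitions are then
        # followed one input byte at a time. The tiny DFA below, recognizing inputs
        # that contain the substring "ab", is an illustrative assumption.

        # States: 0 = nothing seen, 1 = saw 'a', 2 = saw "ab" (accepting, absorbing).
        ACCEPTING = {2}

        def delta(state: int, byte: int) -> int:
            """Transition function; a memory-based engine would read this from a table."""
            if state == 2:
                return 2
            if byte == ord("a"):
                return 1
            if state == 1 and byte == ord("b"):
                return 2
            return 0

        def matches(data: bytes) -> bool:
            state = 0
            for byte in data:          # one table lookup per input byte
                state = delta(state, byte)
            return state in ACCEPTING

        print(matches(b"xxabyy"), matches(b"xaxbx"))   # True False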

    A MEMORY EFFICIENT HARDWARE BASED PATTERN MATCHING AND PROTEIN ALIGNMENT SCHEMES FOR HIGHLY COMPLEX DATABASES

    Protein sequence alignment, used to find correlations between different species or to identify genetic mutations, is the most computationally intensive task in protein comparison. To speed up the alignment, Systolic Arrays (SAs) have been used. To avoid the internal-loop problem, which reduces performance, a pipeline interleaving strategy is presented. This strategy is applied to an SA for the Smith-Waterman (SW) algorithm, which locally aligns two protein sequences. In the proposed system, this methodology is extended to implement a memory-efficient, FPGA-based Network Intrusion Detection System (NIDS) that speeds up network processing. Pattern matching in the Intrusion Detection System (IDS) is performed using SNORT to detect intrusion patterns. A Finite State Machine (FSM) based Processing Element (PE) unit that achieves a minimal number of states for pattern matching, together with bit-wise early intrusion detection that increases throughput through pipelining, is presented.
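
    The Smith-Waterman recurrence that the systolic array parallelizes can be sketched directly; each cell of the dynamic-programming matrix corresponds to the work of one processing element. The scoring scheme below (match +2, mismatch -1, linear gap -1) is a simplifying assumption; protein alignment in practice uses substitution matrices such as BLOSUM.

        # Minimal Smith-Waterman sketch: the dynamic-programming recurrence that a
        # systolic array parallelizes, one processing element per matrix cell along
        # an anti-diagonal. The scoring scheme (match +2, mismatch -1, linear gap -1)
        # is an illustrative assumption.

        def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-1) -> int:
            """Return the best local alignment score between sequences a and b."""
            rows, cols = len(a) + 1, len(b) + 1
            H = [[0] * cols for _ in range(rows)]
            best = 0
            for i in range(1, rows):
                for j in range(1, cols):
                    diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                    best = max(best, H[i][j])
            return best

        print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))   # best local alignment score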

    MULTI-GIGABIT PATTERN FOR DATA IN NETWORK SECURITY

    Network security has become a critical concern. Matching large sets of patterns against an incoming stream of data is a fundamental task in several fields such as network security and computational biology. High-speed network intrusion detection systems (IDS) rely on efficient pattern matching techniques to analyze the packet payload and make decisions on the significance of the packet body. However, matching the streaming payload bytes against thousands of patterns at multi-gigabit rates is computationally intensive. Various techniques have been proposed in the past, but their performance degrades at multi-gigabit rates. Pattern matching is a significant issue in intrusion detection systems, but by no means the only one. Handling multi-content rules, reordering, and reassembling incoming packets are also significant for system performance. We present two pattern matching techniques to compare incoming packets against intrusion detection search patterns. The first approach, decoded partial CAM (DpCAM), pre-decodes incoming characters, aligns the decoded data, and performs a logical AND on them to produce the match signal for each pattern. The second approach, perfect hashing memory (PHmem), uses perfect hashing to determine a unique memory location that contains the search pattern, and a comparison between the incoming data and the memory output to determine the match. The suggested methods have been implemented in VHDL and synthesized with Xilinx tools.
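
    The PHmem approach lends itself to a short sketch: a hash of the incoming bytes addresses exactly one memory location holding a candidate pattern, and a single comparison decides the match. In the sketch a Python dict keyed by a fixed-length prefix stands in for the perfect hash function and the pattern memory; the pattern set and prefix length are assumptions for illustration.

        # Sketch of the PHmem idea: a hash of the incoming bytes selects a single
        # memory location holding one candidate pattern, and a final comparison
        # against that pattern decides the match. A dict keyed by a fixed-length
        # prefix stands in for the perfect hash function and the pattern memory.

        PREFIX_LEN = 4
        PATTERNS = [b"cmd.exe", b"/etc/shadow", b"union all"]

        # "Pattern memory": prefix -> the single stored pattern it addresses.
        memory = {p[:PREFIX_LEN]: p for p in PATTERNS}

        def scan(payload: bytes) -> list[tuple[int, bytes]]:
            """Slide over the payload; one memory lookup plus one comparison per offset."""
            hits = []
            for i in range(len(payload) - PREFIX_LEN + 1):
                candidate = memory.get(payload[i:i + PREFIX_LEN])   # hash -> one location
                if candidate is not None and payload.startswith(candidate, i):
                    hits.append((i, candidate))                     # comparison confirms
            return hits

        print(scan(b"POST /run?cmd.exe&x=1"))   # [(10, b'cmd.exe')]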

    Algorithms and Architectures for Network Search Processors

    The continuous growth in the Internet's size, the amount of data traffic, and the complexity of processing this traffic gives rise to new challenges in building high-performance network devices. One of the most fundamental tasks performed by these devices is searching the network data for predefined keys. Address lookup, packet classification, and deep packet inspection are some of the operations which involve table lookups and searching. These operations are typically part of the packet forwarding mechanism, and can create a performance bottleneck. Therefore, fast and resource-efficient algorithms are required. One of the most commonly used techniques for such searching operations is the Ternary Content Addressable Memory (TCAM). While TCAM can offer very fast search speeds, it is costly and consumes a large amount of power. Hence, designing cost-effective, power-efficient, and high-speed search techniques has received a great deal of attention in the research and industrial community. In this thesis, we propose a generic search technique based on Bloom filters. A Bloom filter is a randomized data structure used to represent a set of bit-strings compactly and support set membership queries. We demonstrate techniques to convert the search process into table lookups. The resulting table data structures are kept in the off-chip memory and their Bloom filter representations are kept in the on-chip memory. An item needs to be looked up in the off-chip table only when it is found in the on-chip Bloom filters. By filtering the off-chip memory accesses in this fashion, the search operations can be significantly accelerated. Our approach involves a unique combination of algorithmic and architectural techniques that outperform some of the current techniques in terms of cost-effectiveness, speed, and power-efficiency.
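
    The on-chip/off-chip filtering idea can be illustrated with a minimal Bloom filter sketch: the compact filter answers "might this key be present?", and the slow off-chip table is consulted only on a positive answer, so a false positive costs an extra lookup but never an incorrect result. The bit-array size, hash count, and use of a general-purpose hash function are assumptions of the sketch, not the thesis' hardware design.

        # Sketch of Bloom-filter-based lookup filtering: a compact on-chip filter
        # answers "might this key be in the table?" and the off-chip table is
        # consulted only on a positive answer. Bit-array size, hash count, and the
        # use of hashlib are illustrative assumptions.

        import hashlib

        class BloomFilter:
            def __init__(self, num_bits: int = 1 << 16, num_hashes: int = 4):
                self.num_bits = num_bits
                self.num_hashes = num_hashes
                self.bits = bytearray(num_bits // 8)

            def _positions(self, key: bytes):
                # Derive num_hashes bit positions from salted SHA-256 digests.
                for i in range(self.num_hashes):
                    digest = hashlib.sha256(bytes([i]) + key).digest()
                    yield int.from_bytes(digest[:8], "big") % self.num_bits

            def add(self, key: bytes):
                for pos in self._positions(key):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def might_contain(self, key: bytes) -> bool:
                return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

        # Off-chip table (here just a dict) is touched only when the filter says "maybe".
        off_chip_table = {b"10.0.0.0/8": "drop", b"192.168.1.0/24": "forward"}
        bloom = BloomFilter()
        for key in off_chip_table:
            bloom.add(key)

        def lookup(key: bytes):
            if not bloom.might_contain(key):   # common case: no off-chip access at all
                return None
            return off_chip_table.get(key)     # rare false positives still resolve correctly

        print(lookup(b"10.0.0.0/8"), lookup(b"8.8.8.0/24"))   # drop None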