11 research outputs found

    On the Complexity of Classification Functions

    A classification function is a multiple-valued input function specified by a set of rules, where each rule is a conjunction of range functions. The function is useful for packet classification on the Internet, in network intrusion detection systems, etc. This paper considers the complexity of range functions and classification functions represented by sum-of-products expressions of binary variables. It gives tighter upper bounds on the number of products for range functions.
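
    A minimal sketch (ours, not the paper's) of the classic range-to-prefix decomposition behind such bounds: a range [lo, hi] on a w-bit field splits into binary prefixes, and each prefix is one product term in a sum-of-products expression of the field's binary variables.

```python
def range_to_prefixes(lo: int, hi: int, w: int) -> list[str]:
    """Split [lo, hi] into maximal aligned blocks, i.e. binary prefixes."""
    prefixes = []
    while lo <= hi:
        size = lo & -lo if lo > 0 else 1 << w  # largest block aligned at lo
        while size > hi - lo + 1:              # shrink until it fits the range
            size //= 2
        bits = size.bit_length() - 1           # number of wildcarded bits
        prefixes.append(format(lo, f"0{w}b")[:w - bits] + "*" * bits)
        lo += size
    return prefixes

# The range [1, 14] on a 4-bit field needs 6 prefixes, the 2w - 2 worst case:
print(range_to_prefixes(1, 14, 4))
# ['0001', '001*', '01**', '10**', '110*', '1110']
```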

    A High-Speed Range-Matching TCAM for Storage-Efficient Packet Classification

    A critical issue in the use of TCAMs for packet classification is how to efficiently represent rules with ranges, known as range matching. This paper presents a range-matching ternary content addressable memory (RM-TCAM) built around a highly functional range-matching cell (RMC). By offering various range operators, the RM-TCAM reduces the storage expansion ratio from 4.21 to 1.01 compared with conventional TCAMs on real-world packet classification rule sets, which in turn reduces power consumption and die area. A new pre-discharging match-line scheme realizes high-speed searching in a dynamic match-line structure, and an additional charge-recycling driver further reduces the power consumption of the search lines. In simulation, a 256 × 64-bit range-matching TCAM implemented in 0.13-µm CMOS technology achieves a 1.99-ns search time with an energy efficiency of 1.26 fJ/bit/search. Whereas a TCAM using a range encoding approach requires an additional SRAM or DRAM, the RM-TCAM improves storage efficiency without any extra components while also reducing die area.
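
    A hedged sketch (ours, not the paper's circuit) of why ranges inflate conventional TCAMs: a ternary entry can only test masked equality, so one range rule expands into several prefix entries, while a range-matching cell compares bounds directly and needs a single entry.

```python
def ternary_match(key: int, value: int, mask: int) -> bool:
    """Conventional TCAM cell: masked-equality test only."""
    return (key & mask) == (value & mask)

def range_match(key: int, lo: int, hi: int) -> bool:
    """Range-matching cell: one entry covers the whole range."""
    return lo <= key <= hi

# The 16-bit port range [1024, 65535] needs six (value, mask) entries ...
entries = [(0b0000010000000000, 0b1111110000000000),
           (0b0000100000000000, 0b1111100000000000),
           (0b0001000000000000, 0b1111000000000000),
           (0b0010000000000000, 0b1110000000000000),
           (0b0100000000000000, 0b1100000000000000),
           (0b1000000000000000, 0b1000000000000000)]
key = 2048
assert any(ternary_match(key, v, m) for v, m in entries)
# ... but a single range entry in an RMC-style cell:
assert range_match(key, 1024, 65535)
```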

    Double Mask: An efficient rule encoding for Software Defined Networking

    Packet filtering is widely used in networking appliances and applications, in particular to block malicious traffic (protecting network infrastructures through firewalls and intrusion detection systems) and to perform packet classification on routers, switches, and load balancers. The mechanism relies on the packet's header fields to filter traffic using range rules over IP addresses or ports. However, packet filters have to handle a growing number of connected nodes, many of which are compromised and used as sources of attacks. For instance, IP filter sets in blacklists may reach several million entries and may require large memory space for storage in filtering appliances. In this paper, we propose a new method based on a double mask IP prefix representation, together with a linear transformation algorithm, to build a minimized set of range rules. This representation makes the network more secure, reliable, and easier to maintain and configure. We formally define the double mask representation over range rules. We show empirically that the proposed method achieves an average compression ratio of 11% on real-life blacklists and up to 74% on synthetic range rule sets. Finally, we evaluate the performance of our double mask representation through an OpenFlow-based implementation on an SDN testbed with real hardware. Our results show that our technique significantly reduces matching time in the controller when compression ratios exceed 15%, leading to faster response times and a good balance between matching time and memory space in the switch.
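
    A minimal sketch of the double-mask idea as we read it (the paper's exact encoding may differ): a rule pairs an including prefix with an excluded sub-prefix, so one entry denotes their set difference and can cover ranges that plain prefixes would split into several pieces.

```python
from ipaddress import ip_address, ip_network

def double_mask_match(addr: str, include: str, exclude: str | None) -> bool:
    """Match addr against an including prefix minus an excluded sub-prefix."""
    a = ip_address(addr)
    if a not in ip_network(include):
        return False
    return exclude is None or a not in ip_network(exclude)

# 10.0.0.0/24 minus its last /26: one double-mask entry instead of the two
# plain prefixes 10.0.0.0/25 and 10.0.0.128/26.
assert double_mask_match("10.0.0.5", "10.0.0.0/24", "10.0.0.192/26")
assert not double_mask_match("10.0.0.200", "10.0.0.0/24", "10.0.0.192/26")
```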

    Minimizing Range Rules for Packet Filtering Using Double Mask Representation

    Packet filtering is widely used in networking appliances and applications, in particular to block malicious traffic (protecting network infrastructures through firewalls and intrusion detection systems) and to perform packet classification on routers, switches, and load balancers. The mechanism relies on the packet's header fields to filter traffic using range rules over IP addresses or ports. However, packet filters have to handle a growing number of connected nodes, many of which are compromised and used as sources of attacks. For instance, IP filter sets in blacklists may reach several million entries and may require large memory space for storage in filtering appliances. In this paper, we propose a new method based on a double mask IP prefix representation, together with a linear transformation algorithm, to build a minimized set of range rules. We formally define the double mask representation over range rules and prove that the number of required masks for any range is at most 2w − 4, where w is the length of the field. This representation makes the network more secure, reliable, and easier to maintain and configure. We show empirically that the proposed method achieves an average compression ratio of 11% on real-life blacklists and up to 74% on synthetic range rule sets. Finally, we add support for double masks to a real SDN network.
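
    To put the 2w − 4 bound in perspective: for a 16-bit port field it guarantees at most 2 × 16 − 4 = 28 masks per range, and for a 32-bit IPv4 field at most 60, in both cases below the classic worst case of 2w − 2 prefixes (30 and 62, respectively) produced by standard range-to-prefix expansion.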

    Parallelization of a software based intrusion detection system - Snort

    Computer networks are already ubiquitous in people's lives and work, and network security is becoming a critical concern. A simple firewall, which can only inspect the bottom four OSI layers, cannot satisfy all security requirements. An intrusion detection system (IDS) with deep packet inspection, which can filter all seven OSI layers, is becoming necessary for more and more networks. However, the processing throughput of IDSs lags far behind current network speeds. One response has been to implement IDSs on different hardware platforms, such as Field-Programmable Gate Arrays (FPGAs) or special-purpose network processors; however, these options are either less flexible or more expensive to deploy. This research explores implementing a parallelized IDS in a general computing environment based on Snort, currently the most popular open-source IDS. The thesis analyzes several candidate methods for parallelizing the pattern-matching engine on a multicore computer; owing to the small granularity of network packets, however, Snort's pattern-matching engine proves unsuitable for parallelization. Instead, a pipelined structure for Snort has been implemented and analyzed. The universal packet capture API, libpcap, has been modified with a new feature that captures a packet directly into an external buffer. With this change, the pipelined Snort achieves a performance improvement of up to 60% on an Intel i7 multicore computer for jumbo frames. The primary limitation is memory bandwidth; with higher bandwidth, the performance of the parallelization could be improved further.
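
    A toy model (not the thesis's code) of the pipelined structure described above: one stage captures packets into a shared buffer, another runs detection, so capture and matching overlap in time. In the real system the stages are native threads on separate cores and the buffer is the external buffer added to libpcap; Python threads merely illustrate the shape.

```python
import queue
import threading

buf: "queue.Queue[bytes | None]" = queue.Queue(maxsize=1024)

def capture(packets: list[bytes]) -> None:
    for pkt in packets:        # stands in for the modified libpcap loop
        buf.put(pkt)           # write directly into the external buffer
    buf.put(None)              # end-of-stream marker

def detect() -> None:
    while (pkt := buf.get()) is not None:
        _ = b"attack" in pkt   # stands in for the pattern-matching engine

t1 = threading.Thread(target=capture, args=([b"data", b"attack!"],))
t2 = threading.Thread(target=detect)
t1.start(); t2.start()
t1.join(); t2.join()
```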

    A CONTENT-ADDRESSABLE-MEMORY ASSISTED INTRUSION PREVENTION EXPERT SYSTEM FOR GIGABIT NETWORKS

    Cyber intrusions have become a serious problem of growing frequency and complexity. Current Intrusion Detection/Prevention Systems (IDS/IPS) are deficient in speed and/or accuracy. Expert systems are one functionally effective IDS/IPS approach; however, they are in general computationally intensive and too slow for real-time requirements, and this poor performance prohibits their application in gigabit networks. This dissertation describes a novel intrusion prevention expert system architecture that utilizes the parallel search capability of Content Addressable Memory (CAM) to perform intrusion detection at gigabit-per-second wire speed. A CAM is a parallel search memory that compares all of its entries against input data simultaneously; this parallel search is much faster than the serial search operation of Random Access Memory (RAM). The major contribution of this thesis is to accelerate the expert system's performance bottleneck, the "match" process, using the parallel search power of a CAM, thereby enabling expert systems for wire-speed network IDS/IPS applications. To map an expert system's match process onto a CAM, this research introduces a novel "Contextual Rule" (C-Rule) method that fundamentally changes the expert system's computational structure without changing its functionality for the IDS/IPS problem domain. The method combines expert system rules and current network states into a new type of dynamic rule that exists only under specific network state conditions, converting the conventional two-database match process into a one-database search process. It thereby enables the core functionality of the expert system to be mapped onto a CAM and to take advantage of its search parallelism. The thesis also introduces the CAM-Assisted Intrusion Prevention Expert System (CAIPES) architecture and shows how it can support the vast majority of the rules in the 1999 Lincoln Laboratory DARPA Intrusion Detection Evaluation data set, as well as rules of the open-source IDS Snort. Supported rules can detect single-packet attacks, abusive traffic and packet flooding attacks, sequences-of-packets attacks, and flooding-of-sequences attacks. Prototyping and simulation demonstrate the detection capability for these four types of attacks, and hardware simulation of an existing CAM shows that the CAIPES architecture enables gigabit/s IDS/IPS.
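
    A hedged sketch of the C-Rule idea (the signature names, states, and actions below are invented for illustration): folding network state into the search key turns the classic two-database match into one associative lookup, the operation a CAM performs over all entries in parallel.

```python
c_rules = {
    # (signature id, connection state) -> action
    ("exploit_sig_17", "ESTABLISHED"): "drop",
    ("exploit_sig_17", "NEW"): "alert",
}

def lookup(signature_id: str, conn_state: str) -> str:
    # In hardware, every C-Rule entry is compared in one CAM search cycle;
    # the dictionary lookup stands in for that single search step.
    return c_rules.get((signature_id, conn_state), "pass")

print(lookup("exploit_sig_17", "NEW"))  # alert
```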

    Design and Evaluation of Packet Classification Systems, Doctoral Dissertation, December 2006

    Although many algorithms and architectures have been proposed, the design of efficient packet classification systems remains a challenging problem. The diversity of filter specifications, the scale of filter sets, and the throughput requirements of high-speed networks all contribute to the difficulty. We need to review the algorithms from a high-level point of view in order to advance the study; this level of understanding can lead to significant performance improvements. In this dissertation, we evaluate several existing algorithms and present several new ones. Previous evaluation results for existing algorithms are not convincing because they were not produced in a consistent way; to resolve this issue, an objective evaluation platform is needed. We implement and evaluate several representative algorithms under uniform criteria, and publish both the source code and the evaluation results on a website to give the research community a benchmark for impartial and thorough algorithm evaluations. We propose several new algorithms for different variations of the packet classification problem: (1) the Shape Shifting Trie algorithm for longest prefix matching, used in IP lookups or as a building block for general packet classification algorithms; (2) the Fast Hash Table lookup algorithm for exact flow matching; (3) a longest prefix matching algorithm using hash tables and tries, used in IP lookups or packet classification algorithms; (4) the 2D coarse-grained tuple-space search algorithm with controlled filter expansion, used for two-dimensional packet classification or as a building block for general packet classification algorithms; and (5) the Adaptive Binary Cutting algorithm for general multi-dimensional packet classification. In addition to these algorithmic solutions, we also consider the TCAM hardware solution; in particular, we address the TCAM filter update problem for general packet classification and provide an efficient algorithm. Building upon previous work, these algorithms significantly improve the performance of packet classification systems and set a solid foundation for further study.
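
    A minimal sketch (ours) of the baseline behind item (1): longest prefix matching over a plain binary trie, the structure that schemes such as the Shape Shifting Trie compress.

```python
class TrieNode:
    def __init__(self) -> None:
        self.child: list["TrieNode | None"] = [None, None]
        self.nexthop: str | None = None     # set if a prefix ends here

def insert(root: TrieNode, prefix: str, nexthop: str) -> None:
    node = root
    for b in prefix:                        # prefix as a bit string, e.g. "10"
        i = int(b)
        if node.child[i] is None:
            node.child[i] = TrieNode()
        node = node.child[i]
    node.nexthop = nexthop

def lookup(root: TrieNode, addr_bits: str) -> str | None:
    node, best = root, None
    for b in addr_bits:                     # walk the address bit by bit,
        if node.nexthop is not None:        # remembering the last prefix seen
            best = node.nexthop
        nxt = node.child[int(b)]
        if nxt is None:
            return best
        node = nxt
    return node.nexthop or best

root = TrieNode()
insert(root, "10", "A")
insert(root, "1011", "B")
print(lookup(root, "10110000"))             # 'B': the longer prefix wins
```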

    Efficient Algorithms for Rule Management in Software-Defined Networks

    In software-defined networks (SDN), the filtering requirements of critical applications often vary with flow changes and security policies. SDN addresses this with a flexible software abstraction that allows a network policy to be modified and deployed simultaneously and conveniently on flow-based switches. With the growing number of entries in rule sets and the volume of data traversing the network every second, it remains crucial to minimize the number of entries and to accelerate the lookup process. At the same time, attacks on the Internet have reached a high level and keep increasing, which inflates the size of blacklists and the number of rules in firewalls; the limited storage capacity of these devices requires efficient management of that space. In the first part of this thesis, our primary goal is to find a simple representation of filtering rules that enables more compact rule tables, and is thus easier to manage, while keeping their semantics unchanged. The construction of the rules should also be achievable with reasonably efficient algorithms. This new representation adds flexibility and efficiency to the deployment of security policies, since the generated rules are easier to manage. A complementary approach to rule compression is to use multiple smaller switch tables to enforce access-control policies in the network; however, most existing schemes entail significant rule replication, or even modify packet headers to prevent a packet from matching a rule at the next switch. The second part of this thesis introduces new techniques to decompose and distribute filtering rule sets over a given network topology. We also introduce an update strategy to handle changes in network policy and topology. In addition, we exploit the structure of series-parallel graphs to efficiently solve the rule placement problem for networks of all sizes in tractable time.
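
    A toy sketch (ours, not the thesis's algorithm) of the decomposition idea: deny rules are spread over the switches of a path so that the path as a whole enforces the policy while no single table stores the full rule set.

```python
from ipaddress import ip_address, ip_network

path = ["s1", "s2"]                        # switches on the flow's path
deny_rules = ["10.0.0.0/25", "10.0.0.128/25", "192.168.1.0/24"]

# Round-robin placement keeps each per-switch table small.
tables: dict[str, list[str]] = {s: [] for s in path}
for i, rule in enumerate(deny_rules):
    tables[path[i % len(path)]].append(rule)

def path_denies(addr: str) -> bool:
    """The policy holds if some hop on the path drops the packet."""
    return any(ip_address(addr) in ip_network(r)
               for rules in tables.values() for r in rules)

print(tables)                      # {'s1': [...], 's2': [...]}
print(path_denies("192.168.1.7"))  # True: dropped at s1
```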