
    Partitioning Regular Polygons into Circular Pieces I: Convex Partitions

    We explore an instance of the question of partitioning a polygon into pieces, each of which is as "circular" as possible, in the sense of having an aspect ratio close to 1. The aspect ratio of a polygon is the ratio of the diameter of its smallest circumscribing circle to the diameter of its largest inscribed disk. The problem is rich even for partitioning regular polygons into convex pieces, the focus of this paper. We show that the optimal (most circular) partition for an equilateral triangle has an infinite number of pieces, with the lower bound approachable to any desired accuracy by a particular finite partition. For pentagons and all regular k-gons with k > 5, the unpartitioned polygon is already optimal. The square presents an interesting intermediate case: the one-piece partition is not optimal, but neither is the trivial lower bound approachable. We narrow the optimal ratio to an aspect-ratio gap of 0.01082 with several somewhat intricate partitions. (21 pages, 25 figures.)
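    The aspect ratio of a regular k-gon itself has a simple closed form (circumradius over inradius), which makes the abstract's trichotomy easy to check numerically. A minimal Python sketch, not taken from the paper:

```python
# Aspect ratio (circumscribing-circle diameter over inscribed-disk
# diameter) of a regular k-gon. For a regular k-gon this reduces to
# circumradius / inradius = 1 / cos(pi / k). Illustrative only.
import math

def aspect_ratio_regular_kgon(k: int) -> float:
    return 1.0 / math.cos(math.pi / k)

for k in [3, 4, 5, 6, 12]:
    print(f"k={k:2d}  aspect ratio = {aspect_ratio_regular_kgon(k):.5f}")
# k= 3 -> 2.00000 (equilateral triangle)
# k= 4 -> 1.41421 (square)
# k= 5 -> 1.23607 (pentagon: already optimal, per the abstract)
```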

    The Design and Implementation of IPv6 Flow Classification Algorithms for High-Speed IPv6 Switches (II)

    Project number: NSC91-2219-E032-001. Research period: August 2002 to July 2003. Research funding: 815,000. Sponsor: National Science Council, Executive Yuan.

    Packet Classification Algorithms: From Theory to Practice

    During the past decade, the packet classification problem has been widely studied to accelerate network applications such as access control, traffic engineering, and intrusion detection. In our research, we found that although a great number of packet classification algorithms have been proposed in recent years, most of them stall at the mathematical-analysis or software-simulation stage, and few have been implemented in commercial products as a generic solution. To fill the gap between theory and practice, in this paper we propose a novel packet classification algorithm named HyperSplit. Compared to the well-known HiCuts and HSM algorithms, HyperSplit achieves superior performance in terms of classification speed, memory usage, and preprocessing time. The practicability of the proposed algorithm is demonstrated by two findings in our tests: HyperSplit is the only algorithm that successfully handles all the rule sets, and it is also the only algorithm that reaches more than 6 Gbps throughput on the Octeon3860 multi-core platform when tested with 64-byte Ethernet packets against 10K ACL rules. Keywords: algorithm; classification; multi-core; performance.
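    The abstract does not detail HyperSplit's construction. The following is a hedged one-dimensional sketch of the general binary space-splitting idea (a threshold per node, with rules straddling the threshold duplicated into both subtrees); it is not the authors' implementation, and the median-of-endpoints split stands in for their more refined cost heuristic:

```python
# One-dimensional sketch of binary space splitting over rule ranges.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rule:
    lo: int
    hi: int        # inclusive range [lo, hi] on a single header field
    priority: int  # lower value = higher priority

@dataclass
class Node:
    split: Optional[int] = None          # threshold for internal nodes
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    rules: Optional[List[Rule]] = None   # leaf payload

def build(rules: List[Rule], leaf_size: int = 2) -> Node:
    if len(rules) <= leaf_size:
        return Node(rules=sorted(rules, key=lambda r: r.priority))
    # Candidate split points are range endpoints; take the median.
    points = sorted({r.lo for r in rules} | {r.hi + 1 for r in rules})
    split = points[len(points) // 2]
    left = [r for r in rules if r.lo < split]    # may match values < split
    right = [r for r in rules if r.hi >= split]  # may match values >= split
    if len(left) == len(rules) or len(right) == len(rules):
        # Split makes no progress on one side; fall back to a leaf.
        return Node(rules=sorted(rules, key=lambda r: r.priority))
    return Node(split=split, left=build(left, leaf_size),
                right=build(right, leaf_size))

def classify(node: Node, value: int) -> Optional[Rule]:
    while node.rules is None:
        node = node.left if value < node.split else node.right
    for r in node.rules:                 # rules are sorted by priority
        if r.lo <= value <= r.hi:
            return r
    return None

tree = build([Rule(0, 1000, 2), Rule(50, 60, 0), Rule(200, 300, 1)])
print(classify(tree, 55))  # -> Rule(lo=50, hi=60, priority=0)
```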

    Hardware support for real-time network security and packet classification using field programmable gate arrays

    Deep packet inspection and packet classification are the most computationally expensive operations in a Network Intrusion Detection (NID) system. Deep packet inspection involves content matching, where the payload of incoming packets is matched against a set of signatures in a database. Packet classification involves inspection of the packet header fields and is essentially a multi-dimensional matching problem. Any matching in software is very slow compared to current network speeds, and both problems need a solution that is scalable and works at high speed. Due to the high complexity of these matching problems, only Field-Programmable Gate Array (FPGA) or Application-Specific Integrated Circuit (ASIC) platforms can support efficient designs. Two novel FPGA-based NID solutions were developed and implemented that not only carry out pattern matching at high speed but also allow changes to the set of stored patterns without resource/hardware reconfiguration; the solutions can also easily be adopted by software or ASIC approaches. In both solutions, the proposed NID system can run while pattern updates occur. The designs operate at 2.4 Gbps line rates, with a memory consumption of around 17 bits per character and a logic-cell usage of around 0.05 logic cells per character, the smallest of any existing FPGA-based solution. In addition to these pattern-matching solutions, a novel packet classification algorithm was developed and implemented on an FPGA. The method matches two fields at a time and then combines the constituent results to identify longer matches involving more header fields. The design achieves a throughput above 9.72 Gbps and consumes around 256 Kbytes of on-chip memory when dealing with more than 10,000 rules (without using external RAM), the lowest memory consumption among previously proposed FPGA-based designs for packet classification.
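    As an illustration of the two-fields-at-a-time idea, here is a simplified software sketch (not the thesis' FPGA design): each field pair is matched independently to yield a bitmap of candidate rules, and the bitmaps are ANDed to find rules matching on all fields. Exact-match fields stand in for the prefixes and ranges a real classifier would use:

```python
# Two-field-at-a-time matching with rule bitmaps (illustrative).
rules = [
    # (src, dst, sport, dport); the rule id is the list index.
    ("10.0.0.1", "10.0.0.2", 80, 443),
    ("10.0.0.1", "10.0.0.9", 22, 22),
    ("10.0.0.5", "10.0.0.2", 80, 443),
]

def build_pair_table(field_a: int, field_b: int) -> dict:
    table = {}
    for rid, rule in enumerate(rules):
        key = (rule[field_a], rule[field_b])
        table[key] = table.get(key, 0) | (1 << rid)  # set bit rid
    return table

addr_table = build_pair_table(0, 1)   # (src, dst) pairs
port_table = build_pair_table(2, 3)   # (sport, dport) pairs

def classify(src, dst, sport, dport):
    # Combine the two constituent matches with a bitwise AND.
    bitmap = addr_table.get((src, dst), 0) & port_table.get((sport, dport), 0)
    if bitmap == 0:
        return None
    return (bitmap & -bitmap).bit_length() - 1  # lowest set bit = best rule

print(classify("10.0.0.1", "10.0.0.2", 80, 443))  # -> 0
```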

    Design and Evaluation of Packet Classification Systems, Doctoral Dissertation, December 2006

    Although many algorithms and architectures have been proposed, the design of efficient packet classification systems remains a challenging problem. The diversity of filter specifications, the scale of filter sets, and the throughput requirements of high-speed networks all contribute to the difficulty. We need to review the algorithms from a high-level point of view in order to advance the study; this level of understanding can lead to significant performance improvements. In this dissertation, we evaluate several existing algorithms and present several new algorithms as well. Previous evaluation results for existing algorithms are not convincing because the evaluations were not done in a consistent way; to resolve this issue, an objective evaluation platform needs to be developed. We implement and evaluate several representative algorithms with uniform criteria. The source code and the evaluation results are both published on a website to provide the research community with a benchmark for impartial and thorough algorithm evaluations. We propose several new algorithms to deal with different variations of the packet classification problem: (1) the Shape Shifting Trie algorithm for longest prefix matching, used in IP lookups or as a building block for general packet classification algorithms; (2) the Fast Hash Table lookup algorithm, used for exact flow matching; (3) a longest prefix matching algorithm using hash tables and tries, used in IP lookups or packet classification algorithms; (4) the 2D coarse-grained tuple-space search algorithm with controlled filter expansion, used for two-dimensional packet classification or as a building block for general packet classification algorithms; (5) the Adaptive Binary Cutting algorithm, used for general multi-dimensional packet classification. In addition to these algorithmic solutions, we also consider the TCAM hardware solution; in particular, we address the TCAM filter update problem for general packet classification and provide an efficient algorithm. Building upon previous work, these algorithms significantly improve the performance of packet classification systems and set a solid foundation for further study.
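    As background for item (1), a minimal binary-trie longest prefix matcher — the classical building block that the Shape Shifting Trie compresses — might look like this (illustrative only, not the dissertation's data structure):

```python
# Longest prefix matching with a plain binary trie (illustrative).
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None   # set if a prefix ends at this node

root = TrieNode()

def insert(prefix_bits: str, next_hop: str) -> None:
    node = root
    for b in prefix_bits:
        i = int(b)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.next_hop = next_hop

def lookup(addr_bits: str):
    node, best = root, None
    for b in addr_bits:
        if node.next_hop is not None:
            best = node.next_hop          # remember the longest match so far
        node = node.children[int(b)]
        if node is None:
            break
    else:
        if node.next_hop is not None:
            best = node.next_hop
    return best

insert("10", "A")        # prefix 10*
insert("1011", "B")      # prefix 1011*
print(lookup("101100"))  # -> "B" (longest match wins)
print(lookup("100000"))  # -> "A"
```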

    Range Searching and Point Location among Fat Objects

    We present a data structure that can store a set of disjoint fat objects in d-space such that point location and bounded-size range searching with arbitrarily-shaped ranges can be performed efficiently. The structure can deal with either arbitrary (fat) convex objects or non-convex polytopes. The multi-purpose data structure supports point location and range searching queries in time O(log^{d-1} n) and requires O(n log^{d-1} n) storage, after O(n log^{d-1} n log log n) preprocessing. The data structure and query algorithm are rather simple.

    1 Introduction. Fatness turns out to be an interesting phenomenon in computational geometry. Several papers present surprising combinatorial complexity reductions [3, 15, 22, 26, 32] and efficiency gains for algorithms [1, 4, 19, 28, 33] if the objects under consideration have a certain fatness. Fat objects are compact to some extent, rather than long and thin. Fatness is a realistic assumption, since in many practical instances of …

    AcceCuts: A Packet Classification Algorithm Designed to Handle the New Paradigms of Software-Defined Networks

    Packet classification is a crucial preliminary step to any processing within network routers and switches. Many contributions exist in the literature, whether purely algorithmic or carried through to an implementation. Nevertheless, the context they target does not reflect the networking field's turn toward Software Defined Networking (SDN). The flexibility introduced by SDN profoundly changes the packet classification landscape: algorithms must now support a very large number of complex rules. This work focuses on packet classification algorithms in the SDN context. The goal is to accelerate the packet classification step and to propose a classification algorithm capable of offering state-of-the-art performance in the SDN context while still performing acceptably in a classical context. To this end, EffiCuts, one of the best-performing algorithms in a classical context, is evaluated in an SDN context. Based on this analysis, three optimizations are proposed: the Adaptive Grouping Factor, which adapts the algorithm to the characteristics of the classification table in use; Leaf Size Modulation, which determines the optimal leaf size in the SDN context; and a new heuristic for computing the number of cuts performed at each node, which reduces the number of cuts. These three optimizations yield a substantial performance improvement over EffiCuts. Nevertheless, much irrelevant data is still read. This problem, inherent to certain decision-tree algorithms (more precisely, HiCuts and its descendants), adds a significant number of superfluous memory accesses per tree, which becomes all the more critical in the SDN context, where many clock cycles are wasted. A new algorithm, named AcceCuts, is therefore proposed to address all of the identified problems. It retains the preceding optimizations and adds a preprocessing step at the leaf level that eliminates irrelevant rules, together with a major modification of the leaf structure and of the decision-tree traversal technique.
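    To make the decision-tree setting concrete, the sketch below shows the HiCuts-style cutting that EffiCuts and AcceCuts build on: each node cuts one dimension into equal-width intervals, and leaves hold few rules that are scanned linearly. This is an illustrative two-field Python reduction; AcceCuts' adaptive grouping, cut heuristic, and leaf preprocessing are not reproduced here:

```python
# HiCuts-style decision tree over two header fields (illustrative).
from typing import List, Tuple

Rule = Tuple[Tuple[int, int], Tuple[int, int], int]  # (x-range, y-range, priority)

def build(rules: List[Rule], box, leaf_size=2, depth=0, max_depth=8):
    (x0, x1), (y0, y1) = box
    if len(rules) <= leaf_size or depth >= max_depth:
        return ("leaf", sorted(rules, key=lambda r: r[2]))
    dim = 0 if (x1 - x0) >= (y1 - y0) else 1   # cut the wider dimension
    lo, hi = box[dim]
    mid = (lo + hi) // 2
    cuts = [(lo, mid), (mid + 1, hi)]          # two equal-width cuts
    children = []
    for c in cuts:
        sub_box = (c, box[1]) if dim == 0 else (box[0], c)
        # A rule goes into every cut its range overlaps (duplication).
        sub = [r for r in rules if r[dim][0] <= c[1] and r[dim][1] >= c[0]]
        children.append(build(sub, sub_box, leaf_size, depth + 1, max_depth))
    return ("node", dim, mid, children)

def classify(tree, x, y):
    while tree[0] == "node":
        _, dim, mid, children = tree
        v = x if dim == 0 else y
        tree = children[0] if v <= mid else children[1]
    for r in tree[1]:                          # priority-ordered leaf scan
        if r[0][0] <= x <= r[0][1] and r[1][0] <= y <= r[1][1]:
            return r
    return None

rules = [((0, 99), (0, 255), 1), ((50, 50), (80, 80), 0)]
tree = build(rules, ((0, 255), (0, 255)))
print(classify(tree, 50, 80))  # -> ((50, 50), (80, 80), 0)
```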