
    Hardware acceleration for power efficient deep packet inspection

    The rapid growth of the Internet has led to a massive spread of malicious attacks such as viruses and malware, making the safety of online activity a major concern. The use of Network Intrusion Detection Systems (NIDS) is an effective method to safeguard the Internet. One key procedure in NIDS is Deep Packet Inspection (DPI). DPI examines the contents of a packet and takes actions on it based on predefined rules. In this thesis, DPI is mainly discussed in the context of security applications, although it can also be used for bandwidth management and network surveillance. Because DPI inspects the whole packet payload, and because of the complexity of the inspection rules, DPI algorithms consume significant amounts of resources, including time, memory and energy. The aim of this thesis is to design hardware-accelerated methods for memory- and energy-efficient high-speed DPI. The patterns in packet payloads, especially complex patterns, can be efficiently represented by regular expressions, which can be translated into Deterministic Finite Automata (DFA). DFA algorithms are fast but consume very large amounts of memory for certain kinds of regular expressions. In this thesis, memory-efficient algorithms are proposed based on transition compression of the DFAs. In this work, Bloom filters are used to implement DPI on an FPGA for hardware acceleration, with the design of a parallel architecture. Furthermore, aiming at a balance of power and performance, an energy-efficient adaptive Bloom filter is designed with the capability of adjusting the number of active hash functions according to the current workload. In addition, a method is given for implementation on both two-stage and multi-stage platforms. Nevertheless, the false positive rate still prevents the Bloom filter from extensive utilization; a cache-based counting Bloom filter is therefore presented in this work to eliminate false positives for fast and precise matching. Finally, as future work, models will be built for routers and DPI in order to estimate the effect of power savings and to analyze the latency impact of adapting the operating frequency dynamically to current traffic. In addition, a low-power DPI system will be designed with one or multiple DPI engines. Results and evaluation of the low-power DPI model and system will be produced in future work.
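    The adaptive Bloom filter described above can be pictured with a minimal sketch: insertions always use the full set of hash functions, while lookups may activate fewer of them under light load, trading a higher false positive rate for less work per payload byte. The class name, bit-array size and double-hashing scheme below are illustrative assumptions, not the thesis design.

```python
# Minimal sketch of a Bloom filter whose number of active hash functions can
# be lowered under light load, loosely following the adaptive idea above.
import hashlib

class AdaptiveBloomFilter:
    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.size = size_bits
        self.max_hashes = num_hashes
        self.active_hashes = num_hashes        # reduced to save energy
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item, k):
        # Derive k bit positions from one digest (double-hashing style).
        digest = hashlib.sha256(item).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.size for i in range(k)]

    def add(self, item: bytes):
        # Insertion always uses every hash function, so a later query with
        # fewer active hashes can never miss; it can only over-report.
        for pos in self._positions(item, self.max_hashes):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        # Querying with fewer active hashes trades accuracy for effort.
        for pos in self._positions(item, self.active_hashes):
            if not (self.bits[pos // 8] >> (pos % 8)) & 1:
                return False
        return True

bf = AdaptiveBloomFilter()
bf.add(b"evil-signature")
bf.active_hashes = 2                           # e.g. lowered under light load
print(bf.might_contain(b"evil-signature"))     # True
print(bf.might_contain(b"benign-payload"))     # almost certainly False
```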

    Distance learning plan for the Defense Finance and Accounting Service (DFAS): a study for the Defense Business Management University (DBMU)

    This thesis analyzes the requirements and design considerations of a video teletraining (VTT) delivery system for 25 Defense Finance and Accounting Service (DFAS) centers located throughout the continental United States. Current DFAS VTT capabilities are reviewed and included. The study's sponsor, the Defense Business Management University (DBMU), has been tasked by the DoD Comptroller to implement a training program for these centers. The DBMU has identified VTT as an extremely cost-effective option for training personnel at these 25 DFAS satellite activities. The study focuses on current VTT technologies, both in industry and in the DoD. Basic VTT concepts are presented, evolving VTT standards are discussed, existing DoD VTT infrastructures are outlined, and problem areas such as system interoperability are explored. The study presents recommendations for an immediate DFAS VTT implementation plan using available DoD one-way/two-way satellite and/or two-way/two-way terrestrial distance education capabilities. This thesis also presents a recommendation for integration of a long-term VTT network broadcast system, including a single-site program origination studio. http://archive.org/details/distancelearning1094543029 U.S. Naval Reserve (USNR) author; U.S. Navy (USN) author. Approved for public release; distribution is unlimited.

    Fast Packet Processing on High Performance Architectures

    The rapid growth of the Internet and the fast emergence of new network applications have brought great challenges and complex issues in deploying high-speed, QoS-guaranteed IP networks. For this reason, packet classification and network intrusion detection have assumed a key role in modern communication networks in order to provide QoS and security. In this thesis we describe a number of the most advanced solutions to these tasks. We introduce NetFPGA and Network Processors as reference platforms both for the design and the implementation of the solutions and algorithms described in this thesis. The rise in link capacity reduces the time available to network devices for packet processing. For this reason, we show different solutions which, either by heuristics and randomization or by smart construction of state machines, allow IP lookup, packet classification and deep packet inspection to be performed fast in real devices based on high-speed platforms such as NetFPGA or Network Processors.
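    Of the three tasks named above, IP lookup is the easiest to sketch in plain software. The binary-trie longest-prefix match below is only a pure-software illustration of what the per-packet lookup step must compute; the thesis itself targets heuristic, randomized and automaton-based designs on NetFPGA and Network Processors, which this sketch does not reproduce, and the prefixes and next hops are hypothetical.

```python
# Illustrative longest-prefix match over a binary trie (IPv4 addresses as
# 32-bit integers).
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None

def insert(root, prefix, length, next_hop):
    node = root
    for i in range(length):
        bit = (prefix >> (31 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.next_hop = next_hop

def lookup(root, addr):
    node, best = root, None
    for i in range(32):
        if node.next_hop is not None:
            best = node.next_hop           # remember the longest match so far
        node = node.children[(addr >> (31 - i)) & 1]
        if node is None:
            return best
    return node.next_hop or best

root = TrieNode()
insert(root, 0x0A000000, 8, "if0")         # 10.0.0.0/8
insert(root, 0x0A010000, 16, "if1")        # 10.1.0.0/16
print(lookup(root, 0x0A010203))            # 'if1': the /16 wins over the /8
```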

    FPGA-based High Throughput Regular Expression Pattern Matching for Network Intrusion Detection Systems

    Network speeds and bandwidths have improved over time. However, the frequency of network attacks and illegal accesses has also increased along with them. Such attacks are capable of compromising the privacy and confidentiality of network resources belonging to even the most secure networks. Currently, software solutions based on general-purpose processors for detecting network attacks have become inadequate for coping with current network speeds. Hardware-based platforms are designed to cope with rising network speeds measured in several gigabits per second (Gbps). Such hardware-based platforms are capable of detecting several attacks at once, and a good candidate is the Field-Programmable Gate Array (FPGA). The FPGA is a hardware platform that can be used to perform deep packet inspection of network packet contents at high speed. As such, this thesis focused on studying designs implemented with Field-Programmable Gate Arrays (FPGAs). Furthermore, all the FPGA-based designs studied in this thesis have attempted to sustain a steady growth in throughput and throughput efficiency. Throughput efficiency is defined as the concurrent throughput of a regular expression matching engine circuit divided by the average number of look-up tables (LUTs) utilised by each state of the engine's automata. The implemented FPGA-based design was built upon the concept of equivalence classification. The concept helped to reduce the overall table size of the inputs needed to drive the various Nondeterministic Finite Automata (NFA) matching engines. Compared with other approaches, the design sustained a throughput of up to 11.48 Gbps, and recorded an overall reduction in the number of pattern matching engines required of up to 75%. Also, the overall memory required by the design was reduced by about 90% when synthesised on the target FPGA platform.
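    The equivalence-classification step can be pictured with a small software sketch: input bytes that every pattern treats identically are collapsed into one class, so the tables feeding the matching engines are indexed by a class identifier rather than by all 256 byte values. The grouping criterion used below (a per-byte signature of occurrence positions in each pattern) and the sample rule fragments are illustrative assumptions, not the exact scheme of the thesis.

```python
# Illustrative input-symbol equivalence classification over a byte alphabet.
def build_equivalence_classes(patterns):
    # Signature of a byte: for each pattern, the positions where it occurs.
    # Bytes with identical signatures behave identically and share a class.
    signatures = {}
    for b in range(256):
        sig = tuple(
            tuple(i for i, c in enumerate(p) if c == b) for p in patterns
        )
        signatures.setdefault(sig, []).append(b)

    class_of = [0] * 256
    for class_id, members in enumerate(signatures.values()):
        for byte in members:
            class_of[byte] = class_id
    return class_of, len(signatures)

patterns = [b"GET /", b"POST /", b"/etc/passwd"]   # hypothetical rule fragments
class_of, num_classes = build_equivalence_classes(patterns)
print(num_classes)                                 # far fewer classes than 256 symbols
print(class_of[ord("G")] == class_of[ord("Z")])    # False: 'Z' falls in the "don't care" class
```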

    String Matching Problems with Parallel Approaches: An Evaluation for the Most Recent Studies

    In recent years string matching has played a functional role in many applications such as information retrieval, gene analysis, pattern recognition, linguistics and bioinformatics. To understand the functional requirements of string matching algorithms, we surveyed recent parallel string matching approaches that address current trends. In this paper we focus primarily on recent developments in parallel string matching, the central ideas of the algorithms and their complexities. We present the performance of the different algorithms and their effectiveness. Finally, this analysis should help researchers to develop better techniques.
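    Many of the surveyed parallel approaches share one basic data-parallel pattern: split the text into chunks that overlap by the pattern length minus one so no boundary match is lost, search each chunk independently, and merge the results. The sketch below only illustrates that chunking pattern; the chunk-search routine, worker count and deduplication step are illustrative assumptions rather than any specific surveyed algorithm.

```python
# Data-parallel exact matching with overlapping chunks.
from concurrent.futures import ProcessPoolExecutor

def find_in_chunk(args):
    text, pattern, offset = args
    hits, start = [], 0
    while True:
        i = text.find(pattern, start)
        if i < 0:
            return hits
        hits.append(offset + i)            # report positions in global coordinates
        start = i + 1

def parallel_find(text, pattern, workers=4):
    n, m = len(text), len(pattern)
    chunk = (n + workers - 1) // workers
    jobs = []
    for w in range(workers):
        lo = w * chunk
        hi = min(n, lo + chunk + m - 1)    # overlap so boundary matches survive
        jobs.append((text[lo:hi], pattern, lo))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(find_in_chunk, jobs)
    return sorted({i for hits in results for i in hits})   # dedupe overlap hits

if __name__ == "__main__":
    print(parallel_find("abracadabra" * 1000, "cadab")[:5])   # [4, 15, 26, 37, 48]
```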

    Memory-Efficient Regular Expression Search Using State Merging

    Abstract — Pattern matching is a crucial task in several critical network services such as intrusion detection and policy management. As the complexity of rule-sets increases, traditional string matching engines are being replaced by more sophisticated regular expression engines. To keep up with line rates, deal with denial of service attacks and provide predictable resource provisioning, the design of such engines must allow examining payload traffic at several gigabits per second and provide worst case speed guarantees. While regular expression matching using deterministic finite automata (DFA) is a well studied problem in theory, its implementation either in software or specialized hardware is complicated by prohibitive memory requirements. This is especially true for DFAs representing complex regular expressions present in practical rule-sets. In this paper, we introduce a novel method to drastically reduce the DFA memory requirement and still provide worst-case speed guarantees. Specifically, we merge several “non-equivalent” states in a DFA by introducing labels on their input and output transitions. We then propose a data structure to represent the merged states and the transition labels. We show that, with very few assumptions about the original DFA, such a transformation results in significant compression in the DFA representation. We have implemented a state merging and transition labeling algorithm for DFAs, and show that for Snort and Bro security rule-sets, state merging results in memory reductions of an order of magnitude.
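    The state-merging idea can be shown on a toy example: several original DFA states are folded into one stored state, a small label carried along with the current position selects the appropriate outgoing transitions, and rows that the merged states share are stored only once. The dictionary layout below is an illustrative assumption, not the paper's data structure or its compression results.

```python
# Toy illustration of merging DFA states with labelled transitions.
# Original DFA over {'a', 'b'}: state -> {symbol: next state}; 3 is accepting.
original = {
    0: {"a": 1, "b": 2},
    1: {"a": 1, "b": 3},
    2: {"a": 2, "b": 3},
    3: {"a": 1, "b": 0},
}

# States 1 and 2 are merged into "M". The matcher tracks a (state, label)
# pair; the label records which original state the merged state stands for.
# The transition they share ('b' -> 3) is stored once under the '*' row.
merged = {
    0:   {None: {"a": ("M", 1), "b": ("M", 2)}},
    "M": {"*": {"b": (3, None)},
          1:   {"a": ("M", 1)},
          2:   {"a": ("M", 2)}},
    3:   {None: {"a": ("M", 1), "b": (0, None)}},
}

def step(state, label, symbol):
    rows = merged[state]
    row = rows.get(label, {})
    if symbol in row:
        return row[symbol]
    return rows["*"][symbol]               # fall back to the shared row

def accepts(text):
    state, label = 0, None
    for ch in text:
        state, label = step(state, label, ch)
    return state == 3

print(accepts("ab"))    # True:  0 -a-> 1 -b-> 3
print(accepts("aab"))   # True:  0 -a-> 1 -a-> 1 -b-> 3
print(accepts("aba"))   # False: ends in merged state M (original state 1)
```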

    Reductions of Automata Used in Network Traffic Filtering

    The aim of this work is to propose scalable methods for reducing non-deterministic finite automata (NFAs) used in network traffic filtering. We introduce two approaches to NFA reduction based on state elimination. To achieve a substantial reduction of the automata, we use language non-preserving techniques with a primary focus on language over-approximation, since language-preserving methods may not provide sufficient reduction. We implemented the methods and evaluated the accuracy of the reduced automata on real traffic. Our approach does not provide any formal guarantee with respect to unseen input traffic, but on the other hand, it can be smoothly applied to automata of any size, which is a significant problem for existing methods that have very high time complexity and cannot be applied to really large automata.
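    A rough software sketch of over-approximating reduction by state elimination: states judged unimportant are removed, and every transition that used to enter them is redirected to a universal accepting sink, so the reduced automaton accepts a superset of the original language. The importance criterion, the sink construction and the toy automaton below are illustrative assumptions, not the exact method of this work.

```python
# Over-approximating NFA reduction: eliminate states outside keep_states and
# redirect their incoming transitions to an always-accepting sink.
def over_approximate(nfa, accepting, keep_states, alphabet):
    SINK = "sink"
    reduced = {SINK: {sym: {SINK} for sym in alphabet}}    # sink loops forever
    new_accepting = {s for s in accepting if s in keep_states} | {SINK}
    for state, edges in nfa.items():
        if state not in keep_states:
            continue
        reduced[state] = {
            sym: {t if t in keep_states else SINK for t in targets}
            for sym, targets in edges.items()
        }
    return reduced, new_accepting

# Toy NFA for ".*ab" over {'a', 'b'}: 0 loops, 0 -a-> 1, 1 -b-> 2 (accepting).
nfa = {0: {"a": {0, 1}, "b": {0}}, 1: {"b": {2}}, 2: {}}
reduced, acc = over_approximate(nfa, accepting={2}, keep_states={0, 1},
                                alphabet={"a", "b"})
# Eliminating state 2 turns "strings ending in ab" into the larger language
# "strings containing ab": a strict over-approximation of the original.
print(reduced)
print(acc)
```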
