
    Enhancing Snort IDS performance using data mining

    Intrusion detection systems (IDSs) such as Snort apply deep packet inspection to detect intrusions. Usually, these are rule-based systems, where each incoming packet is matched against a set of rules. Each rule consists of two parts: the rule header and the rule options. The rule header is compared with the packet header, while the rule options usually contain a signature string that is matched against the packet content using an efficient string matching algorithm. The traditional approach to IDS packet inspection checks a packet against the detection rules by scanning from the first rule in the set and continuing until a match is found. This approach becomes inefficient when the rule set is large and the majority of packets match rules located near the end of it. In this thesis, we propose an intelligent predictive technique for packet inspection based on data mining. We treat each rule in a rule set as a ‘class’. A classifier is first trained with labeled training data, where each labeled data point contains packet header information, packet content summary information, and the corresponding class label (i.e. the number of the rule the packet matches). The classifier is then used to classify new incoming packets. The predicted class, i.e. rule, is checked against the packet to verify that the packet really matches the predicted rule; if it does, the corresponding action of the rule (e.g. an alert) is taken. Otherwise, if the prediction of the classifier is wrong, we fall back to the traditional way of matching rules. The advantage of this intelligent predictive packet matching is much faster rule matching. We show, both analytically and empirically, that even with millions of real network traffic packets and hundreds of rules the classifier achieves very high accuracy, making the IDS several times faster at reaching matching decisions.
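
    As a rough illustration of the predictive matching idea described above, the Python sketch below shows the fast path (predict a rule, verify it) and the fall-back to the traditional linear scan. The Rule structure, packet representation, and the toy stand-in for the trained classifier are hypothetical, not the thesis's actual implementation.

        from collections import namedtuple

        # Hypothetical minimal rule: header pattern, payload signature, action.
        Rule = namedtuple("Rule", ["header", "signature", "action"])

        def rule_matches(rule, packet):
            # Header comparison followed by a signature search in the payload.
            return rule.header == packet["header"] and rule.signature in packet["payload"]

        def linear_match(packet, rules):
            # Traditional approach: scan the rule set in order until a match is found.
            for idx, rule in enumerate(rules):
                if rule_matches(rule, packet):
                    return idx
            return None

        def predictive_match(packet, rules, classifier):
            # Fast path: the classifier predicts a rule index (its "class"),
            # which is then verified against the packet.
            idx = classifier(packet)
            if idx is not None and rule_matches(rules[idx], packet):
                return idx
            # Slow path: the prediction was wrong, fall back to the linear scan.
            return linear_match(packet, rules)

        # Toy usage with an illustrative two-rule set and a trivial "classifier".
        rules = [Rule("tcp/80", b"evil", "alert"), Rule("tcp/25", b"spam", "alert")]
        packet = {"header": "tcp/25", "payload": b"...spam..."}
        classifier = lambda pkt: 1 if pkt["header"] == "tcp/25" else 0
        assert predictive_match(packet, rules, classifier) == 1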

    Ultra-high throughput string matching for deep packet inspection

    Deep Packet Inspection (DPI) involves searching a packet's header and payload against thousands of rules to detect possible attacks. The growth in Internet usage and in the number of attacks that must be searched for have made hardware acceleration essential to prevent DPI from becoming a bottleneck when deployed on an edge or core router. In this paper we present a new multi-pattern matching algorithm which can search for the fixed strings contained within these rules at a guaranteed rate of one character per cycle, independent of the number of strings or their length. Our algorithm is based on the Aho-Corasick string matching algorithm, with our modifications resulting in a memory reduction of over 98% on the strings tested from the Snort ruleset. This allows the search structures needed for matching thousands of strings to be small enough to fit in the on-chip memory of an FPGA. Combined with a simple hardware architecture, this leads to high throughput and low power consumption. Our hardware implementation uses multiple string matching engines working in parallel to search through packets. It achieves a throughput of over 40 Gbps (OC-768) when implemented on a Stratix 3 FPGA and over 10 Gbps (OC-192) when implemented on the lower-power Cyclone 3 FPGA.
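
    The design builds on Aho-Corasick; the minimal Python sketch of the classic algorithm below shows why matching consumes exactly one input character per step regardless of how many strings are loaded. The paper's memory-reduction modifications and FPGA engine architecture are not reproduced here; the example patterns are illustrative.

        from collections import deque

        def build_automaton(patterns):
            # Classic Aho-Corasick: goto trie, failure links, and output sets.
            goto, fail, out = [{}], [0], [set()]
            for pat in patterns:
                state = 0
                for ch in pat:
                    if ch not in goto[state]:
                        goto.append({}); fail.append(0); out.append(set())
                        goto[state][ch] = len(goto) - 1
                    state = goto[state][ch]
                out[state].add(pat)
            queue = deque(goto[0].values())        # depth-1 states fail to the root
            while queue:
                s = queue.popleft()
                for ch, t in goto[s].items():
                    queue.append(t)
                    f = fail[s]
                    while f and ch not in goto[f]:
                        f = fail[f]
                    fail[t] = goto[f].get(ch, 0)
                    out[t] |= out[fail[t]]         # inherit patterns ending here
            return goto, fail, out

        def search(text, goto, fail, out):
            # One state transition per input character, independent of the
            # number of patterns or their length.
            state, hits = 0, []
            for i, ch in enumerate(text):
                while state and ch not in goto[state]:
                    state = fail[state]
                state = goto[state].get(ch, 0)
                for pat in out[state]:
                    hits.append((i - len(pat) + 1, pat))
            return hits

        goto, fail, out = build_automaton(["cmd.exe", "/etc/passwd", "passwd"])
        print(search("GET /etc/passwd HTTP/1.1", goto, fail, out))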

    Efficient Multistriding of Large Non-deterministic Finite State Automata for Deep Packet Inspection

    Multistride automata speed up input matching because each multistriding transformation halves the number of symbols to be processed (pairs of input characters are consumed as single symbols), leading to a potential 2x speedup. However, up to now little effort has been spent on optimizing the building process of multistride automata, with the result that current algorithms cannot be applied to the real-life, large automata used in commercial IDSs, because the time and memory needed to create the new automaton quickly become infeasible. In this paper, new algorithms for efficiently building multistride NFAs for packet inspection are presented, explaining how these new techniques outperform the previous algorithms in terms of required time and memory usage.
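
    To make the 2x idea concrete, here is a minimal Python sketch of 2-striding applied to a small, complete DFA: pairs of single-character transitions are composed into one two-character transition. The paper's actual contribution, building such multistride automata efficiently for large NFAs while coping with alphabet blow-up and odd-length inputs, is not captured by this toy example.

        from itertools import product

        def two_stride(delta, alphabet):
            # Compose every pair of single-character transitions into one
            # two-character transition, so matching takes half as many steps.
            return {q: {(a, b): delta[delta[q][a]][b]
                        for a, b in product(alphabet, repeat=2)}
                    for q in delta}

        def accepts(delta2, start, accepting, text):
            # Consume the (even-length) input two characters at a time.
            state = start
            for i in range(0, len(text), 2):
                state = delta2[state][(text[i], text[i + 1])]
            return state in accepting

        # Toy complete DFA over {a, b} that accepts strings ending in "ab".
        delta = {0: {"a": 1, "b": 0}, 1: {"a": 1, "b": 2}, 2: {"a": 1, "b": 0}}
        delta2 = two_stride(delta, "ab")
        print(accepts(delta2, 0, {2}, "aaab"))   # True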

    Data Leak Detection As a Service: Challenges and Solutions

    We describe a network-based data-leak detection (DLD) technique whose main feature is that detection does not require the data owner to reveal the content of the sensitive data. Instead, only a small set of specialized digests is needed. Our technique, referred to as the fuzzy fingerprint, can be used to detect accidental data leaks due to human errors or application flaws. The privacy-preserving feature of our algorithms minimizes the exposure of sensitive data and enables the data owner to safely delegate the detection to others. We describe how cloud providers can offer their customers data-leak detection as an add-on service with strong privacy guarantees. We perform an extensive experimental evaluation of the privacy, efficiency, accuracy and noise tolerance of our techniques. Our evaluation results under various data-leak scenarios and setups show that our method supports accurate detection with a very small number of false alarms, even when the presentation of the data has been transformed. They also indicate that detection accuracy does not degrade when partial digests are used. We further provide a quantifiable method to measure the privacy guarantee offered by our fuzzy fingerprint framework.
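
    As a loose, hypothetical sketch of the general shape of such a scheme, and not the paper's fuzzy fingerprint construction, the Python below digests overlapping shingles of the sensitive data, deliberately coarsens those digests so the provider never sees exact content, and flags traffic whose shingles fall heavily into the released set. All names, sizes, and the threshold are illustrative assumptions.

        import hashlib

        SHINGLE = 8        # bytes per shingle (illustrative)
        FUZZ_BITS = 10     # low-order digest bits hidden to coarsen the fingerprints
        MASK = ~((1 << FUZZ_BITS) - 1) & (2**64 - 1)

        def fingerprints(data: bytes):
            # 64-bit digest of every overlapping shingle (a stand-in for the
            # Rabin-style fingerprints typically used for this purpose).
            for i in range(len(data) - SHINGLE + 1):
                yield int.from_bytes(hashlib.sha256(data[i:i + SHINGLE]).digest()[:8], "big")

        def released_digests(sensitive: bytes):
            # What the data owner hands to the DLD provider: coarsened digests
            # only, never the sensitive content itself.
            return {fp & MASK for fp in fingerprints(sensitive)}

        def looks_like_leak(traffic: bytes, digests, threshold=0.5):
            # Provider side: fraction of traffic shingles whose coarsened
            # fingerprint falls into the released set; candidates go back to
            # the owner for post-processing.
            fps = [fp & MASK for fp in fingerprints(traffic)]
            return bool(fps) and sum(fp in digests for fp in fps) / len(fps) >= threshold

        secret = b"Account: 4111-1111-1111-1111  SSN: 123-45-6789  DOB: 1980-01-02"
        digests = released_digests(secret)
        leak  = b"POST /upload 4111-1111-1111-1111  SSN: 123-45-6789 end"
        clean = b"GET /index.html HTTP/1.1 Host: example.com Accept: */*"
        print(looks_like_leak(leak, digests), looks_like_leak(clean, digests))   # True False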

    The HSS/SNiC: a conceptual framework for collapsing security down to the physical layer

    This work details the concept of a novel network security model called the Super NIC (SNIC) and a Hybrid Super Switch (HSS). The design will ultimately incorporate deep packet inspection (DPI), intrusion detection and prevention (IDS/IPS) functions, and network access control technologies, thereby making all end-point network devices inherently secure. The SNIC and HSS functions are modelled using a transparent GNU/Linux Bridge with the Netfilter framework.

    Strengthening measurements from the edges: application-level packet loss rate estimation

    Network users know much less than ISPs, Internet exchanges and content providers about what happens inside the network. Consequently, users can neither easily detect network neutrality violations nor readily exercise their market power by knowledgeably switching ISPs. This paper contributes to the ongoing efforts to empower users by proposing two models to estimate, via application-level measurements, a key network indicator: the packet loss rate (PLR) experienced by FTP-like TCP downloads. Controlled, testbed, and large-scale experiments show that the Inverse Mathis model is simpler and more consistent across the whole PLR range, but less accurate than the more advanced Likely Rexmit model for landline connections and moderate PLR.
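
    The Inverse Mathis model referenced above essentially inverts the well-known Mathis TCP throughput formula, rate ≈ (MSS/RTT)·C/√p with C ≈ √(3/2). The Python sketch below shows that inversion under the usual simplifying assumptions; the paper's exact estimator, and the Likely Rexmit model, involve more than this, and the default MSS here is illustrative.

        from math import sqrt

        C = sqrt(3 / 2)   # constant of the Mathis et al. throughput model

        def inverse_mathis_plr(goodput_bps, rtt_s, mss_bytes=1460):
            # Invert rate = (MSS / RTT) * C / sqrt(p) to estimate the packet
            # loss rate p from application-level goodput and round-trip time.
            mss_bits = mss_bytes * 8
            p = (C * mss_bits / (rtt_s * goodput_bps)) ** 2
            return min(p, 1.0)

        # Example: a 20 Mbit/s FTP-like download over a 40 ms round-trip path.
        print(inverse_mathis_plr(goodput_bps=20e6, rtt_s=0.040))   # ~3.2e-4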