    Anagram: A Content Anomaly Detector Resistant to Mimicry Attack

    In this paper, we present Anagram, a content anomaly detector that models a mixture of high-order n-grams (n > 1) designed to detect anomalous and suspicious network packet payloads. By using higher-order n-grams, Anagram can detect significant anomalous byte sequences and generate robust signatures of validated malicious packet content. The Anagram content models are implemented using highly efficient Bloom filters, reducing space requirements and enabling privacy-preserving cross-site correlation. The sensor models the distinct content flow of a network or host using a semi-supervised training regimen. Previously known exploits, extracted from the signatures of an IDS, are likewise modeled in a Bloom filter and are used during training as well as at detection time. We demonstrate that Anagram can identify anomalous traffic with high accuracy and low false positive rates. Anagram's high-order n-gram analysis technique is also resilient against simple mimicry attacks that blend exploits with normal-appearing byte padding, such as the blended polymorphic attack demonstrated in recent work. We discuss randomized n-gram models, which further raise the bar and make it more difficult for attackers to build precise packet structures to evade Anagram even if they know the distribution of the local site's content flow. Finally, Anagram's speed and high detection rate make it valuable not only as a standalone sensor, but also as a network anomaly flow classifier in an instrumented fault-tolerant host-based environment; this enables significant cost amortization and the possibility of a symbiotic feedback loop that can improve accuracy and reduce false positive rates over time.
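
    As a rough illustration of the core idea, the sketch below (Python; all names, sizes, and parameters are our own illustrative assumptions, not Anagram's) trains a toy Bloom filter on the n-grams of normal payloads and scores a new payload by the fraction of never-before-seen n-grams, so a score near 1 marks highly anomalous content.

        # Minimal sketch of Anagram-style n-gram scoring (illustrative only).
        # The Bloom filter here is a toy stand-in: real deployments tune the
        # bit-array size and hash count to the expected n-gram population.
        import hashlib

        class BloomFilter:
            def __init__(self, size_bits=1 << 20, num_hashes=3):
                self.size = size_bits
                self.num_hashes = num_hashes
                self.bits = bytearray(size_bits // 8)

            def _positions(self, item: bytes):
                # Derive independent bit positions by salting one hash.
                for i in range(self.num_hashes):
                    h = hashlib.sha256(bytes([i]) + item).digest()
                    yield int.from_bytes(h[:8], "big") % self.size

            def add(self, item: bytes):
                for p in self._positions(item):
                    self.bits[p // 8] |= 1 << (p % 8)

            def __contains__(self, item: bytes):
                return all(self.bits[p // 8] & (1 << (p % 8))
                           for p in self._positions(item))

        def ngrams(payload: bytes, n: int = 5):
            return (payload[i:i + n] for i in range(len(payload) - n + 1))

        def train(model: BloomFilter, normal_payloads, n: int = 5):
            for payload in normal_payloads:
                for g in ngrams(payload, n):
                    model.add(g)

        def anomaly_score(model: BloomFilter, payload: bytes, n: int = 5) -> float:
            grams = list(ngrams(payload, n))
            if not grams:
                return 0.0
            unseen = sum(1 for g in grams if g not in model)
            return unseen / len(grams)   # fraction of never-seen n-grams

    A Bloom filter can report false set membership but never false absence, so this scoring only errs toward under-counting unseen n-grams, which is the conservative direction for a detector that flags high scores.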

    Network intrusion detection with semantics-aware capability

    © 2006 IEEE. Pre-print of the article that appeared at the 2nd International Workshop on Security i…

    A Network Worm Vaccine Architecture

    The ability of worms to spread at rates that effectively preclude human-directed reaction has elevated them to a first-class security threat to distributed systems. We present the first reaction mechanism that seeks to automatically patch vulnerable software. Our system employs a collection of sensors that detect and capture potential worm infection vectors. We automatically test the effects of these vectors on appropriately instrumented, sandboxed instances of the targeted application, trying to identify the exploited software weakness. Our heuristics allow us to automatically generate patches that can protect against certain classes of attack, and to test the resistance of the patched application against the infection vector. We describe our system architecture, discuss its various components, and propose directions for future research.
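
    The sketch below (Python) is a hedged rendering of the detect/test/patch cycle just described; each callable is a hypothetical stand-in for an entire subsystem (sensors, sandboxed replay, patch-generation heuristics), not the paper's implementation.

        # Hedged sketch of an automatic worm-vaccine cycle (illustrative).
        def vaccine_cycle(capture_vector, replay, localize_fault,
                          generate_patches, apply_patch, regression_ok, app):
            vector = capture_vector()           # suspected worm infection vector
            crash = replay(app, vector)         # run in an instrumented sandbox
            if crash is None:
                return app                      # vector is benign: nothing to do
            weakness = localize_fault(crash)    # e.g. the faulting buffer/function
            for candidate in generate_patches(app, weakness):
                patched = apply_patch(app, candidate)
                # A candidate must neutralize the vector without breaking the app.
                if replay(patched, vector) is None and regression_ok(patched):
                    return patched              # deploy the patched application
            return app                          # no safe patch; rely on other defenses

    The loop terminates only when a candidate patch both survives replay of the captured vector and passes regression testing, mirroring the test-before-deploy step the abstract describes.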

    On the Adaptive Real-Time Detection of Fast-Propagating Network Worms

    We present two lightweight worm detection algorithms that offer significant advantages over fixed-threshold methods. The first algorithm, RBS (rate-based sequential hypothesis testing), aims at the large class of worms that attempt to propagate quickly and thus exhibit abnormally high rates of connection initiation to new destinations. The foundation of RBS derives from the theory of sequential hypothesis testing, whose use for detecting randomly scanning hosts was first introduced by our previous work on the TRW (Threshold Random Walk) scan detection algorithm. The sequential hypothesis testing methodology enables engineering the detectors to meet false-positive and false-negative targets, rather than triggering when fixed thresholds are crossed; in this sense, the detectors that we introduce are truly adaptive. We then introduce RBS+TRW, an algorithm that combines the fan-out rate (RBS) and the probability of failure (TRW) of connections to new destinations. RBS+TRW provides a unified framework that at one end acts as pure RBS and at the other end as pure TRW, and extends RBS's power in detecting worms that scan randomly selected IP addresses.
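
    A minimal sketch of the underlying sequential probability ratio test, in the spirit of TRW (the exact RBS/RBS+TRW formulation differs; the per-connection failure probabilities theta0/theta1 and the target error rates below are assumed purely for illustration):

        # Wald-style sequential hypothesis test for scan detection (sketch).
        import math

        def make_sprt(alpha=0.01, beta=0.01, theta0=0.2, theta1=0.8):
            # Approximate decision thresholds derived from the target
            # false-positive (alpha) and false-negative (beta) rates.
            upper = math.log((1 - beta) / alpha)   # cross upward  -> "scanner"
            lower = math.log(beta / (1 - alpha))   # cross downward -> "benign"
            llr = 0.0                              # running log-likelihood ratio

            def observe(failed: bool) -> str:
                nonlocal llr
                # Each connection to a new destination updates the ratio:
                # failures are more likely under the scanner hypothesis.
                llr += math.log(theta1 / theta0) if failed \
                       else math.log((1 - theta1) / (1 - theta0))
                if llr >= upper:
                    return "scanner"
                if llr <= lower:
                    return "benign"
                return "pending"

            return observe

        # Feed per-connection outcomes (True = failed) for one host.
        detector = make_sprt()
        for failed in [True, True, False, True, True, True]:
            verdict = detector(failed)
        print(verdict)  # -> "scanner" for this failure-heavy host

    Because the stopping thresholds come directly from alpha and beta, the detector trades detection delay for accuracy instead of relying on a fixed connection-count threshold, which is the adaptivity the abstract claims.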

    An analysis of malware evasion techniques against modern AV engines

    This research empirically tested the response of antivirus applications to binaries that use virus-like evasion techniques. To achieve this, a set of binaries was processed with several evasion methods and then deployed against multiple antivirus engines. The research also documents the process of setting up an environment for testing antivirus engines, including building the evasion techniques used in the tests. The results of the empirical tests illustrate that an attacker can evade multiple antivirus engines without much effort using well-known evasion techniques. Furthermore, some antivirus engines may respond to the occurrence of an evasion technique itself rather than to the presence of any malicious code. In practical terms, this shows that while antivirus applications are useful for protecting against known threats, their effectiveness against unknown or modified threats is limited.

    Shadow Honeypots

    We present Shadow Honeypots, a novel hybrid architecture that combines the best features of honeypots and anomaly detection. At a high level, we use a variety of anomaly detectors to monitor all traffic to a protected network or service. Traffic that is considered anomalous is processed by a "shadow honeypot" to determine the accuracy of the anomaly prediction. The shadow is an instance of the protected software that shares all internal state with a regular ("production") instance of the application, and is instrumented to detect potential attacks. Attacks against the shadow are caught, and any incurred state changes are discarded. Legitimate traffic that was misclassified is validated by the shadow and handled correctly by the system, transparently to the end user. The outcome of processing a request by the shadow is used to filter future attack instances and could be used to update the anomaly detector. Our architecture allows system designers to fine-tune systems for performance, since false positives will be filtered by the shadow. We demonstrate the feasibility of our approach in a proof-of-concept implementation of the Shadow Honeypot architecture for the Apache web server and the Mozilla Firefox browser. We show that despite considerable overhead in the instrumentation of the shadow honeypot (up to 20% for Apache), the overall impact on the system is diminished by the ability to minimize the rate of false positives.
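
    The toy sketch below (Python) traces the request path just described; the detector, shadow, and production objects and their interfaces are hypothetical placeholders chosen for illustration, not the paper's actual components.

        # Toy sketch of the shadow-honeypot request path (illustrative).
        import hashlib

        SUSPICION_THRESHOLD = 0.5      # assumed anomaly-score cutoff
        attack_filter = set()          # fingerprints of confirmed attack requests

        def fingerprint(request: bytes) -> bytes:
            return hashlib.sha256(request).digest()

        def handle_request(request: bytes, production, shadow, detector):
            fp = fingerprint(request)
            if fp in attack_filter:                    # known-bad: drop immediately
                return None
            if detector.anomaly_score(request) < SUSPICION_THRESHOLD:
                return production.process(request)     # fast path for normal traffic
            # Suspicious: replay on the instrumented shadow, which shares state
            # with production, detects attacks, and rolls back their effects.
            result, attacked = shadow.process(request)
            if attacked:
                attack_filter.add(fp)                  # filter future attack instances
                detector.update(request, label="attack")
                return None
            detector.update(request, label="benign")   # feedback on false positives
            return result                              # misclassified traffic still served

    Keying the filter on request fingerprints is our simplification; the point is how confirmed attacks short-circuit the slow path while shadow verdicts feed back into the anomaly detector.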

    Countering Network Worms Through Automatic Patch Generation
