20 research outputs found

    Network Monitoring Session Description

    SUMMARY Network monitoring is a complex distributed activity: we distinguish agents that issue requests and use the results, others that operate the monitoring activity and produce observations, glued together by further agents that are in charge of routing requests and results. We illustrate a comprehensive view of such an architecture, taking into account scalability and security requirements, and concentrating on the definition of the information exchanged between these agents.
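
    As an illustration of the kind of information such agents might exchange (the field names below are assumptions made for this sketch, not the information model defined in the paper), a monitoring request and the resulting observation could be modelled as simple records:

        # Hypothetical sketch of the data exchanged between agents; field
        # names are assumptions, not the schema from the paper.
        from dataclasses import dataclass

        @dataclass
        class MonitoringRequest:
            """Issued by a client agent and routed towards a monitoring agent."""
            session_id: str
            source: str        # endpoint whose traffic is to be observed
            destination: str
            metric: str        # e.g. "one-way-delay" or "throughput"
            interval_s: float  # how often an observation should be produced

        @dataclass
        class Observation:
            """Produced by a monitoring agent and routed back to the requester."""
            session_id: str
            timestamp: float
            metric: str
            value: float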

    Prototype Implementation Of A Demand Driven Network Monitoring Architecture

    SUMMARY - The capability of dynamically monitoring the performance of the communication infrastructure is one of the emerging requirements for a Grid. We claim that such a capability is in fact orthogonal to the more popular collection of data for scheduling and diagnosis, which needs large storage and indexing capabilities but may disregard real-time performance issues. We support this claim by analyzing the gLite NPM architecture, and we describe a novel network monitoring infrastructure specifically designed for demand-driven monitoring, named gd2, that can potentially be integrated into the gLite framework. We also describe a Java implementation of gd2 on a virtual testbed.

    Improving the Accuracy of Network Intrusion Detection Systems Under Load Using Selective Packet Discarding

    Under conditions of heavy traffic load or sudden traffic bursts, the peak processing throughput of network intrusion detection systems (NIDS) may not be sufficient for inspecting all monitored traffic, and the packet capturing subsystem inevitably drops excess arriving packets before delivering them to the NIDS. This impedes the detection ability of the system and leads to missed attacks. In this work we present selective packet discarding, a best-effort approach that enables the NIDS to anticipate overload conditions and minimize their impact on attack detection. Instead of letting the packet capturing subsystem randomly drop arriving packets, the NIDS proactively discards packets that are less likely to affect its detection accuracy, and focuses on the traffic at the early stages of each network flow. We present the design of selective packet discarding and its implementation in the Snort NIDS. Our experiments show that selective packet discarding significantly improves the detection accuracy of Snort under increased traffic load, allowing it to detect attacks that would have otherwise been missed.
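
    A minimal sketch of the underlying idea, assuming a per-flow packet cutoff (the flow key fields and the cutoff value are illustrative, not Snort's actual implementation):

        # Sketch of selective packet discarding: under overload, keep only the
        # first few packets of each flow, since attacks are typically detected
        # early in a connection; the cutoff and flow key are assumptions.
        from collections import defaultdict

        FLOW_CUTOFF = 20                     # packets kept per flow when overloaded
        flow_counts = defaultdict(int)

        def admit(packet, overloaded):
            """Return True if the packet should be handed to the NIDS."""
            key = (packet.src_ip, packet.src_port,
                   packet.dst_ip, packet.dst_port, packet.proto)
            flow_counts[key] += 1
            if not overloaded:
                return True
            # Proactively drop packets deep inside a flow instead of letting
            # the capture subsystem drop random packets.
            return flow_counts[key] <= FLOW_CUTOFF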

    RRDtrace: Long-term Raw Network Traffic Recording using Fixed-size Storage

    Recording raw network traffic over long periods can be extremely beneficial for a multitude of monitoring and security applications. However, storing all traffic of high-volume networks is infeasible even for short periods due to the increased storage requirements. Traditional approaches to data reduction, such as aggregation and sampling, either require knowing the traffic features of interest in advance, or reduce the traffic volume by selecting a representative set of packets uniformly over the collection period. In this work we present RRDtrace, a technique for storing full-payload packets for arbitrarily long periods using fixed-size storage. RRDtrace divides time into intervals and retains a larger number of packets for the most recent intervals. As traffic ages, an aging daemon dynamically reduces its storage space by keeping smaller representative groups of packets, adapting the sampling rate accordingly. We evaluate the accuracy of RRDtrace in inferring the flow size distribution, the distribution of traffic among applications, and the percentage of the malicious population. Our results show that RRDtrace can accurately estimate these properties using a suitable sampling strategy, some of them for arbitrarily long periods and others only for a recent period.
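
    A rough sketch of the aging step, assuming each older interval keeps a random sample whose size halves with age (interval boundaries and retention factors are assumptions, not the authors' parameters):

        # Sketch of the RRDtrace aging idea: recent intervals keep all packets,
        # older intervals are re-sampled so overall storage stays roughly fixed.
        import random

        def age_interval(packets, keep_fraction):
            """Reduce an interval to a representative random sample."""
            if not packets:
                return []
            k = max(1, int(len(packets) * keep_fraction))
            return random.sample(packets, k)

        def aging_pass(intervals):
            """intervals[0] is the most recent; older intervals keep fewer packets."""
            if not intervals:
                return []
            aged = [intervals[0]]
            for age, packets in enumerate(intervals[1:], start=1):
                aged.append(age_interval(packets, 1.0 / (2 ** age)))  # 50%, 25%, ...
            return aged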

    Automated Generation of Models for Fast and Precise Detection of HTTP-Based Malware

    Malicious software, and especially botnets, are among the most important security threats on the Internet. Thus, the accurate and timely detection of such threats is of great importance. Detecting machines infected with malware by identifying their malicious activities at the network level is an appealing approach, due to the ease of deployment. Nowadays, the most common communication channels used by attackers to control infected machines are based on the HTTP protocol. To evade detection, HTTP-based malware adapts its behavior to the communication patterns of benign HTTP clients, such as web browsers. This poses significant challenges to existing detection approaches like signature-based and behavioral-based detection systems. In this paper, we propose BOTHOUND, a novel approach to precisely detect HTTP-based malware at the network level. The key idea is that implementations of the HTTP protocol by different entities have small but perceivable differences. Building on this observation, BOTHOUND automatically generates models for malicious and benign requests and classifies the HTTP traffic of a monitored network in real time. Our evaluation results demonstrate that BOTHOUND outperforms prior work on identifying HTTP-based botnets, being able to detect a large variety of real-world HTTP-based malware, including advanced persistent threats used in targeted attacks, with a very low percentage of classification errors.
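
    To illustrate the observation that different HTTP implementations emit requests with small structural differences, one could fingerprint a request by its method, protocol version, and header ordering; this is only a sketch, and the features and matching used by BOTHOUND are not taken from the paper:

        # Illustrative sketch only: fingerprint an HTTP request by its request
        # line and header order, then look it up in sets of known fingerprints.
        # The feature choice and matching are assumptions, not BOTHOUND itself.
        def request_fingerprint(raw_request: bytes) -> tuple:
            head = raw_request.split(b"\r\n\r\n", 1)[0].decode("latin-1")
            lines = head.split("\r\n")
            parts = lines[0].split(" ")
            method, version = parts[0], parts[-1]
            header_names = tuple(line.split(":", 1)[0].lower()
                                 for line in lines[1:] if ":" in line)
            return (method, version, header_names)

        def classify(raw_request, benign_fps, malicious_fps):
            fp = request_fingerprint(raw_request)
            if fp in malicious_fps:
                return "malicious"
            return "benign" if fp in benign_fps else "unknown"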

    Revealing the Relationship Network Behind Link Spam

    Accessing the large volume of information that is available on the Web is more important than ever before. Search engines are the primary means to help users find the content they need. To suggest the most closely related and the most popular web pages for a user’s query, search engines assign a ranking to each web page, which typically increases with the number and ranking of other websites that link to this page. However, link spammers have developed several techniques to exploit this algorithm and improve the ranking of their web pages. These techniques are commonly based on underground forums for collaborative link exchange, building a relationship network among spammers to favor their web pages in search engine results. In this study, we provide a systematic analysis of the spam link exchange performed through 15 Search Engine Optimization (SEO) forums. We design a system that is able to capture the activity of link spammers in SEO forums, identify spam link exchange, and visualize the link spam ecosystem. The outcomes of this study shed light on a different aspect of link spamming: the collaboration among spammers.
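
    As a small illustration of how such a relationship network could be represented (the data model here is an assumption, not the one used by the system in the paper), each forum member becomes a node and a reciprocal link exchange becomes an edge:

        # Hypothetical sketch of a link-exchange relationship network; the
        # real system's data model and analyses are not reproduced here.
        from collections import defaultdict

        exchange_graph = defaultdict(set)   # forum member -> exchange partners

        def record_exchange(member_a, member_b):
            """Register a reciprocal link exchange observed in an SEO forum thread."""
            exchange_graph[member_a].add(member_b)
            exchange_graph[member_b].add(member_a)

        def top_hubs(n=10):
            """Members with the most exchange partners, i.e. hubs of the spam ecosystem."""
            return sorted(exchange_graph,
                          key=lambda m: len(exchange_graph[m]),
                          reverse=True)[:n]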

    Improving the Performance of Passive Network Monitoring Applications using Locality Buffering

    In this paper, we present a novel approach for improving the performance of a large class of CPU- and memory-intensive passive network monitoring applications, such as intrusion detection systems, traffic characterization applications, and NetFlow export probes. Our approach, called locality buffering, reorders the captured packets by clustering packets with the same destination port before they are delivered to the monitoring application, resulting in improved code and data locality, and consequently in an overall increase in the packet processing throughput and a decrease in the packet loss rate. We have implemented locality buffering within the widely used libpcap packet capturing library, which allows existing monitoring applications to transparently benefit from the reordered packet stream without the need to change application code. Our experimental evaluation shows that locality buffering significantly improves the performance of popular applications, such as the Snort IDS, which exhibits a 40% increase in the packet processing throughput and a 60% improvement in packet loss rate.
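
    A minimal sketch of the reordering idea, assuming packets are buffered and flushed in destination-port order (buffer sizes and the libpcap integration are not shown; the names here are illustrative, not the library's internal implementation):

        # Sketch of locality buffering: group buffered packets by destination
        # port so that packets handled by the same protocol-specific code are
        # delivered back-to-back, improving code and data locality.
        from collections import defaultdict

        class LocalityBuffer:
            def __init__(self, flush_threshold=1000):
                self.buckets = defaultdict(list)   # destination port -> packets
                self.count = 0
                self.flush_threshold = flush_threshold

            def add(self, packet):
                """Buffer a packet; return a reordered batch when the buffer is full."""
                self.buckets[packet.dst_port].append(packet)
                self.count += 1
                return self.flush() if self.count >= self.flush_threshold else []

            def flush(self):
                """Return all buffered packets with same-port packets contiguous."""
                ordered = [p for port in sorted(self.buckets)
                           for p in self.buckets[port]]
                self.buckets.clear()
                self.count = 0
                return ordered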