
    FAIR: Forwarding Accountability for Internet Reputability

    This paper presents FAIR, a forwarding accountability mechanism that incentivizes ISPs to apply stricter security policies to their customers. The Autonomous System (AS) of the receiver specifies a traffic profile that the sender AS must adhere to, and transit ASes on the path mark packets. In case of traffic-profile violations, the marked packets serve as proof of misbehavior. FAIR introduces low bandwidth overhead and requires no per-packet or per-flow state for forwarding. We describe integration with IP and demonstrate a software switch running on commodity hardware that switches packets at a line rate of 120 Gbps and forwards 140M minimum-sized packets per second, limited by the hardware I/O subsystem. Moreover, this paper proposes a "suspicious bit" for packet headers: an application that builds on top of FAIR's proofs of misbehavior and flags packets to warn other entities in the network.
    Comment: 16 pages, 12 figures
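    The abstract above describes a marking-based accountability mechanism. A minimal, hypothetical sketch of that idea follows; the class names, the single byte-rate traffic profile, and the per-hop mark are assumptions made for this illustration and are not FAIR's actual header format or profile language.

```python
# Hypothetical sketch of FAIR-style marking and profile enforcement.
# TransitAS, ReceiverAS, and the bytes-per-second profile are illustrative
# assumptions, not the paper's wire format or profile language.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Packet:
    src_as: int
    dst_as: int
    size: int
    markings: list = field(default_factory=list)  # per-hop marks added in transit
    suspicious: bool = False                      # the proposed "suspicious bit"

class TransitAS:
    """A transit AS appends a small mark; no per-flow state is kept."""
    def __init__(self, asn: int, key: int):
        self.asn, self.key = asn, key

    def forward(self, pkt: Packet) -> Packet:
        pkt.markings.append((self.asn, hash((self.key, pkt.src_as, pkt.size))))
        return pkt

class ReceiverAS:
    """The receiver AS checks senders against a simple byte-rate profile."""
    def __init__(self, limit_bytes_per_s: int):
        self.limit = limit_bytes_per_s
        self.usage = defaultdict(int)   # bytes seen per sender AS in this window
        self.proofs = []                # marked packets retained as evidence

    def receive(self, pkt: Packet, window_s: float = 1.0) -> None:
        self.usage[pkt.src_as] += pkt.size
        if self.usage[pkt.src_as] > self.limit * window_s:
            pkt.suspicious = True       # flag the packet for other entities
            self.proofs.append(pkt)     # the marks serve as proof of misbehavior
```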

    Optimizing Service Differentiation Scheme with Sized-based Queue Management in DiffServ Networks

    In this paper we introduce Modified Size-based Queue Management (M-SQM), a dropping scheme that aims to fairly prioritize and allocate more service to VoIP traffic over bulk data such as FTP, since the former usually consists of small packets with less impact on network congestion. At the same time, we want to guarantee that this prioritization is fair to both traffic types. We also study the total link delay over the congested link and attempt to alleviate the congestion as much as possible through early congestion notification. Our M-SQM scheme has been evaluated with NS2 experiments measuring the packets received from both traffic types and the total link delay. The performance evaluation results of M-SQM have been validated and graphically compared with the performance of three other legacy AQMs (RED, RIO, and PI), showing that M-SQM outperforms these AQMs in providing QoS-level service differentiation.
    Comment: 10 pages, 9 figures, 1 table, submitted to Journal of Telecommunication
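    The size-based prioritization described above can be pictured as a RED-like drop rule that treats small packets more gently than bulk packets. The thresholds, packet-size cutoff, and drop probabilities below are illustrative assumptions, not M-SQM's actual parameters.

```python
import random

# Illustrative size-based AQM drop decision (assumed parameters, not M-SQM's).
SMALL_PKT_BYTES = 200        # packets below this size (e.g. VoIP) count as small
MIN_TH, MAX_TH = 50, 150     # queue-length thresholds, in packets
MAX_P_SMALL = 0.02           # small packets are dropped far less aggressively
MAX_P_LARGE = 0.20           # bulk (e.g. FTP) packets bear most of the early drops

def drop(queue_len: int, pkt_size: int) -> bool:
    """Return True if the arriving packet should be dropped."""
    if queue_len <= MIN_TH:
        return False                       # no congestion: accept everything
    if queue_len >= MAX_TH:
        return True                        # severe congestion: drop everything
    # Early-congestion-notification region: RED-like linear ramp,
    # scaled by packet size so small (VoIP-like) packets are favored.
    ramp = (queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    max_p = MAX_P_SMALL if pkt_size < SMALL_PKT_BYTES else MAX_P_LARGE
    return random.random() < ramp * max_p
```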

    Distributed PC Based Routers: Bottleneck Analysis and Architecture Proposal

    Recent research in the different functional areas of modern routers has produced proposals that can greatly increase the efficiency of these machines. Most of these proposals can be implemented quickly and often efficiently in software. We wish to use personal computers as forwarders in a network to take advantage of these advances, and we therefore examine the ability of a personal computer to act as a router. We analyze the performance of a single general-purpose computer and show that I/O is the primary bottleneck. We then study the performance of a distributed router composed of multiple general-purpose computers. We study a star topology and show through experimental results that, although its performance is good, it lacks flexibility in its design. We compare it with a multistage architecture. We conclude with a proposal for an architecture that provides a forwarder that is both flexible and scalable.
    © IEE

    SENATUS: An Approach to Joint Traffic Anomaly Detection and Root Cause Analysis

    In this paper, we propose a novel approach, called SENATUS, for joint traffic anomaly detection and root-cause analysis. Inspired by the concept of a senate, the proposed approach is divided into three stages: election, voting and decision. In the election stage, a small number of senator flows are chosen to represent approximately the total (usually huge) set of traffic flows. In the voting stage, anomaly detection is applied to the senator flows and the detected anomalies are correlated to identify the most likely anomalous time bins. Finally, in the decision stage, a machine learning technique is applied to the senator flows of each anomalous time bin to find the root cause of the anomalies. We evaluate SENATUS using traffic traces collected from the pan-European network GEANT, and compare it against another approach that detects anomalies using lossless compression of traffic histograms. We show the effectiveness of SENATUS in diagnosing anomaly types such as network scans and DoS/DDoS attacks.
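    A coarse sketch of the election/voting/decision pipeline is given below. The election heuristic (heaviest flows), the z-score vote, and the nearest-centroid classifier are placeholders chosen for illustration; the paper's actual techniques differ.

```python
import numpy as np

# flows: matrix of shape (n_flows, n_time_bins), e.g. bytes per flow per bin.
def election(flows: np.ndarray, n_senators: int = 100) -> np.ndarray:
    """Pick the heaviest flows as 'senator' flows (placeholder election rule)."""
    return np.argsort(flows.sum(axis=1))[-n_senators:]

def voting(flows: np.ndarray, senators: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
    """Each senator votes for time bins where its volume deviates strongly."""
    sen = flows[senators]
    z = (sen - sen.mean(axis=1, keepdims=True)) / (sen.std(axis=1, keepdims=True) + 1e-9)
    votes = (np.abs(z) > z_thresh).sum(axis=0)
    return np.where(votes > 0.1 * len(senators))[0]   # bins backed by enough votes

def decision(train_feats: np.ndarray, train_labels: np.ndarray, query: np.ndarray) -> np.ndarray:
    """Nearest-centroid stand-in for the paper's ML step: label each anomalous
    bin (e.g. 'scan' vs 'DDoS') from its senator-flow features."""
    labels = np.unique(train_labels)
    centroids = np.stack([train_feats[train_labels == c].mean(axis=0) for c in labels])
    dists = np.linalg.norm(query[:, None, :] - centroids[None, :, :], axis=2)
    return labels[np.argmin(dists, axis=1)]
```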

    High-speed, in-band performance measurement instrumentation for next generation IP networks

    Facilitating always-on instrumentation of Internet traffic for the purposes of performance measurement is crucial in order to enable accountability of resource usage and automated network control, management and optimisation. This has proven infeasible to date due to the lack of native measurement mechanisms that can form an integral part of the network's main forwarding operation. However, the Internet Protocol version 6 (IPv6) specification enables the efficient encoding and processing of optional per-packet information as a native part of the network layer, and this constitutes a strong reason for IPv6 to be adopted as the ubiquitous next generation Internet transport. In this paper we present a very high-speed hardware implementation of in-line measurement, a truly native traffic instrumentation mechanism for the next generation Internet, which facilitates performance measurement of the actual data-carrying traffic at small timescales between two points in the network. This system is designed to operate as part of the routers' fast path and to incur an absolutely minimal impact on network operation, even while instrumenting traffic between the edges of very high-capacity links. Our results show that the implementation can be easily accommodated by current FPGA technology, and real Internet traffic traces verify that the overhead incurred by instrumenting every packet over a 10 Gb/s operational backbone link carrying a typical workload is indeed negligible.
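    The in-line measurement idea can be pictured with a toy timestamp option: an ingress point stamps each packet inside an IPv6 destination-options TLV and an egress point reads it back to obtain a one-way delay sample. The option type and 8-byte layout below are assumptions for illustration only; the paper specifies its own encoding and implements it in FPGA hardware on the routers' fast path.

```python
# Toy illustration of in-line measurement: carry a sender timestamp as a TLV
# suitable for an IPv6 destination-options extension header. The option type
# (0x1e) and the 8-byte payload are assumptions made for this sketch.
import struct
import time

OPT_TYPE = 0x1e          # hypothetical experimental option type
OPT_LEN = 8              # 8-byte timestamp payload

def encode_timestamp_option() -> bytes:
    """Build a minimal option TLV carrying the send time in microseconds."""
    ts_us = int(time.time() * 1_000_000)
    return struct.pack("!BBQ", OPT_TYPE, OPT_LEN, ts_us)

def decode_one_way_delay(option: bytes) -> int:
    """Recover the send timestamp at the egress point; return the delay in us."""
    _type, _len, ts_us = struct.unpack("!BBQ", option)
    return int(time.time() * 1_000_000) - ts_us

# Example: the ingress node stamps the packet, the egress node reads the delay.
tlv = encode_timestamp_option()
print(decode_one_way_delay(tlv))   # ~0 us when both ends run on one machine
```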

    The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena

    The Internet is the most complex system ever created in human history. Its dynamics and traffic therefore unsurprisingly take on a rich variety of complex dynamics, self-organization, and other phenomena that have been researched for years. This paper is a review of the complex dynamics of Internet traffic. Departing from normal treatises, we take a view from both the network engineering and physics perspectives, showing the strengths, weaknesses, and insights of both. In addition, many less-covered phenomena, such as traffic oscillations, large-scale effects of worm traffic, and comparisons of the Internet with biological models, will be covered.
    Comment: 63 pages, 7 figures, 7 tables, submitted to Advances in Complex Systems
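    As a small illustration of the self-similarity this review discusses, the aggregated-variance method below gives a rough estimate of the Hurst parameter of a traffic time series. This is a standard textbook estimator, not a method taken from the paper.

```python
import numpy as np

def hurst_aggregated_variance(series: np.ndarray,
                              scales=(1, 2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the Hurst parameter H via the aggregated-variance method.

    For self-similar traffic, Var(X^(m)) ~ m^(2H-2), so the slope of
    log-variance against log-aggregation-level equals 2H - 2.
    """
    log_m, log_var = [], []
    for m in scales:
        n = len(series) // m
        agg = series[: n * m].reshape(n, m).mean(axis=1)   # block means at level m
        log_m.append(np.log(m))
        log_var.append(np.log(agg.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return 1.0 + slope / 2.0

# Example: Poisson-like counts give H near 0.5; long-range-dependent traffic
# (the self-similar regime the review describes) gives H noticeably above 0.5.
rng = np.random.default_rng(0)
print(hurst_aggregated_variance(rng.poisson(100, 100_000).astype(float)))
```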