
    High-speed, in-band performance measurement instrumentation for next generation IP networks

    Facilitating always-on instrumentation of Internet traffic for performance measurement is crucial to enabling accountability of resource usage and automated network control, management and optimisation. This has proven infeasible to date due to the lack of native measurement mechanisms that can form an integral part of the network's main forwarding operation. However, the Internet Protocol version 6 (IPv6) specification enables the efficient encoding and processing of optional per-packet information as a native part of the network layer, and this constitutes a strong reason for IPv6 to be adopted as the ubiquitous next generation Internet transport. In this paper we present a very high-speed hardware implementation of in-line measurement, a truly native traffic instrumentation mechanism for the next generation Internet, which facilitates performance measurement of the actual data-carrying traffic at small timescales between two points in the network. This system is designed to operate as part of the routers' fast path and to incur minimal impact on network operation, even while instrumenting traffic between the edges of very high capacity links. Our results show that the implementation can be easily accommodated by current FPGA technology, and real Internet traffic traces verify that the overhead incurred by instrumenting every packet over a 10 Gb/s operational backbone link carrying a typical workload is indeed negligible.
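The in-line measurement idea above can be illustrated with a toy sketch (not the paper's hardware design): the ingress point stamps each packet with a timestamp carried alongside the packet, standing in for an IPv6 extension-header option, and the egress point subtracts it to obtain a per-packet one-way delay. All names and the dictionary representation are illustrative assumptions.

```python
# Toy model of in-line (in-band) measurement: a timestamp travels
# with the packet itself, so no separate probe traffic is needed.
import time

def stamp_at_ingress(packet: dict) -> dict:
    """Attach an ingress timestamp, standing in for a per-packet
    IPv6 destination option written on the router fast path."""
    packet["ingress_ts"] = time.monotonic()
    return packet

def delay_at_egress(packet: dict) -> float:
    """Read the carried timestamp back out at the egress point."""
    return time.monotonic() - packet["ingress_ts"]

pkt = stamp_at_ingress({"payload": b"data"})
# ... packet traverses the network ...
owd = delay_at_egress(pkt)
assert owd >= 0.0  # a one-way delay is never negative
```

In the real system both endpoints would need synchronised clocks; here a single host's monotonic clock plays both roles.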

    The Dynamics of Internet Traffic: Self-Similarity, Self-Organization, and Complex Phenomena

    The Internet is the most complex system ever created in human history. Its dynamics and traffic therefore unsurprisingly exhibit a rich variety of complex behaviour, self-organization, and other phenomena that have been researched for years. This paper is a review of the complex dynamics of Internet traffic. Departing from typical treatises, we take a view from both the network engineering and physics perspectives, showing the strengths, weaknesses and insights of both. In addition, we cover many less discussed phenomena such as traffic oscillations, large-scale effects of worm traffic, and comparisons between the Internet and biological models. Comment: 63 pages, 7 figures, 7 tables; submitted to Advances in Complex Systems.

    Mitigating the impact of packet reordering to maximize performance of multimedia applications

    We propose a solution to mitigate the performance degradation and corresponding Quality of Experience (QoE) reduction caused by packet reordering for multimedia applications which utilise unreliable transport protocols such as the Datagram Congestion Control Protocol (DCCP). We analytically derive the optimum buffer size based on the application's data rate and the maximum delay tolerated by the multimedia application. We propose a dynamically adjustable buffer in the transport protocol receiver which uses this optimum buffer size. We demonstrate, via simulation results, that our solution reduces the packet loss rate, increases the perceived bandwidth and does not increase jitter in the application's received packets while still remaining within the application's delay limits, therefore resulting in an increased QoE for multimedia applications.
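The buffer-sizing idea described above can be sketched as follows. This is a minimal illustration, not the paper's derivation: it assumes the deepest reordering buffer that still respects the delay budget holds exactly the packets that arrive at the application's data rate within that budget. Function and parameter names are assumptions.

```python
# Hypothetical sketch: a reordering buffer sized so that holding a
# packet for the whole buffer never exceeds the delay the multimedia
# application tolerates.

def optimum_buffer_packets(data_rate_bps: float,
                           max_delay_s: float,
                           packet_size_bits: float) -> int:
    """Number of packets that fit in the delay budget at the
    application's data rate."""
    return int(data_rate_bps * max_delay_s / packet_size_bits)

# Example: a 2 Mb/s stream, 1500-byte packets, 100 ms delay budget.
print(optimum_buffer_packets(2_000_000, 0.100, 1500 * 8))  # -> 16
```

A receiver could re-evaluate this whenever the application's rate changes, which is what makes the buffer "dynamically adjustable".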

    A comparative study of aggregate TCP retransmission rates

    Segment retransmissions are an essential tool in assuring reliable end-to-end communication in the Internet. Their crucial role in TCP design and operation has been studied extensively, in particular with respect to identifying non-conformant, buggy, or underperforming behaviour. However, TCP segment retransmissions are often overlooked when examining and analyzing large traffic traces. In fact, some have come to believe that retransmissions are a rare oddity, characteristically associated with faulty network paths, which typically tend to disappear as networking technology advances and link capacities grow. We find that this may be far from the reality experienced by TCP flows. We quantify aggregate TCP segment retransmission rates using publicly available network traces from six passive monitoring points attached to the egress gateways at large sites. In virtually half of the traces examined we observed aggregate TCP retransmission rates exceeding 1%, and of these, about half again had retransmission rates exceeding 2%. Even for sites with low utilization and high capacity gateway links, retransmission rates of 1%, and sometimes higher, were not uncommon. Our results complement, extend and bring up to date partial and incomplete results in previous work, and show that TCP retransmissions continue to constitute a non-negligible percentage of the overall traffic, despite significant advances across the board in telecommunications technologies and network protocols. The results presented are pertinent to end-to-end protocol designers and evaluators, as they provide a range of "realistic" scenarios under which, and a "marker" against which, simulation studies can be configured and calibrated, and future protocols evaluated.
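One simple way to compute an aggregate retransmission rate from a trace is to flag segments that re-send a sequence number already seen for the same flow. This is an illustrative sketch, not the paper's methodology; the record format `(flow_id, seq, payload_len)` is an assumption about a generic parsed trace, and it ignores subtleties such as sequence wraparound and partial overlaps.

```python
# Illustrative: count data segments whose starting sequence number
# repeats within their flow, as a proxy for retransmissions.
from collections import defaultdict

def retransmission_rate(segments):
    """segments: iterable of (flow_id, seq, payload_len) tuples in
    capture order. Returns the fraction of data segments that repeat
    a starting sequence number already seen for their flow."""
    seen = defaultdict(set)
    retrans = total = 0
    for flow, seq, length in segments:
        if length == 0:
            continue  # skip pure ACKs: they carry no data to retransmit
        total += 1
        if seq in seen[flow]:
            retrans += 1
        else:
            seen[flow].add(seq)
    return retrans / total if total else 0.0

trace = [("f1", 0, 100), ("f1", 100, 100), ("f1", 100, 100), ("f2", 0, 50)]
print(retransmission_rate(trace))  # -> 0.25 (1 repeat out of 4 segments)
```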

    DoS protection for a Pragmatic Multiservice Network Based on Programmable Networks

    Proceedings of the First International IFIP TC6 Conference, AN 2006, Paris, France, September 27-29, 2006. We propose a scenario of a multiservice network based on pragmatic ideas from programmable networks. Active routers are capable of processing both active and legacy packets. This scenario is vulnerable to a Denial of Service attack, which consists of inserting false legacy packets into active routers. We propose a mechanism for detecting the injection of fake legacy packets into active routers. This mechanism consists of exchanging accounting information on the traffic between neighboring active routers. The exchange of accounting information must be carried out securely using secure active packets. The proposed mechanism is sensitive to the loss of packets; to deal with this problem, some improvements to the mechanism have been proposed. An important issue is the procedure for discarding packets when an attack has been detected. We propose a simple and efficient mechanism that will be improved in future work.
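The accounting check described above can be sketched in a few lines. This is a hedged illustration, not the paper's protocol: each active router counts the legacy packets exchanged with a neighbour, and a local receive count noticeably above the neighbour's reported send count suggests injected packets. The tolerance parameter, which absorbs loss and counter skew, and all names are assumptions.

```python
# Hypothetical sketch of neighbour-to-neighbour accounting: if we
# received clearly more legacy packets than the upstream neighbour
# says it sent us, someone injected fake packets in between.

def injection_suspected(sent_by_neighbour: int,
                        received_locally: int,
                        loss_tolerance: int = 10) -> bool:
    """True when the local count exceeds the neighbour's reported
    count by more than the allowed slack for loss/reordering."""
    return received_locally > sent_by_neighbour + loss_tolerance

print(injection_suspected(1000, 1005))  # -> False (within tolerance)
print(injection_suspected(1000, 1200))  # -> True  (likely injection)
```

In the described scenario these counts would travel inside secure active packets, so an attacker could not simply forge the neighbour's report.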

    Segment Routing: a Comprehensive Survey of Research Activities, Standardization Efforts and Implementation Results

    Fixed and mobile telecom operators, enterprise network operators and cloud providers strive to face the challenging demands coming from the evolution of IP networks (e.g. huge bandwidth requirements, integration of billions of devices and millions of services in the cloud). Proposed in the early 2010s, the Segment Routing (SR) architecture helps meet these challenging demands, and it is currently being adopted and deployed. The SR architecture is based on the concept of source routing and has interesting scalability properties, as it dramatically reduces the amount of state information to be configured in the core nodes to support complex services. SR was first implemented with the MPLS dataplane and then, quite recently, with the IPv6 dataplane (SRv6). The IPv6 SR architecture (SRv6) has been extended from the simple steering of packets across nodes to a general network programming approach, making it very suitable for use cases such as Service Function Chaining and Network Function Virtualization. In this paper we present a tutorial and a comprehensive survey on SR technology, analyzing standardization efforts, patents, research activities and implementation results. We start with an introduction to the motivations for Segment Routing and an overview of its evolution and standardization. Then, we provide a tutorial on Segment Routing technology, with a focus on the novel SRv6 solution. We discuss the standardization efforts and the patents, providing details on the most important documents and mentioning other ongoing activities. We then thoroughly analyze research activities according to a taxonomy. We have identified 8 main categories during our analysis of the current state of play: Monitoring, Traffic Engineering, Failure Recovery, Centrally Controlled Architectures, Path Encoding, Network Programming, Performance Evaluation and Miscellaneous. Comment: Submitted to IEEE Communications Surveys & Tutorials.
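The source-routing idea that gives SR its scalability can be shown with a toy model (an assumption for illustration, not a dataplane implementation): the source encodes an ordered segment list in the packet, a Segments Left counter points at the active segment, and core nodes simply forward towards it, keeping no per-flow state. As in the SRv6 Segment Routing Header, the list below is stored in reverse order, so the last element is the first segment to visit.

```python
# Toy model of SR segment processing: the per-flow "state" lives in
# the packet itself, not in the core routers.

def sr_forward(packet: dict) -> str:
    """At a segment endpoint: read the active segment, decrement
    Segments Left, and return the next waypoint to steer towards."""
    segments = packet["segment_list"]          # reverse order, SRH-style
    active = segments[packet["segments_left"]]
    packet["segments_left"] -= 1
    return active

# Path SID1 -> SID2 -> SID3, encoded in reverse with Segments Left = 2.
pkt = {"segment_list": ["SID3", "SID2", "SID1"], "segments_left": 2}
print(sr_forward(pkt))  # -> SID1 (first waypoint on the path)
```

Between segment endpoints, forwarding is plain shortest-path routing, which is why only the edges need to understand the full policy.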

    Scale-free networks and scalable interdomain routing

    Work presented as part of the Master's programme in Computer Engineering, as a partial requirement for the degree of Master in Computer Engineering. The exponential growth of the Internet, due to its tremendous success, has brought to light some limitations of the current design at the routing and architectural level, such as scalability and convergence, as well as the lack of support for traffic engineering, mobility, route differentiation and security. Some of these issues arise from the design of the current architecture, while others are caused by the interdomain routing scheme, BGP. Since it would be quite difficult to add support for the aforementioned issues both in the interdomain architecture and in the routing scheme, various researchers believe that a solution can only be achieved via a new architecture and (possibly) a new routing scheme. A new routing strategy has emerged from studies of large-scale networks, suitable for a special type of large-scale network whose characteristics are independent of network size: scale-free networks. Using the greedy routing strategy, a node routes a message to a given destination using only information about the destination and its own neighbours, choosing the neighbour closest to the destination. This routing strategy ensures the following remarkable properties: routing state in the order of the number of neighbours; no requirement for nodes to exchange messages in order to perform routing; and chosen paths are the shortest ones. This dissertation aims at studying the aforementioned problems, studying the Internet's configuration as a scale-free network, and defining a preliminary path toward a greedy routing scheme for interdomain routing.
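The greedy routing strategy described above can be sketched as follows. This is a minimal illustration under toy assumptions: nodes are embedded in a plane, each node knows only its neighbours' coordinates, and a message is handed to whichever neighbour is closest to the destination. The graph, embedding, and names are all made up for the example; the dead-end case shows why the quality of the embedding matters.

```python
# Minimal greedy routing sketch: per-node state is just the
# neighbour list, and forwarding needs no message exchange.
import math

def greedy_route(graph, coords, src, dst):
    """graph: node -> list of neighbours; coords: node -> (x, y).
    Returns the path found, or None when greedy forwarding gets
    stuck at a local minimum (no neighbour closer to dst)."""
    def dist(a, b):
        return math.dist(coords[a], coords[b])

    path, current = [src], src
    while current != dst:
        nxt = min(graph[current], key=lambda n: dist(n, dst))
        if dist(nxt, dst) >= dist(current, dst):
            return None  # greedy dead end
        path.append(nxt)
        current = nxt
    return path

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
coords = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
print(greedy_route(graph, coords, "A", "C"))  # -> ['A', 'B', 'C']
```

Note the routing state per node is just its neighbour list, matching the scalability property claimed in the abstract; guaranteeing shortest paths additionally requires an embedding with the right properties.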

    Tiered Based Addressing in Internetwork Routing Protocols for the Future Internet

    The current Internet has shown a remarkable capacity to sustain evolution and growth; however, it is facing unprecedented challenges and may not be able to continue to sustain this evolution and growth in the future, because it is based on design decisions made in the 1970s when the TCP/IP concepts were developed. Research has thus provided incremental solutions to the evolving Internet to address each new vulnerability. As a result, the Internet has increased in complexity, which makes it hard to manage, more vulnerable to emerging threats, and more fragile in the face of new requirements. With the goal of overcoming this situation, a clean-slate future Internet architecture design paradigm has been suggested by the research community. This research focuses on addressing and routing for a clean-slate future Internet architecture, called the Floating Cloud Tiered (FCT) internetworking model. The major goals of this study are: (i) to address the two related problems of routing scalability and addressing through an approach that leverages the existing structures in the current Internet architecture, (ii) to propose a solution that is acceptable to the ISP community that supports the Internet, and lastly (iii) to provide a transition platform and mechanism, which is essential to the successful deployment of the proposed design.