46 research outputs found

    Model based analysis of some high speed network issues

    The study of complex problems in science and engineering today typically involves large-scale data, and a growing number of large-scale scientific breakthroughs depends critically on large multi-disciplinary, geographically dispersed research teams, for which high-speed networks have become an integral part. To serve the ongoing bandwidth requirements and scalability of these networks, there has been a continuous evolution of TCP variants for high-speed networks. Testing these protocols on a real network is expensive and time consuming, and such networks are not easily available to researchers worldwide. Network simulation is a well-accepted and widely used method for performance evaluation, yet packet-based simulators such as NS2 and OPNET are known to be inadequate for high-speed and large-scale networks because of their inherent bottlenecks in message overhead and execution time. In such cases a model-based approach, built on a set of coupled differential equations, is preferred for simulation. This dissertation focuses on the key challenges in the research and development of TCP for high-speed networks. To address these challenges, the thesis has three objectives: design an analytical simulation methodology; model the behavior of high-speed network components, including TCP flows and queues, using that methodology; and analyze them to explore their impacts and interrelationships. To decrease simulation time and speed up the testing and development of high-speed TCP, we present a scalable simulation methodology for high-speed networks. We present fluid model equations for various high-speed TCP variants and, with their help, show the behavior of these variants under various scenarios and their effect on queue size variations. High-speed networking is not feasible unless we understand the effect of bottleneck buffer size on the performance of these high-speed TCP variants. A fluid model is introduced to accommodate new observations of synchronization and de-synchronization of packet losses at the bottleneck link, and a microscopic analysis of different buffer sizes under the drop-tail queuing scheme is presented. The proposed model-based methods promote a principled understanding of future heterogeneous networks and accelerate protocol development.
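    The fluid-model approach described above replaces per-packet simulation with coupled differential equations for window and queue dynamics. The dissertation's specific equations for high-speed TCP variants are not reproduced here; as a rough illustration of the general technique, the sketch below integrates a classic TCP/drop-tail fluid model for N identical flows with simple Euler steps. The loss approximation, all parameter values, and all variable names are illustrative assumptions.

```python
# Minimal sketch of a model-based (fluid) simulation: N identical TCP flows
# sharing one drop-tail bottleneck, described by coupled window/queue ODEs
# and integrated with Euler steps. Illustrative only; not the dissertation's
# model, and every constant below is an assumption.

N = 10            # long-lived TCP flows
C = 12500.0       # bottleneck capacity, packets/sec (~100 Mbps at 1000 B pkts)
RTT = 0.1         # base round-trip propagation delay, seconds
B = 500.0         # buffer size, packets
dt = 0.001        # Euler step, seconds

W, q = 1.0, 0.0   # per-flow congestion window (packets), queue length (packets)

for step in range(int(60 / dt)):              # simulate 60 seconds
    rtt = RTT + q / C                         # effective RTT = propagation + queuing
    arrival = N * W / rtt                     # aggregate arrival rate, packets/sec
    # Crude drop-tail loss approximation: excess arrivals are lost once the
    # buffer is full (real fluid models use a delayed loss-indication term).
    p = max(0.0, (arrival - C) / arrival) if q >= B else 0.0
    # AIMD dynamics: additive increase of one packet per RTT, halving on loss.
    dW = 1.0 / rtt - (W / 2.0) * p * (W / rtt)
    dq = arrival * (1.0 - p) - C if (q > 0.0 or arrival > C) else 0.0
    W = max(1.0, W + dW * dt)
    q = min(B, max(0.0, q + dq * dt))
    if step % int(1.0 / dt) == 0:
        print(f"t={step * dt:5.1f}s  W={W:7.1f} pkts  q={q:7.1f} pkts")
```

    Because the state here is just the pair (W, q) rather than millions of in-flight packets, this style of model scales to link speeds and flow counts that would overwhelm packet-level simulators such as NS2.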

    Throughput and Delay on the Packet Switched Internet

    The Internet has become a vital and essential part of modern everyday life. Services delivered by the Internet are used by people across the planet every moment of every day of the year. The Internet has proven a positive force for good, improving the lives of billions of people worldwide. The power of the Internet to deliver this good to humanity relies on its ability to deliver life-improving services. In my doctoral work, culminating in this dissertation, I have striven to sustain and increase the Internet's ability to deliver these services. The overarching purpose of this dissertation is to improve the Internet's ability to deliver life-improving services. I have divided this purpose into two goals: to improve the ability of applications operating in challenging network conditions to gain their fair share of bandwidth resources, and to reduce the delay with which these services are delivered. Every service delivered by the Internet consists of Internet objects that are delivered over communication paths across the Internet. The delivery of these objects is defined by two characteristics: throughput and delay. Throughput determines how much of an object can be delivered over a period of time, and delay determines how long it takes to deliver an object. These two characteristics determine the Internet's ability to deliver objects across communication paths; improving them increases the ability of the Internet to deliver objects and thus its capability to deliver life-improving services. To accomplish this goal I present projects along three areas of effort: (1) increase the ability of applications operating in challenging conditions to achieve their fair share of bandwidth; (2) synthesize the knowledge required to reduce delay; and (3) develop protocols that reduce the delay encountered in the communication paths of the Internet. In this dissertation I present projects along these three areas of effort that accomplish the two goals (increase bandwidth and reduce delay) and achieve the purpose of improving the Internet's ability to deliver essential, life-improving services. These projects, and their organization into areas of effort, goals, and purpose, are my contributions to the networking sciences.
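    As a back-of-the-envelope illustration of how these two characteristics jointly bound object delivery (not a figure from the dissertation; every number below is an assumption), small transfers are dominated by round-trip delay while large ones are dominated by throughput:

```python
# Illustrative only: rough delivery time of a web object as a function of
# throughput and delay. Numbers are assumptions, not measurements.
object_size_bits = 1_000_000 * 8    # a 1 MB object
throughput_bps = 10_000_000         # 10 Mbit/s achievable throughput
rtt_s = 0.080                       # 80 ms round-trip delay
setup_rtts = 2                      # e.g. connection setup + request/response

delivery_time_s = setup_rtts * rtt_s + object_size_bits / throughput_bps
print(f"delivery time ≈ {delivery_time_s:.2f} s")   # ≈ 0.96 s
```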

    A study on fairness and latency issues over high speed networks and data center networks

    Newly emerging computer networks, such as high-speed networks and data center networks, have high bandwidth and high burstiness, which make it difficult to address issues such as fairness, queuing latency, and link utilization. In this study, we first conduct an extensive experimental evaluation of the performance of 10Gbps high-speed networks. We find that inter-protocol unfairness and large queuing latency are two outstanding issues in high-speed networks and data center networks. There have been several proposals to address fairness and latency issues at the switch level via queuing schemes. These queuing schemes have been fairly successful in addressing either the fairness issue or the latency issue, but not both at the same time. We propose a new queuing scheme called Approximated-Fair and Controlled-Delay (AFCD) that meets the following goals for high-speed networks: approximated fairness, controlled low queuing delay, high link utilization, and simple implementation. The design of AFCD uses a novel synergistic approach, forming an alliance between approximated fair queuing and controlled delay queuing: it maintains a very small amount of state information for estimating the sending rates of flows and makes drop decisions based on a per-flow target delay. We then present FaLL, a Fair and Low Latency queuing scheme that meets the stringent performance requirements of data center networks: fair sharing of bandwidth, low queuing latency, high throughput, and ease of deployment. FaLL uses an efficiency module, a fairness module, and a target-delay-based dropping scheme to meet these goals. Through rigorous experiments on a real testbed, we show that FaLL outperforms various peer solutions under a variety of network conditions over data center networks.
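    The abstract only sketches AFCD at a high level; the following is a minimal illustration of the combination it describes, a lightweight per-flow sending-rate estimate paired with a drop decision driven by a per-flow target delay. It is not the published AFCD algorithm: the rate estimator, the way the target shrinks for faster-than-fair flows, and all names and constants are assumptions made for this sketch.

```python
import time
from collections import defaultdict

# Illustrative sketch: keep a small amount of per-flow state to estimate
# sending rates, and drop an arriving packet when the delay it would see
# exceeds a per-flow target derived from that estimate. Not the published
# AFCD scheme; names and constants are assumed.

LINK_RATE_BPS = 10_000_000_000      # 10 Gbps bottleneck
BASE_TARGET_DELAY_S = 0.002         # 2 ms baseline target queuing delay

class FlowState:
    def __init__(self):
        self.bytes_seen = 0
        self.window_start = time.monotonic()
        self.rate_bps = 0.0

flows = defaultdict(FlowState)
queue_bytes = 0                     # current backlog at the bottleneck

def on_packet_arrival(flow_id, pkt_len_bytes):
    global queue_bytes
    st = flows[flow_id]

    # Exponentially weighted rate estimate over 100 ms windows (small state).
    st.bytes_seen += pkt_len_bytes
    now = time.monotonic()
    if now - st.window_start >= 0.1:
        sample = st.bytes_seen * 8 / (now - st.window_start)
        st.rate_bps = 0.875 * st.rate_bps + 0.125 * sample
        st.bytes_seen, st.window_start = 0, now

    # Faster-than-fair flows get a tighter delay target, approximating fairness.
    fair_share = LINK_RATE_BPS / max(1, len(flows))
    aggressiveness = st.rate_bps / fair_share if fair_share else 0.0
    target_delay = BASE_TARGET_DELAY_S / max(1.0, aggressiveness)

    expected_delay = queue_bytes * 8 / LINK_RATE_BPS
    if expected_delay > target_delay:
        return False                 # drop: this flow exceeds its delay budget
    queue_bytes += pkt_len_bytes     # enqueue
    return True
```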

    Study on the Performance of TCP over 10Gbps High Speed Networks

    Internet traffic is expected to grow phenomenally over the next five to ten years. To cope with such large traffic volumes, high-speed networks are expected to scale to capacities of terabits per second and beyond. Increasing the role of optics for packet forwarding and transmission inside high-speed networks seems to be the most promising way to accomplish this capacity scaling. Unfortunately, unlike electronic memory, it remains a formidable challenge to build integrated all-optical buffers of even a few dozen packets. On the other hand, many high-speed networks depend on the TCP/IP protocol suite for reliability, which is typically implemented in software and is sensitive to buffer size. For example, TCP requires a buffer equal to the bandwidth-delay product in switches/routers to maintain nearly 100% link utilization; otherwise, performance degrades significantly. But such a large buffer complicates hardware design, increases power consumption, and introduces queuing delay and jitter, which cause further problems. Therefore, improving TCP performance over tiny-buffered high-speed networks is a top priority. This dissertation studies TCP performance in 10Gbps high-speed networks. First, a 10Gbps reconfigurable optical networking testbed is developed as a research environment. Second, a 10Gbps traffic sniffing tool is developed for measuring and analyzing TCP performance. New expressions for evaluating TCP loss synchronization are presented by carefully examining TCP congestion events. Based on these observations, two basic causes of performance problems are studied. We find that minimizing TCP loss synchronization and reducing the impact of flow burstiness are the keys to improving TCP performance in tiny-buffered networks. Finally, we present a new TCP protocol called Multi-Channel TCP and a new congestion control algorithm called Desynchronized Multi-Channel TCP (DMCTCP). Our implementation takes advantage of the potential parallelism of Multi-Path TCP in Linux. Over an emulated 10Gbps network whose routers have only a few dozen packets of buffering, our experimental results confirm that bottleneck link utilization is improved much more by DMCTCP than by many other TCP variants. Our study is a new step towards the deployment of optical packet switching/routing networks.
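    To put the buffer-sizing tension in concrete terms (the numbers below are illustrative assumptions, not figures from the dissertation), the classic bandwidth-delay-product rule and the well-known BDP/sqrt(N) refinement for many concurrent flows can be computed directly:

```python
import math

# Rule-of-thumb buffer sizing for a 10 Gbps bottleneck. Illustrative numbers,
# not taken from the dissertation.
link_rate_bps = 10e9          # 10 Gbps
rtt_s = 0.1                   # 100 ms round-trip time
pkt_bytes = 1500
n_flows = 1000                # long-lived flows sharing the link

bdp_bits = link_rate_bps * rtt_s
full_bdp_pkts = bdp_bits / (8 * pkt_bytes)
small_buffer_pkts = full_bdp_pkts / math.sqrt(n_flows)   # BDP / sqrt(N) rule

print(f"classic BDP rule: {full_bdp_pkts:,.0f} packets of buffering")  # ~83,333
print(f"BDP/sqrt(N) rule: {small_buffer_pkts:,.0f} packets")           # ~2,635
```

    Even the relaxed estimate is orders of magnitude above the few dozen packets an all-optical buffer can realistically hold, which is why the dissertation instead targets desynchronizing losses and reducing flow burstiness.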

    Modeling and estimation techniques for understanding heterogeneous traffic behavior

    The majority of current Internet traffic is based on TCP. With the emergence of new applications, especially new multimedia applications, however, UDP-based traffic is expected to increase. Furthermore, multimedia applications have sparked the development of protocols that respond to congestion while behaving differently from TCP. As a result, network traffic is expected to become more and more diverse. Increasing link capacities further stimulate new applications that utilize the higher bandwidths of the future. Besides traffic diversity, the network is also evolving around new technologies. These trends in the Internet motivate our research. In this dissertation, modeling and estimation techniques for heterogeneous traffic at a router are presented. The idea behind the presented techniques is that if the observed queue length and packet drop probability do not match the predictions from a model of responsive (TCP) traffic, then the discrepancy must come from non-responsive traffic; it can then be used to estimate the proportion of non-responsive traffic. The proposed scheme is based on the queue length history, the packet drop history, and the expected TCP and queue dynamics. The effectiveness of the proposed techniques over a wide range of traffic scenarios is corroborated using NS-2 based simulations. Possible applications based on the estimation technique are discussed. An implementation of the estimation technique in the Linux kernel is presented in order to validate it in a realistic network environment.
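    The sketch below illustrates the mismatch idea in its simplest form: predict the rate that responsive (TCP) traffic alone should achieve at the observed drop probability, and attribute any excess in the observed arrivals to non-responsive traffic. It is not the dissertation's estimator (which works from queue length and drop histories); the square-root TCP throughput model, the smoothing, and every name below are assumptions.

```python
import math

# Illustrative mismatch-based estimation of the non-responsive traffic share.
# Not the dissertation's algorithm; the TCP model and names are assumed.

def tcp_model_rate_bps(n_flows, rtt_s, drop_prob, mss_bytes=1500):
    """Aggregate responsive-traffic rate from the classic sqrt(p) TCP model."""
    if drop_prob <= 0:
        return float("inf")
    per_flow = (mss_bytes * 8 / rtt_s) * math.sqrt(1.5 / drop_prob)
    return n_flows * per_flow

def estimate_unresponsive_fraction(samples, link_rate_bps, n_flows, rtt_s):
    """samples: list of (observed_arrival_bps, observed_drop_prob) per interval."""
    fractions = []
    for arrival_bps, p in samples:
        predicted_tcp = min(tcp_model_rate_bps(n_flows, rtt_s, p), link_rate_bps)
        residual = max(0.0, arrival_bps - predicted_tcp)
        fractions.append(residual / arrival_bps if arrival_bps else 0.0)
    # Average over the history to damp measurement noise.
    return sum(fractions) / len(fractions) if fractions else 0.0

# Example: 100 TCP flows, 50 ms RTT, plus an unknown amount of UDP traffic.
history = [(9.5e8, 0.01), (9.8e8, 0.012), (9.7e8, 0.011)]
print(f"estimated non-responsive share ≈ "
      f"{estimate_unresponsive_fraction(history, 1e9, 100, 0.05):.0%}")  # ≈ 71%
```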

    Transport Architectures for an Evolving Internet

    In the Internet architecture, transport protocols are the glue between an application’s needs and the network’s abilities. But as the Internet has evolved over the last 30 years, the implicit assumptions of these protocols have held less and less well. This can cause poor performance on newer networks—cellular networks, datacenters—and makes it challenging to roll out networking technologies that break markedly with the past. Working with collaborators at MIT, I have built two systems that explore an objective-driven, computer-generated approach to protocol design. My thesis is that making protocols a function of stated assumptions and objectives can improve application performance and free network technologies to evolve. Sprout, a transport protocol designed for videoconferencing over cellular networks, uses probabilistic inference to forecast network congestion in advance. On commercial cellular networks, Sprout gives 2-to-4 times the throughput and 7-to-9 times less delay than Skype, Apple Facetime, and Google Hangouts. This work led to Remy, a tool that programmatically generates protocols for an uncertain multi-agent network. Remy’s computer-generated algorithms can achieve higher performance and greater fairness than some sophisticated human-designed schemes, including ones that put intelligence inside the network. The Remy tool can then be used to probe the difficulty of the congestion control problem itself—how easy is it to “learn” a network protocol to achieve desired goals, given a necessarily imperfect model of the networks where it ultimately will be deployed? We found weak evidence of a tradeoff between the breadth of the operating range of a computer-generated protocol and its performance, but also that a single computer-generated protocol was able to outperform existing schemes over a thousand-fold range of link rates.
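    The abstract describes Sprout only as forecast-driven; as a loose illustration of that idea (not the published Sprout algorithm, and with the forecasting model, horizon, and names all assumed), a sender can cap what it puts in flight so that a pessimistic forecast of link drainage still keeps self-inflicted delay within a budget:

```python
# Rough sketch of forecast-driven pacing in the spirit of the Sprout idea
# above: predict how many packets the cellular link will deliver over a
# short horizon, then send only what can drain within a delay budget.
# Illustrative assumptions throughout; not the published algorithm.

DELAY_BUDGET_S = 0.1          # target upper bound on self-inflicted delay

def packets_to_send(forecast_deliveries, queued_packets, horizon_s=0.02):
    """forecast_deliveries: cautious (e.g. 5th-percentile) estimate of packets
    the link will drain per horizon, taken from a probabilistic forecast."""
    # How many packets may sit in the queue without exceeding the budget.
    budget_pkts = forecast_deliveries * (DELAY_BUDGET_S / horizon_s)
    return max(0, int(budget_pkts) - queued_packets)

# Example: forecast says ~8 packets drain per 20 ms, 25 are already queued.
print(packets_to_send(8, 25))   # -> 15 more packets may be sent now
```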

    Net Neutrality: Something Old; Something New

    Article published in the Michigan State Law Review