
    Third-Party TCP Rate Control

    The Transmission Control Protocol (TCP) is the dominant transport protocol in today's Internet. The original design of TCP left congestion control open to future designers. Short of implementing changes to the TCP stack on the end-nodes themselves, Internet Service Providers have employed several techniques to operate their network equipment efficiently. These techniques amount to shaping traffic to reduce cost and improve overall customer satisfaction. The method that gives maximum control when performing traffic shaping is an inline traffic shaper. An inline traffic shaper sits in the middle of a flow, allowing packets to pass through it and, with policy-limited freedom, inspecting and modifying packets as it pleases. However, a number of practical issues, such as hardware reliability or ISP policy, may prevent such a solution from being employed. For example, an ISP that does not fully trust the quality of the traffic shaper would not want such a product placed in-line with its equipment, as it poses a significant threat to its business. What is required in such cases is third-party rate control. Formally defined, a third-party rate controller is one that can see all traffic and inject new traffic into the network, but cannot remove or modify existing network packets. Given these restrictions, we present and study a technique to control TCP flows, namely triple-ACK duplication. The triple-ACK algorithm gives significant capabilities to a third-party traffic shaper. We provide an analytical justification for why this technique works under ideal conditions and demonstrate via simulation the bandwidth reduction achieved. When judiciously applied, the triple-ACK duplication technique produces minimal badput while yielding significant reductions in bandwidth consumption under ideal conditions. Based on a brief study, we show that our algorithm is able to selectively throttle one flow while allowing another to gain bandwidth.
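
    Mechanically, triple-ACK duplication exploits TCP's fast-retransmit rule: a sender that sees three duplicate ACKs assumes a segment was lost and halves its congestion window, so a third party that merely injects copies of an ACK it has already observed can slow a flow without removing or modifying any packet. A minimal sketch of this effect on a simplified NewReno-style sender (an illustration of standard TCP behaviour, not the paper's implementation; the state fields and constants are assumptions):

```python
# Simplified sender state: how a congestion window reacts when a third
# party injects duplicate ACKs. Field names and values are illustrative.

def on_ack(state, ack_seq):
    """Update a simplified NewReno-style sender for one incoming ACK."""
    if ack_seq == state["last_ack"]:
        state["dupacks"] += 1
        if state["dupacks"] == 3:            # triple duplicate ACK seen
            # Fast retransmit: halve cwnd instead of waiting for a timeout.
            state["ssthresh"] = max(state["cwnd"] // 2, 2)
            state["cwnd"] = state["ssthresh"]
    else:
        state["last_ack"] = ack_seq
        state["dupacks"] = 0
        state["cwnd"] += 1                   # simplified additive increase

sender = {"cwnd": 40, "ssthresh": 64, "last_ack": 1000, "dupacks": 0}
for _ in range(3):                           # injected copies of ACK 1000
    on_ack(sender, 1000)
print(sender["cwnd"])                        # 20: window halved, no real loss
```

    In practice the controller must pace such injections carefully, since every spurious fast retransmit also produces one retransmitted segment of badput.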

    DTMsim - DTM channel simulation in ns

    Dynamic Transfer Mode (DTM) is a ring-based MAN technology that provides a channel abstraction with a dynamically adjustable capacity. TCP is a reliable end-to-end transport protocol capable of adjusting its rate. The primary goal of this work is to investigate the coupling of dynamically allocated bandwidth to TCP flows and the effect this has on TCP's congestion control mechanism. In particular, we wanted to find scenarios where this scheme does not work: where either all the link capacity is allocated to TCP, or congestion collapse occurs and no capacity is allocated to TCP. We have created a simulation environment using ns-2 to investigate TCP over networks that have a variable-capacity link. We begin with a single TCP Tahoe flow over a fixed-bandwidth link and progressively add complexity to understand the behaviour of dynamically adjusting link capacity to TCP, and vice versa.
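
    The coupling under study can be caricatured in a few lines: an AIMD source probing a link whose capacity an allocator keeps re-fitting to the source's observed rate. This sketch (all constants are illustrative assumptions, not the ns-2 setup used in the work) shows the runaway case in which the allocator simply tracks the flow, so the flow never meets a fixed ceiling:

```python
# Feedback loop between an AIMD flow and a capacity allocator that
# moves the link capacity toward the flow's observed rate each step.

def simulate(steps, gain=1.0):
    cwnd, capacity = 1.0, 10.0     # packets/RTT; arbitrary starting points
    trace = []
    for _ in range(steps):
        if cwnd <= capacity:
            cwnd += 1.0            # additive increase while under capacity
        else:
            cwnd /= 2.0            # multiplicative decrease on overflow
        # DTM-style re-allocation: shift capacity toward the observed rate.
        capacity += gain * (cwnd - capacity)
        trace.append((cwnd, capacity))
    return trace

trace = simulate(50)
print(trace[-1])   # (51.0, 51.0): capacity chased cwnd upward every step
```

    With gain=1.0 the allocator copies the flow's rate at every step, so the flow is never forced to back off; this is the degenerate "all capacity allocated to TCP" scenario the abstract mentions.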

    An Investigation paper on Congestion Control Policy

    As network technology advances, traffic congestion affects performance factors such as file synchronization and communication throughput, in every scenario where organizations depend on continuous business connectivity. As businesses move their operations onto paperless, Internet-based infrastructure, such high-speed network communication always requires a well-suited congestion-avoidance scheme to achieve quality of service. In different communication areas, many applications are now in use by all categories of users, from individual to business. Many protocols and algorithms have been proposed to improve network flow, including the IEEE 802 Ethernet project and the 802.11 family of standards, under which users sometimes find the network performs well with their chosen connection but sometimes struggles with it. To analyze the actual behaviour of congestion policies, the authors present an analytical study, so that the reader can understand the shortcomings in communication of specific network implementations.

    SSthreshless Start: A Sender-Side TCP Intelligence for Long Fat Network

    Measurements show that 85% of TCP flows in the Internet are short-lived flows that spend most of their lifetime in the TCP startup phase. However, many previous studies indicate that the traditional TCP Slow Start algorithm does not perform well, especially in long fat networks. Two problems are known to impact Slow Start performance: the blind initial setting of the Slow Start threshold, and the aggressive increase of the probing rate during the startup phase regardless of the buffer sizes along the path. Current efforts focused on tuning the Slow Start threshold and/or the probing rate during the startup phase have not been considered very effective, which prompted an investigation with a different approach. In this paper, we present a novel TCP startup method, called threshold-less slow start or SSthreshless Start, which does not need the Slow Start threshold to operate. Instead, SSthreshless Start uses the backlog status at the bottleneck buffer to adaptively adjust the probing rate, which allows it to better seize the available bandwidth. Compared to the traditional and other major modified startup methods, our simulation results show that SSthreshless Start achieves significant performance improvement during the startup phase. Moreover, SSthreshless Start scales well across a wide range of buffer sizes, propagation delays and network bandwidths. It also shows excellent friendliness when operating simultaneously with the currently popular TCP NewReno connections. Comment: 25 pages, 10 figures, 7 tables.
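
    The general idea can be sketched as follows; the paper's actual algorithm is not reproduced here, and the growth rule, target backlog and parameter names below are assumptions chosen only to illustrate replacing the preset ssthresh comparison with bottleneck-backlog feedback:

```python
# Backlog-driven startup sketch: grow the window from feedback about the
# bottleneck queue instead of comparing cwnd against a preset ssthresh.
# All parameters are hypothetical; units are packets.

def startup(bdp, buffer_pkts, target_backlog=5):
    """Grow cwnd until the estimated bottleneck backlog reaches a target."""
    cwnd = 2.0
    while True:
        backlog = max(0.0, cwnd - bdp)     # packets queued beyond the BDP
        if backlog >= buffer_pkts:
            return None                    # buffer overflow: growth too aggressive
        if backlog >= target_backlog:
            return cwnd                    # backlog building: leave startup
        cwnd *= 2                          # still headroom: probe exponentially

# A long fat network: BDP of 100 packets, 50-packet bottleneck buffer.
print(startup(bdp=100, buffer_pkts=50))    # 128.0: exits just past the BDP
```

    Note how the exit point adapts to the path (it lands just above the bandwidth-delay product) rather than depending on a blind initial threshold; with a very shallow buffer the same doubling can still overshoot, which is where a gentler probing rule would be needed.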

    Network-Supported TCP Rate Control for the Coexistence of Multiple and Different Types of Flows on IP over PLC

    With the approval of the IEEE 1901 standard for power line communications (PLC) and recent Internet-enabled home appliances, such as IPTV sets with access to content-on-demand services through the Internet (e.g. AcTVila in Japan), there is no doubt that PLC has taken a great step forward to emerge as the preeminent in-home-network technology. However, the schemes developed so far have not considered a PLC network connected to an unstable Internet environment (i.e. a more realistic situation). In this paper, we investigate the communication performance from the end-user's perspective in networks with large and variable round-trip times (RTTs) and with cross-traffic present. We then address the problem of unfair bandwidth allocation when multiple and different types of flows coexist, and propose a TCP rate control that takes differences in end-to-end delay into account to solve it. We validate our methodology through simulations, and show that it effectively deals with the throughput unfairness problem in a critical communication environment, where multiple flows with different RTTs share the PLC link and cross-traffic exists on the Internet path.
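
    The unfairness being corrected follows from the well-known property that AIMD throughput scales roughly as 1/RTT, so a short-RTT in-home flow starves a long-RTT Internet flow sharing the same bottleneck. A hypothetical numeric illustration (this is not the paper's controller; the equal-share target is an assumption used only to show the size of the gap a delay-aware scheme must close):

```python
# Compare the shares flows get when throughput ~ 1/RTT against a
# delay-aware equal-share target. Rates in Mbps, RTTs in milliseconds.

def uncorrected_shares(rtts_ms, link_mbps):
    """Shares proportional to 1/RTT, as plain AIMD tends to produce."""
    weights = [1.0 / r for r in rtts_ms]
    total = sum(weights)
    return [link_mbps * w / total for w in weights]

def rtt_aware_shares(rtts_ms, link_mbps):
    """Hypothetical delay-aware target: equal shares regardless of RTT."""
    return [link_mbps / len(rtts_ms)] * len(rtts_ms)

# A short-RTT in-home PLC flow vs. a long-RTT Internet flow on 100 Mbps.
print(uncorrected_shares([10, 100], 100))   # ≈ [90.9, 9.1] Mbps
print(rtt_aware_shares([10, 100], 100))     # [50.0, 50.0] Mbps
```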

    Bandwidth management and quality of service

    With the advent of bandwidth-hungry video and audio applications, demand for bandwidth is expected to exceed supply. Users will require more bandwidth and, as always, there are likely to be more users. As the Internet user base becomes more diverse, there is an increasing perception that Internet Service Providers (ISPs) should be able to differentiate between users, so that the specific needs of different types of users can be met. Differentiated services are seen as a possible solution to the bandwidth problem. Currently, however, the technology used on the Internet differentiates neither between users, nor between applications. The thesis focuses on current and anticipated bandwidth shortages on the Internet, and on the lack of a differentiated service. The aim is to identify methods of managing bandwidth and to investigate how these bandwidth management methods can be used to provide a differentiated service. The scope of the study is limited to networks using both Ethernet technology and the Internet Protocol (IP). The study is significant because it addresses current problems confronted by network managers. The key terms, Quality of Service (QoS) and bandwidth management, are defined. “QoS” is equated to a differentiating system. Bandwidth management is defined as any method of controlling and allocating bandwidth. Installing more capacity is taken to be a method of bandwidth management. The review of literature concentrates on Ethernet/IP networks. It begins with a detailed examination of definitions and interpretations of the term Quality of Service and shows how the meaning has changed over the last decade. The review then examines congestion control, including a survey of queuing methods. Priority queuing implemented in hardware is examined in detail, followed by a review of the ReSource reserVation Protocol (RSVP) and a new version of IP (IPv6). Finally, the new standards IEEE 802.1p and IEEE 802.1Q are outlined, and parts of ISO/IEC 15802-3 are analysed. The Integrated Services Architecture (ISA), Differentiated Services (DiffServ) and MultiProtocol Label Switching (MPLS) are seen as providing a theoretical framework for QoS development. The Open Systems Interconnection Reference Model (OSI model) is chosen as the preferred framework for investigating bandwidth management because it is more comprehensive than the alternative US Department of Defense Model (DoD model). A case study of the Edith Cowan University (ECU) data network illustrates current practice in network management. It provides concrete examples of some of the problems, methods and solutions identified in the literature review. Bandwidth management methods are identified and categorised based on the OSI layers in which they operate. Suggestions are given as to how some of these bandwidth management methods are, or can be, used within current QoS architectures. The experimental work consists of two series of tests on small, experimental LANs. The tests are aimed at evaluating the effectiveness of IEEE 802.1p prioritisation. The results suggest that in small Local Area Networks (LANs), prioritisation provides no benefit when Ethernet switches are lightly loaded.
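
    The prioritisation mechanism evaluated in those experiments can be sketched abstractly: IEEE 802.1p/802.1Q tags each frame with a 3-bit priority (0-7), and a strict-priority egress port always serves the highest non-empty class first. A minimal sketch (frame contents and class choices are placeholders, not the thesis test traffic):

```python
# Strict-priority egress port with one FIFO per IEEE 802.1p class (0-7).
from collections import deque

class StrictPriorityPort:
    def __init__(self):
        self.queues = [deque() for _ in range(8)]   # one FIFO per class

    def enqueue(self, frame, priority):
        self.queues[priority].append(frame)

    def dequeue(self):
        for q in reversed(self.queues):             # class 7 served first
            if q:
                return q.popleft()
        return None                                 # port idle

port = StrictPriorityPort()
port.enqueue("bulk-1", priority=0)
port.enqueue("voice-1", priority=6)
port.enqueue("bulk-2", priority=0)
print(port.dequeue())   # voice-1: jumps ahead of earlier best-effort frames
print(port.dequeue())   # bulk-1
```

    Note that priority only matters when lower-class frames are actually waiting; on a lightly loaded switch the queues are usually empty at arrival time and every frame is served immediately regardless of class, which is consistent with the thesis finding that prioritisation showed no benefit there.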