
    Optimization and Performance Analysis of High Speed Mobile Access Networks

    The end-to-end performance evaluation of high speed broadband mobile access networks is the main focus of this work. Novel transport network adaptive flow control and enhanced congestion control algorithms are proposed, implemented, tested and validated using a comprehensive High Speed Packet Access (HSPA) system simulator. The simulation analysis confirms that the aforementioned algorithms are able to provide reliable and guaranteed services for both network operators and end users cost-effectively. Further, two novel analytical models, one for congestion control and the other for the combined flow control and congestion control, both based on Markov chains, are designed and developed to perform the aforementioned analysis more efficiently than time-consuming detailed system simulations. In addition, the effects of the Long Term Evolution (LTE) transport network (S1 and X2 interfaces) on end user performance are investigated and analysed by introducing a novel comprehensive MAC scheduling scheme and a novel transport service differentiation model.
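    As a hedged illustration of the kind of Markov-chain analysis mentioned above (the dissertation's actual congestion and flow control models are not reproduced here), the sketch below builds a simple birth-death chain over transport-network buffer occupancy, with assumed per-slot arrival and service probabilities, and solves for its steady-state distribution.

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi @ P for a row-stochastic transition matrix P."""
    n = P.shape[0]
    # Replace one balance equation with the normalisation constraint sum(pi) = 1.
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def buffer_chain(size, p_arrival, p_service):
    """Birth-death chain over buffer occupancy 0..size (illustrative parameters)."""
    P = np.zeros((size + 1, size + 1))
    for s in range(size + 1):
        up = p_arrival * (1 - p_service) if s < size else 0.0
        down = p_service * (1 - p_arrival) if s > 0 else 0.0
        P[s, min(s + 1, size)] += up
        P[s, max(s - 1, 0)] += down
        P[s, s] += 1.0 - up - down
    return P

pi = steady_state(buffer_chain(size=20, p_arrival=0.6, p_service=0.7))
print("mean occupancy:", sum(i * pi[i] for i in range(len(pi))))
print("overflow probability (full buffer):", pi[-1])
```

    Steady-state quantities such as mean occupancy and overflow probability can then be read off directly, which is the efficiency argument for analytical models over long packet-level simulations.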

    TCP performance enhancement in wireless networks via adaptive congestion control and active queue management

    The transmission control protocol (TCP) exhibits poor performance when used in error-prone wireless networks. Remedy to this problem has been an active research area. However, a widely accepted and adopted solution is yet to emerge. Difficulties of an acceptable solution lie in the areas of compatibility, scalability, computational complexity and the involvement of intermediate routers and switches. This dissertation reviews the current state-of-the-art solutions to TCP performance enhancement, and pursues an end-to-end solution framework to the problem. The most noticeable cause of the performance degradation of TCP in wireless networks is the higher packet loss rate as compared to that in traditional wired networks. Packet loss type differentiation has been the focus of many proposed TCP performance enhancement schemes. Studies conducted by this dissertation research suggest that besides the standard TCP's inability to discriminate congestion packet losses from losses related to wireless link errors, the standard TCP's additive increase and multiplicative decrease (AIMD) congestion control algorithm itself needs to be redesigned to achieve better performance in wireless, and particularly, high-speed wireless networks. This dissertation proposes a simple, efficient, and effective end-to-end solution framework that enhances TCP's performance through techniques of adaptive congestion control and active queue management. By end-to-end, we mean a solution that does not require routers to be wireless-aware or wireless-specific. TCP-Jersey has been introduced as an implementation of the proposed solution framework, and its performance metrics have been evaluated through extensive simulations. TCP-Jersey consists of an adaptive congestion control algorithm at the source by means of the source's achievable rate estimation (ARE), an adaptive filter of packet inter-arrival times; a congestion indication algorithm at the links (i.e., AQM) by means of packet marking; and an effective loss differentiation algorithm at the source by careful examination of the congestion marks carried by the duplicate acknowledgment packets (DUPACK). Several improvements to the proposed TCP-Jersey have been investigated, including a more robust ARE algorithm, a less computationally intensive threshold marking algorithm as the AQM link algorithm, and a more stable congestion indication function based on virtual capacity at the link; performance results have been presented and analyzed via extensive simulations of various network configurations. Stability analysis of the proposed ARE-based additive increase and adaptive decrease (AIAD) congestion control algorithm has been conducted and the analytical results have been verified by simulations. Performance of TCP-Jersey has been compared to that of a perfect, but not practical, TCP scheme, and encouraging results have been observed. Finally, the framework of TCP-Jersey's source algorithm has been extended and generalized for rate-based congestion control, as opposed to TCP's window-based congestion control, to provide a design platform for applications, such as real-time multimedia, that do not use TCP as the transport protocol yet do need to control network congestion as well as combat packet losses in wireless networks.
In conclusion, the framework architecture presented in this dissertation, which combines adaptive congestion control and active queue management to solve the TCP performance degradation problem in wireless networks, has been shown to be a promising answer to the problem due to its simple design philosophy, complete compatibility with current TCP/IP and AQM practice, end-to-end architecture for scalability, and high effectiveness with low computational overhead. The proposed implementation of the solution framework, namely TCP-Jersey, is a modification of the standard TCP protocol rather than a completely new design of the transport protocol. It is an end-to-end approach to the performance degradation problem since it does not require split-mode connection establishment and maintenance using special wireless-aware software agents at the routers. The proposed solution also differs from other solutions that rely on link layer error notifications for packet loss differentiation. The proposed solution is also unique among other proposed end-to-end solutions in that it differentiates packet losses attributed to wireless link errors from congestion-induced packet losses directly from the explicit congestion indication marks in the DUPACK packets, rather than inferring the loss type based on packet delay or delay jitter as in many other proposed solutions, or by undergoing a computationally expensive off-line training of a classification model (e.g., HMM), or a Bayesian estimation/detection process that requires estimation of a priori loss probability distributions of different loss types. The proposed solution is also scalable and fully compatible with current practice in Internet congestion control and queue management, but with an additional function of loss type differentiation that effectively enhances TCP's performance over error-prone wireless networks. Limitations of the proposed solution architecture and areas for future research are also addressed.
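    The sketch below illustrates, under stated assumptions, the two ideas highlighted above: an achievable-rate estimate maintained as a time-sliding-window filter of ACK inter-arrival times, and loss-type differentiation driven by congestion marks carried in duplicate ACKs. The filter form and the constants are assumptions for illustration; this is not the dissertation's exact TCP-Jersey algorithm.

```python
class JerseyLikeSender:
    """Illustrative sketch of ARE-style rate estimation and mark-based loss
    differentiation, loosely following the ideas described above (not the
    dissertation's exact algorithm)."""

    def __init__(self, mss=1460, rtt=0.1):
        self.mss = mss
        self.rtt = rtt           # smoothed RTT estimate, seconds (assumed given)
        self.rate = 0.0          # achievable rate estimate, bytes/s
        self.last_ack_time = None
        self.cwnd = 10 * mss     # congestion window, bytes

    def on_ack(self, now, acked_bytes, congestion_marked, dupack):
        # Assumed ARE form: a time-sliding-window filter of ACK inter-arrival times.
        if self.last_ack_time is not None:
            dt = now - self.last_ack_time
            self.rate = (self.rtt * self.rate + acked_bytes) / (self.rtt + dt)
        self.last_ack_time = now

        if dupack:
            if congestion_marked:
                # Loss attributed to congestion: adapt cwnd to the estimated rate.
                self.cwnd = max(self.mss, self.rate * self.rtt)
            else:
                # Loss attributed to wireless error: retransmit without shrinking cwnd.
                pass
        else:
            # Additive increase on new ACKs.
            self.cwnd += self.mss * self.mss / self.cwnd
```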

    Congestion control schemes for single and parallel TCP flows in high bandwidth-delay product networks

    In this work, we focus on congestion control mechanisms in Transmission Control Protocol (TCP) for emerging very-high bandwidth-delay product networks and suggest several congestion control schemes for parallel and single-flow TCP. Recently, several high-speed TCP proposals have been suggested to overcome the limited throughput achievable by single-flow TCP by modifying its congestion control mechanisms. In the meantime, users overcome the throughput limitations in high bandwidth-delay product networks by using multiple parallel TCP flows, without modifying TCP itself. However, the evident lack of fairness between the high-speed TCP proposals (or parallel TCP) and existing standard TCP has increasingly become an issue. In many scenarios where flows require high throughput, such as grid computing or content distribution networks, often multiple connections go to the same or nearby destinations and tend to share long portions of paths (and bottlenecks). In such cases benefits can be gained by sharing congestion information. To take advantage of this additional information, we first propose a collaborative congestion control scheme for parallel TCP flows. Although the use of parallel TCP flows is an easy and effective way for reliable high-speed data transfer, parallel TCP flows are inherently unfair with respect to single TCP flows. In this thesis we propose, implement, and evaluate a natural extension for aggregated aggressiveness control in parallel TCP flows. To improve the effectiveness of single TCP flows over high bandwidth-delay product networks without causing fairness problems, we suggest a new TCP congestion control scheme that effectively and fairly utilizes high bandwidth-delay product networks by adaptively controlling the flow's aggressiveness according to network situations using a competition detection mechanism. We argue that competition detection is more appropriate than congestion detection or bandwidth estimation. We further extend the adaptive aggressiveness control mechanism and the competition detection mechanism from single flows to parallel flows. In this way we achieve adaptive aggregated aggressiveness control. Our evaluations show that the resulting implementation is effective and fair. As a result, we show that single or parallel TCP flows in end-hosts can achieve high performance over emerging high bandwidth-delay product networks without requiring special support from networks or modifications to receivers.
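    A minimal sketch of the aggregated-aggressiveness idea, assuming a plain AIMD window update: scaling each of the N parallel flows' additive increase by 1/N keeps the aggregate roughly as aggressive as a chosen number of standard TCP flows. The function below is illustrative only and not the thesis's exact scheme.

```python
def aimd_update(cwnd_list, acked_flow, loss_flow=None, mss=1.0, target_aggressiveness=1.0):
    """Illustrative aggregated-aggressiveness update for N parallel flows.

    Each flow's additive increase is scaled by target_aggressiveness / N so the
    aggregate behaves roughly like `target_aggressiveness` standard TCP flows.
    This is a sketch of the general idea, not the thesis's exact mechanism.
    """
    n = len(cwnd_list)
    if loss_flow is not None:
        # Multiplicative decrease applied to the flow that observed the loss.
        cwnd_list[loss_flow] = max(mss, cwnd_list[loss_flow] / 2)
    else:
        # Scaled additive increase: the aggregate grows by about
        # target_aggressiveness * mss per RTT instead of n * mss per RTT.
        cwnd_list[acked_flow] += (target_aggressiveness / n) * mss * mss / cwnd_list[acked_flow]
    return cwnd_list
```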

    Survey on end to end congestion control techniques in different network scenarios

    Most of the traffic on the Internet depends upon the Transmission Control Protocol (TCP), so the performance of TCP directly affects the performance of the Internet. Many TCP variants have been developed and modified according to the environment and communication needs. Most current TCP variants have a set of algorithms which control congestion in critical situations and maintain the throughput and efficiency of the network. Nowadays TCP is facing the fast growth of the Internet and the demand for faster data communication techniques on high-speed links. In the last 15 years many computer systems and cellular networks have become linked together through the TCP/IP protocol stack. TCP variants with different congestion control techniques are used in different operating systems, but only a small number of these techniques are able to minimize congestion in the network. This paper presents a survey on end-to-end congestion control techniques used in different TCP versions. The main purpose of this study is to review the characteristics and behavior of TCP variants and the different techniques they use to control congestion in different network scenarios.

    Dynamic Time Windows and Generalized Virtual Clocks-Combined Closed-Loop/Open-Loop Mechanisms for Congestion Control of Data Traffic in High Speed Wide Area Networks

    This paper presents a set of mechanisms for congestion control of data traffic in high speed wide area networks (HSWANs), along with preliminary performance results. The model of the network assumes reservation of resources based on average requirements. The mechanisms address (a) the different network time constants (short-term and medium-term), (b) admission control that allows controlled variance of traffic as a function of medium-term congestion, and (c) prioritized scheduling based on a new fairness criterion, which is perceived as the appropriate fairness measure for HSWANs. Preliminary performance studies show that the queue length statistics at switching nodes (mean, variance and max) are approximately proportional to the end-point 'time window' size. Further, (i) when network utilization approaches unity, the time window mechanism can protect the network from buffer overruns and excessive queueing delays, and (ii) when the network utilization level is smaller, the time window may be increased to allow a controlled amount of variance that attempts to simultaneously meet the performance goals of the end-user and those of the network. The prioritized scheduling algorithms proposed and studied in this paper are a generalization of the Virtual Clock algorithm [Zhang 1989]. The study investigates (i) necessary and sufficient conditions for accomplishing the desired fairness, (ii) simulation and limited analytical results for expected waiting times, (iii) the ability to protect against misbehaving users, and (iv) the relationship between end-point admission control ('Time Window') and internal scheduling ('Pulse' and Virtual Clock) at the switch.
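    Since the scheduling mechanisms generalize the Virtual Clock algorithm [Zhang 1989], a minimal Virtual Clock sketch may help fix ideas: each flow's virtual clock advances by packet length divided by its reserved rate, and packets are served in stamp order. The 'Pulse' generalization and the Time-Window admission control are not modelled here.

```python
import heapq
import itertools
import time

class VirtualClockScheduler:
    """Minimal sketch of Virtual Clock scheduling [Zhang 1989]: packets are
    stamped with a per-flow virtual clock advanced by length/rate and are
    served in increasing stamp order."""

    def __init__(self):
        self.vclock = {}                 # flow id -> current virtual clock value
        self.queue = []                  # min-heap of (stamp, seq, flow, length)
        self.seq = itertools.count()     # tie-breaker for equal stamps

    def enqueue(self, flow, packet_len, reserved_rate, now=None):
        now = time.time() if now is None else now
        # Advance the flow's virtual clock; never let it lag behind real time.
        vc = max(self.vclock.get(flow, now), now) + packet_len / reserved_rate
        self.vclock[flow] = vc
        heapq.heappush(self.queue, (vc, next(self.seq), flow, packet_len))

    def dequeue(self):
        if not self.queue:
            return None
        stamp, _, flow, packet_len = heapq.heappop(self.queue)
        return flow, packet_len, stamp
```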

    Internet Service via Broadband Satellite Networks

    The demand for Internet bandwidth has grown rapidly in the past few years. A new generation of broadband satellite constellations promises to provide high speed Internet connectivity to areas not served by optical fiber, cable or other high speed terrestrial connections. However, using satellite links to supply high bandwidth has been difficult due to problems with inefficient performance of the Internet's TCP/IP protocol suite over satellite. We describe an architecture for improving the performance of TCP/IP protocols over heterogeneous network environments, especially networks containing satellite links. The end-to-end connection is split into segments, and the protocol on the satellite segment is optimized for the satellite link characteristics. TCP congestion control mechanisms are maintained on each segment, with some coupling between the segments to produce the effect of end-to-end TCP flow control. We have implemented this design and present results showing that using such gateways can improve throughput for individual connections by a large factor over paths containing a satellite link. The research and scientific content in this material has been published in the Proceedings of the SPIE, vol. 3528, February 1999, 169-180.
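    The splitting idea described above can be sketched as a toy relay that terminates the terrestrial TCP connection at the gateway and opens a separate connection toward the satellite segment, so that each segment runs its own congestion control. The buffer sizes and threading below are illustrative assumptions; the paper's optimized satellite-segment protocol and the coupling between segments are not reproduced.

```python
import socket
import threading

def pipe(src, dst):
    """Relay bytes one way until the source closes."""
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def split_gateway(listen_port, satellite_host, satellite_port):
    """Toy split-connection relay: the end-to-end path is broken into a
    terrestrial segment and a satellite segment at this gateway."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        sat = socket.create_connection((satellite_host, satellite_port))
        # Larger send buffer helps fill the long satellite bandwidth-delay product.
        sat.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)
        threading.Thread(target=pipe, args=(client, sat), daemon=True).start()
        threading.Thread(target=pipe, args=(sat, client), daemon=True).start()
```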

    Congestion control algorithms of TCP in emerging networks

    In this dissertation we examine some of the challenges faced by the congestion control algorithms of TCP in emerging networks. We focus on three main issues. First, we propose TCP with delayed congestion response (TCP-DCR), for improving performance in the presence of non-congestion events. TCP-DCR delays the congestion response for a short interval of time, allowing local recovery mechanisms to handle the event, if possible. If at the end of the delay the event persists, it is treated as congestion loss. We evaluate TCP-DCR through analysis and simulations. Results show significant performance improvements in the presence of non-congestion events with marginal impact in their absence. TCP-DCR maintains fairness with standard TCP variants that respond immediately. Second, we propose Layered TCP (LTCP), which modifies a TCP flow to behave as a collection of virtual flows (or layers), to improve efficiency in high-speed networks. The number of layers is determined by dynamic network conditions. Convergence properties and RTT-unfairness are maintained similar to that of TCP. We provide the intuition and the design for the LTCP protocol and evaluation results based on both simulations and a Linux implementation. Results show that LTCP is about an order of magnitude faster than TCP in utilizing high bandwidth links while maintaining promising convergence properties. Third, we study the feasibility of employing congestion avoidance algorithms in TCP. We show that end-host based congestion prediction is more accurate than previously characterized. However, uncertainties in congestion prediction may be unavoidable. To address these uncertainties, we propose an end-host based mechanism called Probabilistic Early Response TCP (PERT). PERT emulates the probabilistic response function of the router-based scheme RED/ECN in the congestion response function of the end-host. We show through extensive simulations that, similar to router-based RED/ECN, PERT provides fair bandwidth sharing with low queuing delays and negligible packet losses, without requiring router support. It exhibits better characteristics than TCP-Vegas, the illustrative end-host scheme. PERT can also be used for emulating other router schemes. We illustrate this through preliminary results for emulating the router-based mechanism REM/ECN. Finally, we show the interactions and benefits of combining the different proposed mechanisms.
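    As a rough illustration of the PERT idea of emulating a RED/ECN-style probabilistic response at the end host, the sketch below derives a response probability from the estimated queueing delay (smoothed RTT minus minimum RTT). The thresholds, the maximum probability, and the halving response are assumed example values, not those used in the dissertation.

```python
import random

def response_probability(srtt, rtt_min, t_min=0.005, t_max=0.025, p_max=0.05):
    """RED-like response probability driven by estimated queueing delay
    (srtt - rtt_min); all thresholds here are illustrative assumptions."""
    qdelay = max(0.0, srtt - rtt_min)
    if qdelay < t_min:
        return 0.0
    if qdelay >= t_max:
        return 1.0
    return p_max * (qdelay - t_min) / (t_max - t_min)

def maybe_early_respond(cwnd, srtt, rtt_min, mss=1.0):
    """Probabilistically reduce cwnd before an actual loss, emulating router
    ECN marking at the end host."""
    if random.random() < response_probability(srtt, rtt_min):
        return max(mss, cwnd / 2)
    return cwnd
```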

    Encrypted Network Traffic Classification and Resource Allocation with Deep Learning in Software Defined Network

    The world has become digitized in almost every area in just a few years, making high-speed Internet service a significant need for the future. The future Internet is expected to face exponential growth in traffic and highly complicated infrastructure, threatening to make conventional network traffic classification (NTC) approaches unreliable and even counterproductive. In recent years, AI has stimulated state-of-the-art breakthroughs with the ability to tackle extensive and multifarious challenges, and the network community has begun to shift the NTC paradigm from legacy rule-based approaches towards novel AI-based ones, so interdisciplinary design and execution have become more essential. The smart home network considered in the proposed work supports various applications and smart devices, including e-health devices, regular computing devices, and home automation devices. Many devices in a smart home are accessible through the Internet via a Home GateWay for Congestion (HGC). This paper presents a Software-Defined Network Home GateWay for Congestion (SDNHGC) architecture for improved management of remote smart home networks and protection of the network's SDN controller. It enables effective regulation of network capacity, based on real-time traffic analysis and core network resource allocation. The core SDN controller alone cannot control the network in dispersed smart homes. Our SDNHGC extends control across the connectivity network and the smart home network, enabling improved end-to-end monitoring of networks. The proposed SDNHGC directly gains centralized device identification by classifying traffic passing through the smart home network. Several current traffic classification approaches, such as deep packet inspection, cannot obtain this real-time device knowledge for encrypted data; the proposed approach addresses this issue.
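    A minimal sketch of the kind of deep-learning traffic classifier such an architecture relies on, assuming flow-level statistics (packet sizes, inter-arrival times) as input features since these remain observable for encrypted traffic. The network shape, feature count, and class count are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class FlowClassifier(nn.Module):
    """Small MLP over flow-level features; layer sizes are illustrative."""

    def __init__(self, n_features=32, n_classes=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Example training step on synthetic data (placeholder for real flow records).
model = FlowClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
features = torch.randn(256, 32)            # batch of flow feature vectors
labels = torch.randint(0, 6, (256,))       # device/application classes
opt.zero_grad()
loss = loss_fn(model(features), labels)
loss.backward()
opt.step()
```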

    Comments on Proposed Transport Protocols

    Over the last few years, a number of research groups have made considerable progress on the design of high speed networks, on the order of a few hundred Mbps to a few Gbps. The emphasis of this work has been on the design of packet switches and on the design of network access protocols. However, this work has not yet addressed the internetworking and transport level issues in the high speed internet. As part of our effort on the design of the VHSI model, we considered the appropriateness of recently proposed transport protocols, NETBLT and VMTP, as candidates for the transport protocol for our VHSI model. The summary of the results of the study is that NETBLT and VMTP have contributed a number of interesting ideas to the design of transport protocols, and they do improve upon TCP within the current Internet model for the applications they were originally designed for. However, we believe that these protocols are not appropriate solutions for the VHSI model, because the underlying assumptions and trade-offs that these protocols are based on are very different in the VHSI model. For example, the VHSI model assumes a quasi-reliable connection-oriented internet protocol (as opposed to the current unreliable datagram IP), which can make performance guarantees and can ensure that the internet is congestion free (almost all of the time). Also, the network speeds in the VHSI model are a few orders of magnitude higher than what NETBLT and VMTP assume. We argue that the transport protocols in the VHSI model should avoid end-to-end flow control as much as possible, and make the end-to-end error control application specific and independent of the end-to-end latency. In general, the transport protocols should be simpler, designed to be mostly implemented in VLSI, well integrated with the host architecture and operating system, and targeted for a specific class of applications.

    Predicting TCP congestion through active and passive measurements

    The Transmission Control Protocol (TCP) has proved to be a reliable transport protocol that has withstood the test of time. It is part of the TCP/IP protocol suite deployed on the Internet, and it currently supports a variety of underlying networking technologies such as wireless, satellite and high-speed networks. The congestion control mechanism used by current implementations of TCP (known as TCP-Reno/NewReno) is based on the Additive Increase Multiplicative Decrease (AIMD) algorithm that was first introduced by Van Jacobson in 1988 [1] after the Internet experienced heavy congestion which subsequently led to a phenomenon called congestion collapse. The algorithm assumes no prior knowledge of end-to-end path conditions and blindly follows the same routine at the beginning of every connection, namely a slow start phase, a congestion avoidance phase and, in the event of a lost segment, a reduction of the transmission rate accordingly. The network will experience different conditions depending on the amount of traffic exerted on it. At times it will endure heavy load while at other times there will be a small amount of traffic. Even when the end-to-end path characteristics are known and the amount of traffic generated is predictable, the AIMD algorithm does not take advantage of that information. In this thesis we investigate ways of predicting the available bandwidth between two hosts frequently in contact with each other through the deployment of bandwidth estimation tools. We would like to explore the possibility that AIMD can take advantage of bandwidth measurements collected between these hosts.
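    One way the direction explored in the thesis could be exploited, sketched under assumptions below, is to seed TCP's slow-start threshold from prior bandwidth estimates between two frequently communicating hosts, so that slow start exits near the measured bandwidth-delay product instead of probing blindly. The helper and its defaults are hypothetical illustrations, not a mechanism prescribed by the thesis.

```python
def seeded_slow_start_threshold(history_bps, rtt_s, mss=1460):
    """Seed ssthresh (in bytes) from past bandwidth estimates between two hosts.

    history_bps: previous bandwidth estimates in bits per second (e.g. from
    active or passive measurement tools); rtt_s: current RTT estimate in seconds.
    """
    if not history_bps:
        return 64 * 1024                                    # conventional default, bytes
    est_bps = sorted(history_bps)[len(history_bps) // 2]    # median of past estimates
    bdp_bytes = (est_bps / 8.0) * rtt_s                     # bandwidth-delay product
    return max(2 * mss, int(bdp_bytes))

# Example: past estimates around 80-100 Mbit/s and a 40 ms RTT.
print(seeded_slow_start_threshold([80e6, 95e6, 100e6], rtt_s=0.04))
```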