
    Analysis of Multiple Flows using Different High Speed TCP protocols on a General Network

    We develop analytical tools for the performance analysis of multiple TCP flows (which may use TCP CUBIC, TCP Compound, or TCP New Reno) passing through a multi-hop network. We first compute the average window size for a single TCP connection (using CUBIC or Compound TCP) under random losses. We then consider two techniques to compute the steady-state throughput of the different TCP flows in a multi-hop network. In the first technique, we approximate the queues as M/G/1 queues. In the second technique, we use an optimization program whose solution approximates the steady-state throughput of the different flows. Our results match well with ns2 simulations. (Comment: Submitted to Performance Evaluation.)
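
    As a rough illustration of the building blocks this analysis relies on, the sketch below evaluates the standard CUBIC window-growth curve and the Pollaczek-Khinchine mean-waiting-time formula for an M/G/1 queue. It is a generic Python sketch, not the paper's actual model; the CUBIC constants (C = 0.4, beta = 0.7) and all numerical values are illustrative assumptions.

```python
# Minimal sketch (not the paper's actual model) of two building blocks the
# abstract mentions: the CUBIC window-growth curve and the M/G/1 mean
# waiting time (Pollaczek-Khinchine formula). The CUBIC constants below
# (C = 0.4, beta = 0.7) are common defaults and are assumptions here.

def cubic_window(t, w_max, c=0.4, beta=0.7):
    """CUBIC congestion window (packets) t seconds after the last loss event."""
    k = ((w_max * (1.0 - beta)) / c) ** (1.0 / 3.0)  # time to climb back to w_max
    return c * (t - k) ** 3 + w_max

def mg1_mean_wait(lam, mean_service, second_moment_service):
    """Mean waiting time in an M/G/1 queue (Pollaczek-Khinchine)."""
    rho = lam * mean_service
    assert rho < 1.0, "queue must be stable (rho < 1)"
    return lam * second_moment_service / (2.0 * (1.0 - rho))

if __name__ == "__main__":
    print(cubic_window(t=2.0, w_max=100.0))              # window 2 s after a loss
    print(mg1_mean_wait(lam=800.0, mean_service=1e-3,
                        second_moment_service=2e-6))      # exponential-like service
```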

    Asymptotic Approximations for TCP Compound

    In this paper, we derive an approximation for the throughput of TCP Compound connections under random losses. Throughput expressions for TCP Compound under a deterministic loss model exist in the literature. These are obtained by assuming that window sizes are continuous, i.e., a fluid behaviour is assumed. We validate this model theoretically. We show that under the deterministic loss model, the window evolution of TCP Compound is periodic and is independent of the initial window size. We then consider the case when packets are lost randomly and independently of each other. We discuss Markov chain models to analyze the performance of TCP in this scenario. We use insights from the deterministic loss model to obtain an appropriate scaling for the window size process and show that these scaled processes, indexed by the packet error rate p, converge to a limit Markov chain process as p goes to 0. We show the existence and uniqueness of the stationary distribution for this limit process. Using the stationary distribution of the limit process, we obtain approximations for the throughput of TCP Compound under random losses when packet error rates are small. We compare our results with ns2 simulations, which show a good match. (Comment: Longer version for NCC 201)
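
    The deterministic loss model in this abstract drops one packet after every 1/p packets and lets the window follow a Compound TCP increase rule in between. The sketch below simulates that cycle per RTT to estimate the average window; the parameters alpha = 1/8, beta = 1/2, k = 3/4 are the commonly cited Compound TCP defaults, and the update rule is a simplification rather than the paper's fluid model.

```python
# Simplified per-RTT simulation of Compound TCP under the deterministic loss
# model: one packet is dropped after every 1/p packets sent. The parameters
# alpha = 1/8, beta = 1/2, k = 3/4 are the commonly cited Compound TCP
# defaults; the update rule here is a sketch, not the paper's fluid model.

def compound_avg_window(p, w0=10.0, alpha=0.125, beta=0.5, k=0.75, rounds=2000):
    """Average window (packets per RTT) when every 1/p-th packet is lost."""
    w, sent, windows = w0, 0.0, []
    for _ in range(rounds):
        windows.append(w)
        sent += w
        if sent >= 1.0 / p:              # deterministic loss after 1/p packets
            w = max(1.0, w * (1.0 - beta))
            sent = 0.0
        else:
            w = w + alpha * w ** k       # per-RTT increase in congestion avoidance
    return sum(windows) / len(windows)

if __name__ == "__main__":
    # The window evolution becomes periodic, so the long-run average is
    # (nearly) independent of the initial window, as the abstract states.
    print(compound_avg_window(p=1e-4, w0=10.0))
    print(compound_avg_window(p=1e-4, w0=400.0))
```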

    Performance and Analysis of Transfer Control Protocol Over Voice Over Wireless Local Area Network

    A thesis presented to the faculty of the College of Science and Technology at Morehead State University in partial fulfillment of the requirements for the Degree Master of Science by Rajendra Patil in August of 2008

    Modelling and Analysis of TCP Performance in Wireless Multihop Networks

    Researchers have used extensive simulation and experimental studies to understand TCP performance in wireless multihop networks. In contrast, the objective of this paper is to analyze TCP performance in this environment theoretically. By examining the case of running one TCP session over a string topology, a system model for analyzing TCP performance in multihop wireless networks is proposed, which considers packet buffering, contention of nodes for access to the wireless channel, and spatial reuse of the wireless channel. Markov chain modelling is applied to analyze this system model. Analytical results show that when the number of hops that the TCP session crosses is fixed, the TCP throughput is independent of the TCP congestion window size. When the number of hops increases from one, the TCP throughput first decreases and then stabilizes once the number of hops becomes large. The analysis is validated by comparing the numerical and simulation results.
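
    The computational core of such a Markov chain analysis is solving a finite chain for its stationary distribution and then weighting per-state service rates to obtain throughput. The sketch below shows only that generic step on a toy 3-state chain; the actual state space in the paper encodes buffer occupancy and channel contention along the string topology, which is not reproduced here.

```python
# Generic computational step behind such a Markov chain analysis: solve a
# finite ergodic chain for its stationary distribution (pi = pi P, sum = 1).
# The 3-state transition matrix below is a toy example; the paper's chain
# encodes buffer occupancy and channel contention along the string topology.
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an ergodic transition matrix (rows sum to 1)."""
    n = P.shape[0]
    # Stack the balance equations pi (P - I) = 0 with the normalization row.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

if __name__ == "__main__":
    P = np.array([[0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3],
                  [0.1, 0.4, 0.5]])
    # Throughput would follow by weighting per-state service rates with pi.
    print(stationary_distribution(P))
```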

    Flow Control in Wireless Ad-hoc Networks

    We are interested in maximizing the Transmission Control Protocol (TCP) throughput between two nodes in a single-cell wireless ad-hoc network. For this, we follow a cross-layer approach by first developing an analytical model that captures the effect of the wireless channel and the MAC layer on TCP. The analytical model gives the time evolution of the TCP window size, which is described by a stochastic differential equation driven by a point process. The point process represents the arrival of acknowledgments sent by the TCP receiver to the sender as part of the self-regulating mechanism of the flow control protocol. Through this point process we achieve a cross-layer integration between the physical layer, the MAC layer and TCP. The intervals between successive points describe how packet drops at the wireless channel and delays due to retransmissions at the MAC layer affect the window size at the TCP layer. We fully describe the statistical behavior of the point process by computing first the p.d.f. of the inter-arrival intervals and then the compensator and the intensity of the process, parametrized by the quantities that describe the MAC layer and the wireless channel. To achieve analytical tractability we concentrate on pure (unslotted) Aloha for the MAC layer and the Gilbert-Elliott model for the channel. Although the Aloha protocol is simpler than the more popular IEEE 802.11 protocol, it still exhibits the same exponential backoff mechanism, which is a key factor for the performance of TCP in a wireless network. Moreover, another reason to study the Aloha protocol is that the protocol and its variants are gaining popularity, as they are used in many of today's wireless networks. Using the analytical model for the TCP window size evolution, we try to increase the TCP throughput between two nodes in a single-cell network. We want to achieve this by implicitly informing the TCP sender of the network conditions. We impose this additional constraint so that we can achieve compatibility between the standard TCP and the optimized version, which allows the operation of both protocol stacks in the same network. We pose the optimization problem as an optimal stopping problem. For each packet transmitted by the TCP sender to the network, an optimal time instance has to be computed in the absence of an acknowledgment for this packet; this time instance indicates when a timeout has to be declared for the packet. In the absence of an acknowledgment, if the sender waits too long before declaring a timeout, the network is underutilized; if the sender declares a timeout too soon, it unnecessarily reduces its transmission rate. Because of the analytical intractability of the optimal stopping problem, we follow a Markov chain approximation method to solve the problem numerically.
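
    For concreteness, the sketch below implements the two-state Gilbert-Elliott channel assumed in the analysis, producing the per-packet loss indicators that (together with MAC retransmission delays) shape the ACK point process driving the window equation. The transition and loss probabilities are arbitrary illustrative values, not parameters from the dissertation.

```python
# Sketch of the two-state Gilbert-Elliott channel assumed in the analysis.
# It produces per-packet loss indicators, which (with MAC retransmission
# delays) shape the ACK point process driving the TCP window equation.
# All probabilities below are arbitrary illustrative values.
import random

def gilbert_elliott_losses(n, p_g2b=0.05, p_b2g=0.3, loss_good=0.01, loss_bad=0.5):
    """Return a list of n per-packet loss indicators (True = packet lost)."""
    state = "good"
    losses = []
    for _ in range(n):
        loss_prob = loss_good if state == "good" else loss_bad
        losses.append(random.random() < loss_prob)
        # Markovian state transition after each packet.
        if state == "good" and random.random() < p_g2b:
            state = "bad"
        elif state == "bad" and random.random() < p_b2g:
            state = "good"
    return losses

if __name__ == "__main__":
    trace = gilbert_elliott_losses(10_000)
    print("empirical loss rate:", sum(trace) / len(trace))
```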

    TCP performance enhancement in wireless networks via adaptive congestion control and active queue management

    The transmission control protocol (TCP) exhibits poor performance when used in error-prone wireless networks. Remedy to this problem has been an active research area. However, a widely accepted and adopted solution is yet to emerge. Difficulties of an acceptable solution lie in the areas of compatibility, scalability, computational complexity and the involvement of intermediate routers and switches. This dissertation reviews the current state-of-the-art solutions to TCP performance enhancement, and pursues an end-to-end solution framework to the problem. The most noticeable cause of the performance degradation of TCP in wireless networks is the higher packet loss rate as compared to that in traditional wired networks. Packet loss type differentiation has been the focus of many proposed TCP performance enhancement schemes. Studies conducted by this dissertation research suggest that besides the standard TCP's inability to discriminate congestion packet losses from losses related to wireless link errors, the standard TCP's additive increase and multiplicative decrease (AIMD) congestion control algorithm itself needs to be redesigned to achieve better performance in wireless, and particularly high-speed wireless, networks. This dissertation proposes a simple, efficient, and effective end-to-end solution framework that enhances TCP's performance through techniques of adaptive congestion control and active queue management. By end-to-end, it means a solution with no requirement that routers be wireless-aware or wireless-specific. TCP-Jersey has been introduced as an implementation of the proposed solution framework, and its performance metrics have been evaluated through extensive simulations. TCP-Jersey consists of an adaptive congestion control algorithm at the source by means of the source's achievable rate estimation (ARE), an adaptive filter of packet inter-arrival times; a congestion indication algorithm at the links (i.e., AQM) by means of packet marking; and an effective loss differentiation algorithm at the source by careful examination of the congestion marks carried by the duplicate acknowledgment packets (DUPACK). Several improvements to the proposed TCP-Jersey have been investigated, including a more robust ARE algorithm, a less computationally intensive threshold marking algorithm as the AQM link algorithm, and a more stable congestion indication function based on virtual capacity at the link, and performance results have been presented and analyzed via extensive simulations of various network configurations. Stability analysis of the proposed ARE-based additive increase and adaptive decrease (AIAD) congestion control algorithm has been conducted and the analytical results have been verified by simulations. Performance of TCP-Jersey has been compared to that of a perfect, but not practical, TCP scheme, and encouraging results have been observed. Finally, the framework of TCP-Jersey's source algorithm has been extended and generalized for rate-based congestion control, as opposed to TCP's window-based congestion control, to provide a design platform for applications, such as real-time multimedia, that do not use TCP as the transport protocol yet need to control network congestion as well as combat packet losses in wireless networks.
    In conclusion, the framework architecture presented in this dissertation, which combines adaptive congestion control and active queue management to address the TCP performance degradation problem in wireless networks, has been shown to be a promising answer to the problem due to its simple design philosophy, complete compatibility with current TCP/IP and AQM practice, end-to-end architecture for scalability, and high effectiveness with low computational overhead. The proposed implementation of the solution framework, namely TCP-Jersey, is a modification of the standard TCP protocol rather than a completely new design of the transport protocol. It is an end-to-end approach to the performance degradation problem, since it does not require split-mode connection establishment and maintenance using special wireless-aware software agents at the routers. The proposed solution also differs from solutions that rely on link layer error notifications for packet loss differentiation. The proposed solution is unique among other proposed end-to-end solutions in that it differentiates packet losses attributed to wireless link errors from congestion-induced packet losses directly from the explicit congestion indication marks in the DUPACK packets, rather than inferring the loss type based on packet delay or delay jitter as in many other proposed solutions, or by undergoing a computationally expensive off-line training of a classification model (e.g., HMM), or a Bayesian estimation/detection process that requires estimates of a priori loss probability distributions of the different loss types. The proposed solution is scalable and fully compatible with current practice in Internet congestion control and queue management, while adding a loss type differentiation function that effectively enhances TCP's performance over error-prone wireless networks. Limitations of the proposed solution architecture and areas for future research are also addressed.
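
    The ARE component is described above as an adaptive filter over packet inter-arrival times that estimates the sender's achievable rate. The sketch below is a generic time-sliding-window rate estimator in that spirit; the exact TCP-Jersey filter form and coefficients are not reproduced here, and the ACK trace and RTT value are hypothetical.

```python
# Generic time-sliding-window rate estimator in the spirit of the ARE filter
# described above (an adaptive filter over packet inter-arrival times). The
# exact TCP-Jersey filter is not reproduced here; the ACK trace and the RTT
# value are hypothetical.

def update_rate_estimate(prev_rate, bytes_acked, interarrival, rtt):
    """Blend the previous estimate with the newest ACK sample, so samples
    older than roughly one RTT are progressively forgotten."""
    return (rtt * prev_rate + bytes_acked) / (rtt + interarrival)

if __name__ == "__main__":
    rate = 0.0
    # Hypothetical ACK trace: (bytes acknowledged, seconds since previous ACK).
    for acked, gap in [(1460, 0.010), (1460, 0.012), (2920, 0.020), (1460, 0.011)]:
        rate = update_rate_estimate(rate, acked, gap, rtt=0.1)
        print(f"estimated achievable rate: {rate:10.1f} B/s")
```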

    Non-convex resource allocation in communication networks

    The continuously growing number of applications competing for resources in current communication networks highlights the necessity for efficient resource allocation mechanisms to maximize user satisfaction. Optimization Theory can provide the necessary tools to develop such mechanisms that allocate network resources optimally and fairly among users. However, the resource allocation problem in current networks has characteristics that turn the respective optimization problem into a non-convex one. First, current networks very often consist of a number of wireless links, whose capacity is not constant but follows the Shannon capacity formula, which is a non-convex function. Second, the majority of the traffic in current networks is generated by multimedia applications, whose utilities are non-concave functions of rate. Third, current resource allocation methods follow the (bandwidth) proportional fairness policy, which, when applied to networks shared by both concave and non-concave utilities, leads to unfair resource allocations. These characteristics make current convex optimization frameworks inefficient in several respects. This work aims to develop a non-convex optimization framework that is able to allocate resources efficiently for non-convex resource allocation formulations. Towards this goal, a necessary and sufficient condition for the convergence of any primal-dual optimization algorithm to the optimal solution is proven. The wide applicability of this condition makes it a fundamental contribution to Optimization Theory in general. A number of optimization formulations are proposed, cases where this condition is not met are analysed, and efficient alternative heuristics are provided to handle these cases. Furthermore, a novel multi-sigmoidal utility shape is proposed to model user satisfaction for multi-tiered multimedia applications more accurately. The advantages of such non-convex utilities and their effect on the optimization process are thoroughly examined. Alternative allocation policies are also investigated with respect to their ability to allocate resources fairly and to deal with the non-convexity of the resource allocation problem. Specifically, the advantages of using Utility Proportional Fairness as an allocation policy are examined with respect to the development of distributed algorithms, their convergence to the optimal solution and their ability to adapt to the Quality of Service requirements of each application.
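
    One simple way to realize the multi-sigmoidal utilities proposed here is a normalized sum of logistic sigmoids, one per quality tier of a multi-tiered multimedia application. The sketch below evaluates such a utility; the tier rates and steepness values are illustrative assumptions rather than values from the thesis.

```python
# One simple way to realize a multi-sigmoidal utility: a normalized sum of
# logistic sigmoids, one per quality tier of a multi-tiered multimedia
# application. Tier rates and steepness values are illustrative assumptions,
# not values from the thesis.
import math

def multi_sigmoid_utility(rate_mbps, tiers=((1.0, 4.0), (3.0, 4.0), (6.0, 4.0))):
    """Utility in [0, 1]; each (inflection_rate_mbps, steepness) pair is one tier."""
    u = sum(1.0 / (1.0 + math.exp(-a * (rate_mbps - b))) for b, a in tiers)
    return u / len(tiers)

if __name__ == "__main__":
    for r in (0.5, 1.5, 3.5, 7.0):
        print(f"rate {r:4.1f} Mb/s -> utility {multi_sigmoid_utility(r):.2f}")
```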

    Control of transport dynamics in overlay networks

    Transport control is an important factor in the performance of Internet protocols, particularly in next-generation network applications involving computational steering, interactive visualization, instrument control, and transfer of large data sets. The widely deployed Transmission Control Protocol is inadequate for these tasks due to its performance drawbacks. The purpose of this dissertation is to conduct a rigorous analytical study of the design and performance of transport protocols, and to systematically develop a new class of protocols to overcome the limitations of current methods. Various sources of randomness exist in network performance measurements due to the stochastic nature of network traffic. We propose a new class of transport protocols that explicitly accounts for this randomness based on dynamic stochastic approximation methods. These protocols use the congestion window and idle time to dynamically control the source rate in order to achieve transport objectives. We conduct statistical analyses to determine the main effects of these two control parameters and their interaction effects. The application of stochastic approximation methods enables us to show the analytical stability of the transport protocols and to avoid pre-selecting the flow and congestion control parameters. These new protocols are successfully applied to transport control for both goodput stabilization and maximization. The experimental results show superior performance compared to current methods, particularly for Internet applications. To effectively deploy these protocols over the Internet, we develop an overlay network, which resides at the application level and provides data transmission service using the User Datagram Protocol. The overlay network, together with the new protocols based on the User Datagram Protocol, provides an effective environment for implementing transport control using application-level modules. We also study problems in overlay networks such as path bandwidth estimation and multiple quickest path computation. In wireless networks, most packet losses are caused by physical signal losses and do not necessarily indicate network congestion. Furthermore, the physical link connectivity in ad-hoc networks deployed in unstructured areas is unpredictable. We develop the Connectivity-Through-Time protocols that exploit node movements to deliver data under dynamic connectivity. We integrate this protocol into overlay networks and present experimental results using the network to support a team of mobile robots.
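
    The stochastic approximation idea can be sketched as a Robbins-Monro iteration that drives noisy goodput measurements toward a target using decreasing gains. The example below is only in that spirit: the noisy_goodput function is a hypothetical stand-in for real measurements, and the gains and target are illustrative values, not the dissertation's protocol.

```python
# Robbins-Monro style sketch of stochastic-approximation rate control for
# goodput stabilization: drive noisy goodput measurements toward a target
# with decreasing gains. noisy_goodput is a hypothetical stand-in for real
# measurements; the gains and target are illustrative, not the protocol's.
import random

def noisy_goodput(rate_mbps):
    """Hypothetical measurement: goodput saturates near 80 Mb/s, with noise."""
    return min(rate_mbps, 80.0) * (1.0 - 0.02 * random.random())

def stabilize(target_mbps, rate=10.0, steps=200):
    for n in range(1, steps + 1):
        observed = noisy_goodput(rate)
        gain = 1.0 / n                     # decreasing step size (Robbins-Monro)
        rate = max(1.0, rate + gain * (target_mbps - observed))
    return rate

if __name__ == "__main__":
    print("source rate after stabilization:", stabilize(target_mbps=60.0))
```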