
    An Efficient Framework of Congestion Control for Next-Generation Networks

    The success of the Internet can partly be attributed to the congestion control algorithm in the Transmission Control Protocol (TCP). However, with the tremendous increase in the diversity of networked systems and applications, TCP performance limitations are becoming increasingly problematic and the need for new transport protocol designs has become increasingly important. Prior research has focused on the design of either end-to-end protocols (e.g., CUBIC) that rely on implicit congestion signals such as loss and/or delay, or network-based protocols (e.g., XCP) that use precise per-flow feedback from the network. While the former category of schemes has performance limitations, the latter are hard to deploy, can introduce high per-packet overhead, and open up new security challenges. This dissertation explores the middle ground between these designs and makes four contributions. First, we study the interplay between performance and feedback in congestion control protocols. We argue that congestion feedback in the form of aggregate load can provide the richness needed to meet the challenges of next-generation networks and applications. Second, we present the design, analysis, and evaluation of an efficient framework for congestion control called Binary Marking Congestion Control (BMCC). BMCC uses aggregate load feedback to achieve efficient and fair bandwidth allocations on high bandwidth-delay networks while minimizing packet loss rates and average queue length. BMCC reduces flow completion times by up to 4x over TCP and uses only the existing Explicit Congestion Notification bits. Next, we consider the incremental deployment of BMCC. We study the bandwidth sharing properties of BMCC and TCP over different partial deployment scenarios. We then present algorithms for ensuring safe co-existence of BMCC and TCP on the Internet. Finally, we consider the performance of BMCC over Wireless LANs. We show that the time-varying nature of the capacity of a WLAN can lead to significant performance issues for protocols that require capacity estimates for feedback computation. Using a simple model, we characterize the capacity of a WLAN and propose the usage of the average service rate experienced by network-layer packets as an estimate for capacity. Through extensive evaluation, we show that the resulting estimates provide good performance.
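Since the framework is described above only at a high level, the following minimal sketch illustrates the general idea of load-factor feedback driving a sender between multiplicative-increase, additive-increase, and multiplicative-decrease regimes. The thresholds, constants, and function names are illustrative assumptions, not the published BMCC rules or its ECN encoding.

```python
# Illustrative sketch of load-factor-driven congestion control in the spirit of
# BMCC: the router summarizes aggregate load, the sender switches between
# multiplicative increase, additive increase, and multiplicative decrease.
# Thresholds and constants here are hypothetical, not the published BMCC values.

def router_load_factor(arrival_bytes: float, interval_s: float,
                       capacity_bps: float, target_util: float = 0.95) -> float:
    """Aggregate load relative to the target utilization of the link."""
    arrival_bps = arrival_bytes * 8.0 / interval_s
    return arrival_bps / (target_util * capacity_bps)

def update_cwnd(cwnd: float, load_factor: float,
                low: float = 0.8, beta: float = 0.875) -> float:
    """Sender-side reaction to the fed-back load factor (hypothetical constants)."""
    if load_factor < low:          # link clearly underutilized: probe aggressively
        return cwnd * (1.0 + (1.0 - load_factor))
    elif load_factor <= 1.0:       # near the knee: conservative additive increase
        return cwnd + 1.0
    else:                          # overload signalled: back off multiplicatively
        return max(2.0, cwnd * beta / load_factor)

# Example: a sender reacting to successive load-factor samples.
cwnd = 10.0
for f in (0.4, 0.7, 0.95, 1.2):
    cwnd = update_cwnd(cwnd, f)
    print(f"load factor {f:.2f} -> cwnd {cwnd:.1f}")
```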

    TCP performance enhancement in wireless networks via adaptive congestion control and active queue management

    The transmission control protocol (TCP) exhibits poor performance when used in error-prone wireless networks. Remedying this problem has been an active research area. However, a widely accepted and adopted solution is yet to emerge. The difficulties of an acceptable solution lie in the areas of compatibility, scalability, computational complexity and the involvement of intermediate routers and switches. This dissertation reviews the current state-of-the-art solutions to TCP performance enhancement, and pursues an end-to-end solution framework to the problem. The most noticeable cause of the performance degradation of TCP in wireless networks is the higher packet loss rate as compared to that in traditional wired networks. Packet loss type differentiation has been the focus of many proposed TCP performance enhancement schemes. Studies conducted by this dissertation research suggest that besides the standard TCP's inability to discriminate congestion packet losses from losses related to wireless link errors, the standard TCP's additive increase and multiplicative decrease (AIMD) congestion control algorithm itself needs to be redesigned to achieve better performance in wireless, and particularly high-speed wireless, networks. This dissertation proposes a simple, efficient, and effective end-to-end solution framework that enhances TCP's performance through techniques of adaptive congestion control and active queue management. By end-to-end, it means a solution with no requirement of routers being wireless-aware or wireless-specific. TCP-Jersey has been introduced as an implementation of the proposed solution framework, and its performance metrics have been evaluated through extensive simulations. TCP-Jersey consists of an adaptive congestion control algorithm at the source by means of the source's achievable rate estimation (ARE), an adaptive filter of packet inter-arrival times; a congestion indication algorithm at the links (i.e., AQM) by means of packet marking; and an effective loss differentiation algorithm at the source by careful examination of the congestion marks carried by the duplicate acknowledgment packets (DUPACK). Several improvements to the proposed TCP-Jersey have been investigated, including a more robust ARE algorithm, a less computationally intensive threshold marking algorithm as the AQM link algorithm, and a more stable congestion indication function based on virtual capacity at the link, and performance results have been presented and analyzed via extensive simulations of various network configurations. Stability analysis of the proposed ARE-based additive increase and adaptive decrease (AIAD) congestion control algorithm has been conducted and the analytical results have been verified by simulations. The performance of TCP-Jersey has been compared to that of a perfect, but not practical, TCP scheme, and encouraging results have been observed. Finally, the framework of TCP-Jersey's source algorithm has been extended and generalized for rate-based congestion control, as opposed to TCP's window-based congestion control, to provide a design platform for applications, such as real-time multimedia, that do not use TCP as the transport protocol yet need to control network congestion as well as combat packet losses in wireless networks.
In conclusion, the framework architecture presented in this dissertation, which combines adaptive congestion control and active queue management to address the TCP performance degradation problem in wireless networks, has been shown to be a promising answer to the problem owing to its simple design philosophy, complete compatibility with current TCP/IP and AQM practice, end-to-end architecture for scalability, and high effectiveness with low computational overhead. The proposed implementation of the solution framework, namely TCP-Jersey, is a modification of the standard TCP protocol rather than a completely new design of the transport protocol. It is an end-to-end approach to the performance degradation problem since it does not require split-mode connection establishment and maintenance using special wireless-aware software agents at the routers. The proposed solution also differs from other solutions that rely on link layer error notifications for packet loss differentiation. The proposed solution is also unique among proposed end-to-end solutions in that it differentiates packet losses attributed to wireless link errors from congestion-induced packet losses directly from the explicit congestion indication marks in the DUPACK packets, rather than inferring the loss type based on packet delay or delay jitter as in many other proposed solutions, or by undergoing a computationally expensive off-line training of a classification model (e.g., HMM), or a Bayesian estimation/detection process that requires estimates of the a priori loss probability distributions of different loss types. The proposed solution is also scalable and fully compatible with current practice in Internet congestion control and queue management, but with an additional function of loss type differentiation that effectively enhances TCP's performance over error-prone wireless networks. Limitations of the proposed solution architecture and areas for future research are also addressed.
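To make the achievable rate estimation (ARE) idea concrete, here is a minimal sketch of an ACK-driven rate filter whose gain adapts to the inter-arrival interval. It is an illustration in the spirit of the description above; the class name, constants, and exact filter form are assumptions rather than the TCP-Jersey specification.

```python
# Minimal sketch of an achievable-rate estimator (ARE) of the kind described
# above: a low-pass filter over ACK inter-arrival times whose gain adapts to
# the sampling interval. This is an illustration in the spirit of TCP-Jersey,
# not a verbatim reproduction of its specification.

class RateEstimator:
    def __init__(self, rtt_s: float):
        self.rtt = rtt_s          # smoothed RTT used as the filter time constant
        self.rate_bps = 0.0       # current achievable-rate estimate
        self.last_ack_t = None    # timestamp of the previous ACK

    def on_ack(self, now_s: float, acked_bytes: int) -> float:
        if self.last_ack_t is None:
            self.last_ack_t = now_s
            return self.rate_bps
        dt = max(now_s - self.last_ack_t, 1e-6)
        self.last_ack_t = now_s
        sample_bps = acked_bytes * 8.0 / dt
        # Interval-weighted exponential average: short ACK gaps trust history,
        # long gaps trust the new sample.
        alpha = dt / (self.rtt + dt)
        self.rate_bps = (1.0 - alpha) * self.rate_bps + alpha * sample_bps
        return self.rate_bps

# Example: feed a few ACKs 10 ms apart, each covering 1448 payload bytes.
est = RateEstimator(rtt_s=0.1)
for i in range(1, 6):
    r = est.on_ack(now_s=i * 0.01, acked_bytes=1448)
print(f"estimated rate ≈ {r / 1e6:.2f} Mbit/s")
```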

    Performance Evaluation of Constrained Application Protocol over TCP

    The Constrained Application Protocol (CoAP) is specifically designed for constrained IoT devices and is being rapidly deployed to meet the communication needs of such devices. CoAP has been specified with its own congestion control algorithms because it runs on top of UDP, which does not include any congestion control measures. These algorithms aim at taking into account the specific needs of IoT communication. The need to run CoAP also over TCP has arisen recently, and this option is expected to be increasingly deployed alongside CoAP over UDP. To understand the benefits and shortcomings of both CoAP over TCP and CoAP over UDP, we run an extensive set of experiments in different network settings and compare the performance of CoAP over TCP to the existing congestion control algorithms for CoAP over UDP. Our results reveal that even though CoAP over TCP has its known limitations, it scales well and performs better than expected in certain wireless settings that the CoAP over UDP algorithms are specifically designed for, often even outperforming CoAP over UDP.
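For context on what CoAP-over-UDP congestion control means at its simplest, the sketch below reproduces the default confirmable-message retransmission schedule of RFC 7252 (ACK_TIMEOUT = 2 s, ACK_RANDOM_FACTOR = 1.5, MAX_RETRANSMIT = 4, binary exponential backoff). The helper function itself is an illustrative assumption, and the CoAP congestion control algorithms evaluated in the paper are more elaborate than this default.

```python
# Default CoAP-over-UDP retransmission behaviour per RFC 7252: a confirmable
# message is retried with binary exponential backoff, starting from a randomized
# initial timeout. The helper below is an illustrative sketch, not library code.

import random

ACK_TIMEOUT = 2.0          # seconds (RFC 7252 default)
ACK_RANDOM_FACTOR = 1.5    # initial timeout is drawn from [2.0, 3.0] s
MAX_RETRANSMIT = 4         # at most four retransmissions per confirmable message

def retransmission_offsets(rng=random):
    """Seconds after the first transmission at which each retry would be sent."""
    timeout = rng.uniform(ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR)
    offsets, elapsed = [], 0.0
    for _ in range(MAX_RETRANSMIT):
        elapsed += timeout
        offsets.append(round(elapsed, 2))
        timeout *= 2.0                     # binary exponential backoff
    return offsets

print(retransmission_offsets())            # four increasing retry offsets
```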

    Congestion mitigation in LTE base stations using radio resource allocation techniques with TCP end to end transport

    As of 2019, Long Term Evolution (LTE) is the chosen standard for most mobile and fixed wireless data communication. The next generation of standards, known as 5G, will encompass the Internet of Things (IoT), which will add more wireless devices to the network. Due to an exponential increase in the number of wireless subscriptions, an exponential increase in data traffic is also expected in the next few years. Most of these devices will use the Transmission Control Protocol (TCP), a network protocol for delivering Internet data to users. Due to its reliability in delivering the data payload to users and its congestion management, TCP is the most common network protocol in use. However, TCP's ability to combat network congestion has certain limitations, especially in a wireless network. This is because wireless networks are not as reliable as fixed-line networks for data delivery, owing to the last-mile radio interface. LTE uses various error correction techniques for reliable data delivery over the air interface. These cause other issues, such as excessive latency and queuing in the base station, leading to degraded throughput for users and congestion in the network. Traditional methods of dealing with congestion, such as tail drop, can be inefficient and cumbersome. Therefore, adequate congestion mitigation mechanisms are required. The LTE standard pre-empts network congestion through a mechanism known as the Discard Timer. Additionally, other algorithms such as Random Early Detection (RED) are also used for network congestion mitigation. However, these mechanisms rely on configured parameters and only work well within certain regions of operation. If the parameters are not set correctly, the TCP links can experience congestion collapse. In this thesis, the limitations of using existing LTE congestion mitigation mechanisms such as the Discard Timer and RED have been explored. A different mechanism to analyse the effects of using control theory for congestion mitigation has been developed. Finally, congestion mitigation in LTE networks has been addressed using radio resource allocation techniques with non-cooperative game theory as the underlying mathematical framework. In doing so, two key end-to-end performance measurements for the game-theoretic models were identified: the total end-to-end delay and the overall throughput of each individual TCP link. An end-to-end wireless simulator model, with an LTE radio access network and a TCP-based backbone to the end server, was developed in MATLAB. This simulator was used as a baseline for testing each of the congestion mitigation mechanisms. This thesis also provides a comparison and performance evaluation between the congestion mitigation models developed using existing techniques (such as the Discard Timer and RED), control theory, and game theory.
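As a reference for the parameter sensitivity discussed above, here is a simplified sketch of the RED drop/mark decision (it omits the idle-queue correction and the inter-drop packet count of full RED). The threshold and weight values are illustrative assumptions, not recommended settings.

```python
# Sketch of the Random Early Detection (RED) drop/mark decision referred to
# above. The thresholds and weight are illustrative; RED's sensitivity to these
# configured parameters is exactly the limitation discussed in the abstract.
# Simplified: omits the idle-time correction and inter-drop count of full RED.

import random

class Red:
    def __init__(self, min_th=20, max_th=60, max_p=0.1, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0    # EWMA of the instantaneous queue length (packets)

    def on_enqueue(self, queue_len: int) -> bool:
        """Return True if the arriving packet should be dropped (or ECN-marked)."""
        self.avg = (1.0 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        # Between the thresholds, drop with a probability that grows linearly.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```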

    Congestion control protocols in wireless sensor networks: A survey

    The performance of wireless sensor networks (WSN) is affected by the lossy communication medium, application diversity, dense deployment, limited processing power and storage capacity, and frequent topology changes. All these limitations present significant and unique design challenges for data transport control in wireless sensor networks. An effective transport protocol should consider reliable message delivery, energy efficiency, quality of service and congestion control. The latter is vital for achieving high throughput and a long network lifetime. Despite the huge number of protocols proposed in the literature, congestion control in WSN remains challenging. A review and taxonomy of the state-of-the-art protocols from the literature up to 2013 is provided in this paper. First, depending on the control policy, the protocols are divided into resource control vs. traffic control. Traffic control protocols are either reactive or preventive (congestion-avoiding). Reactive solutions are classified according to the reaction scale, while preventive solutions are split up into buffer limitation vs. interference control. Resource control protocols are classified according to the type of resource to be tuned. © 2014 IEEE

    A Performance Verification Methodology for Resource Allocation Heuristics

    Performance verification is a nascent but promising tool for understanding the performance and limitations of heuristics under realistic assumptions. Bespoke performance verification tools have already demonstrated their value in settings like congestion control and packet scheduling. In this paper, we aim to emphasize the broad applicability and utility of performance verification. To that end, we highlight the design principles of performance verification. Then, we leverage that understanding to develop a set of easy-to-follow guidelines that are applicable to a wide range of resource allocation heuristics. In particular, we introduce Virelay, a framework that enables heuristic designers to express the behavior of their algorithms and their assumptions about the system in an environment that resembles a discrete-event simulator. We demonstrate the utility and ease of use of Virelay by applying it to six diverse case studies. We produce bounds on the performance of classical algorithms, work stealing and SRPT scheduling, under practical assumptions. We demonstrate Virelay's expressiveness by capturing existing models for congestion control and packet scheduling, and we verify the observation that TCP unfairness can cause some ML training workloads to spontaneously converge to a state of high network utilization. Finally, we use Virelay to identify two bugs in the Linux CFS load balancer. Comment: 12 pages, 11 figures
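As background for the SRPT case study mentioned above, the sketch below implements a plain single-server shortest-remaining-processing-time discipline: at each arrival or completion, the job with the least remaining work runs. It is a generic illustration under simplified assumptions, not Virelay code or the paper's model.

```python
# Minimal single-server SRPT (shortest remaining processing time) scheduler:
# preemptive, always serving the job with the least remaining work. Generic
# illustration only; function name and interface are assumptions.

import heapq

def srpt_completion_order(jobs):
    """jobs: list of (arrival_time, size). Returns job indices in completion order."""
    events = sorted((a, s, i) for i, (a, s) in enumerate(jobs))
    heap, t, k, done = [], 0.0, 0, []
    while heap or k < len(events):
        if not heap:                      # server idle: jump to the next arrival
            t = max(t, events[k][0])
        while k < len(events) and events[k][0] <= t:
            a, s, i = events[k]
            heapq.heappush(heap, (s, i))
            k += 1
        rem, i = heapq.heappop(heap)
        # Run the shortest job until it finishes or the next arrival preempts it.
        next_arrival = events[k][0] if k < len(events) else float("inf")
        if t + rem <= next_arrival:
            t += rem
            done.append(i)
        else:
            heapq.heappush(heap, (rem - (next_arrival - t), i))
            t = next_arrival
    return done

print(srpt_completion_order([(0, 10), (1, 2), (2, 1)]))  # short jobs finish first
```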

    Distributed optimal congestion control and channel assignment in wireless mesh networks

    Wireless mesh networks have numerous advantages in terms of connectivity as well as reliability. Traditionally, the nodes in wireless mesh networks are equipped with a single radio, but this limits throughput and makes only limited use of the available wireless channels. To overcome this, recent advances in wireless mesh networks are based on a multi-channel, multi-radio approach. Channel assignment is a technique that selects the best channel for a node, or for the entire network, in order to increase network capacity. To maximize the throughput and capacity of the network, multiple channels with multiple radios were introduced in these networks. In the proposed system, algorithms are developed to improve throughput, minimise delay, reduce average energy consumption and increase the residual energy for multi-radio, multi-channel wireless mesh networks. In the literature, existing channel assignment algorithms fail to consider both inter-flow and intra-flow interference. The resulting limitations are inaccurate bandwidth estimation, throughput degradation under heavy traffic, unnecessary energy consumption under low traffic, and increased delay. To improve the performance of the network, a distributed optimal congestion control and channel assignment (DOCCA) algorithm is proposed. In this algorithm, if congestion is identified, this information is passed to the previous node, which adjusts itself according to the congestion level to minimise congestion.
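The hop-by-hop reaction described above (inform the previous node, which adjusts itself according to the congestion level) can be pictured with the toy functions below. The congestion levels, scaling factor, and rate floor are hypothetical illustrations, not the DOCCA algorithm itself.

```python
# Illustrative hop-by-hop reaction of the kind the abstract describes: a node
# that detects queue build-up reports a congestion level to its upstream
# (previous) node, which scales its forwarding rate accordingly. The levels and
# scaling factors are hypothetical, not taken from the DOCCA specification.

def congestion_level(queue_len: int, queue_cap: int) -> float:
    """0.0 = empty queue, 1.0 = full queue."""
    return min(1.0, queue_len / queue_cap)

def adjusted_rate(current_rate_kbps: float, level: float,
                  min_rate_kbps: float = 50.0) -> float:
    """Upstream node throttles in proportion to the reported congestion level."""
    return max(min_rate_kbps, current_rate_kbps * (1.0 - 0.5 * level))

# Example: downstream queue is 80% full, so the upstream node backs off.
level = congestion_level(queue_len=40, queue_cap=50)
print(adjusted_rate(current_rate_kbps=1000.0, level=level))  # 600.0 kbps
```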

    Control of transport dynamics in overlay networks

    Transport control is an important factor in the performance of Internet protocols, particularly in next-generation network applications involving computational steering, interactive visualization, instrument control, and transfer of large data sets. The widely deployed Transmission Control Protocol is inadequate for these tasks due to its performance drawbacks. The purpose of this dissertation is to conduct a rigorous analytical study of the design and performance of transport protocols, and to systematically develop a new class of protocols that overcome the limitations of current methods. Various sources of randomness exist in network performance measurements due to the stochastic nature of network traffic. We propose a new class of transport protocols that explicitly account for this randomness based on dynamic stochastic approximation methods. These protocols use the congestion window and idle time to dynamically control the source rate to achieve transport objectives. We conduct statistical analyses to determine the main effects of these two control parameters and their interaction effects. The application of stochastic approximation methods enables us to show the analytical stability of the transport protocols and to avoid pre-selecting the flow and congestion control parameters. These new protocols are successfully applied to transport control for both goodput stabilization and maximization. The experimental results show superior performance compared to current methods, particularly for Internet applications. To effectively deploy these protocols over the Internet, we develop an overlay network, which resides at the application level to provide data transmission service using the User Datagram Protocol. The overlay network, together with the new protocols based on the User Datagram Protocol, provides an effective environment for implementing transport control using application-level modules. We also study problems in overlay networks such as path bandwidth estimation and multiple quickest path computation. In wireless networks, most packet losses are caused by physical signal losses and do not necessarily indicate network congestion. Furthermore, the physical link connectivity in ad-hoc networks deployed in unstructured areas is unpredictable. We develop the Connectivity-Through-Time protocols that exploit node movements to deliver data under dynamic connectivity. We integrate this protocol into overlay networks and present experimental results using the network to support a team of mobile robots.
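To show what dynamic stochastic approximation buys in this setting, here is a minimal Robbins-Monro-style recursion that stabilizes goodput around a target using noisy measurements and a decreasing gain sequence. The toy channel model, constants, and function names are assumptions for illustration; the dissertation's protocols act through the congestion window and idle time rather than a rate variable directly.

```python
# Generic stochastic-approximation sketch: nudge the source rate using noisy
# goodput measurements with decreasing gains so the iterates settle despite
# measurement noise. Illustration only; not the dissertation's exact protocol.

import random

def noisy_goodput(rate_mbps: float, capacity_mbps: float = 10.0) -> float:
    """Toy channel: goodput tracks the rate up to capacity, then collapses; noisy."""
    loss_penalty = 2.0 * max(0.0, rate_mbps - capacity_mbps)
    sample = min(rate_mbps, capacity_mbps) - loss_penalty + random.gauss(0.0, 0.3)
    return max(0.0, sample)

def stabilize_goodput(target_mbps: float, r0_mbps: float = 1.0, steps: int = 200):
    rate = r0_mbps
    for k in range(1, steps + 1):
        gain = 1.0 / k                    # decreasing steps: sum = inf, sum of squares < inf
        error = target_mbps - noisy_goodput(rate)
        rate = max(0.1, rate + gain * error)
    return rate

print(f"stabilized rate ≈ {stabilize_goodput(target_mbps=8.0):.2f} Mbit/s")
```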

    D2D-Based Grouped Random Access to Mitigate Mobile Access Congestion in 5G Sensor Networks

    The Fifth Generation (5G) wireless service of sensor networks involves significant challenges when dealing with the coordination of an ever-increasing number of devices accessing shared resources. This has drawn major interest from the research community, as many existing works focus on radio access network congestion control to efficiently manage resources in the context of device-to-device (D2D) interaction in huge sensor networks. In this context, this paper pioneers a study of the impact of D2D link reliability in group-assisted random access protocols, shedding light on the performance benefits and potential limitations of approaches of this kind as a function of tunable parameters such as group size, number of sensors and reliability of the D2D links. Additionally, we leverage the association with a Geolocation Database (GDB) capability to assist the grouping decisions, drawing parallels with recent regulatory-driven initiatives around GDBs and arguing the benefits of the suggested proposal. Finally, the proposed method is shown, by means of an exhaustive simulation campaign, to significantly reduce the delay over random access channels. Comment: First submission to IEEE Communications Magazine on Oct.28.2017. Accepted on Aug.18.2019. This is the camera-ready version.
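A back-of-the-envelope model makes the trade-off under study concrete: grouping reduces the number of contenders on the random-access channel, but every member's traffic must also survive its D2D link to the group head. The formulas, the number of preambles, and the reliability values below are illustrative assumptions, not the paper's analysis or results.

```python
# Simple illustrative model of grouped random access: only group heads contend
# on the shared preambles, while members rely on a D2D link of given
# reliability. All values and formulas are assumptions for illustration.

import math

def preamble_success(contenders: int, preambles: int = 54) -> float:
    """P(a given contender picks a preamble that no other contender picked)."""
    if contenders <= 1:
        return 1.0
    return (1.0 - 1.0 / preambles) ** (contenders - 1)

def member_success(n_sensors: int, group_size: int, d2d_reliability: float) -> float:
    heads = math.ceil(n_sensors / group_size)       # only heads access the channel
    return preamble_success(heads) * d2d_reliability

# Example: 1000 sensors; larger groups mean fewer contenders per access slot.
for g in (1, 5, 20):
    p = member_success(n_sensors=1000, group_size=g,
                       d2d_reliability=0.95 if g > 1 else 1.0)
    print(f"group size {g:2d}: per-device success ≈ {p:.3f}")
```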

    Performance Study and Enhancement of Access Barring for Massive Machine-Type Communications

    Machine-type communications (MTC) is an emerging technology that boosts the development of the Internet of Things by providing ubiquitous connectivity and services. Cellular networks are an excellent choice for providing such hyper-connectivity thanks to their widely deployed infrastructure, among other features. However, dealing with a large number of connection requests is a primary challenge in cellular-based MTC. Severe congestion episodes can occur when a large number of devices try to access the network almost simultaneously. Extended access barring (EAB) is a congestion control mechanism for MTC that has been proposed by the 3GPP. In this paper, we carry out a thorough performance analysis of EAB and show the limitations of its current specification. To overcome these limitations, we propose two enhanced EAB schemes: the combined use of EAB and access class barring, and the introduction of a congestion-avoidance backoff after the barring status of a UE is switched to unbarred. It is shown through extensive simulations that our proposed solutions improve the key performance indicators. A high successful access probability can be achieved even in heavily congested scenarios, the access delay is shortened, and, most importantly, the number of required preamble retransmissions is reduced, which results in significant energy savings. Furthermore, we present an accurate congestion estimation method that relies solely on the information available at the base station. We show that this method permits a realistic and effective implementation of the EAB. This work was supported in part by the Ministerio de Ciencia, Innovación y Universidades (MCIU), Agencia Estatal de Investigación (AEI) y Fondo Europeo de Desarrollo Regional (FEDER), UE, under Grant PGC2018-094151-B-I00, and in part by the ITACA Institute under Grant Ayudas ITACA 2019.
Vidal Catalá, J. R.; Tello-Oquendo, L.; Pla, V.; Guijarro, L. (2019). Performance Study and Enhancement of Access Barring for Massive Machine-Type Communications. IEEE Access, 7, 63745-63759. https://doi.org/10.1109/ACCESS.2019.2917618
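For reference, the access class barring (ACB) check that the first enhanced scheme combines with EAB works roughly as sketched below: the UE draws a random number against the broadcast barring factor, and if barred it waits a randomized multiple of the barring time before retrying (cf. 3GPP TS 36.331). The parameter values here are examples only, and the EAB bitmap check itself is not modelled.

```python
# Sketch of the per-attempt ACB test: pass with probability equal to the
# broadcast barring factor, otherwise back off for a randomized fraction of the
# broadcast barring time. Parameter values are examples, not standard defaults.

import random

def acb_attempt(barring_factor=0.5, barring_time_s=4.0, rng=random):
    """Return (access_allowed, back_off_seconds)."""
    if rng.random() < barring_factor:
        return True, 0.0                   # UE may start random access now
    # Barred: wait a randomized multiple of the broadcast barring time.
    wait = (0.7 + 0.6 * rng.random()) * barring_time_s
    return False, wait

allowed, wait = acb_attempt()
print("allowed" if allowed else f"barred, retry in {wait:.1f} s")
```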