
    BBRp: Improving TCP BBR Performance over WLAN

    This paper shows that TCP BBR is inefficient at exploiting Wi-Fi bandwidth. This limitation of BBR has been observed with both IEEE 802.11n and IEEE 802.11ac, where frame aggregation is used to boost data throughput. In recent years, many TCP variants have been introduced to limit the bufferbloat phenomenon and bound latency by reducing the rate at which the queue backlog is filled. However, this mechanism interferes with the Wi-Fi frame aggregation logic, preventing TCP congestion controls from reaching the full throughput potential of a Wi-Fi interface. While this problem can be solved for TCP Cubic by allowing the sender to enqueue more packets, the same fix does not apply to TCP BBR, which uses its own pacing algorithm. With this contribution we propose BBRp, a new BBR version that allows the congestion control pace to be fine-tuned, achieving four to six times more throughput over IEEE 802.11n and IEEE 802.11ac channels, at the cost of increased latency that nevertheless always remains below the latency obtained with loss-based TCP congestion controls.
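    The fix described above amounts to letting BBR's pacer tolerate a small burst budget so the Wi-Fi driver can build larger aggregated frames. The sketch below illustrates that trade-off only; the names (pacing_gain, aggregation_budget) and the budget formulation are assumptions for illustration, not the authors' BBRp implementation.

        # Illustrative sketch, not the authors' BBRp code: a pacing-based sender
        # trading a small queuing budget for larger Wi-Fi frame aggregates.

        def pacing_rate(btl_bw_bps: float, pacing_gain: float = 1.25) -> float:
            """BBR-style pacing rate: a gain applied to the bottleneck bandwidth estimate."""
            return pacing_gain * btl_bw_bps

        def allowed_burst_bytes(btl_bw_bps: float, min_rtt_s: float,
                                aggregation_budget: float = 0.5) -> float:
            """Extra in-flight data tolerated so the AP can aggregate frames.

            aggregation_budget is a fraction of the bandwidth-delay product:
            0 reproduces plain pacing, larger values enqueue more packets
            (more aggregation, but also more latency).
            """
            bdp_bytes = btl_bw_bps * min_rtt_s / 8.0
            return aggregation_budget * bdp_bytes

        if __name__ == "__main__":
            bw, rtt = 300e6, 0.010                       # 300 Mbit/s, 10 ms RTT
            print(f"pacing rate : {pacing_rate(bw) / 1e6:.1f} Mbit/s")
            print(f"burst budget: {allowed_burst_bytes(bw, rtt) / 1024:.0f} KiB")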

    BBR-S: A Low-Latency BBR Modification for Fast-Varying Connections


    PBE-CC: Congestion Control via Endpoint-Centric, Physical-Layer Bandwidth Measurements

    Wireless networks are becoming ever more sophisticated and overcrowded, and they now inflict the largest delay, jitter, and throughput penalties on end-to-end network flows in today's Internet. We therefore argue for fine-grained, mobile-endpoint-based wireless measurements that inform a precise congestion control algorithm through a well-defined API to the mobile's wireless physical layer. Our proposed congestion control algorithm is based on Physical-Layer Bandwidth measurements taken at the Endpoint (PBE-CC), and it captures the latest 5G New Radio innovations, which increase wireless capacity yet create abrupt rises and falls in available capacity that the PBE-CC sender can react to precisely and very rapidly. We implement a proof-of-concept prototype of the PBE measurement module on software-defined radios, and the PBE sender and receiver in C. An extensive performance evaluation compares PBE-CC head to head against the leading cellular-aware and wireless-oblivious congestion control protocols proposed in the research community and in deployment, in mobile and static scenarios and over busy and quiet networks. Results show that PBE-CC achieves 6.3% higher average throughput than BBR while simultaneously reducing 95th-percentile delay by 1.8x.
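    As a rough illustration of the endpoint-centric idea, the sketch below derives a sending rate from a physical-layer capacity estimate and the share already consumed by competing users. All names and the 5% safety margin are assumptions for illustration, not the PBE-CC algorithm itself.

        # Illustrative sketch, not the PBE-CC implementation: derive a target
        # sending rate from endpoint-side physical-layer measurements.

        def target_rate_bps(cell_capacity_bps: float,
                            other_users_share: float,
                            safety_margin: float = 0.95) -> float:
            """Send at most the measured idle capacity, minus a small margin."""
            idle = max(cell_capacity_bps * (1.0 - other_users_share), 0.0)
            return safety_margin * idle

        def on_capacity_update(set_pacing_rate, capacity_bps, other_share):
            """React immediately to an abrupt capacity change reported by the PHY."""
            set_pacing_rate(target_rate_bps(capacity_bps, other_share))

        if __name__ == "__main__":
            set_rate = lambda r: print(f"pacing to {r / 1e6:.1f} Mbit/s")
            on_capacity_update(set_rate, 400e6, other_share=0.30)  # capacity rises
            on_capacity_update(set_rate, 80e6, other_share=0.30)   # abrupt drop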

    Challenges on the way of implementing TCP over 5G networks

    5G cellular communication, especially with the vast bandwidth made available by millimeter-wave (mmWave) spectrum, is a promising technology to meet the coming demand for very high data rates. These networks can support new use cases such as vehicle-to-vehicle communication and augmented reality thanks to novel features such as network slicing, together with multi-gigabit-per-second mmWave data rates. Nevertheless, 5G cellular networks suffer from shortcomings, especially at high frequencies, because channels become increasingly intermittent as the frequency rises. Non-line-of-sight operation is one of the significant issues the new generation faces, owing to the strong susceptibility of higher frequencies to blockage from obstacles and beam misalignment. This characteristic can impair the ability of TCP, the widely deployed reliable transport protocol, to attain high throughput and low latency while keeping the network fair. The protocol therefore needs to adjust its congestion window to the current state of the network; however, TCP cannot do so efficiently, which degrades its throughput. This paper presents a comprehensive analysis of reliable end-to-end communication in 5G networks: it analyzes the behavior of TCP over 5G mmWave links, discusses the TCP mechanisms and parameters that determine performance over 5G networks, and surveys current challenges, solutions, and proposals. Finally, a feasibility analysis of machine learning-based approaches to improve reliable end-to-end communication in 5G networks is presented. This work was supported by the Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement de la Generalitat de Catalunya under Grant 2017 SGR 376.
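    The throughput degradation described above comes from loss-driven window control reacting to blockage as if it were congestion. The toy model below is not from the paper and uses simplified NewReno-like rules; it only shows how a periodic blockage event repeatedly halves the congestion window and keeps it far from the bandwidth-delay product of a multi-gigabit mmWave link.

        # Toy model (assumed, simplified): congestion-window evolution when an
        # intermittent mmWave blockage causes a loss event every few RTTs.

        def simulate(rtts: int = 60, blockage_every: int = 20,
                     cwnd: float = 10.0, ssthresh: float = 64.0):
            history = []
            for t in range(rtts):
                if t and t % blockage_every == 0:   # blockage => packet loss
                    ssthresh = max(cwnd / 2.0, 2.0)
                    cwnd = ssthresh                 # NewReno-style multiplicative decrease
                elif cwnd < ssthresh:
                    cwnd *= 2.0                     # slow start
                else:
                    cwnd += 1.0                     # congestion avoidance, +1 MSS per RTT
                history.append(round(cwnd, 1))
            return history

        if __name__ == "__main__":
            print(simulate())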

    An Experimental Evaluation of Constrained Application Protocol Performance over TCP

    The Internet of Things (IoT) is the Internet augmented with diverse everyday and industrial objects, enabling a variety of services ranging from smart homes to smart cities. Because of their embedded nature, IoT nodes are typically low-power devices with many constraints, such as limited memory and computing power. They often connect to the Internet over error-prone wireless links with low or variable speed. To accommodate these characteristics, protocols specifically tailored to IoT use have been designed. The Constrained Application Protocol (CoAP) is a lightweight web transfer protocol for resource manipulation, designed for constrained devices working in resource-poor environments. By default, CoAP traffic is carried over the unreliable User Datagram Protocol (UDP). As UDP is connectionless and has little header overhead, it is well suited to typical IoT communication consisting of short request-response exchanges. To achieve reliability on top of UDP, CoAP also implements features normally found in the transport layer. Despite these advantages, the use of CoAP over UDP may be sub-optimal in certain settings. First, some networks rate-limit or entirely block UDP traffic. Second, the default CoAP congestion control is extremely simple and unable to properly adjust its behaviour to variable network conditions, for example bursts. Finally, even IoT devices occasionally need to transfer large amounts of data, for example to perform firmware updates. For these reasons, it may prove beneficial to carry CoAP over reliable transport protocols, such as the Transmission Control Protocol (TCP). RFC 8323 specifies CoAP over stateful connections, including TCP. Currently, little research exists on CoAP over TCP performance. This thesis experimentally evaluates the suitability of CoAP over TCP for long-lived connections in a constrained setting, assessing the factors that limit scalability and the problems that packet loss and high traffic levels may cause. The experiments are performed in an emulated network under varying levels of congestion and likelihood of errors, as well as in the presence of overly large buffers. For the TCP results, both TCP New Reno and the newer TCP BBR are examined. For baseline measurements, CoAP over UDP is carried using both the default CoAP congestion control and the more advanced CoAP Simple Congestion Control/Advanced (CoCoA). This work shows CoAP over TCP to be more efficient than, or at least on par with, CoAP over UDP in a constrained setting when connections are long-lived. CoAP over TCP is notably more adept than CoAP over UDP at fully utilising the capacity of the link when there are no or few errors, even if the link is congested or bufferbloat is present. When the congestion level and the frequency of link errors grow high, the difference between CoAP over UDP and CoAP over TCP diminishes, yet CoAP over TCP continues to perform well, showing that in this setting CoAP over TCP is more scalable than CoAP over UDP. Finally, this thesis finds TCP BBR to be a promising congestion control candidate: it outperforms the older New Reno in almost all explored scenarios, most notably in the presence of bufferbloat.
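    A minimal sketch of how this kind of TCP comparison can be configured on Linux: the congestion control algorithm is selected per socket with the TCP_CONGESTION socket option (Linux-only; the chosen module, e.g. tcp_bbr, must be loaded and permitted on the host, and non-default algorithms may require privileges). This is generic experiment setup, not code from the thesis.

        import socket

        def tcp_socket_with_cc(cc: bytes = b"bbr") -> socket.socket:
            """Return a TCP socket whose congestion control is set to cc (e.g. b"reno", b"bbr")."""
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, cc)
            return s

        if __name__ == "__main__":
            s = tcp_socket_with_cc(b"bbr")
            # Read back the algorithm actually in use for this socket.
            print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))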

    Contribution to reliable end-to-end communication over 5G networks using advanced techniques

    5G cellular communication, especially with the vast bandwidth made available by millimeter-wave (mmWave) spectrum, is a promising technology to meet the coming demand for very high data rates. These networks can support new use cases such as vehicle-to-vehicle communication and augmented reality thanks to novel features such as network slicing, together with multi-gigabit-per-second mmWave data rates. Nevertheless, 5G cellular networks suffer from shortcomings, especially at high frequencies, because channels become increasingly intermittent as the frequency rises. Non-line-of-sight operation is one of the significant issues the new generation faces, owing to the strong susceptibility of higher frequencies to blockage from obstacles and beam misalignment. This characteristic can impair the ability of TCP, the widely deployed reliable transport protocol, to attain high throughput and low latency while keeping the network fair. The protocol therefore needs to adjust its congestion window to the current state of the network; however, TCP cannot do so efficiently, which degrades its throughput. This thesis presents a comprehensive analysis of reliable end-to-end communication in 5G networks and analyzes TCP's behavior in one of 3GPP's well-known scenarios, the urban deployment. Furthermore, two novel TCP variants based on artificial intelligence are proposed to deal with this issue: the first uses fuzzy logic, a subset of artificial intelligence, and the second is based on deep learning. Extensive simulations showed that the newly proposed protocols can attain higher performance than common TCPs, such as BBR, HighSpeed, Cubic, and NewReno, in terms of throughput, RTT, and sending-rate adjustment in the urban scenario. The new protocols' superiority is achieved by introducing smartness into TCP's congestion control mechanism, a powerful enabler for improving TCP's functionality. To sum up, the 5G network is a promising telecommunication infrastructure that will revolutionize various aspects of communication. However, different parts of the Internet, such as its regulations and protocol stack, will face new challenges that need to be solved in order to exploit 5G capacity; without intelligent rules and protocols, the high bandwidth of 5G, especially 5G mmWave, will be wasted. Two novel schemes are proposed to address these issues, based on an artificial-intelligence technique (fuzzy logic) and a machine learning approach (deep learning), enhancing 5G mmWave performance by improving the functionality of the transport layer. The obtained results indicated that the new schemes can improve the functionality of TCP by giving intelligence to the protocol: as the protocol works more smartly, it can make sound decisions under different conditions.
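    To make the fuzzy-logic idea concrete, the sketch below maps the ratio of the observed RTT to the minimum RTT onto a multiplicative congestion-window adjustment through two overlapping membership functions. The rule base, membership shapes, and gains are invented for illustration and are not the controllers proposed in the thesis.

        # Illustrative fuzzy-style cwnd controller (assumed, not the thesis' design).

        def mu_low(ratio: float) -> float:      # "little queuing delay"
            return max(0.0, min(1.0, (1.8 - ratio) / 0.6))

        def mu_high(ratio: float) -> float:     # "queue is building up"
            return max(0.0, min(1.0, (ratio - 1.2) / 0.6))

        def fuzzy_cwnd_update(cwnd: float, rtt: float, min_rtt: float) -> float:
            ratio = rtt / min_rtt               # 1.0 means no queuing delay
            grow, shrink = mu_low(ratio), mu_high(ratio)
            # Weighted average of two rules: "grow by 10%" vs. "shrink by 20%".
            factor = (grow * 1.10 + shrink * 0.80) / max(grow + shrink, 1e-9)
            return max(cwnd * factor, 2.0)

        if __name__ == "__main__":
            cwnd, min_rtt = 40.0, 10.0          # segments, milliseconds
            for rtt in (10.0, 12.0, 18.0, 25.0):
                cwnd = fuzzy_cwnd_update(cwnd, rtt, min_rtt)
                print(f"rtt={rtt:>5.1f} ms -> cwnd={cwnd:.1f}")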