
    Congestion mitigation in LTE base stations using radio resource allocation techniques with TCP end to end transport

    As of 2019, Long Term Evolution (LTE) is the chosen standard for most mobile and fixed wireless data communication. The next generation of standards, known as 5G, will encompass the Internet of Things (IoT), adding many more wireless devices to the network. Because the number of wireless subscriptions is growing exponentially, an exponential increase in data traffic is also expected in the next few years. Most of these devices will use the Transmission Control Protocol (TCP), a network protocol for delivering Internet data to users. TCP is the most common transport protocol because of its reliable payload delivery and built-in congestion management. However, TCP's ability to combat network congestion has limitations, especially in wireless networks, which are less reliable than fixed-line networks for data delivery because of the last-mile radio interface. LTE uses various error-correction techniques for reliable data delivery over the air interface, but these introduce other issues such as excessive latency and queuing in the base station, leading to degraded throughput for users and congestion in the network. Traditional methods of dealing with congestion, such as tail drop, can be inefficient and cumbersome, so adequate congestion mitigation mechanisms are required. The LTE standard pre-empts network congestion through a mechanism known as the Discard Timer, and other algorithms such as Random Early Detection (RED) are also used for congestion mitigation. However, these mechanisms rely on configured parameters and only work well within certain regions of operation; if the parameters are not set correctly, the TCP links can experience congestion collapse. In this thesis, the limitations of existing LTE congestion mitigation mechanisms such as the Discard Timer and RED have been explored. A mechanism to analyse the effects of using control theory for congestion mitigation has been developed. Finally, congestion mitigation in LTE networks has been addressed using radio resource allocation techniques, with non-cooperative game theory as the underlying mathematical framework. In doing so, two key end-to-end performance measurements for measuring congestion in the game-theoretic models were identified: the total end-to-end delay and the overall throughput of each individual TCP link. An end-to-end wireless simulator, with an LTE radio access network and a TCP-based backbone to the end server, was developed in MATLAB and used as a baseline for testing each of the congestion mitigation mechanisms. This thesis also provides a comparison and performance evaluation of the congestion mitigation models developed using existing techniques (Discard Timer and RED), control theory and game theory.
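
    The abstract notes that both the Discard Timer and RED only work well within certain regions of their configured parameters. As a rough illustration of where that sensitivity comes from, the following Python sketch shows the classic RED drop decision: an EWMA of the queue length and a drop probability that rises linearly between two thresholds. The threshold, probability and weight values below are illustrative defaults, not parameters taken from the thesis.

```python
import random

class RedQueue:
    """Minimal sketch of the classic RED drop decision (illustrative parameters)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th = min_th    # average-queue threshold below which nothing is dropped
        self.max_th = max_th    # threshold above which every arrival is dropped
        self.max_p = max_p      # drop probability reached at max_th
        self.weight = weight    # EWMA weight for the average queue estimate
        self.avg = 0.0

    def on_arrival(self, current_queue_len):
        """Return True if the arriving packet should be dropped."""
        # Exponentially weighted moving average of the instantaneous queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * current_queue_len
        if self.avg < self.min_th:
            return False                      # accept the packet
        if self.avg >= self.max_th:
            return True                       # drop the packet
        # Drop probability grows linearly between the two thresholds.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```

    If the thresholds are set too low for the link's bandwidth-delay product, the queue drops too aggressively and throughput suffers; set too high, queuing delay grows again, which mirrors the "regions of operation" limitation described above.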

    Network Traffic Control Design and Evaluation

    Recently, the term bufferbloat has been coined to indicate the uncontrolled growth of network queueing time. A number of network traffic control strategies have been proposed to control network queueing delay. Active Queue Management (AQM) algorithms such as RED, CoDel and PIE have been proposed to drop packets before the network queues become full and to notify upper layers, e.g., transport protocols, about possible congestion. Innovative packet schedulers such as FQ-CoDel have been introduced to prioritize flows which do not build queues. Strategies to reduce device buffering, e.g., BQL, have been proposed to increase the effectiveness of packet schedulers. Network experimentation through simulators such as ns-3, one of the most used network simulators, allows bufferbloat to be studied and solutions to be evaluated in a controlled environment. In this work, we aligned the ns-3 queueing system with the Linux one, one of the most widely used networking stacks. We introduced into ns-3 a traffic control module modelled after the Linux one. Our design allowed the introduction into ns-3 of schedulers such as FQ-CoDel and of algorithms that dynamically size the buffers, such as BQL. We also devised a new emulation methodology to overcome some limitations and increase emulation fidelity. Then, using the new emulation methodology, we validated the traffic control module with its AQM algorithms (RED, CoDel, FQ-CoDel and PIE). Our experiments demonstrate the high fidelity of the network emulation and the high accuracy of the traffic control module and AQM algorithms. We then present two proposals for the design and evaluation of traffic control strategies using ns-3. Firstly, we designed and evaluated a traffic control layer for backlog management in 3GPP stacks; the approach significantly improves flow performance in LTE networks. Secondly, we highlighted possible design flaws in rate-based AQM algorithms and proposed an alternative flow control approach that improves the effectiveness of AQM algorithms. Our work will allow researchers to design and evaluate traffic control strategies more accurately through ns-3-based simulation and emulation, and to evaluate the accuracy of other modules implemented in ns-3.
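
    For context on the AQM algorithms validated in the traffic control module, the sketch below outlines the CoDel control law in Python: packets are dropped once the queuing (sojourn) time has stayed above a target for a whole interval, and subsequent drops are scheduled at interval/sqrt(count). This is a simplified sketch using the published default constants; it omits details of the full algorithm (such as retaining the drop count across dropping episodes) and is not the ns-3 implementation itself.

```python
import math

TARGET = 0.005     # 5 ms target sojourn time (published CoDel default)
INTERVAL = 0.100   # 100 ms interval (published CoDel default)

class CoDelSketch:
    """Simplified sketch of the CoDel drop-scheduling law."""

    def __init__(self):
        self.first_above_time = None  # when the sojourn time first exceeded TARGET
        self.dropping = False         # are we in a dropping episode?
        self.drop_next = 0.0          # time of the next scheduled drop
        self.count = 0                # drops so far in this episode

    def should_drop(self, sojourn, now):
        """Decide whether to drop the packet dequeued at time `now`,
        given its measured queuing (sojourn) time in seconds."""
        if sojourn < TARGET:
            # Below target: leave any dropping episode and reset.
            self.first_above_time = None
            self.dropping = False
            return False
        if self.first_above_time is None:
            self.first_above_time = now + INTERVAL
            return False
        if not self.dropping and now >= self.first_above_time:
            # Sojourn time stayed above TARGET for a whole INTERVAL: start dropping.
            self.dropping = True
            self.count = 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        if self.dropping and now >= self.drop_next:
            # Each further drop is scheduled sooner, pushing the queue back toward TARGET.
            self.count += 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        return False
```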

    Downstream Bandwidth Management for Emerging DOCSIS-based Networks

    In this dissertation, we consider downstream bandwidth management in the context of emerging DOCSIS-based cable networks. The latest DOCSIS 3.1 standard for cable access networks represents a significant change to cable networks. For downstream, the current 6 MHz channel size is replaced by a much larger 192 MHz channel which can potentially provide data rates up to 10 Gbps. Further, the current standard requires equipment to support a relatively new form of active queue management (AQM) referred to as delay-based AQM. Given that more than 50 million households (and climbing) use cable for Internet access, a clear understanding of the impacts of bandwidth management strategies used in these emerging networks is crucial. Further, given the scope of the change provided by emerging cable systems, now is the time to develop and introduce innovative new methods for managing bandwidth. With this motivation, we address research questions pertaining to the next generation of cable access networks. The cable industry has had to deal with the problem of a small number of subscribers who utilize the majority of network resources, a problem that will grow as access rates increase to gigabits per second. Fundamentally, this is a problem of how to manage data flows in a fair manner and provide protection. A well-known performance issue in the Internet, referred to as bufferbloat, has received significant attention recently. High-throughput network flows need sufficiently large buffers to keep the pipe full and absorb occasional burstiness. Standard practice, however, has led to equipment offering very large unmanaged buffers that can result in sustained queue levels increasing packet latency. One reason why these problems continue to plague cable access networks is the desire for low-complexity and easily explainable (to access network subscribers and to the Federal Communications Commission) bandwidth management. This research begins by evaluating modern delay-based AQM algorithms in downstream DOCSIS 3.0 environments, with a focus on the fairness and application performance capabilities of single-queue AQMs. We are especially interested in delay-based AQM schemes that have been proposed to combat the bufferbloat problem. Our evaluation involves a variety of scenarios that include tiered services and application workloads. Based on our results, we show that in scenarios involving realistic workloads, modern delay-based AQMs can effectively mitigate bufferbloat; however, they do not address the other problem of managing fairness. To address the combined problem of fairness and bufferbloat, we propose a novel approach to bandwidth management that provides a compromise among the conflicting requirements. We introduce a flow quantization method referred to as adaptive bandwidth binning, where flows that are observed to consume similar levels of bandwidth are grouped together, with the system managed through a hierarchical scheduler designed to approximate weighted fairness while addressing bufferbloat. Based on a simulation study that considers many experimental parameters, including workloads and network configurations, we provide evidence of the efficacy of the idea. Our results suggest that the scheme is able to provide long-term fairness and low delay with performance close to that of a reference approach based on fair queueing. A further contribution is our idea for replacing 'tiered' levels of service based on service rates with tiering based on weights. The application of our bandwidth binning scheme offers a timely and innovative alternative to broadband service that leverages the potential offered by emerging DOCSIS-based cable systems.
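
    As a hypothetical illustration of the flow-grouping step behind adaptive bandwidth binning, the Python sketch below measures each flow's bandwidth over a window and places flows with similar consumption into the same bin; a hierarchical scheduler would then serve the bins with different weights to approximate weighted fairness. The bin edges, window length and flow names are assumptions made for the example, not values from the dissertation.

```python
from collections import defaultdict

def assign_bins(flow_bytes, window_sec, bin_edges_mbps=(1, 5, 20, 100)):
    """Group flows by their observed bandwidth over the last window.

    flow_bytes: mapping of flow id -> bytes observed in the window.
    Returns a mapping of bin index -> list of flow ids; higher bins hold
    heavier consumers (illustrative bin edges, not from the dissertation).
    """
    bins = defaultdict(list)
    for flow_id, byte_count in flow_bytes.items():
        rate_mbps = byte_count * 8 / window_sec / 1e6
        # Place the flow under the first bin edge its observed rate fits under.
        for i, edge in enumerate(bin_edges_mbps):
            if rate_mbps <= edge:
                bins[i].append(flow_id)
                break
        else:
            bins[len(bin_edges_mbps)].append(flow_id)  # heaviest consumers
    return bins

# Example: three light flows and one heavy flow measured over a 1-second window.
observed = {"flow-a": 100_000, "flow-b": 120_000, "flow-c": 90_000, "flow-d": 12_000_000}
print(assign_bins(observed, window_sec=1.0))
```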

    5GAuRA. D3.3: RAN Analytics Mechanisms and Performance Benchmarking of Video, Time Critical, and Social Applications

    5GAuRA deliverable D3.3. This is the final deliverable of Work Package 3 (WP3) of the 5GAuRA project, providing a report on the project's developments on the topics of Radio Access Network (RAN) analytics and application performance benchmarking. The focus of this deliverable is to extend and deepen the methods and results provided in 5GAuRA deliverable D3.2 in the context of specific use scenarios of video, time-critical, and social applications. In this respect, four major topics of WP3 of 5GAuRA are put forward, namely edge-cloud enhanced RAN architecture, machine learning assisted Random Access Channel (RACH) approach, Multi-access Edge Computing (MEC) content caching, and active queue management. Specifically, this document provides a detailed discussion of the service level agreement between tenant and service provider in the context of network slicing in Fifth Generation (5G) communication networks. Network slicing is considered a key enabler of 5G communication systems. Legacy telecommunication networks have been providing various services to all kinds of customers through a single network infrastructure. In contrast, by deploying network slicing, operators are now able to partition one network into individual slices, each with its own configuration and Quality of Service (QoS) requirements. There are many applications across industry that open new business opportunities with new business models. Every application instance requires an independent slice with its own network functions and features, whereby every single slice needs an individual Service Level Agreement (SLA). In D3.3, we propose a comprehensive end-to-end structure for the SLA between the tenant and the service provider of a sliced 5G network, which balances the interests of both sides. The proposed SLA defines the reliability, availability, and performance of delivered telecommunication services in order to ensure that the right information is delivered to the right destination at the right time, safely and securely. We also discuss the metrics of a slice-based network SLA, such as throughput, penalty, cost, revenue, profit, and QoS-related metrics, which are, in the view of 5GAuRA, critical features of the agreement.
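
    The SLA metrics listed above (throughput, penalty, cost, revenue, profit) lend themselves to a simple settlement calculation. The Python sketch below is a hypothetical illustration of one such calculation, with a linear penalty on the throughput shortfall; the function name, parameters and penalty model are assumptions for the example and are not taken from D3.3.

```python
def sla_settlement(agreed_mbps, measured_mbps, revenue, cost, penalty_per_mbps):
    """Hypothetical slice SLA settlement: the provider pays a penalty
    proportional to the throughput shortfall against the agreed level
    (illustrative linear model, not the structure proposed in D3.3)."""
    shortfall = max(0.0, agreed_mbps - measured_mbps)
    penalty = penalty_per_mbps * shortfall
    profit = revenue - cost - penalty
    return {"shortfall_mbps": shortfall, "penalty": penalty, "profit": profit}

# Example: a slice agreed at 50 Mbps that only delivered 42 Mbps in the billing period.
print(sla_settlement(agreed_mbps=50, measured_mbps=42,
                     revenue=1000, cost=600, penalty_per_mbps=20))
```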

    Buffer De-bloating in Wireless Access Networks

    Excessive buffering brings a new challenge into networks, known as bufferbloat, which is harmful to delay-sensitive applications. Wireless access networks consist of Wi-Fi and cellular networks. In this thesis, the performance of CoDel and RED is investigated in Wi-Fi networks with different types of traffic. Results show that CoDel and RED work well in Wi-Fi networks, owing to the similarity of the protocol structures of Wi-Fi and wired networks. In cellular networks, it is difficult to tune RED parameters because of the time-varying channel, and CoDel needs modifications because it drops the packet at the head of the queue, which in cellular networks may be segmented. The major contribution of this thesis is three new AQM algorithms tailored to cellular networks, proposed to alleviate large queuing delays. A channel-quality-aware AQM is proposed using the Channel Quality Indicator (CQI). The proposed algorithm is tested with a single-cell topology, and simulation results show that it reduces the average queuing delay for each user by 40% on average with TCP traffic compared to CoDel. A QoE-aware AQM is proposed for VoIP traffic: drops and delay are monitored and mapped to QoE by mathematical models. The proposed algorithm is tested in NS3 and compared with CoDel; it enhances the QoE of VoIP traffic, and the average end-to-end delay is reduced by more than 200 ms when multiple users with different CQI compete for the wireless channel. A random back-off AQM is proposed to alleviate the queuing delay created by video in cellular networks. The proposed algorithm monitors the play-out buffer and postpones the request for the next packet. It is tested in various scenarios and outperforms CoDel by 18% in controlling the average end-to-end delay when users have different channel conditions.
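
    The channel-quality-aware AQM is described above only at a high level. The Python sketch below is a hypothetical illustration of the general idea of coupling the drop decision to the reported CQI: when the channel is poor, the tolerated queuing delay is scaled down so packets are dropped earlier. The linear scaling rule and the 50 ms base target are assumptions for the example, not the algorithm evaluated in the thesis.

```python
def cqi_aware_drop(queue_delay_ms, cqi, base_target_ms=50.0, max_cqi=15):
    """Hypothetical CQI-aware drop decision: a low CQI means the queue drains
    slowly, so the tolerated queuing delay is reduced and packets are dropped
    earlier (illustrative scaling, not the thesis algorithm)."""
    # Scale the delay target linearly with channel quality (LTE reports CQI 1..15).
    target_ms = base_target_ms * max(cqi, 1) / max_cqi
    return queue_delay_ms > target_ms

# Example: the same 40 ms queuing delay is tolerated at CQI 15 but not at CQI 5.
print(cqi_aware_drop(40.0, cqi=15))   # False: keep the packet
print(cqi_aware_drop(40.0, cqi=5))    # True: drop
```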

    Improving video streaming experience through network measurements and analysis

    Multimedia traffic dominates today's Internet. In particular, the most prevalent traffic carried over wired and wireless networks is video. The most popular streaming providers (e.g. Netflix, YouTube) utilise HTTP adaptive streaming (HAS) for video content delivery to end-users. The power of HAS lies in the ability to change video quality in real time depending on the current state of the network (i.e. available network resources). The main goal of HAS algorithms is to maximise video quality while minimising re-buffering events and switching between different qualities. However, these requirements are opposite in nature, so striking a perfect blend is challenging, as there is no single widely accepted metric that captures user experience based on the aforementioned requirements. In recent years, researchers have put a lot of effort into designing subjectively validated metrics that can be used to map quality, re-buffering and switching behaviour of HAS players to the overall user experience (i.e. video QoE). This thesis demonstrates how data analysis can contribute to improving video QoE. One of the main characteristics of mobile networks is frequent throughput fluctuations. Various underlying factors contribute to this behaviour, including rapid changes in the radio channel conditions, system load, and interaction between feedback loops at different time scales. These fluctuations highlight the challenge of achieving a high video user experience. In this thesis, we tackle this issue by exploring the possibility of throughput prediction in cellular networks. The need for better throughput prediction comes from data-based evidence that standard throughput estimation techniques (e.g. exponential moving average) exhibit low prediction accuracy. Cellular networks deploy opportunistic scheduling algorithms (e.g. proportional fair) for resource allocation among mobile users/devices. These algorithms take into account a user's physical-layer information together with throughput demand. While the algorithm itself is proprietary to the manufacturer, physical-layer and throughput information are exchanged between devices and base stations. The availability of this information allows for a data-driven approach to throughput prediction. This thesis utilises a machine-learning approach to predict available throughput based on measurements in the near past. As a result, a prediction accuracy with an error of less than 15% in 90% of samples is achieved. Adding information from other devices served by the same base station (network-based information) further improves accuracy while lessening the need for a large history (i.e. how far to look into the past). Finally, the throughput prediction technique is incorporated into state-of-the-art HAS algorithms. The approach is validated in a commercial cellular network and on a stock mobile device. As a result, better throughput prediction helps improve user experience by up to 33%, while minimising re-buffering events by up to 85%. In contrast to wireless networks, channel characteristics of the wired medium are more stable, resulting in less prominent throughput variations. However, all traffic traverses network queues (i.e. a router or switch), unlike in cellular networks where each user gets a dedicated queue at the base station. Furthermore, network operators usually deploy a simple first-in-first-out queuing discipline at these queues. As a result, traffic can experience excessive delays due to the large queue sizes, usually deployed in order to minimise packet loss and maximise throughput. This effect, also known as bufferbloat, negatively impacts delay-sensitive applications, such as web browsing and voice. While there exist guidelines for modelling queue size, there is no work analysing its impact on video streaming traffic generated by multiple users. To address this gap, the performance of multiple video clients sharing a bottleneck link is analysed. Moreover, the analysis is extended to a realistic case including heterogeneous round-trip times (RTT) and traffic (i.e. web browsing). Based on experimental results, a simple two-queue discipline is proposed for scheduling heterogeneous traffic by taking into account application characteristics. As a result, compared to the state-of-the-art Active Queue Management (AQM) discipline, Controlled Delay (CoDel), the proposed discipline decreases the median Page Loading Time (PLT) of web traffic by up to 80%, with no significant negative impact on video QoE.
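
    The abstract contrasts standard throughput estimation (an exponential moving average) with history-based prediction. The Python sketch below shows that contrast on a synthetic trace: the EWMA baseline and, as a stand-in for the machine-learning models used in the thesis, a least-squares linear trend fitted to the recent samples. The sample values, window length and smoothing factor are illustrative assumptions.

```python
import numpy as np

def ewma_forecast(samples, alpha=0.3):
    """Baseline: exponential moving average of past throughput samples (Mbps)."""
    est = samples[0]
    for s in samples[1:]:
        est = alpha * s + (1 - alpha) * est
    return est

def history_forecast(samples, horizon=1):
    """Minimal data-driven alternative: a least-squares linear trend fitted to
    the recent history and extrapolated one step ahead. A simple stand-in for
    the machine-learning models in the thesis, which also use physical-layer
    information exchanged with the base station."""
    x = np.arange(len(samples))
    slope, intercept = np.polyfit(x, samples, 1)
    return slope * (len(samples) - 1 + horizon) + intercept

# Example: throughput ramping down as the channel degrades.
recent = [22.0, 20.5, 18.9, 17.2, 15.8]   # Mbps over the last five intervals
print(ewma_forecast(recent))      # lags behind the downward trend (~18.5 Mbps)
print(history_forecast(recent))   # follows the trend more closely (~14.2 Mbps)
```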

    20th SC@RUG 2023 proceedings 2022-2023
