4 research outputs found

    Managing Network Delay for Browser Multiplayer Games

    Latency is one of the key performance factors affecting the quality of experience (QoE) in computer games. Latency in the context of games can be defined as the time between the user input and the result on the screen. In order for the QoE to be satisfactory, the game needs to be able to react fast enough to player input. In networked multiplayer games, latency is composed of network delay and local delays. Major sources of network delay are queuing delay and head-of-line (HOL) blocking delay, and network delay in the Internet can even be on the order of seconds. In this thesis we discuss what feasible networking solutions exist for browser multiplayer games. We conduct a literature study to analyze the Differentiated Services architecture, several salient Active Queue Management (AQM) algorithms (RED, PIE, CoDel and FQ-CoDel), the Explicit Congestion Notification (ECN) concept and the network protocols available to web browsers (WebSocket, QUIC and WebRTC). As single-queue implementations, RED, PIE and CoDel would be sub-optimal for providing low latency to game traffic. FQ-CoDel is a multi-queue AQM that provides flow separation, preventing queue-building bulk transfers from noticeably hampering latency-sensitive flows. The WebRTC DataChannel seems promising for games, since it can be used for sending arbitrary application data and it can avoid HOL blocking. None of the network protocols, however, provides completely satisfactory support for the transport needs of multiplayer games: WebRTC is not designed for client-server connections, QUIC is not designed for the traffic patterns typical of multiplayer games, and WebSocket would require parallel connections to mitigate the effects of HOL blocking.
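    To make the protocol comparison concrete, the snippet below is a minimal browser-side sketch (not taken from the thesis) of opening a WebRTC data channel configured for unordered, no-retransmit delivery, which is the property that lets it sidestep the head-of-line blocking a single WebSocket (TCP) connection suffers from. The channel label, the payload layout and the signalling step (offer/answer exchange, omitted here) are illustrative assumptions.

```typescript
// Sketch: a WebRTC DataChannel tuned for small, frequent game-state updates.
// Signalling (offer/answer exchange with the remote peer) is application-specific
// and omitted here.

const pc = new RTCPeerConnection();

// ordered: false      -> messages may arrive out of order (no HOL blocking)
// maxRetransmits: 0   -> lost updates are not retransmitted; a newer snapshot
//                        will supersede them anyway
const channel = pc.createDataChannel("game-state", {
  ordered: false,
  maxRetransmits: 0,
});

channel.binaryType = "arraybuffer";

channel.onopen = () => {
  // Send a small binary snapshot, e.g. player position plus a sequence number.
  const snapshot = new Float32Array([1.0, 2.0, 0.0, 42]); // x, y, z, seq (assumed layout)
  channel.send(snapshot);
};

channel.onmessage = (event) => {
  // Apply the newest state; stale or out-of-order packets can simply be ignored.
  const state = new Float32Array(event.data as ArrayBuffer);
  console.log("received state", state);
};
```

    By contrast, a WebSocket delivers messages strictly in order over TCP, so one lost segment stalls everything behind it, which is why the abstract notes that parallel WebSocket connections would be needed to mitigate HOL blocking.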

    TCP Congestion Control and AQM Mechanisms

    In recent years, the importance of delay relative to throughput has been increasingly emphasized. Our networks are becoming more and more sensitive to latency due to the proliferation of applications and services such as VoIP, IPTV and online gaming, where low delay is essential for proper performance and a good user experience. Most of this unnecessary delay is created by the misbehaviour of many of the buffers that populate the Internet. Instead of performing the task they were created for, absorbing occasional packet bursts in order to prevent loss, they deceive the sender's congestion control mechanism into believing that the current path to the destination has more bandwidth than it really has. When the loss event finally occurs, if it occurs at all, it is too late and the damage on the path, in the form of additional transmission time, has already been done. In this bachelor's thesis we try to shed light on a specific family of solutions that aims to reduce the extra delay produced by these bloated buffers: Active Queue Management (AQM). We have tested a set of AQM algorithms together with different TCP congestion control modifications in order to understand the interactions between these two mechanisms, running simulations over several characteristic scenarios such as transoceanic links and access links, among others.
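    As a back-of-the-envelope illustration of the problem this thesis studies, the sketch below computes the standing queuing delay an over-provisioned drop-tail buffer can add. The link rate, packet size and buffer depth are assumed example values, not figures from the thesis.

```typescript
// Sketch: how an oversized drop-tail buffer turns into queuing delay.
// All numbers are illustrative assumptions.

const LINK_RATE_BPS = 10e6;        // 10 Mbit/s access link (assumed)
const PACKET_SIZE_BYTES = 1500;    // typical MTU-sized packet
const BUFFER_SIZE_PACKETS = 1000;  // an over-provisioned drop-tail buffer

// Worst-case queuing delay once a bulk TCP transfer has filled the buffer:
// delay = buffered bits / link rate
const bufferedBits = BUFFER_SIZE_PACKETS * PACKET_SIZE_BYTES * 8;
const queuingDelaySeconds = bufferedBits / LINK_RATE_BPS;

console.log(`Standing queue delay: ${(queuingDelaySeconds * 1000).toFixed(0)} ms`);
// -> "Standing queue delay: 1200 ms", far beyond what VoIP or online gaming tolerates.
// AQM schemes such as RED, PIE or CoDel instead keep the standing queue short,
// signalling the sender (via drops or ECN marks) well before the buffer fills.
```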

    Buffer De-bloating in Wireless Access Networks

    Excessive buffering brings a new challenge into networks, known as bufferbloat, which is harmful to delay-sensitive applications. Wireless access networks consist of Wi-Fi and cellular networks. In this thesis, the performance of CoDel and RED is investigated in Wi-Fi networks with different types of traffic. Results show that CoDel and RED work well in Wi-Fi networks, due to the similarity of the protocol structures of Wi-Fi and wired networks. In cellular networks, however, it is difficult for RED to tune its parameters because of the time-varying channel, and CoDel needs modifications because it drops the packet at the head of the queue, which in cellular networks may be segmented. The major contribution of this thesis is three new AQM algorithms tailored to cellular networks, proposed to alleviate large queuing delays. A channel-quality-aware AQM is proposed using the Channel Quality Indicator (CQI). The proposed algorithm is tested on a single-cell topology, and simulation results show that it reduces the average queuing delay for each user by 40% on average with TCP traffic compared to CoDel. A QoE-aware AQM is proposed for VoIP traffic: drops and delay are monitored and mapped to QoE by mathematical models. The proposed algorithm is tested in NS3 and compared with CoDel; it enhances the QoE of VoIP traffic and reduces the average end-to-end delay by more than 200 ms when multiple users with different CQI compete for the wireless channel. A random back-off AQM is proposed to alleviate the queuing delay created by video traffic in cellular networks. The proposed algorithm monitors the play-out buffer and postpones the request for the next packet. It is tested in various scenarios and outperforms CoDel by 18% in controlling the average end-to-end delay when users have different channel conditions.
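    The abstract does not give the details of the proposed channel-quality-aware AQM, so the sketch below is only a hypothetical illustration of the general idea: using the reported CQI to tighten or relax a CoDel-style sojourn-time target per user. The interface, the base target of 5 ms and the linear CQI mapping are all assumptions, not the thesis's algorithm.

```typescript
// Illustrative sketch only: one way a channel-quality-aware AQM *might* couple
// the Channel Quality Indicator (CQI) to a CoDel-style target delay.
// This is NOT the algorithm proposed in the thesis; the mapping below is assumed.

interface BearerQueueState {
  cqi: number;             // reported CQI, 1 (poor) .. 15 (excellent)
  sojournTimeMs: number;   // how long the head packet has waited in the queue
}

const BASE_TARGET_MS = 5;  // CoDel-style standing-queue target (assumed here)

// With a poor channel the drain rate is low, so the same backlog means a larger
// delay; a tighter target forces earlier drops and keeps per-user delay bounded.
function targetForCqi(cqi: number): number {
  const quality = Math.min(Math.max(cqi, 1), 15) / 15; // normalise to (0, 1]
  return BASE_TARGET_MS * (0.5 + quality);             // ~2.8 ms .. 7.5 ms (assumed mapping)
}

function shouldDrop(state: BearerQueueState): boolean {
  return state.sojournTimeMs > targetForCqi(state.cqi);
}

// Example: a cell-edge user (CQI 3) is held to a tighter target than a user
// near the base station (CQI 14).
console.log(shouldDrop({ cqi: 3, sojournTimeMs: 4 }));   // true  (target ≈ 3.5 ms)
console.log(shouldDrop({ cqi: 14, sojournTimeMs: 4 }));  // false (target ≈ 7.2 ms)
```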

    Improving video streaming experience through network measurements and analysis

    Multimedia traffic dominates today's Internet. In particular, the most prevalent traffic carried over wired and wireless networks is video. The most popular streaming providers (e.g. Netflix, YouTube) use HTTP adaptive streaming (HAS) for video content delivery to end-users. The power of HAS lies in the ability to change video quality in real time depending on the current state of the network (i.e. the available network resources). The main goal of HAS algorithms is to maximise video quality while minimising re-buffering events and switching between different qualities. However, these requirements pull in opposite directions, so striking a good balance is challenging, as there is no single widely accepted metric that captures user experience based on the aforementioned requirements. In recent years, researchers have put a lot of effort into designing subjectively validated metrics that map the quality, re-buffering and switching behaviour of HAS players to the overall user experience (i.e. video QoE). This thesis demonstrates how data analysis can contribute to improving video QoE.

    One of the main characteristics of mobile networks is frequent throughput fluctuation. Various underlying factors contribute to this behaviour, including rapid changes in radio channel conditions, system load and interaction between feedback loops at different time scales. These fluctuations make it challenging to achieve a high video user experience. In this thesis, we tackle this issue by exploring the possibility of throughput prediction in cellular networks. The need for better throughput prediction comes from data-based evidence that standard throughput estimation techniques (e.g. the exponential moving average) exhibit low prediction accuracy. Cellular networks deploy opportunistic scheduling algorithms (e.g. proportional fair) for resource allocation among mobile users and devices. These algorithms take into account a user's physical-layer information together with throughput demand. While the algorithm itself is proprietary to the manufacturer, physical-layer and throughput information is exchanged between devices and base stations, and the availability of this information allows a data-driven approach to throughput prediction. This thesis uses a machine-learning approach to predict available throughput based on measurements from the near past, achieving a prediction error of less than 15% in 90% of samples. Adding information from other devices served by the same base station (network-based information) further improves accuracy while lessening the need for a long history (i.e. how far to look into the past). Finally, the throughput prediction technique is incorporated into state-of-the-art HAS algorithms. The approach is validated in a commercial cellular network and on a stock mobile device. As a result, better throughput prediction helps improve user experience by up to 33%, while reducing re-buffering events by up to 85%.
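    As a rough illustration of the gap described above, the sketch below contrasts the exponential-moving-average baseline mentioned in the abstract with a tiny windowed least-squares extrapolation over recent samples. The extrapolation is only an assumed stand-in for the thesis's machine-learning predictor, and the throughput trace is invented for the example.

```typescript
// Baseline estimator mentioned in the abstract: an exponential weighted moving
// average (EWMA) over recent throughput samples. The windowed extrapolation is
// an assumed stand-in for a learned "predict from the near past" model.

function ewma(samplesMbps: number[], alpha = 0.125): number {
  return samplesMbps
    .slice(1)
    .reduce((est, s) => alpha * s + (1 - alpha) * est, samplesMbps[0]);
}

// Fit y = a + b*t over the last `window` samples and extrapolate one step ahead.
function linearExtrapolate(samplesMbps: number[], window = 8): number {
  const ys = samplesMbps.slice(-window);
  const n = ys.length;
  const ts = ys.map((_, i) => i);
  const meanT = ts.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    num += (ts[i] - meanT) * (ys[i] - meanY);
    den += (ts[i] - meanT) ** 2;
  }
  const slope = den === 0 ? 0 : num / den;
  return meanY + slope * (n - meanT); // predicted throughput for the next interval
}

// Example with an assumed, steadily degrading cellular trace (Mbit/s):
const trace = [22, 21, 19, 18, 16, 15, 13, 12];
console.log(`EWMA estimate: ${ewma(trace).toFixed(1)} Mbit/s`);            // lags behind the drop
console.log(`Extrapolated:  ${linearExtrapolate(trace).toFixed(1)} Mbit/s`); // tracks the trend
```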
    In contrast to wireless networks, the channel characteristics of the wired medium are more stable, resulting in less prominent throughput variations. However, all traffic traverses shared network queues (i.e. in a router or switch), unlike in cellular networks where each user gets a dedicated queue at the base station. Furthermore, network operators usually deploy a simple first-in-first-out queuing discipline at these queues. As a result, traffic can experience excessive delays due to the large queue sizes usually deployed in order to minimise packet loss and maximise throughput. This effect, also known as bufferbloat, negatively impacts delay-sensitive applications such as web browsing and voice. While guidelines for dimensioning queue sizes exist, there is no work analysing their impact on video streaming traffic generated by multiple users. To answer this question, the performance of multiple video clients sharing a bottleneck link is analysed. Moreover, the analysis is extended to a realistic case including heterogeneous round-trip times (RTTs) and traffic (i.e. web browsing). Based on the experimental results, a simple two-queue discipline that takes application characteristics into account is proposed for scheduling heterogeneous traffic. Compared to the state-of-the-art Active Queue Management (AQM) discipline CoDel (Controlled Delay), the proposed discipline decreases the median Page Load Time (PLT) of web traffic by up to 80%, with no significant negative impact on video QoE.
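    The exact two-queue discipline is not specified in the abstract, so the sketch below only illustrates the underlying idea: classify packets by application characteristics into a latency-sensitive queue and a bulk queue, and serve the former first so video backlogs cannot inflate web page load times. The classification field and the strict-priority service order are assumptions, not the thesis's exact design.

```typescript
// Illustrative two-queue sketch: keep short, latency-sensitive transfers
// (web pages, voice) out of the standing queue that HAS video builds up.

interface Packet {
  flowId: string;
  bytes: number;
  latencySensitive: boolean; // e.g. inferred from port, flow size or DSCP marking (assumed)
}

class TwoQueueScheduler {
  private sensitive: Packet[] = [];
  private bulk: Packet[] = [];

  enqueue(p: Packet): void {
    (p.latencySensitive ? this.sensitive : this.bulk).push(p);
  }

  // Serve the latency-sensitive queue first so a full video queue cannot
  // inflate web page load times; bulk flows drain whenever it is empty.
  dequeue(): Packet | undefined {
    return this.sensitive.shift() ?? this.bulk.shift();
  }
}

// Usage: a web request bypasses the backlog created by a video segment download.
const link = new TwoQueueScheduler();
link.enqueue({ flowId: "video-1", bytes: 1500, latencySensitive: false });
link.enqueue({ flowId: "web-1", bytes: 400, latencySensitive: true });
console.log(link.dequeue()?.flowId); // "web-1"
```

    A real deployment would also need to guard against starving the bulk queue, for example with a weighted or deficit round-robin variant instead of strict priority.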