
    A Survey of Machine Learning Techniques for Video Quality Prediction from Quality of Delivery Metrics

    A growing number of video streaming networks are incorporating machine learning (ML) applications. The growth of video streaming services places enormous pressure on network and video content providers, who need to proactively maintain high levels of video quality. ML has been applied to predict the quality of video streams. Quality of delivery (QoD) measurements, which capture the end-to-end performance of network services, have been leveraged in video quality prediction. The drive for end-to-end encryption, for privacy and digital rights management, has brought about a lack of visibility for operators who desire insights from video quality metrics. In response, numerous solutions have been proposed to tackle the challenge of video quality prediction from QoD-derived metrics. This survey reviews studies that focus on ML techniques for predicting video quality from QoD metrics in video streaming services. In the context of video quality measurements, we focus on QoD metrics, which are not tied to a particular type of video streaming service. Unlike previous reviews in the area, this contribution considers papers published between 2016 and 2021. Approaches for predicting video quality from QoD are grouped under the following headings: (1) video quality prediction under QoD impairments, (2) prediction of video quality from encrypted video streaming traffic, (3) predicting the video quality in HAS applications, (4) predicting the video quality in SDN applications, (5) predicting the video quality in wireless settings, and (6) predicting the video quality in WebRTC applications. Throughout the survey, research challenges and directions in this area are discussed, including (1) machine learning over deep learning; (2) adaptive deep learning for improved video delivery; (3) computational cost and interpretability; and (4) self-healing networks and failure recovery. The survey findings reveal that traditional ML algorithms are the most widely adopted models for solving video quality prediction problems. This family of algorithms has a lot of potential because its members are well understood, easy to deploy, and have lower computational requirements than deep learning techniques.
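
    The survey's emphasis on traditional ML motivates a brief illustration. Below is a minimal, hedged sketch of the general workflow such studies follow: train a classical model (here a random forest) on QoD-style features to predict a video quality label. The feature names, synthetic data, and label scheme are illustrative assumptions, not taken from any surveyed paper.

```python
# Hedged sketch: classical ML on QoD-style features for video quality prediction.
# All features, data, and labels here are synthetic and assumed for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical QoD features: throughput (Mbit/s), RTT (ms), packet loss (%), jitter (ms)
X = np.column_stack([
    rng.uniform(0.5, 50, n),   # throughput
    rng.uniform(10, 300, n),   # RTT
    rng.uniform(0, 5, n),      # packet loss
    rng.uniform(0, 50, n),     # jitter
])
# Hypothetical label: 1 = acceptable quality when throughput is high and loss is low
y = ((X[:, 0] > 5) & (X[:, 2] < 2)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))
```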

    The cookie recipe: untangling the use of cookies in the wild

    Proceedings of: 2017 Network Traffic Measurement and Analysis Conference (TMA). Users online are commonly tracked using HTTP cookies when browsing the web. To protect their privacy, users tend to use simple tools to block the activity of HTTP cookies. However, the "block all" design of these tools breaks critical web services or severely limits the online advertising ecosystem. Therefore, to ease this tension, a more nuanced strategy is required, one that better discerns the intended functionality of the HTTP cookies users encounter. We present the first large-scale study of the use of HTTP cookies in the wild, using network traces containing more than 5.6 billion HTTP requests from real users over a period of two and a half months. We first present a statistical analysis of how cookies are used. We then analyze the structure of cookies and observe that HTTP cookies are significantly more sophisticated than the name=value format defined by the standard and assumed by researchers and developers. Based on our findings, we present an algorithm that is able to extract the information included in 86% of the cookies in our dataset with an accuracy of 91.7%. Finally, we discuss the implications of our findings and provide solutions that can be used to improve the most promising privacy-preserving tools. This work has been partially supported by the European Union through the H2020 TYPES (653449) and ReCRED (653417) projects.
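
    As a rough illustration of the kind of structure the paper reports, the sketch below recursively splits a composite cookie value on common delimiters to recover nested key=value sub-fields. It is our own simplification under assumed delimiters and recursion depth, not the paper's algorithm.

```python
# Toy sketch (not the paper's extraction algorithm): untangle a composite cookie
# value by recursively splitting on a small, assumed set of delimiters.
DELIMITERS = ["&", "|", ":", ";"]

def untangle(value, depth=0):
    """Recursively split a cookie value into a nested key/value structure."""
    if depth > 3:                      # assumed recursion limit
        return value
    for delim in DELIMITERS:
        if delim in value:
            out = {}
            for i, part in enumerate(p for p in value.split(delim) if p):
                if "=" in part:
                    key, _, val = part.partition("=")
                    out[key] = untangle(val, depth + 1)
                else:
                    out[f"field_{i}"] = part
            return out
    return value

# A made-up composite cookie value of the kind trackers and CDNs set
print(untangle("uid=abc123&geo=ES|lang=es&ts=1700000000"))
```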

    On the use of composite indicators for mobile communications network management in smart sustainable cities

    Beyond-5G networks will be fundamental to enabling sustainable mobile communication networks. One of the most challenging scenarios will be found in ultra-dense networks deployed in densely populated areas. In this particular case, mobile network operators should benefit from new assessment metrics and data science tools to ensure effective management of their networks. In fact, incorporating architectures that allow a cognitive network management framework could simplify processes and enhance network performance. In this paper, we propose the use of composite indicators based on key performance indicators, both as a tool for cognitive management of mobile communications networks and as a metric that could successfully integrate more advanced user-centric measurements. Composite indicators can synthesize and integrate large amounts of data, incorporating in a single index the different metrics selected as triggers for autonomous decisions. The paper motivates and describes the use of this methodology, which has been applied successfully in other areas to rank metrics and simplify complex realities. A use case based on a Universal Mobile Telecommunications System network is analyzed, due to the technology's simplicity and scalability, as well as the availability of key performance indicators. The use case focuses on analyzing the fairness of a network over different coverage areas as a fundamental metric in the operation and management of networks. To this end, several ranking and visualization strategies are presented, providing examples of how to extract insights from the proposed composite indicator.
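
    To make the methodology concrete, the sketch below builds a toy composite indicator by min-max normalising a handful of per-area KPIs and combining them with a weighted sum before ranking coverage areas. KPI names, weights, and values are invented for illustration, not taken from the paper's use case.

```python
# Hedged sketch: a simple composite indicator from per-area KPIs via min-max
# normalisation and a weighted sum, then ranking of coverage areas.
import numpy as np

kpis = {                               # rows: coverage areas, columns: KPIs (assumed)
    "area_A": [0.95, 12.0, 0.8],       # [call success rate, throughput Mbit/s, fairness index]
    "area_B": [0.90, 20.0, 0.6],
    "area_C": [0.99,  8.0, 0.9],
}
weights = np.array([0.4, 0.3, 0.3])    # assumed relative importance of each KPI

X = np.array(list(kpis.values()), dtype=float)
# Min-max normalise each KPI column to [0, 1] so the scales become comparable
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
scores = X_norm @ weights

for area, score in sorted(zip(kpis, scores), key=lambda t: -t[1]):
    print(f"{area}: composite indicator = {score:.3f}")
```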

    Efficient Service for Next Generation Network Slicing Architecture and Mobile Traffic Analysis Using Machine Learning Technique

    The tremendous growth of mobile devices, IoT devices, applications and many other services has placed high demands on mobile and wireless network infrastructures. Much of the research and development on 5G mobile networks has sought ways to support the huge volume of traffic, the extraction of fine-grained analytics, and the agile management of mobile network elements, so as to maximize the user experience. Accomplishing these tasks is very challenging, as mobile networks grow in complexity with the increase in data penetration, devices, and applications. One solution, advanced machine learning techniques, can help to handle the large amounts of data and the algorithm-driven applications. This work mainly focuses on an extensive analysis of mobile traffic for improving performance, key performance indicators and quality of service from an operations perspective. The work includes the collection of datasets and log files using different kinds of tools at different network layers, and the application of machine learning techniques to analyze the datasets and predict mobile traffic activity. A wide range of algorithms was implemented and compared in order to identify the best-performing one. Moreover, this thesis also discusses the network slicing architecture, its use cases, and how to use network slicing efficiently to meet distinct demands.
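
    The sketch below illustrates, under assumptions, the kind of comparison the thesis describes: several classical ML models are trained on lagged values of a synthetic mobile traffic series and compared on short-term prediction error. The synthetic series and lag features stand in for the collected datasets and log files.

```python
# Illustrative sketch: compare a few classical regressors for short-term mobile
# traffic prediction from lagged traffic volumes (synthetic, assumed data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
t = np.arange(2000)
traffic = 50 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)  # daily cycle

lags = 6
X = np.column_stack([traffic[i:i - lags] for i in range(lags)])  # lagged inputs
y = traffic[lags:]
split = int(0.8 * len(y))

for name, model in [("linear", LinearRegression()),
                    ("random_forest", RandomForestRegressor(random_state=0)),
                    ("grad_boost", GradientBoostingRegressor(random_state=0))]:
    model.fit(X[:split], y[:split])
    mae = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(f"{name}: MAE = {mae:.2f}")
```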

    Anycast Agility: Adaptive Routing to Manage DDoS

    IP anycast is used for services such as DNS and Content Delivery Networks to provide the capacity to handle Distributed Denial-of-Service (DDoS) attacks. During a DDoS attack, service operators may wish to redistribute traffic between anycast sites to take advantage of sites with unused or greater capacity. Depending on site traffic and attack size, operators may instead choose to concentrate attackers in a few sites to preserve operation in others. Service operators have previously taken these actions during attacks, but how to do so has not been described publicly. This paper meets that need, describing methods that use BGP to shift traffic when under DDoS and that can build a "response playbook". Operators can use this playbook, together with our new method to estimate attack size, to respond to attacks. We also explore constraints on responses seen in an anycast deployment.
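
    As a toy rendering of the "response playbook" idea (our own construction, not the paper's method), the sketch below selects a per-site BGP posture from an estimated attack size, assumed site capacities, and an assumed catchment share per site.

```python
# Toy playbook sketch: choose a per-site BGP action given an estimated attack
# size, assumed site capacities, and assumed catchment shares.
SITES = {"ams": 40, "lax": 100, "sin": 20}   # site -> capacity in Gbit/s (assumed)

def plan_response(attack_gbps, catchment_share):
    """Return a per-site BGP action for the given attack size and traffic shares."""
    actions = {}
    for site, capacity in SITES.items():
        expected = attack_gbps * catchment_share.get(site, 0.0)
        if expected <= capacity:
            actions[site] = "announce"                 # site can absorb its share
        elif capacity == max(SITES.values()):
            actions[site] = "announce (absorber)"      # concentrate the attack here
        else:
            actions[site] = "prepend/withdraw"         # push traffic elsewhere
    return actions

print(plan_response(attack_gbps=150,
                    catchment_share={"ams": 0.4, "lax": 0.4, "sin": 0.2}))
```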

    Improving Anycast with Measurements

    Since the first Distributed Denial-of-Service (DDoS) attacks were launched, the strength of such attacks has been steadily increasing, from a few megabits per second to well into the terabit/s range. The damage that these attacks cause, mostly in terms of financial cost, has prompted researchers and operators alike to investigate and implement mitigation strategies. Examples of such strategies include local filtering appliances, Border Gateway Protocol (BGP)-based blackholing and outsourced mitigation in the form of cloud-based DDoS protection providers. Some of these strategies are more suited towards high-bandwidth DDoS attacks than others. For example, using a local filtering appliance means that all the attack traffic still passes through the owner's network, which inherently limits the maximum capacity of such a device to the bandwidth that is available. BGP blackholing does not have such limitations, but can, as a side effect, cause service disruptions to end-users. A different strategy, which has not attracted much attention in academia, is based on anycast. Anycast is a technique that allows operators to replicate their service across different physical locations, while keeping that service addressable with just a single IP address. It relies on BGP to effectively load balance users. In practice, it is combined with other mitigation strategies to allow those to scale up: operators can use anycast to scale their mitigation capacity horizontally. Because anycast relies on BGP, and therefore in essence on the Internet itself, it can be difficult for network engineers to fine-tune this balancing behavior. In this thesis, we show that this is indeed the case through two different case studies. In the first, we focus on an anycast service during normal operations, namely the Google Public DNS, and show that the routing within this service is far from optimal, for example in terms of distance between the client and the server. In the second case study, we observe the root DNS while it is under attack, and show that even though in aggregate the bandwidth available to this service exceeded the attack we observed, clients still experienced service degradation. This degradation was caused by some sites of the anycast service receiving a much higher share of traffic than others. In order for operators to improve their anycast networks, and optimize them in terms of resilience against DDoS attacks, a method to assess the actual state of such a network is required. Existing methodologies typically rely on external vantage points, such as those provided by RIPE Atlas, and are therefore limited in scale and inherently biased in terms of distribution. We propose a new measurement methodology, named Verfploeter, to assess the characteristics of anycast networks in terms of client to Point-of-Presence (PoP) mapping, i.e. the anycast catchment. This method does not rely on external vantage points, is free of bias and offers a much higher resolution than any previous method. We validated this methodology by deploying it on a locally developed testbed, as well as on the B root DNS. We showed that the increased resolution of this methodology improved our ability to assess the impact of changes in the network configuration, when compared to previous methodologies. As final validation we implemented Verfploeter on Cloudflare's global-scale anycast Content Delivery Network (CDN), which has almost 200 global Points-of-Presence and an aggregate bandwidth of 30 Tbit/s. Through three real-world use cases, we demonstrate the benefits of our methodology. Firstly, we show that changes that occur when withdrawing routes from certain PoPs can be accurately mapped, and that in certain cases the effect of taking down a combination of PoPs can be calculated from individual measurements. Secondly, we show that Verfploeter largely reinstates the ping to its former glory, showing how it can be used to troubleshoot network connectivity issues in an anycast context. Thirdly, we demonstrate how accurate anycast catchment maps offer operators a new and highly accurate tool to identify and filter spoofed traffic. Where possible, we make the datasets collected over the course of the research in this thesis available as open access data. The two best (open) dataset awards received for these datasets confirm that they are a valued contribution. In summary, we have investigated two large anycast services and have shown that their deployments are not optimal. We developed a novel measurement methodology that is free of bias and is able to obtain highly accurate anycast catchment mappings. By implementing this methodology and deploying it on a global-scale anycast network, we show that our method adds significant value to the fast-growing anycast CDN industry and enables new ways of detecting, filtering and mitigating DDoS attacks.
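
    The core of the catchment mapping can be illustrated with a short sketch, written from the description above rather than from Verfploeter's code: probes sent from the anycast prefix are answered by clients, each reply arrives at whichever anycast site BGP selects for that client, and aggregating (client prefix, receiving site) pairs yields the catchment. The reply data below is made up.

```python
# Hedged sketch of catchment mapping: aggregate (client /24, receiving site)
# observations into a per-prefix catchment. Reply data is synthetic.
from collections import Counter
from ipaddress import ip_address, ip_network

# (client IP that answered the probe, anycast site where its reply arrived) - assumed
replies = [
    ("192.0.2.10", "ams"), ("192.0.2.99", "ams"),
    ("198.51.100.5", "lax"), ("203.0.113.7", "sin"),
    ("203.0.113.200", "sin"),
]

catchment = Counter()
for client, site in replies:
    prefix = ip_network(f"{ip_address(client)}/24", strict=False)  # aggregate per /24
    catchment[(str(prefix), site)] += 1

total = sum(catchment.values())
for (prefix, site), count in catchment.items():
    print(f"{prefix} -> {site} ({100 * count / total:.0f}% of observed replies)")
```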

    Lightweight, General Inference of Streaming Video Quality from Encrypted Traffic

    Accurately monitoring application performance is becoming more important for Internet Service Providers (ISPs), as users increasingly expect their networks to consistently deliver acceptable application quality. At the same time, the rise of end-to-end encryption makes it difficult for network operators to determine video stream quality, including metrics such as startup delay, resolution, rebuffering, and resolution changes, directly from the traffic stream. This paper develops general methods to infer streaming video quality metrics from encrypted traffic using lightweight features. Our evaluation shows that our models are not only as accurate as previous approaches, but also generalize across multiple popular video services, including Netflix, YouTube, Amazon Instant Video, and Twitch. The ability of our models to rely on lightweight features points to promising future possibilities for implementing such models at a variety of network locations along the end-to-end network path, from the edge to the core.
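
    A minimal sketch of the kind of lightweight features such models rely on is shown below: per-time-window byte counts, packet counts, and mean packet sizes computed from (timestamp, size) pairs of an encrypted flow. The packet trace is synthetic and the feature set is illustrative, not the paper's exact feature definition.

```python
# Sketch: lightweight per-window features from an encrypted video flow.
# The packet trace is synthetic; real features would come from captured traffic.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic downstream packets of one video session: (arrival time s, size bytes)
packets = sorted(zip(rng.uniform(0, 60, 5000), rng.integers(60, 1500, 5000)))

window = 5.0  # seconds per feature window (assumed)
features = {}
for ts, size in packets:
    bucket = int(ts // window)
    byte_cnt, pkt_cnt = features.get(bucket, (0, 0))
    features[bucket] = (byte_cnt + size, pkt_cnt + 1)

for bucket in sorted(features):
    byte_cnt, pkt_cnt = features[bucket]
    print(f"window {bucket}: throughput={8 * byte_cnt / window / 1e6:.2f} Mbit/s, "
          f"packets={pkt_cnt}, mean_size={byte_cnt / pkt_cnt:.0f} B")
```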

    Engage D2.2 Final Communication and Dissemination Report

    This deliverable reports on the communication and dissemination activities carried out by the Engage consortium over the duration of the network. Planned activities had to be adapted due to the Covid-19 pandemic; however, a full programme of workshops and summer schools has been organised. Support has been given to the annual SESAR Innovation Days conference, and there has been an Engage presence at many other events. The Engage website was launched in the first month of the network. It was later joined by the Engage 'knowledge hub', known as the EngageWiki, which hosts ATM research and knowledge. The wiki provides a platform and consolidated repository with novel user functionality, as well as an additional channel for the dissemination of SESAR results. Engage has also supported and publicised numerous research outputs produced by PhD candidates and catalyst fund projects.

    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments have demonstrated that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves overall accuracies of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% across those same environments.
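
    The sketch below shows one plausible shape of an attention-based BiLSTM classifier for CSI windows, written from the abstract's description; the layer sizes, number of subcarriers, and attention formulation are our assumptions, not the paper's exact architecture.

```python
# Hedged sketch of an attention-based BiLSTM (ABiLSTM-style) classifier for CSI
# time series. Dimensions and attention form are assumptions for illustration.
import torch
import torch.nn as nn

class ABiLSTM(nn.Module):
    def __init__(self, n_subcarriers=90, hidden=128, n_classes=12):
        super().__init__()
        self.bilstm = nn.LSTM(n_subcarriers, hidden, batch_first=True,
                              bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)       # one attention score per time step
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                           # x: (batch, time, subcarriers)
        h, _ = self.bilstm(x)                       # (batch, time, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)    # attention over time steps
        context = (weights * h).sum(dim=1)          # weighted sum of hidden states
        return self.classifier(context)

# Smoke test on a random CSI batch: 8 windows of 200 time steps x 90 subcarriers
model = ABiLSTM()
logits = model(torch.randn(8, 200, 90))
print(logits.shape)   # torch.Size([8, 12])
```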

    Fifteenth Biennial Status Report: March 2019 - February 2021

