
    An innovative machine learning-based scheduling solution for improving live UHD video streaming quality in highly dynamic network environments

    The latest advances in network technologies open up new opportunities for high-end applications, including next-generation video streaming. As mobile devices become more affordable and powerful, an increasing range of rich media applications could offer a highly realistic and immersive experience to mobile users. However, this comes at the cost of very stringent Quality of Service (QoS) requirements, putting significant pressure on the underlying networks. In order to accommodate these new rich media applications and overcome their associated challenges, this paper proposes an innovative Machine Learning-based scheduling solution which supports increased quality for live omnidirectional (360°) video streaming. The proposed solution is deployed in a highly dynamic Unmanned Aerial Vehicle (UAV)-based environment to support immersive live omnidirectional video streaming to mobile users. The effectiveness of the proposed method is demonstrated through simulations and compared against three state-of-the-art scheduling solutions, namely Static Prioritization (SP), the Required Activity Detection Scheduler (RADS) and the Frame Level Scheduler (FLS). The results show that the proposed solution outperforms the other schemes in terms of PSNR, throughput and packet loss rate.
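    The evaluation above reports PSNR as the video quality metric. As a point of reference only, and not the paper's evaluation pipeline, the sketch below shows the standard PSNR computation between a reference and a received frame; the random frame arrays and the 8-bit pixel range are illustrative assumptions.

```python
import numpy as np

def psnr(reference: np.ndarray, received: np.ndarray, max_pixel: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two equally sized frames."""
    mse = np.mean((reference.astype(np.float64) - received.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_pixel ** 2) / mse)

# Illustrative usage with synthetic 8-bit frames standing in for decoded video frames.
ref = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)
rec = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, rec):.2f} dB")
```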

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as eMBB, mMTC and URLLC, mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include QoS provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely, LTE-M and NB-IoT. Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the applications of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in mMTC scenarios. Finally, we discuss some open research challenges and promising future research directions. (37 pages, 8 figures, 7 tables; submitted for possible publication in IEEE Communications Surveys and Tutorials.)
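    The survey singles out low-complexity Q-learning for mMTC, for example to let devices learn which random-access resources to contend on. The snippet below is a generic tabular Q-learning update, not the formulation used in the paper; the single-state setup, the number of candidate RA slots, and the collision-based reward are illustrative assumptions.

```python
import numpy as np

n_states, n_actions = 1, 8            # one state, 8 candidate RA slots (assumed)
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def choose_action(state: int) -> int:
    """Epsilon-greedy slot selection."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def update(state: int, action: int, reward: float, next_state: int) -> None:
    """Standard Q-learning temporal-difference update."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

# Toy loop: reward +1 if the chosen slot was collision-free (simulated at random here).
for _ in range(1000):
    s = 0
    a = choose_action(s)
    r = 1.0 if rng.random() > 0.3 else -1.0  # placeholder collision model
    update(s, a, r, s)
```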

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these network-generated data and to take decisions pertaining to the proper functioning of the networks. Among these mathematical tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. Such complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) that are enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude the paper by proposing possible new research directions.

    Lightweight testbed for machine learning evaluation in 5G networks

    The adoption of Software-Defined Networking, Network Function Virtualization and Machine Learning will play a key role in the control and management of fifth-generation (5G) networks in order to meet the specific requirements of vertical industries and the stringent requirements of 5G. Machine learning could be applied in 5G networks to deal with issues such as traffic prediction, routing optimization and resource management. To evaluate the adoption of machine learning in 5G networks, an adequate testing environment is required. In this paper, we introduce a lightweight testbed, which utilizes the benefits of container-based lightweight virtualization technology to create machine learning network functions over the well-known Mininet network emulator. As a use case of this testbed, we present an experimental real-time bandwidth prediction using a Long Short-Term Memory recurrent neural network.
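    As a rough illustration of the kind of real-time bandwidth predictor mentioned in this use case, and not the authors' exact model or testbed code, the sketch below trains a small Keras LSTM on a sliding window of past throughput samples; the window length, layer sizes and synthetic trace are assumptions.

```python
import numpy as np
import tensorflow as tf

# Synthetic bandwidth trace (Mb/s) standing in for measurements collected from the emulator.
trace = 20 + 5 * np.sin(np.linspace(0, 50, 2000)) + np.random.normal(0, 1, 2000)

window = 10  # predict the next sample from the previous 10 samples (assumed)
X = np.array([trace[i:i + window] for i in range(len(trace) - window)])[..., np.newaxis]
y = trace[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# One-step-ahead prediction from the most recent window of samples.
next_bw = model.predict(trace[-window:].reshape(1, window, 1), verbose=0)
print(f"Predicted next bandwidth sample: {float(next_bw[0, 0]):.2f} Mb/s")
```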

    Framework for Virtualized Network Functions (VNFs) in Cloud of Things Based on Network Traffic Services

    Get PDF
    The cloud of things (CoT), which combines the Internet of Things (IoT) and cloud computing, can offer Virtualized Network Functions (VNFs) to IoT devices dynamically, based on service-specific requirements. Although the provisioning of VNFs in CoT is described as an online decision-making problem, most widely used techniques primarily focus on defining the environment using simple models in order to discover the optimum solution. This leads to inefficient and coarse-grained provisioning, since the Quality of Service (QoS) requirements for different types of CoT services are not considered, and important historical experience on how to provision for the best long-term benefit is disregarded. This paper suggests a methodology for providing VNFs intelligently in order to schedule adaptive CoT resources in line with the detection of traffic from diverse network services. The system makes decisions using Deep Reinforcement Learning (DRL) models that take into account the complexity of network configurations and traffic changes. To obtain stable performance in this model, a special surrogate objective function and a policy gradient DRL method known as Policy Optimisation using Kronecker-Factored Trust Region (POKTR) are utilised. Experimental results support the assertion that our strategy improves CoT QoS through real-time VNF provisioning. The POKTR-based DRL model maximises throughput while minimising network congestion compared with earlier DRL algorithms.
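    The provisioning model above optimises a surrogate objective with a policy-gradient method (POKTR). The sketch below shows a generic clipped-surrogate policy-gradient step in PyTorch as a simplified stand-in, not a reproduction of the paper's Kronecker-factored trust-region method; the state and action dimensions, the policy network, and the random batch of advantages are assumptions.

```python
import torch
import torch.nn as nn

state_dim, n_actions, clip_eps = 8, 4, 0.2  # illustrative sizes for a VNF-provisioning policy
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def surrogate_update(states, actions, advantages, old_log_probs):
    """One clipped-surrogate policy-gradient step (PPO-style stand-in for POKTR)."""
    dist = torch.distributions.Categorical(logits=policy(states))
    ratio = torch.exp(dist.log_prob(actions) - old_log_probs)        # pi_new / pi_old
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch: random states, actions and advantages standing in for collected CoT traffic episodes.
states = torch.randn(32, state_dim)
actions = torch.randint(0, n_actions, (32,))
advantages = torch.randn(32)
old_log_probs = torch.distributions.Categorical(
    logits=policy(states).detach()).log_prob(actions)
print(surrogate_update(states, actions, advantages, old_log_probs))
```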