294 research outputs found

    A Cognitive Routing framework for Self-Organised Knowledge Defined Networks

    This study investigates the applicability of machine learning methods to routing protocols for achieving rapid convergence in self-organized knowledge-defined networks. The research explores the constituents of the Self-Organized Networking (SON) paradigm for 5G and beyond, aiming to design a routing protocol that complies with the SON requirements. It also exploits the contemporary discipline of Knowledge-Defined Networking (KDN) to extend routing capability by calculating the “Most Reliable” path rather than the shortest one. The research identifies the potential key areas and possible techniques to meet these objectives by surveying the state of the art in the relevant fields, such as QoS-aware routing, hybrid SDN architectures, intelligent routing models, and service migration techniques. The design phase focuses primarily on the mathematical modelling of the routing problem and approaches the solution by optimizing at the structural level. The work contributes the Stochastic Temporal Edge Normalization (STEN) technique, which fuses link and node utilization for cost calculation; MRoute, a hybrid routing algorithm for SDN that leverages STEN to provide constant-time convergence; and Most Reliable Route First (MRRF), which uses a Recurrent Neural Network (RNN) to approximate route reliability as its routing metric. Additionally, the research outcomes include a cross-platform SDN integration framework (SDN-SIM) and a secure migration technique for containerized services in a Multi-access Edge Computing environment using Distributed Ledger Technology. Future work targets the development of 6G standards and their compliance with Industry 5.0, enhancing the present outcomes in the light of Deep Reinforcement Learning and Quantum Computing.
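    The abstract describes STEN only at a high level (fusing link and node utilization into a single routing cost). As an illustration of that idea only, and not the thesis's actual algorithm, the following minimal Python sketch combines the two utilization signals into one edge cost; the function name fused_edge_cost and the weight alpha are hypothetical.

```python
# Hypothetical sketch: fusing link and node utilization into one edge cost,
# in the spirit of the STEN idea described above (not the author's actual code).

def fused_edge_cost(link_util, node_util, alpha=0.5, eps=1e-6):
    """Combine link utilization and destination-node utilization into a
    single cost in (0, +inf); higher utilization -> higher cost."""
    # Clamp utilizations to [0, 1) so the normalization below stays finite.
    link_util = min(max(link_util, 0.0), 1.0 - eps)
    node_util = min(max(node_util, 0.0), 1.0 - eps)
    # Convex combination of the two congestion signals.
    fused = alpha * link_util + (1.0 - alpha) * node_util
    # Normalize so the cost grows sharply as the fused utilization approaches 1.
    return fused / (1.0 - fused)

# Example: a lightly loaded link into a busy node costs more than
# a moderately loaded link into an idle node.
print(fused_edge_cost(0.2, 0.9))   # ~1.22
print(fused_edge_cost(0.5, 0.1))   # ~0.43
```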

    Five Facets of 6G: Research Challenges and Opportunities

    Whilst fifth-generation (5G) systems are being rolled out across the globe, researchers have turned their attention to the exploration of radical next-generation solutions. At this early evolutionary stage we survey five main research facets of this field, namely Facet 1: next-generation architectures, spectrum and services; Facet 2: next-generation networking; Facet 3: Internet of Things (IoT); Facet 4: wireless positioning and sensing; and Facet 5: applications of deep learning in 6G networks. In this paper, we provide a critical appraisal of the literature on promising techniques, ranging from the associated architectures and networking to applications and designs. We portray a plethora of heterogeneous architectures relying on cooperative hybrid networks supported by diverse access and transmission mechanisms. The vulnerabilities of these techniques are also addressed and carefully considered to highlight the most promising future research directions. Additionally, we list a rich suite of learning-driven optimization techniques. We conclude by observing the evolutionary paradigm shift that has taken place from pure single-component bandwidth-efficiency, power-efficiency or delay optimization towards multi-component designs, as exemplified by the twin-component ultra-reliable low-latency mode of the 5G system. We advocate a further evolutionary step towards multi-component Pareto optimization, which requires the exploration of the entire Pareto front of all optimal solutions, where none of the components of the objective function may be improved without degrading at least one of the other components.
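    Since the paper advocates multi-component Pareto optimization over the entire Pareto front, a small sketch of how non-dominated solutions are extracted from a set of candidate designs may make the idea concrete; the objective names (power, delay) and the candidate values below are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: extracting the Pareto front from candidate designs,
# where each design is scored on several objectives to be minimized
# (e.g. power consumption and delay). Names and values are assumptions.

def dominates(a, b):
    """True if design a is at least as good as b on every objective
    and strictly better on at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Return the non-dominated designs: no component of the objective
    can be improved without degrading at least one other component."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other is not d)]

# Each tuple is (power, delay); lower is better for both objectives.
candidates = [(1.0, 9.0), (2.0, 4.0), (3.0, 5.0), (4.0, 1.0)]
print(pareto_front(candidates))   # [(1.0, 9.0), (2.0, 4.0), (4.0, 1.0)]
```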

    View on 5G Architecture: Version 2.0

    The 5G Architecture Working Group, as part of the 5GPPP Initiative, is looking at capturing novel trends and key technological enablers for the realization of the 5G architecture. It also aims to present in a harmonized way the architectural concepts developed in various projects and initiatives (not limited to 5GPPP projects only) so as to provide a consolidated view on the technical directions for architecture design in the 5G era. The first version of the white paper was released in July 2016; it captured novel trends and key technological enablers for the realization of the 5G architecture vision, along with harmonized architectural concepts from 5GPPP Phase 1 projects and initiatives. Capitalizing on the architectural vision and framework set by the first version of the white paper, this Version 2.0 presents the latest findings and analyses, with a particular focus on the concept evaluations, and accordingly presents the consolidated overall architecture design.

    White Paper for Research Beyond 5G

    This document considers both research in the scope of evolutions of the 5G systems (for the period around 2025) and some alternative/longer-term views (with later outcomes, or leading to substantially different design choices). It reflects on four main system areas: fundamental theory and technology; radio and spectrum management; system design; and alternative concepts. The result of this exercise can be broken into two strands: one focused on the evolution of technologies already under development for 5G systems, but that will remain research areas in the future (with “more challenging” requirements and specifications); the other highlighting technologies that are not really considered for deployment today, or that will be essential for addressing problems that are currently non-existent but will become apparent when 5G systems begin their widespread deployment.

    Towards Zero Touch Next Generation Network Management

    The current trend in user services places an ever-growing demand for higher data rates, near-real-time latencies, and near-perfect quality of service. To meet such demands, fundamental changes were made to the fronthaul, midhaul, and backbone networking segments servicing them. One of the main changes was virtualizing the networking components to allow for faster deployment and reconfiguration when needed. However, adopting such technologies poses several challenges, such as improving the performance and efficiency of these systems by properly orchestrating services onto the ideal edge device. A second challenge is ensuring that the backbone optical network maximizes and maintains throughput levels under more dynamically variable conditions. A third challenge is addressing the limitations of placement techniques in O-RAN. In this thesis, we propose using various optimization modeling and machine learning techniques in three segments of network systems to lower the need for human intervention, targeting zero-touch networking. In particular, the first part of the thesis applies optimization modeling, heuristics, and segmentation to improve locally driven orchestration techniques, which are used to place demands on edge devices, ensuring efficient and resilient placement decisions. The second part of the thesis proposes using reinforcement learning (RL) techniques on a nodal basis to address the dynamic nature of demands within an optical networking paradigm. The RL techniques keep blocking rates to a minimum by tailoring each agent's behavior to its node's demand intake throughout the day. The third part of the thesis proposes using transfer-learning-augmented reinforcement learning to drive a network-slicing-based solution in O-RAN that addresses the stringent and divergent demands of 5G applications. The main contributions of the thesis consist of three broad parts. The first is developing optimal and heuristic orchestration algorithms that improve demands' performance and reliability in an edge computing environment. The second is using reinforcement learning to determine the appropriate spectral placement for demands within isolated optical paths, ensuring lower fragmentation and better throughput utilization. The third is developing a heuristic-controlled, transfer-learning-augmented reinforcement learning network-slicing solution in an O-RAN environment, ensuring improved reliability while maintaining lower complexity than traditional placement techniques.
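    The thesis applies per-node RL agents to keep blocking rates low when placing demands on spectral resources. As a rough, generic illustration of that setting, and not the thesis implementation, the sketch below shows a tabular Q-learning update for choosing a spectral slot; the toy environment, state encoding, and reward are assumptions.

```python
import random
from collections import defaultdict

# Generic illustration of a tabular Q-learning loop for picking a spectral
# slot for an incoming demand; the toy environment, state encoding, and
# reward below are assumptions, not the thesis implementation.

N_SLOTS = 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
q_table = defaultdict(lambda: [0.0] * N_SLOTS)   # state -> value per slot

def choose_slot(state):
    """Epsilon-greedy slot selection."""
    if random.random() < EPSILON:
        return random.randrange(N_SLOTS)
    values = q_table[state]
    return max(range(N_SLOTS), key=values.__getitem__)

def update(state, slot, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(q_table[next_state])
    q_table[state][slot] += ALPHA * (reward + GAMMA * best_next - q_table[state][slot])

# Toy episode: the state is the tuple of currently occupied slots; the agent
# is rewarded for serving a demand (+1) and penalized for blocking it (-1).
occupied = set()
for _ in range(100):
    state = tuple(sorted(occupied))
    slot = choose_slot(state)
    reward = -1.0 if slot in occupied else 1.0
    if slot not in occupied:
        occupied.add(slot)
    if len(occupied) == N_SLOTS:          # all slots busy: release everything
        occupied.clear()
    update(state, slot, reward, tuple(sorted(occupied)))
```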

    5G and beyond networks

    This chapter investigates the Network Layer aspects that will characterize the merger of the cellular paradigm and the IoT architectures, in the context of the evolution towards 5G-and-beyond, including some promising emerging services such as Unmanned Aerial Vehicles or Base Stations, and V2X communications.

    A Survey of Machine Learning Techniques for Video Quality Prediction from Quality of Delivery Metrics

    A growing number of video streaming networks are incorporating machine learning (ML) applications. The growth of video streaming services places enormous pressure on network and video content providers, who need to proactively maintain high levels of video quality. ML has been applied to predict the quality of video streams. Quality of delivery (QoD) measurements, which capture the end-to-end performance of network services, have been leveraged in video quality prediction. The drive for end-to-end encryption, for privacy and digital rights management, has brought about a lack of visibility for operators who desire insights from video quality metrics. In response, numerous solutions have been proposed to tackle the challenge of video quality prediction from QoD-derived metrics. This survey reviews studies that focus on ML techniques for predicting QoD metrics in video streaming services. In the context of video quality measurements, we focus on QoD metrics, which are not tied to a particular type of video streaming service. Unlike previous reviews in the area, this contribution considers papers published between 2016 and 2021. Approaches for predicting QoD for video are grouped under the following headings: (1) video quality prediction under QoD impairments, (2) prediction of video quality from encrypted video streaming traffic, (3) predicting the video quality in HAS applications, (4) predicting the video quality in SDN applications, (5) predicting the video quality in wireless settings, and (6) predicting the video quality in WebRTC applications. Throughout the survey, research challenges and directions in this area are discussed, including (1) machine learning over deep learning, (2) adaptive deep learning for improved video delivery, (3) computational cost and interpretability, and (4) self-healing networks and failure recovery. The survey findings reveal that traditional ML algorithms are the most widely adopted models for solving video quality prediction problems. This family of algorithms has considerable potential because the models are well understood, easy to deploy, and have lower computational requirements than deep learning techniques.
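    As the survey finds traditional ML models to be the most widely adopted, a minimal scikit-learn sketch of such a pipeline may make the setting concrete: a random forest regressor predicting a quality score from assumed QoD features (throughput, RTT, packet loss) over synthetic data. The features, data, and target are illustrative, not drawn from any surveyed paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative pipeline only: predicting a video quality score (MOS-like, 1-5)
# from QoD features such as throughput, RTT and packet loss. The features and
# the synthetic data below are assumptions, not taken from any surveyed paper.

rng = np.random.default_rng(0)
n = 1000
throughput = rng.uniform(0.5, 20.0, n)      # Mbps
rtt = rng.uniform(10, 300, n)               # ms
loss = rng.uniform(0.0, 0.05, n)            # packet loss ratio

# Synthetic ground truth: quality improves with throughput, degrades with RTT/loss.
quality = np.clip(1 + 4 * (throughput / 20) - 0.005 * rtt - 30 * loss
                  + rng.normal(0, 0.2, n), 1, 5)

X = np.column_stack([throughput, rtt, loss])
X_train, X_test, y_train, y_test = train_test_split(X, quality, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```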

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning and optimization of emerging 5G networks. The design, dimensioning and optimization of communication network resources and services have been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, providing service to traffic streams with highly differentiated requirements in terms of bit rate and service time, as well as required quality of service and quality of experience parameters. Such a communication infrastructure presents many important challenges, such as the study of necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low-layer network design, network management and security issues, and new technologies in general, which are discussed in this book.
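    As one concrete example of the dimensioning problems this reprint addresses, the classic Erlang-B recursion computes the blocking probability for a given offered traffic and number of channels; this is a standard textbook calculation, not code taken from the book.

```python
# Classic dimensioning calculation: the Erlang-B recursion for the blocking
# probability of offered traffic A (in Erlangs) over n channels. Standard
# textbook formula, not code from the reprint.

def erlang_b(offered_erlangs, channels):
    """Blocking probability B(A, n) via the stable recurrence
    B(A, 0) = 1,  B(A, n) = A*B(A, n-1) / (n + A*B(A, n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

def channels_needed(offered_erlangs, target_blocking=0.01, max_channels=1000):
    """Smallest number of channels keeping blocking below the target."""
    for n in range(1, max_channels + 1):
        if erlang_b(offered_erlangs, n) <= target_blocking:
            return n
    raise ValueError("target not reachable within max_channels")

print(erlang_b(10.0, 15))          # ~0.036 blocking with 15 channels
print(channels_needed(10.0, 0.01)) # 18 channels for 1% blocking at 10 Erlangs
```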