
    From statistical- to machine learning-based network traffic prediction

    Nowadays, due to the exponential and continuous expansion of new paradigms such as the Internet of Things (IoT), Internet of Vehicles (IoV), and 6G, the world is witnessing a tremendous and sharp increase in network traffic. In such large-scale, heterogeneous, and complex networks, the volume of transferred data, as big data, is a challenge that causes various networking inefficiencies. To overcome these challenges, a family of techniques known as Network Traffic Monitoring and Analysis (NTMA) has been introduced to monitor network performance. Network Traffic Prediction (NTP) is a significant subfield of NTMA focused mainly on predicting future network load and behavior. NTP techniques can generally be realized in two ways: statistical- and Machine Learning (ML)-based. In this paper, we provide a study of existing NTP techniques by reviewing, investigating, and classifying the recent relevant works in this field. Additionally, we discuss the challenges and future directions of NTP, showing how ML and statistical techniques can be used to address them.
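
    As a toy illustration of the survey's two families, the sketch below fits a statistical AR(p) predictor by least squares and an ML regressor (a random forest) on the same lag features of a synthetic traffic series. The series shape, lag window, and model choices are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: statistical vs. ML one-step-ahead traffic prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
t = np.arange(2000)
# Synthetic load: daily-like periodicity plus noise (assumed shape).
traffic = 100 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, t.size)

P = 12  # lag window (hypothetical choice)
X = np.stack([traffic[i:i + P] for i in range(t.size - P)])
y = traffic[P:]
split = int(0.8 * len(y))

# Statistical: AR(P) coefficients fit by ordinary least squares.
A = np.hstack([X[:split], np.ones((split, 1))])
coef, *_ = np.linalg.lstsq(A, y[:split], rcond=None)
ar_pred = np.hstack([X[split:], np.ones((len(y) - split, 1))]) @ coef

# ML: random forest regression over the same lag features.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X[:split], y[:split])
rf_pred = rf.predict(X[split:])

def mae(pred):
    return float(np.mean(np.abs(pred - y[split:])))

print(f"AR({P}) MAE: {mae(ar_pred):.2f}   RF MAE: {mae(rf_pred):.2f}")
```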

    Machine Learning-Powered Management Architectures for Edge Services in 5G Networks

    The abstract is in the attachment.

    On the Specialization of FDRL Agents for Scalable and Distributed 6G RAN Slicing Orchestration

    Network slicing enables multiple virtual networks to be instantiated and customized to meet heterogeneous use case requirements over 5G and beyond network deployments. However, most of the solutions available today face scalability issues when considering many slices, because centralized controllers require a holistic view of resource availability and consumption across different networking domains. To tackle this challenge, we design a hierarchical architecture to manage network slice resources in a federated manner. Driven by the rapid evolution of deep reinforcement learning (DRL) schemes and the Open RAN (O-RAN) paradigm, we propose a set of traffic-aware local decision agents (DAs) dynamically placed in the radio access network (RAN). These federated decision entities tailor their resource allocation policy to the long-term dynamics of the underlying traffic, defining specialized clusters that enable faster training and reduced communication overhead. Aided by a traffic-aware agent selection algorithm, our proposed federated DRL approach provides higher resource efficiency than benchmark solutions by quickly reacting to end-user mobility patterns and reducing costly interactions with centralized controllers. (15 pages, 15 figures; accepted for publication in IEEE Transactions on Vehicular Technology.)
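
    To make the federated idea concrete, here is a minimal, hypothetical sketch, not the authors' implementation: each local DA runs tabular Q-learning on a toy slice-allocation environment, and DAs within one traffic-similarity cluster periodically average their Q-tables, FedAvg-style. The environment, state discretization, and reward are all assumptions.

```python
# Sketch of federated DRL for slice resource allocation (toy version).
import numpy as np

N_STATES, N_ACTIONS = 8, 4   # discretized slice load level -> resource share
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
rng = np.random.default_rng(1)

def step(state, action):
    """Toy slice environment: reward is high when the allocated share
    matches the demand implied by the state (assumed dynamics)."""
    demand = state / (N_STATES - 1)
    alloc = action / (N_ACTIONS - 1)
    reward = 1.0 - abs(demand - alloc)      # resource-efficiency proxy
    next_state = rng.integers(N_STATES)     # assumed traffic transition
    return reward, next_state

def local_training(q, episodes=200):
    """One round of tabular Q-learning on the DA's local traffic."""
    state = rng.integers(N_STATES)
    for _ in range(episodes):
        a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(q[state].argmax())
        r, s2 = step(state, a)
        q[state, a] += ALPHA * (r + GAMMA * q[s2].max() - q[state, a])
        state = s2
    return q

# One traffic-similarity cluster of DAs; federation = Q-table averaging.
agents = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(4)]
for _ in range(10):
    agents = [local_training(q) for q in agents]
    global_q = np.mean(agents, axis=0)          # FedAvg-style aggregation
    agents = [global_q.copy() for _ in agents]  # broadcast back to the DAs

print("resource share chosen per load level:", global_q.argmax(axis=1))
```

    In the paper's setting, the averaging step would be restricted to DAs whose long-term traffic profiles fall in the same specialized cluster, which is what keeps training fast and communication overhead low.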

    An Unsupervised and Non-Invasive Model for Predicting Network Resource Demands

    Over the last decade, network providers have faced a growing problem in distributing bandwidth and computing resources. Recently, the mobile edge computing paradigm was proposed as a possible solution, mainly because it makes it possible to serve demands at the edge of the network. This solution relies heavily on the dynamic allocation of resources depending on user needs and network connection, so it becomes essential to correctly predict user movements and activities. This paper proposes an unsupervised methodology to define meaningful user locations from non-invasive user information, captured by the user terminal with no computing or battery overhead. The data is analyzed through a conjoined clustering algorithm to build a stochastic Markov chain that predicts users' movements and their bandwidth demands. Such a model could be used by network operators to optimize network resource allocation. To evaluate the proposed methodology, we tested it on one of the largest publicly available labeled mobile and sensor datasets, developed by the "CrowdSignals.io" initiative, and we present positive and promising results concerning the prediction capabilities of the model.
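
    A minimal sketch of the pipeline as described, under stated assumptions: cluster raw location fixes into meaningful places (DBSCAN stands in here for the paper's conjoined clustering algorithm) and estimate a first-order Markov transition matrix over the resulting visit sequence. The trace and clustering parameters are invented for illustration.

```python
# Sketch: places from unlabeled fixes, then a Markov mobility model.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Assumed GPS-like trace hovering around three places (e.g. home/work/gym).
centers = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
visits = rng.integers(3, size=300)
points = centers[visits] + rng.normal(0, 0.3, size=(300, 2))

# Unsupervised place discovery; eps/min_samples are hypothetical.
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(points)
seq = labels[labels >= 0]                  # keep only clustered fixes

# First-order Markov chain: row-normalized transition counts.
n = int(seq.max()) + 1
T = np.zeros((n, n))
for a, b in zip(seq[:-1], seq[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)

print("next-place distribution from place 0:", np.round(T[0], 2))
```

    Attaching an expected bandwidth demand to each discovered place would then let an operator pre-allocate edge resources along the most probable transitions.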

    The Four-C Framework for High Capacity Ultra-Low Latency in 5G Networks: A Review

    Network latency will be a critical performance metric for Fifth Generation (5G) networks, expected to be fully rolled out in 2020 through the IMT-2020 project. Multi-user multiple-input multiple-output (MU-MIMO) technology is a key enabler for the 5G massive connectivity criterion, especially from the massive densification perspective. Naturally, 5G MU-MIMO will face a daunting task in achieving an end-to-end 1 ms ultra-low latency budget if traditional network set-up criteria are strictly adhered to. Moreover, 5G latency adds the dimensions of scalability and flexibility compared to previously deployed technologies: the scalability dimension caters for meeting rapid demand as new applications evolve, while the flexibility dimension complements it by investigating novel non-stacked protocol architectures. The goal of this review paper is to present an ultra-low latency reduction framework for 5G communications that considers flexibility and scalability. The Four-C framework, consisting of cost, complexity, cross-layer, and computing, is analyzed and discussed, drawing on several emerging technologies: software-defined networking (SDN), network function virtualization (NFV), and fog networking. This review will contribute significantly toward the future implementation of flexible, high-capacity, ultra-low latency 5G communications.
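
    A back-of-the-envelope decomposition suggests why the 1 ms end-to-end budget is daunting; every component value below is an illustrative assumption, not a figure from the review.

```python
# Toy end-to-end latency budget for a 5G path (all values assumed).
FIBER_DELAY_US_PER_KM = 5.0          # ~2e8 m/s propagation speed in fiber
FRAME_BITS = 1500 * 8                # one Ethernet-sized frame

def transmission_us(rate_gbps):
    """Serialization delay in microseconds at a given line rate."""
    return FRAME_BITS / (rate_gbps * 1e3)    # bits / (bits per microsecond)

budget_us = 1000.0
components = {
    "radio frame alignment (assumed)": 250.0,
    "UE + gNB processing (assumed)":   300.0,
    "backhaul propagation, 20 km":     20 * FIBER_DELAY_US_PER_KM,
    "serialization @ 10 Gb/s":         transmission_us(10),
    "edge/core processing (assumed)":  200.0,
}
total = sum(components.values())
for name, us in components.items():
    print(f"{name:34s} {us:8.1f} us")
print(f"{'total':34s} {total:8.1f} us of {budget_us:.0f} us budget")
```

    Even with these optimistic per-hop numbers, the assumed components consume most of the budget, which motivates the cross-layer and computing dimensions of the framework.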

    10381 Summary and Abstracts Collection -- Robust Query Processing

    Dagstuhl seminar 10381 on robust query processing (held 19.09.10 - 24.09.10) brought together a diverse set of researchers and practitioners with a broad range of expertise for the purpose of fostering discussion and collaboration regarding causes, opportunities, and solutions for achieving robust query processing. The seminar strove to build a unified view across the loosely-coupled system components responsible for the various stages of database query processing. Participants were chosen for their experience with database query processing and, where possible, their prior work in academic research or in product development towards robustness in database query processing. In order to pave the way to motivate, measure, and protect future advances in robust query processing, seminar 10381 focused on developing tests for measuring the robustness of query processing. In these proceedings, we first review the seminar topics, goals, and results, then present abstracts or notes of some of the seminar break-out sessions. We also include, as an appendix, the robust query processing reading list that was collected and distributed to participants before the seminar began, as well as summaries of a few of those papers, contributed by some participants.

    Distributed Cognitive RAT Selection in 5G Heterogeneous Networks: A Machine Learning Approach

    The leading role of the HetNet (Heterogeneous Networks) strategy as the key Radio Access Network (RAN) architecture for future 5G networks poses serious challenges to the cell selection mechanisms currently used in cellular networks. The max-SINR algorithm, although historically effective at performing this most essential networking function of wireless networks, is inefficient at best and obsolete at worst in 5G HetNets. The foreseen embarrassment of riches and the diversified propagation characteristics of network attachment points spanning multiple Radio Access Technologies (RATs) require novel and creative context-aware system designs. Association and routing decisions, in the context of single-RAT or multi-RAT connections, need to be optimized to efficiently exploit the benefits of the architecture. However, the high computational complexity required for multi-parametric optimization of utility functions, the difficulty of modeling and solving Markov Decision Processes, the lack of stability guarantees for Game Theory algorithms, and the rigidity of simpler methods like Cell Range Expansion and operator policies managed by the Access Network Discovery and Selection Function (ANDSF) make none of these state-of-the-art approaches a clear favorite. This Thesis proposes a framework that relies on Machine Learning techniques at the terminal-device level for Cognitive RAT Selection. Cognition allows the terminal device to learn both a multi-parametric state model and effective decision policies from its own experience. This implies that a terminal, after observing its environment during a learning period, may formulate a system characterization and optimize its own association decisions without any external intervention. In our proposal, this is achieved through clustering of appropriately defined feature vectors to build a system state model, supervised classification to obtain the current system state, and reinforcement learning to learn good policies. This Thesis describes the above framework in detail and recommends adaptations based on experimentation with the X-means, k-Nearest Neighbors, and Q-learning algorithms, the building blocks of the solution. The network performance of the proposed framework is evaluated in a multi-agent environment implemented in MATLAB, where it is compared with alternative RAT selection mechanisms.
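
    The three building blocks compose naturally; the sketch below wires them together under stated assumptions: KMeans stands in for X-means to define system states from feature vectors, a k-NN classifier maps live measurements to a state, and a simplified (bandit-style) Q-learning update selects the RAT per state. Features, rewards, and parameters are invented for illustration.

```python
# Sketch of the cognitive RAT-selection pipeline (toy version).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
N_RATS, N_STATES = 3, 6

# Assumed terminal-side feature vectors, e.g. [signal level, cell load, speed].
features = rng.normal(size=(500, 3))

# 1) Unsupervised state model (KMeans as a stand-in for X-means).
km = KMeans(n_clusters=N_STATES, n_init=10, random_state=0).fit(features)
# 2) Supervised state classification for live measurements.
knn = KNeighborsClassifier(n_neighbors=5).fit(features, km.labels_)

def reward(state, rat):
    """Toy ground truth: each state favors one RAT (pure assumption)."""
    return 1.0 if rat == state % N_RATS else rng.uniform(0.0, 0.4)

# 3) Reinforcement learning of the association policy
#    (a contextual-bandit simplification of the thesis's Q-learning).
Q = np.zeros((N_STATES, N_RATS))
alpha, eps = 0.1, 0.1
for _ in range(2000):
    s = knn.predict(rng.normal(size=(1, 3)))[0]   # classify current state
    a = rng.integers(N_RATS) if rng.random() < eps else int(Q[s].argmax())
    Q[s, a] += alpha * (reward(s, a) - Q[s, a])

print("RAT selected per system state:", Q.argmax(axis=1))
```

    Because all three steps run on the terminal's own observations, the device can keep refining the state model and policy without external intervention, which is the thesis's central point.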