    DeepCog: cognitive network management in sliced 5G Networks with deep learning

    Proceedings of: 2019 IEEE International Conference on Computer Communications (IEEE INFOCOM 2019), Paris (France), 29 April - 2 May 2019.

    Network slicing is a new paradigm for future 5G networks where the network infrastructure is divided into slices devoted to different services and customized to their needs. With this paradigm, it is essential to allocate to each slice the resources it needs, which requires the ability to forecast the respective demands. To this end, we present DeepCog, a novel data analytics tool for the cognitive management of resources in 5G systems. DeepCog forecasts the capacity needed to accommodate future traffic demands within individual network slices while accounting for the operator’s desired balance between resource overprovisioning (i.e., allocating resources exceeding the demand) and service request violations (i.e., allocating fewer resources than required). To achieve its objective, DeepCog hinges on a deep learning architecture explicitly designed for capacity forecasting. Comparative evaluations with real-world measurement data prove that DeepCog’s tight integration of machine learning into resource orchestration allows for a substantial (50% or above) reduction of operating expenses with respect to resource allocation solutions based on state-of-the-art mobile traffic predictors. Moreover, we leverage DeepCog to carry out an extensive first analysis of the trade-off between capacity overdimensioning and unserviced demands in adaptive, sliced networks and in the presence of real-world traffic. The work of University Carlos III of Madrid was supported by the H2020 5G-MoNArch project (Grant Agreement No. 761445) and the work of NEC Laboratories Europe by the H2020 5G-TRANSFORMER project (Grant Agreement No. 761536). The work of CNR-IEIIT was partially supported by the ANR CANCAN project (ANR-18-CE25-0011).
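
    The operator’s balance between overprovisioning and violations can be encoded directly in the training objective. Below is a minimal sketch of such an asymmetric loss, assuming NumPy arrays of per-slice forecasts and demands and a tunable penalty alpha; it illustrates the general idea only, not DeepCog’s actual loss function, whose exact form is given in the paper.

        import numpy as np

        def capacity_loss(forecast, demand, alpha=0.5):
            """Asymmetric capacity-forecast loss (illustrative, not
            DeepCog's exact formulation): overprovisioning costs the
            amount of spare capacity, while each service request
            violation (forecast below demand) adds a fixed penalty.
            `forecast` and `demand` are NumPy arrays of equal shape."""
            spare = np.maximum(forecast - demand, 0.0)      # overprovisioned capacity
            violations = (forecast < demand).astype(float)  # unserved-demand events
            return np.mean(spare + alpha * violations)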

    DeepCog: optimizing resource provisioning in network slicing with AI-based capacity forecasting

    The dynamic management of network resources is both a critical and a challenging task in upcoming multi-tenant mobile networks, as it requires allocating capacity to individual network slices so as to accommodate future time-varying service demands. Such an anticipatory resource configuration process must be driven by suitable predictors that take into account the monetary cost associated with overprovisioning or underprovisioning of networking capacity, computational power, memory, or storage. Legacy models that aim at forecasting traffic demands fail to capture these key economic aspects of network operation. To close this gap, we present DeepCog, a deep neural network architecture inspired by advances in image processing and trained via a dedicated loss function. Unlike traditional traffic volume predictors, DeepCog returns a cost-aware capacity forecast, which can be directly used by operators to take short- and long-term reallocation decisions that maximize their revenues. Extensive performance evaluations with real-world measurement data collected in a metropolitan-scale operational mobile network demonstrate the effectiveness of our proposed solution, which can reduce resource management costs by over 50% in practical case studies. The work of University Carlos III of Madrid was supported by the H2020 5G-TOURS project (grant agreement no. 856950). The work of NEC Laboratories Europe was supported by the H2020 5G-TRANSFORMER project (grant agreement no. 761536) and the 5GROWTH project (grant agreement no. 856709).
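
    The cost-aware framing can be made concrete: forecasts are compared by the money they cost the operator rather than by prediction error. A hedged sketch follows, with hypothetical unit prices c_over and c_sla; real values would come from the operator’s own cost model.

        def operating_cost(forecast, demand, c_over=1.0, c_sla=10.0):
            """Monetary cost of one allocation decision: pay c_over per
            unit of overprovisioned capacity and c_sla per unit of
            unserved demand. Prices are made up for illustration."""
            over = max(forecast - demand, 0.0)
            under = max(demand - forecast, 0.0)
            return c_over * over + c_sla * under

        # A volume predictor minimizing absolute error can still lose to a
        # cost-aware one: with demand 10, forecasting 9 costs 10.0 here,
        # while forecasting 12 costs only 2.0.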

    Forecasting for Network Management with Joint Statistical Modelling and Machine Learning

    Forecasting is a task of ever-increasing importance for the operation of mobile networks, where it supports anticipatory decisions by network intelligence and enables emerging zero-touch service and network management models. While current trends in forecasting for anticipatory networking lean towards the systematic adoption of models that are purely based on deep learning approaches, we pave the way for a different strategy to the design of predictors for mobile network environments. Specifically, following recent advances in time series prediction, we consider a hybrid approach that blends statistical modelling and machine learning by means of a joint training process of the two methods. By tailoring this mixed forecasting engine to the specific requirements of network traffic demands, we develop a Thresholded Exponential Smoothing and Recurrent Neural Network (TES-RNN) model. We experiment with TES-RNN in two practical network management use cases, i.e., (i) anticipatory allocation of network resources, and (ii) mobile traffic anomaly prediction. Results obtained with extensive traffic workloads collected in an operational mobile network show that TES-RNN can yield substantial performance gains over current state-of-the-art predictors in both applications considered. This work is partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement no. 101017109 (DAEMON), and by the Spanish Ministry of Economic Affairs and Digital Transformation and the European Union-NextGenerationEU through the UNICO 5G I+D 6GCLARION-OR and AEON-ZERO projects. The authors would like to thank Dario Bega for his contribution to developing forecasting use case (i), and Slawek Smyl for his feedback on the baseline ES-RNN model.
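
    The hybrid idea can be sketched in a few lines of PyTorch: a learnable exponential-smoothing level normalizes the series while a recurrent network models the residual dynamics, and both are trained jointly by backpropagation. This is a simplified, hypothetical sketch of the ES-RNN family of models, not the TES-RNN code; in particular, the thresholding specific to TES-RNN is omitted.

        import torch
        import torch.nn as nn

        class ESRNNSketch(nn.Module):
            """Joint exponential smoothing + RNN forecaster: the smoothing
            coefficient is a learnable parameter, the input series is
            normalized by its smoothed level, and a GRU predicts the
            next value as a ratio of that level. Illustrative only."""
            def __init__(self, hidden=32):
                super().__init__()
                self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable smoothing coefficient
                self.rnn = nn.GRU(1, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)

            def forward(self, x):                 # x: (batch, time, 1)
                alpha = torch.sigmoid(self.alpha)
                level = x[:, 0, :]
                levels = []
                for t in range(x.size(1)):        # exponential smoothing pass
                    level = alpha * x[:, t, :] + (1 - alpha) * level
                    levels.append(level)
                levels = torch.stack(levels, dim=1)
                normalized = x / (levels + 1e-8)  # remove level, keep dynamics
                out, _ = self.rnn(normalized)
                ratio = self.head(out[:, -1, :])  # predicted next value / level
                return ratio * levels[:, -1, :]   # denormalized forecast

    Because the smoothing coefficient sits inside the computation graph, a single optimizer updates the statistical and the neural components together, which is the joint training the abstract refers to.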

    Mobile Crowd Sensing in Edge Computing Environment

    Mobile crowdsensing (MCS) applications leverage user data to derive useful information through data-driven evaluation of innovative user contexts and gathering of information at a high data rate. Such access to context-rich data can potentially enable computationally intensive crowd-sourcing applications such as tracking a missing person or capturing a highlight video of an event. Using snippets and pictures captured from multiple mobile phone cameras with specific contexts can improve the data acquired in such applications. These MCS applications require efficient processing and analysis to generate results in real time. A human user, a mobile device, and their interactions cause changes in context on the mobile device, affecting the quality of the contextual data that is gathered. Using MCS data in real-time mobile applications is challenging due to the complex inter-relationship between: a) availability of context (context is available on the mobile phones, not in the cloud), b) the cost of data transfer to remote cloud servers, in terms of both communication time and energy, and c) availability of local computational resources on the mobile phone (computation may lead to rapid battery drain or increased response time). Resource-constrained mobile devices therefore need to offload some of their computation. This thesis proposes ContextAiDe, an end-to-end architecture for data-driven distributed applications aware of human-mobile interactions using edge computing. Edge processing supports real-time applications by reducing communication costs. The goal is to optimize the quality and the cost of acquiring the data using a) modeling and prediction of mobile user contexts, b) efficient strategies for scheduling application tasks on heterogeneous devices, including multi-core devices such as GPUs, and c) power-aware scheduling of virtual machine (VM) applications in cloud infrastructure, e.g., elastic VMs. The ContextAiDe middleware is integrated into the mobile application via an Android API. The evaluation consists of overhead and cost analysis in the scenario of a "perpetrator tracking" application on cloud servers, fog servers, and mobile devices. LifeMap data sets containing actual sensor data traces from mobile devices are used to simulate application runs for large-scale evaluation.
    Doctoral Dissertation, Electrical Engineering, 201
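
    The trade-off described above between local computation (battery drain, response time) and offloading (transfer time and energy) reduces, in its simplest form, to a weighted cost comparison. The sketch below is hypothetical: the function name, parameters, and weights are illustrative, and ContextAiDe’s context-aware scheduler is considerably richer.

        def should_offload(cycles, data_bytes, cpu_hz, uplink_bps,
                           e_compute_j_per_cycle, e_tx_j_per_byte,
                           w_time=0.5, w_energy=0.5):
            """Weigh local execution against offloading to an edge server
            by combining response time and battery energy into one cost.
            All parameters are hypothetical stand-ins for values that a
            real middleware would measure or model per device."""
            local_time = cycles / cpu_hz
            local_energy = cycles * e_compute_j_per_cycle
            tx_time = data_bytes * 8 / uplink_bps         # upload duration
            tx_energy = data_bytes * e_tx_j_per_byte      # radio energy
            local_cost = w_time * local_time + w_energy * local_energy
            offload_cost = w_time * tx_time + w_energy * tx_energy
            return offload_cost < local_cost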

    Machine Learning-based Orchestration Solutions for Future Slicing-Enabled Mobile Networks

    Fifth generation mobile networks (5G) will incorporate novel technologies such as network programmability and virtualization, enabled by the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major interest from both academic and industrial stakeholders. Building on these concepts, Network Slicing has emerged as the main driver of a novel business model where mobile operators may open, i.e., “slice”, their infrastructure to new business players and offer independent, isolated and self-contained sets of network functions and physical/virtual resources tailored to specific service requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed. End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthauling/backhauling links, and computing and storage resources at core and edge data centers. Additionally, the heterogeneity of vertical services’ requirements (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains while satisfying stringent service level agreements and specific traffic requirements. An end-to-end network slicing orchestration solution shall i) admit network slice requests such that the overall system revenues are maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users’ mobility and instantaneous wireless channel statistics. Certainly, a mobile network represents a fast-changing scenario characterized by complex spatio-temporal relationships connecting end-users’ traffic demand with social activities and the economy. Legacy models that aim at providing dynamic resource allocation based on traditional traffic demand forecasting techniques fail to capture these important aspects. To close this gap, machine learning-aided solutions are quickly emerging as promising technologies to sustain, in a scalable manner, the set of operations required by the network slicing context. How to implement such resource allocation schemes among slices, while making the most efficient use of the networking resources composing the mobile infrastructure, are the key problems underlying the network slicing paradigm that will be addressed in this thesis.
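
    As one concrete instance of requirement i), revenue-maximizing slice admission can be approximated greedily, knapsack-style. This is a minimal sketch under strong assumptions (a single resource pool, known per-slice revenue and positive demand); the thesis targets far richer multi-domain formulations.

        def admit_slices(requests, capacity):
            """Greedy slice admission control: sort pending requests by
            revenue per unit of requested capacity and admit while
            resources last. `requests` is a list of hypothetical
            (slice_id, revenue, demand) tuples with demand > 0."""
            admitted, used = [], 0.0
            for slice_id, revenue, demand in sorted(
                    requests, key=lambda r: r[1] / r[2], reverse=True):
                if used + demand <= capacity:
                    admitted.append(slice_id)
                    used += demand
            return admitted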

    On the specialization of FDRL agents for scalable and distributed 6G RAN slicing orchestration

    ©2022 IEEE. Reprinted, with permission, from Rezazadeh, F., Zanzi, L., Devoti, F., et al., On the Specialization of FDRL Agents for Scalable and Distributed 6G RAN Slicing Orchestration, IEEE Transactions on Vehicular Technology (Online), October 2022.

    Network slicing enables multiple virtual networks to be instantiated and customized to meet heterogeneous use case requirements over 5G and beyond network deployments. However, most of the solutions available today face scalability issues when considering many slices, due to centralized controllers requiring a holistic view of the resource availability and consumption over different networking domains. To tackle this challenge, we design a hierarchical architecture to manage network slice resources in a federated manner. Driven by the rapid evolution of deep reinforcement learning (DRL) schemes and the Open RAN (O-RAN) paradigm, we propose a set of traffic-aware local decision agents (DAs) dynamically placed in the radio access network (RAN). These federated decision entities tailor their resource allocation policies according to the long-term dynamics of the underlying traffic, defining specialized clusters that enable faster training and reduced communication overhead. Indeed, aided by a traffic-aware agent selection algorithm, our proposed federated DRL approach provides higher resource efficiency than benchmark solutions by quickly reacting to end-user mobility patterns and reducing costly interactions with centralized controllers.
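
    The specialized clusters mentioned in the abstract can be illustrated by grouping base stations with similar long-term traffic profiles and assigning one decision agent per group. The sketch below uses k-means as a stand-in; the paper’s actual traffic-aware agent selection algorithm may differ.

        import numpy as np
        from sklearn.cluster import KMeans

        def specialize_agents(traffic_profiles, n_agents):
            """Group base stations by long-term traffic profile so each
            federated DRL agent specializes on one cluster. Illustrative
            only. traffic_profiles: (n_stations, n_features) array,
            e.g. average demand per hour of day for each station."""
            km = KMeans(n_clusters=n_agents, n_init=10, random_state=0)
            labels = km.fit_predict(traffic_profiles)
            # Map each agent to the indices of the stations it serves.
            return {a: np.where(labels == a)[0] for a in range(n_agents)}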

    An Intelligent model for supporting Edge Migration for Virtual Function Chains in Next Generation Internet of Things

    Developments in next-generation IoT sensing devices, together with advances in their low-power computational capabilities and high-speed networking, have led to the introduction of the edge computing paradigm. Within an edge cloud environment, services may generate and consume data locally, without involving cloud computing infrastructures. To tackle the limited computational resources of IoT nodes, the Virtual-Function-Chain has been proposed as an intelligent distribution model that exploits the maximum computational power available at the edge, thus enabling the support of demanding services. An intelligent migration model with the capacity to support Virtual-Function-Chains is introduced in this work. According to this model, migration at the edge can support individual features of a Virtual-Function-Chain. First, auto-healing can be implemented with cold migrations, if a Virtual Function fails unexpectedly. Second, a Quality of Service monitoring model can trigger live migrations, to avoid overloading edge devices. Evaluation studies of the proposed model revealed that it has the capacity to increase the robustness of an edge-based service on low-powered IoT devices. Finally, comparison with similar frameworks, like Kubernetes, showed that the migration model can effectively react to edge-network fluctuations.
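
    The two migration triggers described above reduce to a simple per-Virtual-Function decision rule: cold migration on failure, live migration on overload. A hypothetical sketch follows, with an illustrative load threshold and signals rather than the paper’s actual monitoring model.

        def choose_migration(vf_alive, cpu_load, load_threshold=0.85):
            """Decide which migration the chain manager should trigger
            for one Virtual Function: a cold migration restarts a failed
            VF on another node (auto-healing), while a live migration
            moves a running VF away from an overloaded device (QoS
            protection). Threshold and inputs are illustrative."""
            if not vf_alive:
                return "cold"    # VF crashed: restart it elsewhere
            if cpu_load > load_threshold:
                return "live"    # device overloaded: move VF while running
            return "none"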

    White Paper for Research Beyond 5G

    This document considers both research within the scope of evolutions of 5G systems (for the period around 2025) and some alternative, longer-term views (with later outcomes, or leading to substantially different design choices). It reflects on four main system areas: fundamental theory and technology; radio and spectrum management; system design; and alternative concepts. The result of this exercise can be broken into two strands: one focuses on the evolution of technologies already under development for 5G systems that will remain research areas in the future (with more challenging requirements and specifications); the other highlights technologies that are not really considered for deployment today, or that will be essential for addressing problems that are currently non-existent but will become apparent when 5G systems begin their widespread deployment.

    Deep Learning for Network Traffic Monitoring and Analysis (NTMA): A Survey

    Modern communication systems and networks, e.g., the Internet of Things (IoT) and cellular networks, generate a massive and heterogeneous amount of traffic data. In such networks, traditional network management techniques for monitoring and data analytics face challenges and issues, e.g., accuracy and the effective processing of big data in real time. Moreover, the pattern of network traffic, especially in cellular networks, shows very complex behavior because of various factors, such as device mobility and network heterogeneity. Deep learning has been efficiently employed to facilitate analytics and knowledge discovery in big data systems by recognizing hidden and complex patterns. Motivated by these successes, researchers in the field of networking apply deep learning models to Network Traffic Monitoring and Analysis (NTMA) applications, e.g., traffic classification and prediction. This paper provides a comprehensive review of applications of deep learning in NTMA. We first provide the fundamental background relevant to our review. Then, we give an insight into the confluence of deep learning and NTMA, and review deep learning techniques proposed for NTMA applications. Finally, we discuss key challenges, open issues, and future research directions for using deep learning in NTMA applications.