
    Leveraging intelligence from network CDR data for interference aware energy consumption minimization

    Cell densification is being perceived as the panacea for the imminent capacity crunch. However, high aggregated energy consumption and increased inter-cell interference (ICI) caused by densification remain two long-standing problems. We propose a novel network orchestration solution for simultaneously minimizing energy consumption and ICI in ultra-dense 5G networks. The proposed solution builds on a big data analysis of over 10 million CDRs from a real network, which shows that there exists strong spatio-temporal predictability in real network traffic patterns. Leveraging this, we develop a novel scheme to pro-actively schedule radio resources and small cell sleep cycles, yielding substantial energy savings and reduced ICI without compromising users' QoS. This scheme is derived by formulating a joint energy consumption and ICI minimization problem and solving it through a combination of linear binary integer programming and a progressive-analysis-based heuristic algorithm. Evaluations using 1) a HetNet deployment designed for the city of Milan, where big data analytics on real CDR data from the Telecom Italia network are used to model traffic patterns, and 2) NS-3 based Monte-Carlo simulations with synthetic Poisson traffic show that, compared to the full frequency reuse and always-on approach, in the best case the proposed scheme can reduce energy consumption in HetNets to 1/8th while providing the same or better QoS.
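    A minimal sketch of the kind of proactive sleep-cycle scheduling described above, cast as a small binary integer program: keep only as many small cells awake as the predicted traffic requires, trading off an energy term against a simple ICI proxy. The predicted loads, capacities, and penalty weights below are hypothetical, and the generic MILP solver stands in for the paper's combination of binary integer programming and a progressive-analysis heuristic.

```python
# Sketch: proactive small-cell sleep scheduling as a binary integer program.
# All numbers are hypothetical; this is not the paper's actual formulation.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

predicted_load = np.array([0.10, 0.80, 0.30, 0.05])  # predicted traffic per small cell (normalized)
energy_cost = np.array([1.0, 1.0, 1.0, 1.0])         # cost of keeping each cell awake for the slot
ici_penalty = 0.3                                     # crude proxy: each extra active cell adds ICI
cell_capacity = 1.0                                   # normalized capacity of one active cell

# x_i = 1 if small cell i stays on during the slot, 0 if it sleeps.
# Minimize energy plus the ICI proxy while serving the total predicted load.
c = energy_cost + ici_penalty
serve_demand = LinearConstraint(np.full((1, len(c)), cell_capacity), lb=predicted_load.sum())
res = milp(c=c, constraints=[serve_demand], integrality=np.ones_like(c), bounds=Bounds(0, 1))

print("cells kept awake:", np.round(res.x).astype(int))  # e.g. two cells on, two asleep
```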

    Network Flow Optimization Using Reinforcement Learning


    A PARADIGM SHIFTING APPROACH IN SON FOR FUTURE CELLULAR NETWORKS

    The race to next-generation cellular networks is on, with a general consensus in academia and industry that massive densification orchestrated by self-organizing networks (SONs) is the cost-effective solution to the impending mobile capacity crunch. While research on SON commenced a decade ago and is still ongoing, its current form (i.e., the reactive mode of operation, conflict-prone design, limited degrees of freedom, and lack of intelligence) hinders the SON paradigm from meeting the requirements of 5G. The ambitious quality of experience (QoE) requirements and the emerging multifarious vision of 5G, along with the associated scale of complexity and cost, demand a significantly different, if not totally new, approach to SONs in order to make 5G technically as well as financially feasible. This dissertation addresses these limitations of state-of-the-art SONs. It first presents a generic low-complexity optimization framework to allow for the agile, on-line, multi-objective optimization of future mobile cellular networks (MCNs) through only top-level policy input that prioritizes otherwise conflicting key performance indicators (KPIs) such as capacity, QoE, and power consumption. The hybrid, semi-analytical approach can be used for a wide range of cellular optimization scenarios with low complexity. The dissertation then presents two novel, user-mobility-prediction-based, proactive self-optimization frameworks (AURORA and OPERA) to transform mobility from a challenge into an advantage. The proposed frameworks leverage mobility to overcome the inherent reactiveness of state-of-the-art self-optimization schemes and to meet the extremely low latency and high QoE expected from future cellular networks vis-à-vis 5G and beyond. The proactiveness stems from the proposed frameworks' novel capability of utilizing past handover (HO) traces to determine future cell loads, instead of observing changes in cell loads passively and then reacting to them. A semi-Markov renewal process is leveraged to build a model that can predict the cell of the next HO and the time of the HO for each user. A low-complexity algorithm has been developed to transform the predicted mobility attributes to a user-coordinate-level resolution. The learned knowledge base is used to predict the user distribution among cells. This prediction is then used to formulate (i) a novel proactive energy saving (ES) optimization problem (AURORA) that proactively schedules cell sleep cycles and (ii) a proactive load balancing (LB) optimization problem (OPERA). The proposed frameworks also incorporate the effect of cell individual offset (CIO) for balancing the load among cells, and they thus exploit an additional ultra-dense network (UDN)-specific mechanism to ensure QoE while maximizing ES and/or LB. The frameworks also incorporate capacity and coverage constraints and a load-aware association strategy for ensuring the conflict-free operation of the ES, LB, and coverage and capacity optimization (CCO) SON functions. Although the resulting optimization problems are combinatorial and NP-hard, proactive prediction of cell loads, instead of reactive measurement, allows ample time for a combination of heuristics such as genetic programming and pattern search to find solutions with high ES and LB yields compared to the state of the art.
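    A minimal sketch of the handover-prediction idea underpinning AURORA and OPERA, assuming a hypothetical HO trace: estimate next-cell transition probabilities and mean sojourn times per cell. This is a simplified empirical stand-in for the dissertation's semi-Markov renewal model, not the model itself.

```python
# Sketch: predict the next handover (HO) cell and the expected time until the HO
# from past HO traces, in the spirit of a semi-Markov model. Trace is hypothetical.
from collections import defaultdict

# (cell_id, dwell_time_in_seconds) visited by one user, in order.
ho_trace = [("A", 120), ("B", 45), ("A", 130), ("C", 60), ("A", 110), ("B", 50)]

transitions = defaultdict(lambda: defaultdict(int))  # cell -> next cell -> count
sojourn = defaultdict(list)                          # cell -> observed dwell times
for (cell, dwell), (nxt, _) in zip(ho_trace, ho_trace[1:]):
    transitions[cell][nxt] += 1
    sojourn[cell].append(dwell)

def predict_next(cell):
    """Most likely next cell, its empirical probability, and the expected dwell time."""
    counts = transitions[cell]
    next_cell = max(counts, key=counts.get)
    prob = counts[next_cell] / sum(counts.values())
    expected_dwell = sum(sojourn[cell]) / len(sojourn[cell])
    return next_cell, prob, expected_dwell

print(predict_next("A"))  # e.g. ('B', ~0.67, 120.0)
```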
To address the challenge of the significantly higher cell outage rates anticipated in 5G and beyond, due to higher operational complexity and cell density than in legacy networks, the dissertation's fourth key contribution is a stochastic analytical model to analyze the effects of the arrival of faults on the reliability behavior of a cellular network. Assuming exponential distributions for failures and recovery, a reliability model is developed using a continuous-time Markov chain (CTMC). Unlike previous studies on network reliability, the proposed model is not limited to structural aspects of base stations (BSs); it takes into account diverse potential fault scenarios and is also capable of predicting the expected time of the first occurrence of a fault and the long-term reliability behavior of the BS. The contributions of this dissertation mark a paradigm shift from the reactive, semi-manual, sub-optimal SON towards a conflict-free, agile, proactive SON. By paving the way for the commercial and technical viability of future MCNs, the new SON paradigm presented in this dissertation can act as a key enabler for next-generation MCNs.
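    A minimal two-state sketch of a CTMC reliability model with exponential failure and recovery rates, in the spirit of the analysis described above. The rates below are made up and the dissertation's model covers far richer fault scenarios; this sketch only recovers the expected time to the first fault, the steady-state availability, and a transient up-probability.

```python
# Sketch: two-state CTMC availability model for a base station (state 0 = up, 1 = down)
# with hypothetical exponential failure rate lam and recovery rate mu.
import numpy as np
from scipy.linalg import expm

lam, mu = 1 / 1000.0, 1 / 8.0          # failures per hour, recoveries per hour (made up)
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])           # CTMC generator matrix

# Expected time to the first fault starting from "up" (exponential failures):
print("MTTF (hours):", 1 / lam)

# Long-run availability from the stationary distribution (pi Q = 0, sum(pi) = 1):
print("steady-state availability:", mu / (lam + mu))

# Transient probability of still/again being up after t hours: P(t) = expm(Q t).
t = 24.0
P_t = expm(Q * t)
print("P(up at t=24h | up at 0):", P_t[0, 0])
```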

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex and compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks.

    From MANET to people-centric networking: Milestones and open research challenges

    In this paper, we discuss the state of the art of (mobile) multi-hop ad hoc networking with the aim of presenting the current status of the research activities and identifying the consolidated research areas, which offer limited research opportunities, and the hot and emerging research areas for which further research is required. We start by briefly discussing the MANET paradigm and why research on MANET protocols is now a cold research topic. Then we analyze the active research areas. Specifically, after discussing the wireless-network technologies, we analyze four successful ad hoc networking paradigms that emerged from the MANET world: mesh networks, opportunistic networks, vehicular networks, and sensor networks. We also present an emerging research direction in the multi-hop ad hoc networking field, people-centric networking, triggered by the increasing penetration of smartphones in everyday life, which is generating a people-centric revolution in computing and communications.

    Mobility management in multi-RAT multi-band heterogeneous networks

    Support for user mobility is the raison d'etre of mobile cellular networks. However, mounting pressure for more capacity is leading to the adoption of multi-band multi-RAT ultra-dense network designs, particularly with the increased use of mmWave-based small cells. While such a design for emerging cellular networks is expected to offer manyfold more capacity, it gives rise to a new set of challenges in user mobility management. Among others, the most critical challenges are frequent handovers (HOs) and thus a higher impact of poor mobility management on quality of user experience (QoE) as well as link capacity, the lack of an intelligent solution to manage dual connectivity (of a user with both 4G and 5G cells) activation/deactivation, and mmWave cell discovery. In this dissertation, I propose and evaluate a set of solutions to address the aforementioned challenges. The first outcome of our investigations into these problems is the first-ever taxonomy of mobility-related 3GPP-defined network parameters and key performance indicators (KPIs), followed by a tutorial on 3GPP-based 5G mobility management procedures. The first major contribution of the thesis is a novel framework to characterize the relationship between 28 critical mobility-related network parameters and the 8 most vital KPIs. A critical hurdle in addressing all mobility-related challenges in emerging networks is the complexity of modeling realistic mobility and the HO process. Mathematical models are not suitable here as they cannot capture the dynamics as well as the myriad parameters and KPIs involved. Existing simulators also mostly either omit or overly abstract the HO process and user mobility, chiefly because the problems caused by poor HO management had relatively less impact on overall performance in legacy networks, which were not multi-RAT multi-band and therefore incurred a much smaller number of HOs compared to emerging networks. The second key contribution of this dissertation is the development of a first-of-its-kind system-level simulator, called SyntheticNET, that can help the research community overcome the hurdle of realistic mobility and HO process modeling. SyntheticNET is the very first Python-based simulator that fully conforms to the 3GPP Release 15 5G standard. Compared to existing simulators, SyntheticNET includes a modular structure, flexible propagation modeling, adaptive numerology, realistic mobility patterns, and detailed HO evaluation criteria. SyntheticNET's Python-based platform allows the effective application of Artificial Intelligence (AI) to various network functionalities. Another key challenge in emerging multi-RAT technologies is the lack of an intelligent solution to manage the dual connectivity with a 4G as well as a 5G cell needed by a user to access the 5G infrastructure. The third contribution of this thesis is a solution to address this challenge. I present a QoE-aware E-UTRAN New Radio-Dual Connectivity (EN-DC) activation scheme where AI is leveraged to develop a model that can accurately predict radio link failure (RLF) and voice muting using low-level measurements collected from a real network. The insights from the AI-based RLF and mute prediction models are then leveraged to configure sets of 3GPP parameters to maximize EN-DC activation while keeping the QoE-affecting RLF and mute anomalies to a minimum. The last contribution of this dissertation is a novel solution to address the mmWave cell discovery problem. This problem stems from the highly directional nature of mmWave transmission.
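    A minimal sketch of the kind of AI-based RLF prediction described above, assuming synthetic low-level measurements (RSRP, SINR, UE speed) and a generic scikit-learn classifier; the thesis itself trains on measurements collected from a real network and also predicts voice muting.

```python
# Sketch: predict radio link failure (RLF) from low-level radio measurements.
# Features, synthetic data, and model choice are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
rsrp = rng.normal(-95, 10, n)      # dBm
sinr = rng.normal(8, 6, n)         # dB
speed = rng.uniform(0, 120, n)     # km/h
# Synthetic labels: weak RSRP/SINR at high speed fails more often.
p_rlf = 1 / (1 + np.exp(0.15 * (rsrp + 105) + 0.2 * sinr - 0.02 * speed))
y = (rng.random(n) < p_rlf).astype(int)

X = np.column_stack([rsrp, sinr, speed])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out RLF prediction accuracy:", clf.score(X_te, y_te))
```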
The proposed mmWave cell discovery scheme builds upon a joint search method in which mmWave cells exploit an overlay coverage layer from macro cells that shares the UE location with the mmWave cell. The proposed scheme is made more practical by investigating and developing solutions for the data sparsity issue in model training. The ability to work with sparse data makes the proposed scheme feasible in realistic scenarios where user density is often not high enough to provide coverage reports from each bin of the coverage area. Simulation results show that the proposed scheme efficiently activates EN-DC to a nearby mmWave 5G cell and thus substantially reduces mmWave cell discovery failures compared to state-of-the-art cell discovery methods.
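    A minimal sketch of location-aided mmWave cell discovery, under the assumptions that the macro overlay can report a (possibly coarse) UE position and that the chosen mmWave cell narrows its beam sweep around the reported bearing instead of sweeping blindly; positions and beam width are hypothetical.

```python
# Sketch: macro layer shares a UE position; pick the nearest mmWave cell and
# centre its narrow beam sweep on the UE's bearing. All numbers are hypothetical.
import numpy as np

mmwave_cells = np.array([[0.0, 0.0], [200.0, 50.0], [80.0, 180.0]])  # x, y in metres
ue_reported = np.array([70.0, 160.0])   # position reported over the macro overlay
beam_width_deg = 15.0

# Closest mmWave cell to the reported position ...
d = np.linalg.norm(mmwave_cells - ue_reported, axis=1)
serving = int(np.argmin(d))

# ... steers its sweep around the UE's bearing rather than over the full circle.
dx, dy = ue_reported - mmwave_cells[serving]
bearing = np.degrees(np.arctan2(dy, dx))
sweep = (bearing - beam_width_deg / 2, bearing + beam_width_deg / 2)
print(f"cell {serving}, sweep {sweep[0]:.1f} to {sweep[1]:.1f} degrees")
```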

    Low-latency Networking: Where Latency Lurks and How to Tame It

    While the current generation of mobile and fixed communication networks has been standardized for mobile broadband services, the next generation is driven by the vision of the Internet of Things and mission-critical communication services requiring latency in the order of milliseconds or sub-milliseconds. However, these new stringent requirements have a large technical impact on the design of all layers of the communication protocol stack. The cross-layer interactions are complex due to the multiple design principles and technologies that contribute to the layers' design and fundamental performance limitations. We will be able to develop low-latency networks only if we address the problem of these complex interactions from the new point of view of sub-millisecond latency. In this article, we propose a holistic analysis and classification of the main design principles and enabling technologies that will make it possible to deploy low-latency wireless communication networks. We argue that these design principles and enabling technologies must be carefully orchestrated to meet the stringent requirements and to manage the inherent trade-offs between low latency and traditional performance metrics. We also review currently ongoing standardization activities in prominent standards associations and discuss open problems for future research.

    Channel Access Management for Massive Cellular IoT Applications

    As part of the steps taken towards improving the quality of life, many everyday activities as well as technological advancements rely more and more on smart devices. In the future, it is expected that every electric device will be a smart device that can be connected to the internet. This gives rise to the new network paradigm known as massive cellular IoT, where a large number of simple, battery-powered, heterogeneous devices work collectively for the betterment of humanity in all aspects. However, different from traditional cellular communication networks, IoT applications produce uplink-heavy data traffic that is composed of a large number of small data packets with different quality of service (QoS) requirements. These unique characteristics pose a challenge to the current cellular channel access process and, hence, new and revolutionary access mechanisms are much needed. These access mechanisms need to be cost-effective, scalable, practical, energy and radio-resource efficient, and able to support a massive number of devices. Furthermore, due to the low computational capabilities of the devices, they cannot handle heavy networking intelligence and, thus, the designed channel access should be simple and lightweight. Accordingly, in this research, we evaluate the suitability of the current channel access mechanism for massive applications and propose an energy-efficient and resource-preserving clustering and data aggregation solution. The proposed solution is tailored to the needs of future IoT applications. First, we recognize that for many anticipated cellular IoT applications, providing energy-efficient and delay-aware access is crucial. However, in cellular networks, before devices transmit their data, they use a contention-based association protocol, known as the random access channel (RACH) procedure, which introduces extensive access delays and energy wastage as the number of contending devices increases. Modeling the performance of the RACH procedure is a challenging task due to the complexity of uplink transmission, which exhibits a wide range of interference components; nonetheless, it is an essential process that helps determine the applicability of the cellular IoT communication paradigm and sheds light on the main challenges. Consequently, we develop a novel mathematical framework based on stochastic geometry to evaluate the RACH procedure and identify its limitations in the context of cellular IoT applications with a massive number of devices. To do so, we study the traditional cellular association process and establish a mathematical model for its association success probability. The model accounts for device density, the spatial characteristics of the network, the power control employed, and mutual interference among the devices. Our analysis and results highlight the shortcomings of the RACH procedure and give insights into the potential gains brought by employing power control techniques. Second, based on the analysis of the RACH procedure, we determine that, as the number of devices increases, the contention over the limited network radio resources increases, leading to network congestion. Accordingly, to avoid network congestion while supporting a large number of devices, we propose to use node clustering and data aggregation.
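    A minimal Monte Carlo sketch of RACH preamble contention, assuming a simple collision model in which an attempt succeeds only if no other device picks the same preamble in the same occasion. This ignores the interference and power-control effects captured by the stochastic-geometry analysis, but it illustrates how access success degrades as the number of contending devices grows.

```python
# Sketch: per-device RACH success probability under a pure preamble-collision model.
# Preamble count and device counts are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_preambles = 54          # typical number of contention-based preambles per RACH occasion
trials = 10_000

for n_devices in (10, 50, 100, 200):
    successes = 0
    for _ in range(trials):
        picks = rng.integers(0, n_preambles, n_devices)
        counts = np.bincount(picks, minlength=n_preambles)
        successes += np.count_nonzero(counts == 1)   # preambles chosen by exactly one device
    p_success = successes / (trials * n_devices)
    print(f"{n_devices:4d} devices: per-device success probability ~ {p_success:.3f}")
```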
As the number of supported devices increases and their QoS requirements become vast, optimizing the node clustering and data aggregation processes becomes critical to handling the many trade-offs that arise among different network performance metrics. Furthermore, for cost effectiveness, we propose that the data aggregator nodes be cellular devices; it is thus desirable to keep the number of aggregators to a minimum, such that we avoid congesting the RACH while maximizing the number of successfully supported devices. Consequently, to tackle these issues, we explore the possibility of combining data aggregation and non-orthogonal multiple access (NOMA), where we propose a novel two-hop NOMA-enabled network architecture. Concepts from queuing theory and stochastic geometry are jointly exploited to derive mathematical expressions for different network performance metrics such as the coverage probability, the two-hop access delay, and the number of served devices per transmission frame. The established models characterize relations among various network metrics and hence facilitate the design of the two-stage transmission architecture. Numerical results demonstrate that the proposed solution improves the overall access delay and energy efficiency as compared to traditional OMA-based clustered networks. Last, we recognize that under the proposed two-hop network architecture, devices are subject to access point association decisions, i.e., the access point with which a device associates plays a major role in determining the overall network performance and the service perceived by the devices. Accordingly, in the third part of the work, we consider the optimization of the two-hop network from the point of view of user association, such that the number of QoS-satisfied devices is maximized while minimizing the overall device energy consumption. We formulate the problem as a joint access point association, resource utilization, and energy-efficient communication optimization problem that takes into account various networking factors such as the number of devices, the number of data aggregators, the number of available resource units, interference, the transmission power limitation of the devices, aggregator transmission performance, and channel conditions. The objective is to show the usefulness of data aggregation and shed light on the importance of network design when the number of devices is massive. We propose a coalition-game-theory-based algorithm, PAUSE, to transform the optimization problem into a simpler form that can be successfully solved in polynomial time. Different network scenarios are simulated to showcase the effectiveness of PAUSE and to draw observations on cost-effective, data-aggregation-enabled, two-hop network design.
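    A minimal sketch of energy-aware device-to-aggregator association under a simple path-loss and per-aggregator resource-budget assumption; this greedy pass is only illustrative and is not the coalition-game PAUSE algorithm proposed in the work.

```python
# Sketch: each device joins the aggregator it can reach with the least transmit power,
# subject to a per-aggregator resource budget. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_devices, n_aggregators, slots_per_agg = 30, 3, 12
devices = rng.uniform(0, 500, (n_devices, 2))        # positions in metres
aggregators = rng.uniform(0, 500, (n_aggregators, 2))

# Required transmit power grows with distance**alpha (simple path-loss proxy).
alpha = 3.5
d = np.linalg.norm(devices[:, None, :] - aggregators[None, :, :], axis=2)
p_req = d ** alpha

load = np.zeros(n_aggregators, dtype=int)
assoc = np.full(n_devices, -1)
# Serve devices in order of their cheapest option, greedily.
for dev in np.argsort(p_req.min(axis=1)):
    for agg in np.argsort(p_req[dev]):
        if load[agg] < slots_per_agg:
            assoc[dev] = agg
            load[agg] += 1
            break

print("devices served:", int(np.sum(assoc >= 0)), "of", n_devices)
print("per-aggregator load:", load.tolist())
```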