
    Intelligent Advancements in Location Management and C-RAN Power-Aware Resource Allocation

    The evolution of cellular networks over the last decade has continued to focus on delivering a robust and reliable means to cope with the increasing number of users and the demanded capacity. Recent advancements of cellular networks such as Long-Term Evolution (LTE) and LTE-Advanced offer remarkably high-bandwidth connectivity to users. Signaling overhead is one of the vital issues that impact cellular network behavior: it places a significant load on the core network and hence affects network reliability. Moreover, signaling overhead decreases the Quality of Experience (QoE) of users. The first topic of the thesis attempts to reduce the signaling overhead by developing intelligent location management techniques that minimize paging and Tracking Area Update (TAU) signals. The corresponding optimization problems are formulated, and several techniques and heuristic algorithms are implemented to solve them. Additionally, network scalability has become a challenging aspect that is hindered by the current network architecture. As a result, Cloud Radio Access Networks (C-RANs) have been introduced as a new trend in wireless technologies to address this challenge. The C-RAN architecture consists of Remote Radio Heads (RRHs), Baseband Units (BBUs), and the optical network connecting them. However, RRH-to-BBU resource allocation can significantly degrade efficiency, particularly when allocating the computational resources in the BBU pool to densely deployed small cells. This causes a vast increase in power consumption and wasted resources. Therefore, the second topic of the thesis discusses the C-RAN infrastructure, particularly the pool of BBUs gathered to process the computational load. We argue that the processing capacity needs to be optimized in order to minimize power consumption and increase overall system efficiency. Consequently, the optimal allocation of computational resources between the RRHs and BBUs is modeled. Furthermore, to obtain an optimal RRH-to-BBU allocation, it is essential to have an optimal physical resource allocation for users so as to determine the required computational resources. For this purpose, an optimization problem that models the assignment of resources at these two levels (from physical resources to users and from RRHs to BBUs) is formulated.
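    The RRH-to-BBU level of this allocation can be illustrated with a deliberately simple heuristic. The sketch below is our illustration, not the thesis's formulation: it treats the assignment as bin packing, where consolidating RRH compute demands onto as few active BBUs as possible serves as a proxy for minimizing pool power; all demand and capacity figures are assumptions.

```python
# Sketch: first-fit-decreasing packing of RRH compute demands into BBUs.
# Demands and capacity are illustrative (e.g., GOPS), not thesis values.
def assign_rrhs_to_bbus(rrh_demands, bbu_capacity):
    bbus = []          # remaining capacity of each powered-on BBU
    assignment = {}    # rrh -> index of the BBU serving it
    # Placing the largest demands first reduces fragmentation in the pool.
    for rrh, demand in sorted(rrh_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(bbus):
            if demand <= free:
                bbus[i] -= demand
                assignment[rrh] = i
                break
        else:
            # No active BBU has room; power on another one.
            bbus.append(bbu_capacity - demand)
            assignment[rrh] = len(bbus) - 1
    return assignment, len(bbus)

demands = {"rrh1": 40, "rrh2": 70, "rrh3": 30, "rrh4": 55}
assignment, active_bbus = assign_rrhs_to_bbus(demands, bbu_capacity=100)
print(assignment, "active BBUs:", active_bbus)
```

    An exact version of the problem would replace this greedy pass with an integer program coupling the RRH-to-BBU level with the per-user physical resource allocation.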

    Will SDN be part of 5G?

    For many, this is no longer a valid question, and the case is considered settled, with SDN/NFV (Software Defined Networking/Network Function Virtualization) providing the inevitable innovation enablers that solve many outstanding management issues regarding 5G. However, given the monumental task of softwarizing the radio access network (RAN) while 5G is just around the corner and some companies have already started unveiling their 5G equipment, there is a very realistic concern that we may only see some point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies all the important obstacles in the way and looks at the state of the art of the relevant solutions. This survey differs from previous surveys on SDN-based RAN in that it focuses on the salient problems and discusses solutions proposed both within and outside the SDN literature. Our main focus is on the fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, the latency of general-purpose processors (GPPs), and the additional security vulnerabilities that softwarization brings to the RAN. We have also provided a summary of the architectural developments in the SDN-based RAN landscape, as not all work can be covered under the focused issues. This paper provides a comprehensive survey of the state of the art of SDN-based RAN and clearly points out the gaps in the technology.

    An Innovative RAN Architecture for Emerging Heterogeneous Networks: The Road to the 5G Era

    The global demand for mobile-broadband data services has experienced phenomenal growth over the last few years, driven by the rapid proliferation of smart devices such as smartphones and tablets. This growth is expected to continue unabated, as mobile data traffic is predicted to grow anywhere from 20 to 50 times over the next 5 years. Exacerbating the problem is that this unprecedented surge in smartphone usage, characterized by frequent short on/off connections and mobility, generates a heavy signaling traffic load in the network (signaling storms). This consumes a disproportionate amount of network resources, compromising network throughput and efficiency, and in extreme cases can cause Third-Generation (3G) or 4G (Long-Term Evolution (LTE) and LTE-Advanced (LTE-A)) cellular networks to crash. As the conventional approaches of improving spectral efficiency and/or allocating additional spectrum are fast approaching their theoretical limits, there is a growing consensus that current 3G and 4G (LTE/LTE-A) cellular radio access technologies (RATs) won't be able to meet the anticipated growth in mobile traffic demand. To address these challenges, the wireless industry and standardization bodies have initiated a roadmap for the transition from 4G to 5G cellular technology, with a key objective of increasing capacity by 1000x by 2020. Even though the technology hasn't been invented yet, the hype around 5G networks has begun to bubble. The emerging consensus is that 5G is not a single technology, but rather a synergistic collection of interworking technical innovations and solutions that collectively address the challenge of traffic growth. The core emerging ingredients that are widely considered the key enabling technologies to realize the envisioned 5G era, listed in order of importance, are: 1) heterogeneous networks (HetNets); 2) flexible backhauling; 3) efficient traffic offload techniques; and 4) Self-Organizing Networks (SONs). The anticipated solutions delivered by efficient interworking/integration of these enabling technologies are not simply about throwing more resources and/or spectrum at the challenge. The envisioned solution, rather, requires radically different cellular RAN and mobile core architectures that efficiently and cost-effectively deploy and manage radio resources as well as offload mobile traffic from the overloaded core network. The main objective of this thesis is to address the key techno-economic challenges facing the transition from current Fourth-Generation (4G) cellular technology to the 5G era by proposing a novel high-risk, revolutionary direction for the design and implementation of the envisioned 5G cellular networks. The ultimate goal is to explore the potential and viability of cost-effectively implementing the 1000x capacity challenge while continuing to provide an adequate mobile broadband experience to users.
Specifically, this work proposes and devises a novel PON-based HetNet mobile backhaul RAN architecture that: 1) holistically addresses the key techno-economic hurdles facing the implementation of the envisioned 5G cellular technology, specifically the backhauling and signaling challenges; and 2) enables, for the first time to the best of our knowledge, the support of efficient ground-breaking mobile data and signaling offload techniques, which significantly enhance the performance of both the HetNet-based RAN and LTE-A's core network (Evolved Packet Core (EPC) per the 3GPP standard), ensure that core network equipment is used more productively, and moderate the evolving 5G signaling growth and optimize its impact. To address the backhauling challenge, we propose a cost-effective fiber-based small cell backhaul infrastructure that leverages the existing fibered and powered facilities associated with a PON-based fiber-to-the-Node/Home (FTTN/FTTH) residential access network. Due to the sharing of existing valuable fiber assets, the proposed PON-based backhaul architecture, in which the small cells are collocated with existing FTTN remote terminals (optical network units (ONUs)), is much more economical than conventional point-to-point (PTP) fiber backhaul designs. A fully distributed ring-based EPON architecture is utilized here as the fiber-based HetNet backhaul. The techno-economic merits of the proposed PON-based FTTx access HetNet RAN architecture versus the traditional 4G LTE-A RAN are thoroughly examined and quantified. Specifically, we quantify the techno-economic merits of the proposed PON-based HetNet backhaul by comparing its performance against that of a conventional fiber-based PTP backhaul architecture as a benchmark. It is shown that the purposely selected ring-based PON architecture, along with the supporting distributed control plane, enables the proposed PON-based FTTx RAN architecture to support several salient networking features that collectively and significantly enhance the overall performance of both the HetNet-based RAN and the 4G LTE-A core (EPC) compared to the typical fiber-based PTP backhaul architecture, in terms of handoff capability, signaling overhead, overall network throughput and latency, and QoS support. It is also shown that the proposed HetNet-based RAN architecture is not only capable of providing the typical macro-cell offloading gain (RAN gain) but can also provide a ground-breaking EPC offloading gain. The simulation results indicate that the overall capacity of the proposed HetNet scales with the number of deployed small cells, thanks to LTE-A's advanced interference management techniques. For example, if there are 10 deployed outdoor small cells for every macrocell in the network, then the overall capacity shows an approximately 10-11x gain over a macro-only network. To reach the 1000x capacity goal, numerous small cells including 3G, 4G, and WiFi (femtos, picos, metros, relays, remote radio heads, distributed antenna systems) need to be deployed indoors and outdoors, at all possible venues (residences and enterprises).
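    As a quick sanity check on the quoted scaling, the back-of-the-envelope arithmetic below reproduces the 10-11x figure under an idealized assumption of ours (not a parameter from the thesis): each interference-managed small cell contributes roughly one macrocell's worth of capacity.

```python
# Back-of-the-envelope HetNet capacity scaling; the efficiency factor is
# an assumed idealization, not a value from the thesis.
def hetnet_capacity_gain(small_cells_per_macro, small_cell_efficiency=1.0):
    # One macrocell plus N small cells, each weighted by how much of a
    # macrocell's capacity it effectively delivers.
    return 1 + small_cells_per_macro * small_cell_efficiency

print(hetnet_capacity_gain(10))       # 11.0 -> ~11x with ideal small cells
print(hetnet_capacity_gain(10, 0.9))  # 10.0 -> ~10x at 90% effectiveness
```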

    Learning-based tracking area list management in 4G and 5G networks

    Mobility management in 5G networks is a very challenging issue. It requires novel ideas and improved management so that signaling is minimized and kept far from congesting the network. Mobile networks have become massive generators of data, and in the forthcoming years this data is expected to increase drastically. The use of intelligence and analytics based on big data is a good ally for operators to enhance operational efficiency and provide individualized services. This work proposes to exploit User Equipment (UE) patterns and hidden relationships in geo-spatial time series to minimize signaling due to idle-mode mobility. We propose a holistic methodology to generate optimized Tracking Area Lists (TALs) on a per-UE basis, considering each UE's learned individual behavior. The k-means algorithm is proposed to find the allocation of cells into tracking areas. This serves as the basis for the TAL optimization itself, which follows a combined multi-objective and single-objective approach depending on the UE's behavior. The last stage identifies UE profiles and performs the allocation of the TAL using a neural network. Each technique has been evaluated individually and jointly under very realistic conditions and different situations. Results demonstrate important signaling reductions and good sensitivity to changing conditions. This work was supported by the Spanish National Science Council and ERDF funds under projects TEC2014-60258-C2-2-R and RTI2018-099880-B-C32.
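    The first stage of the methodology lends itself to a compact illustration. Below is a minimal sketch of clustering cell sites into tracking areas with k-means on their coordinates; the cell positions and the number of tracking areas are invented for illustration, not data from the paper.

```python
# Sketch: cluster cell sites into tracking areas with k-means.
# Cell coordinates and k are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cell_sites = rng.uniform(0, 10_000, size=(60, 2))  # 60 cells in a 10 km square

k = 6  # assumed number of tracking areas
tas = KMeans(n_clusters=k, n_init=10, random_state=0).fit(cell_sites)

# tas.labels_[i] is the tracking area of cell i; per-UE TALs would then be
# composed from these areas in the later optimization and profiling stages.
for ta in range(k):
    print(f"TA {ta}: {np.sum(tas.labels_ == ta)} cells")
```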

    User-oriented mobility management in cellular wireless networks

    Mobility Management (MM) in wireless mobile networks is a vital process that keeps an individual User Equipment (UE) connected while moving within the network coverage area—it is required to keep the network informed about the UE's mobility (i.e., location changes). The network must identify the exact serving cell of a specific UE for the purpose of data-packet delivery. The two MM procedures necessary to localize a specific UE and deliver data packets to it are known as Tracking Area Update (TAU) and Paging, which are burdensome not only to the network resources but also to the UE's battery—the UE and the network always initiate the TAU and Paging, respectively. These two procedures are used in current Long Term Evolution (LTE) and next-generation (5G) networks despite the drawback that they consume bandwidth and energy. Because of potentially very high-volume traffic and the increasing density of high-mobility UEs, the TAU/Paging procedures incur significant costs in terms of signaling overhead and power consumption in the battery-limited UE. This problem will become even worse in 5G, which is expected to accommodate exceptional services such as supporting mission-critical systems (close-to-zero latency) and extending battery lifetime (10 times longer). This dissertation examines and discusses a variety of solution schemes for both TAU and Paging, emphasizing a new key design to accommodate 5G use cases. Ongoing efforts are still developing new schemes to provide seamless connections to the ever-increasing density of high-mobility UEs. In this context, and toward achieving the 5G use cases, we propose a novel solution to the MM issues, named gNB-based UE Mobility Tracking (gNB-based UeMT). This solution has four features aligned with achieving the 5G goals. First, mobile UEs no longer trigger the TAU to report their location changes, giving much greater power savings with no signaling overhead. Instead, second, the network elements (gNBs) take over the responsibility of tracking and locating these UEs, so that UE locations are always known. Third, our Paging procedure is markedly improved over the conventional one, providing very fast UE reachability with no Paging messages being sent simultaneously. Fourth, our solution guarantees lightweight signaling overhead with very low Paging delay; our simulation studies show that it achieves about a 92% reduction in the corresponding signaling overhead. To realize these four features, this solution adds no implementation complexity. Instead, it exploits the already existing LTE/5G communication protocols, functions, and measurement reports. Our gNB-based UeMT solution by design has the potential to deal with mission-critical applications. In this context, we introduce a new approach for mission-critical and public-safety communications. Our approach targets emergency situations (e.g., natural disasters) in which the mobile wireless network becomes partially or completely dysfunctional. Specifically, this approach is intended to provide swift network recovery for Search-and-Rescue Operations (SAROs) to search for survivors after large-scale disasters, which we call UE-based SAROs. These SAROs are based on the fact that almost everyone increasingly carries wireless mobile devices (UEs), which serve as human-based wireless sensors on the ground.
Our UE-based SAROs are aimed at accounting for limited UE battery power while providing critical information to first responders, as follows: 1) generate immediate crisis maps of the disaster-impacted areas, 2) provide vital information about where the majority of survivors are clustered/crowded, and 3) prioritize the impacted areas to identify regions that urgently need communication coverage. UE-based SAROs offer first responders a vital tool to prioritize and manage SAROs efficiently, effectively, and in a timely manner.
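    To make the claimed signaling savings tangible, the toy message count below contrasts conventional TAU/Paging with the gNB-based tracking idea described above: the UE sends no TAUs, gNBs exchange a lightweight update on each cell change, and a page targets only the known serving cell. All parameters are illustrative assumptions, not the dissertation's simulation settings.

```python
# Toy idle-mode signaling comparison; every constant here is assumed.
CELLS_PER_TAL = 50   # cells paged per incoming call in the conventional case
N_UES = 1000
PAGES_PER_UE = 4     # incoming-call pages per UE over the observation window
MOVES_PER_UE = 10    # cell changes per UE over the window
TAUS_PER_UE = 3      # TAL boundary crossings that trigger a TAU

# Conventional: TAUs from the UE plus a page broadcast across the whole TAL.
conventional = N_UES * (TAUS_PER_UE + PAGES_PER_UE * CELLS_PER_TAL)
# gNB-based: one inter-gNB tracking update per move, one targeted page.
gnb_based = N_UES * (MOVES_PER_UE + PAGES_PER_UE)

print(f"{conventional} vs {gnb_based} messages: "
      f"{1 - gnb_based / conventional:.0%} reduction")
```

    With these assumed parameters the reduction lands near the dissertation's reported figure, but the point is only the mechanism: targeted paging replaces TAL-wide broadcasts.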

    Toward a Live BBU Container Migration in Wireless Networks

    Cloud Radio Access Networks (Cloud-RANs) have recently emerged as a promising architecture to meet the increasing demands and expectations of future wireless networks. Such an architecture can enable dynamic and flexible network operations to address significant challenges, such as higher mobile traffic volumes and increasing network operation costs. However, the implementation of compute-intensive signal-processing Network Functions (NFs) on the General Purpose Processors (GPPs) typically found in data centers can lead to performance complications, such as overloaded servers. There is therefore a need for methods that ensure the availability and continuity of critical wireless network functionality in such circumstances. Motivated by the goal of providing highly available and fault-tolerant functionality in Cloud-RAN-based networks, this paper proposes the design, specification, and implementation of live migration of containerized Baseband Units (BBUs) in two wireless network settings, namely Long Range Wide Area Network (LoRaWAN) and Long Term Evolution (LTE) networks. Driven by the requirements and critical challenges of live migration, the approach shows that, in the case of LoRaWAN networks, the migration of BBUs is currently possible with relatively low downtimes to support network continuity. The performance of functional splits and cell configurations in both networks was analyzed and compared in terms of fronthaul throughput requirements. The results of this analysis can be used by both service providers and network operators in the deployment and optimization of Cloud-RAN services, in order to ensure network reliability and continuity in cloud environments.
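    Fronthaul throughput comparisons of this kind usually start from a first-order rate estimate for the lowest (CPRI-style, time-domain I/Q) split. The sketch below shows that common calculation; the constants are typical published LTE values assumed for illustration, not figures taken from this paper.

```python
# First-order fronthaul rate for a CPRI-style split (time-domain I/Q).
# Overhead factors: 16/15 for control words, 10/8 for 8b/10b line coding.
def cpri_like_rate_gbps(sample_rate_msps, bits_per_sample, antennas,
                        cw_overhead=16 / 15, line_coding=10 / 8):
    bps = sample_rate_msps * 1e6 * 2 * bits_per_sample * antennas  # 2x for I+Q
    return bps * cw_overhead * line_coding / 1e9

# 20 MHz LTE carrier: 30.72 Msps, 15-bit samples, 2 antenna ports.
print(f"{cpri_like_rate_gbps(30.72, 15, 2):.2f} Gb/s")  # -> 2.46 Gb/s
```

    Higher-layer splits shrink this figure dramatically, which is why split selection dominates the fronthaul requirements compared above.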

    Landscape of IoT security

    The last two decades have seen a steady rise in the production and deployment of sensing- and connectivity-enabled electronic devices, replacing “regular” physical objects. The resulting Internet of Things (IoT) will soon become indispensable for many application domains. Smart objects are continuously being integrated within factories, cities, buildings, health institutions, and private homes. Approximately 30 years after the birth of IoT, society is confronted with significant challenges regarding IoT security. Due to the interconnectivity and ubiquitous use of IoT devices, cyberattacks have widespread impacts on multiple stakeholders. Past events show that the IoT domain holds various vulnerabilities, which have been exploited to cause physical, economic, and health damage. Despite these threats, manufacturers struggle to secure IoT devices properly. Thus, this work gives an overview of the IoT security landscape with the intention of emphasizing the demand for secured IoT-related products and applications. To this end, (a) a list of key challenges in securing IoT devices is determined by examining their particular characteristics, (b) major security objectives for secured IoT systems are defined, (c) a threat taxonomy is introduced that outlines potential security gaps prevalent in current IoT systems, and (d) key countermeasures against the aforementioned threats are summarized for selected IoT security-related technologies available on the market.

    Cost Modeling of a Cloud-Based Radio Access Network

    The rapid growth of mobile data traffic is challenging the current way of building and operating radio access networks. Cloud-based radio access networks are being researched as a solution for providing the required capacity for rapidly growing traffic demand in a more economical manner. The scope of this thesis is to evaluate the costs of different existing and future radio access network architectures depending on the given network and traffic scenario. This is done by creating a cost model, based on expert interviews, to determine the most economical solution for the given network in terms of total cost of ownership. The results show that the cost benefits of a cloud-based radio access network depend on the expected traffic growth. In the low-traffic-growth scenario, the cost benefits of a cloud-based radio access network are questionable, but in the high-traffic-growth scenario clear cost benefits are achieved.
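    The flavor of such a model can be sketched in a few lines: capital expenditure plus discounted, traffic-dependent operating expenditure per architecture, compared under different growth scenarios. Every cost figure below is an invented placeholder, not a value from the thesis's expert interviews.

```python
# Minimal TCO sketch: capex + discounted opex, with opex growing as carried
# traffic grows. All monetary figures and factors are assumed placeholders.
def tco(capex_per_site, opex_per_site, sites, years, growth,
        discount=0.08, capacity_cost_factor=0.0):
    total = capex_per_site * sites
    traffic = 1.0
    for year in range(years):
        # Part of opex scales with traffic (capacity upgrades, energy, ...).
        yearly = opex_per_site * sites * (1 + capacity_cost_factor * (traffic - 1))
        total += yearly / (1 + discount) ** year
        traffic *= 1 + growth
    return total

sites, years = 100, 7
for growth in (0.10, 0.50):  # low vs high traffic growth scenario
    dran = tco(80_000, 20_000, sites, years, growth, capacity_cost_factor=0.6)
    cran = tco(120_000, 16_000, sites, years, growth, capacity_cost_factor=0.3)
    print(f"growth {growth:.0%}: D-RAN {dran:,.0f} vs C-RAN {cran:,.0f}")
```

    With these placeholders, the C-RAN's higher up-front centralization cost only pays off in the high-growth case, mirroring the thesis's qualitative conclusion.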

    Cloud Radio Access Network architecture: Towards 5G mobile networks
