
    On the Impact of IoT Traffic on the Cellular EPC

    One of the most disruptive innovations in next-generation cellular networks will be the massive support of Machine Type and IoT (MTC/IoT) communications. This type of communication exhibits very different requirements from traditional cellular traffic: in MTC/IoT, the same base station may need to provide service to thousands of nodes, each of them transmitting small and infrequent data. In this context, it is critical to evaluate the impact of MTC/IoT on the Evolved Packet Core (EPC) network. We do so by quantifying analytically the signaling load on the EPC due to MTC/IoT bearer instantiation in both standard and 3GPP IoT-optimized LTE networks. Our analysis, validated via simulation, provides useful insights on the impact of the traffic load on each component of the EPC, as well as on the system design. This work was supported by the European Commission through the H2020 5G-TRANSFORMER project (Project ID 761536).
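The kind of analysis the abstract describes can be illustrated with a back-of-envelope calculation (this is not the paper's actual model; the per-procedure message counts below are assumptions for illustration only):

```python
# Illustrative sketch: aggregate bearer-instantiation signaling rate
# seen by EPC components for N IoT devices, each requesting bearers at
# rate lam (requests/s), where every request costs a fixed number of
# signaling messages at each EPC node.

def epc_signaling_rate(n_devices, lam, msgs_per_request):
    """Return per-node signaling message rates (messages/s)."""
    request_rate = n_devices * lam
    return {node: request_rate * m for node, m in msgs_per_request.items()}

# Hypothetical per-procedure message counts for MME, SGW, PGW.
load = epc_signaling_rate(
    n_devices=10_000,
    lam=1 / 3600,          # one small report per device per hour
    msgs_per_request={"MME": 5, "SGW": 2, "PGW": 1},
)
print(load)   # MME handles ~13.9 messages/s under these assumptions
```

Even at one transmission per hour per device, the linear scaling in the number of devices is what makes the MTC/IoT case stress the control plane rather than the data plane.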

    Long Term Evolution-Advanced and Future Machine-to-Machine Communication

    Long Term Evolution (LTE) has adopted Orthogonal Frequency Division Multiple Access (OFDMA) and Single Carrier Frequency Division Multiple Access (SC-FDMA) as the downlink and uplink transmission schemes, respectively. Quality of Service (QoS) provisioning is one of the primary objectives of wireless network operators. In LTE-Advanced (LTE-A), several additional new features such as Carrier Aggregation (CA) and Relay Nodes (RNs) have been introduced by the 3rd Generation Partnership Project (3GPP). These features have been designed to deal with the ever-increasing demands for higher data rates and spectral efficiency. The RN is a low-power and low-cost device designed for extending coverage and enhancing spectral efficiency, especially at the cell edge. Wireless networks are facing a new challenge emerging on the horizon: the expected surge of Machine-to-Machine (M2M) traffic in cellular and mobile networks. The costs and sizes of M2M devices with integrated sensors, network interfaces and enhanced power capabilities have decreased significantly in recent years. Therefore, it is anticipated that M2M devices might outnumber conventional mobile devices in the near future. 3GPP standards like LTE-A have primarily been developed for broadband data services with mobility support. However, M2M applications are mostly based on narrowband traffic. These standards may not achieve overall spectrum and cost efficiency if they are utilized for serving M2M applications. The main goal of this thesis is to take advantage of the low cost, low power and small size of RNs for integrating M2M traffic into LTE-A networks. A new RN design is presented for aggregating and multiplexing M2M traffic at the RN before transmission over the air interface (Un interface) to the base station, called the eNodeB. The data packets of the M2M devices are sent to the RN over the Uu interface. Packets from different devices are aggregated at the Packet Data Convergence Protocol (PDCP) layer of the Donor eNodeB (DeNB) into a single large IP packet instead of several small IP packets. Therefore, the amount of overhead data can be significantly reduced. The proposed concept has been developed in the LTE-A network simulator to illustrate the benefits and advantages of M2M traffic aggregation and multiplexing at the RN. The potential gains of RNs, such as coverage enhancement, multiplexing gain and end-to-end delay performance, are illustrated with the help of simulation results. The results indicate that the proposed concept improves the performance of the LTE-A network with M2M traffic. The adverse impact of M2M traffic on regular LTE-A traffic such as voice and file transfer is minimized. Furthermore, the cell edge throughput and QoS performance are enhanced. Moreover, the results are validated with the help of an analytical model.
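The multiplexing gain the abstract claims comes from amortizing per-packet headers over many small payloads. A minimal sketch of that arithmetic (header and payload sizes below are assumptions, not figures from the thesis):

```python
# Rough sketch of the overhead reduction from aggregating K small M2M
# payloads into one large IP packet at the PDCP layer.

def overhead_ratio(k, payload=20, per_pkt_hdr=40, mux_hdr=2):
    """Header bytes per payload byte, without vs with aggregation.

    payload:     M2M payload size per device (bytes, assumed)
    per_pkt_hdr: IP/transport header per packet (bytes, assumed)
    mux_hdr:     per-payload multiplexing header inside the aggregate
    """
    unaggregated = (k * per_pkt_hdr) / (k * payload)
    aggregated = (per_pkt_hdr + k * mux_hdr) / (k * payload)
    return unaggregated, aggregated

before, after = overhead_ratio(k=10)
print(f"overhead: {before:.2f} -> {after:.2f} header bytes per payload byte")
```

With these assumed sizes, aggregating ten 20-byte payloads cuts the header overhead from 2.0 to 0.3 bytes of header per payload byte, which is the kind of saving that matters most on the bandwidth-limited Un interface.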

    An Innovative RAN Architecture for Emerging Heterogeneous Networks: The Road to the 5G Era

    The global demand for mobile-broadband data services has experienced phenomenal growth over the last few years, driven by the rapid proliferation of smart devices such as smartphones and tablets. This growth is expected to continue unabated, as mobile data traffic is predicted to grow anywhere from 20 to 50 times over the next 5 years. Exacerbating the problem is that such an unprecedented surge in smartphone usage, which is characterized by frequent short on/off connections and mobility, generates heavy signaling traffic load in the network ("signaling storms"). This consumes a disproportionate amount of network resources, compromising network throughput and efficiency, and in extreme cases can cause Third-Generation (3G) or 4G (Long-Term Evolution (LTE) and LTE-Advanced (LTE-A)) cellular networks to crash. As the conventional approaches of improving spectral efficiency and/or allocating additional spectrum are fast approaching their theoretical limits, there is a growing consensus that current 3G and 4G (LTE/LTE-A) cellular radio access technologies (RATs) won't be able to meet the anticipated growth in mobile traffic demand. To address these challenges, the wireless industry and standardization bodies have initiated a roadmap for the transition from 4G to 5G cellular technology, with a key objective of increasing capacity by 1000× by 2020. Even though the technology hasn't been invented yet, the hype around 5G networks has begun to bubble. The emerging consensus is that 5G is not a single technology, but rather a synergistic collection of interworking technical innovations and solutions that collectively address the challenge of traffic growth. The core emerging ingredients that are widely considered the key enabling technologies to realize the envisioned 5G era, listed in order of importance, are: 1) heterogeneous networks (HetNets); 2) flexible backhauling; 3) efficient traffic offload techniques; and 4) Self-Organizing Networks (SONs).
The anticipated solutions delivered by efficient interworking/integration of these enabling technologies are not simply about throwing more resources and/or spectrum at the challenge. The envisioned solution, rather, requires radically different cellular RAN and mobile core architectures that efficiently and cost-effectively deploy and manage radio resources, as well as offload mobile traffic from the overloaded core network. The main objective of this thesis is to address the key techno-economic challenges facing the transition from current Fourth-Generation (4G) cellular technology to the 5G era, by proposing a novel, high-risk, revolutionary direction for the design and implementation of the envisioned 5G cellular networks. The ultimate goal is to explore the potential and viability of cost-effectively implementing the 1000x capacity challenge while continuing to provide an adequate mobile broadband experience to users. Specifically, this work proposes and devises a novel PON-based HetNet mobile backhaul RAN architecture that: 1) holistically addresses the key techno-economic hurdles facing the implementation of the envisioned 5G cellular technology, specifically the backhauling and signaling challenges; and 2) enables, for the first time to the best of our knowledge, the support of efficient, ground-breaking mobile data and signaling offload techniques, which significantly enhance the performance of both the HetNet-based RAN and LTE-A's core network (Evolved Packet Core (EPC) per the 3GPP standard), ensure that core network equipment is used more productively, and moderate the evolving 5G signaling growth and optimize its impact. To address the backhauling challenge, we propose a cost-effective fiber-based small cell backhaul infrastructure, which leverages existing fibered and powered facilities associated with a PON-based fiber-to-the-Node/Home (FTTN/FTTH) residential access network.
Due to the sharing of existing valuable fiber assets, the proposed PON-based backhaul architecture, in which the small cells are collocated with existing FTTN remote terminals (optical network units (ONUs)), is much more economical than conventional point-to-point (PTP) fiber backhaul designs. A fully distributed ring-based EPON architecture is utilized here as the fiber-based HetNet backhaul. The techno-economic merits of utilizing the proposed PON-based FTTx access HetNet RAN architecture versus that of a traditional 4G LTE-A RAN are thoroughly examined and quantified. Specifically, we quantify the techno-economic merits of the proposed PON-based HetNet backhaul by comparing its performance against that of a conventional fiber-based PTP backhaul architecture as a benchmark. It is shown that the purposely selected ring-based PON architecture, along with the supporting distributed control plane, enables the proposed PON-based FTTx RAN architecture to support several key salient networking features that collectively and significantly enhance the overall performance of both the HetNet-based RAN and the 4G LTE-A core (EPC) compared to the typical fiber-based PTP backhaul architecture, in terms of handoff capability, signaling overhead, overall network throughput and latency, and QoS support. It is also shown that the proposed HetNet-based RAN architecture is not only capable of providing the typical macro-cell offloading gain (RAN gain) but can also provide ground-breaking EPC offloading gain. The simulation results indicate that the overall capacity of the proposed HetNet scales with the number of deployed small cells, thanks to LTE-A's advanced interference management techniques. For example, if there are 10 deployed outdoor small cells for every macrocell in the network, the overall capacity gain will be approximately 10-11x over a macro-only network.
To reach the 1000x capacity goal, numerous small cells including 3G, 4G, and WiFi (femtos, picos, metros, relays, remote radio heads, distributed antenna systems) need to be deployed indoors and outdoors, at all possible venues (residences and enterprises).
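The quoted 10-11x figure is consistent with near-linear capacity scaling under effective interference management. A toy reading of that scaling (the efficiency factor is an assumption, not a parameter from the thesis):

```python
# Back-of-envelope HetNet capacity scaling: one macrocell plus n small
# cells, each contributing a fraction of a macrocell's capacity when
# interference is well managed.

def hetnet_capacity_gain(n_small, small_cell_efficiency=1.0):
    """Capacity of 1 macro + n small cells, relative to macro alone."""
    return 1 + n_small * small_cell_efficiency

print(hetnet_capacity_gain(10))   # 11.0: consistent with the 10-11x range
```

Reaching 1000x by densification alone would then require on the order of a thousand small cells per macrocell, which is why the thesis treats backhaul cost and signaling offload as the binding constraints.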

    Characterizing Delay and Control Traffic of the Cellular MME with IoT Support

    One of the main use cases for advanced cellular networks is represented by massive Internet of Things (MIoT), i.e., an enormous number of IoT devices that transmit data toward the cellular network infrastructure. To make cellular MIoT a reality, data transfer and control procedures specifically designed for the support of IoT are needed. For this reason, 3GPP has introduced the Control Plane Cellular IoT optimization, which foresees a simplified bearer instantiation, with the Mobility Management Entity (MME) handling both control and data traffic. The performance of the MME has therefore become critical, and properly scaling its computational capability can determine the ability of the whole network to tackle MIoT effectively. In particular, considering virtualized networks and the need for an efficient allocation of computing resources, it is paramount to characterize the MME performance as the MIoT traffic load changes. We address this need by presenting compact, closed-form expressions linking the number of IoT sources with the rate at which bearers are requested, and such a rate with the delay incurred by the IoT data. We show that our analysis, supported by testbed experiments and verified through large-scale simulations, represents a valuable tool to make effective scaling decisions in virtualized cellular core networks.
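One simple way to link a bearer-request rate to a delay, in the spirit of the abstract (this is a generic M/M/1 sketch, not necessarily the closed form the paper derives; the device count and MME service rate are assumed):

```python
# M/M/1 sketch: bearers requested at aggregate rate lam (requests/s),
# MME serving them at rate mu (requests/s); mean sojourn time follows.

def mm1_delay(lam, mu):
    """Mean time in system (s) of an M/M/1 queue; requires lam < mu."""
    if lam >= mu:
        raise ValueError("queue is unstable: lam must be < mu")
    return 1.0 / (mu - lam)

n_devices = 50_000
lam = n_devices / 3600.0     # one bearer request per device per hour
mu = 20.0                    # assumed MME capacity: 20 requests/s
print(f"mean bearer-setup delay: {mm1_delay(lam, mu):.3f} s")
```

Note how the delay blows up as the device population pushes the arrival rate toward the MME's service rate: this is precisely why the abstract argues that MME scaling decisions need a load-to-delay characterization rather than a fixed provisioning rule.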

    Doctor of Philosophy

    The next-generation mobile network (i.e., the 5G network) is expected to host emerging use cases that have a wide range of requirements: from Internet of Things (IoT) devices that prefer a low-overhead and scalable network, to remote machine operation or remote healthcare services that require reliable end-to-end communications. Improving scalability and reliability is among the most important challenges of designing the next-generation mobile architecture. The current (4G) mobile core network heavily relies on hardware-based proprietary components. The core networks are expensive and therefore are available in limited locations in the country. This leads to high end-to-end latency, due to the long latency between base stations and the mobile core, and limits innovation and the evolvability of the network. Moreover, at the protocol level, the current mobile network architecture was designed for a limited number of smartphones streaming a large amount of high-quality traffic, not a massive number of low-capability devices sending small and sporadic traffic. This results in high-overhead control and data planes in the mobile core network that are not suitable for a massive number of future IoT devices. In terms of reliability, network operators have already deployed multiple monitoring systems to detect service disruptions and fix problems when they occur. However, detecting all service disruptions is challenging. First, there is a complex relationship between the network status and user-perceived service experience. Second, service disruptions can happen for reasons that are beyond the network itself. With technology advancements in Software-Defined Networking (SDN) and Network Function Virtualization (NFV), the next-generation mobile network is expected to be NFV-based and deployed on NFV platforms.
However, in contrast to telecom-grade hardware with built-in redundancy, commodity off-the-shelf (COTS) hardware in NFV platforms often cannot compare in terms of reliability. The availability of telecom-grade mobile core network hardware is typically 99.999% (i.e., "five-9s" availability), while most NFV platforms only guarantee "three-9s" availability: orders of magnitude less reliable. Therefore, an NFV-based mobile core network needs extra mechanisms to guarantee its availability. This Ph.D. dissertation focuses on using SDN/NFV, data analytics, and distributed-systems techniques to enhance the scalability and reliability of the next-generation mobile core network. The dissertation makes the following contributions. First, it presents SMORE, a practical offloading architecture that reduces end-to-end latency and enables new functionalities in mobile networks. It then presents SIMECA, a lightweight and scalable mobile core network designed for a massive number of future IoT devices. Second, it presents ABSENCE, a passive service monitoring system that uses customer usage data and data analytics to detect silent failures in an operational mobile network. Lastly, it presents ECHO, a distributed mobile core network architecture that improves the availability of NFV-based mobile core networks in public clouds.
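The arithmetic behind the five-9s versus three-9s gap, and why replication is the standard remedy, can be made concrete (a sketch assuming independent failures and perfect failover, both strong assumptions that real systems like the ones the dissertation studies only approximate):

```python
# Availability of k active-active replicas on commodity hardware,
# where the service is up as long as at least one replica is up.

def replicated_availability(a_single, k):
    """Availability of k independent replicas (any one suffices)."""
    return 1 - (1 - a_single) ** k

three_nines = 0.999
print(replicated_availability(three_nines, 1))   # ~0.999  ("three-9s")
print(replicated_availability(three_nines, 2))   # ~0.999999 on paper
```

Two independent three-9s replicas nominally exceed five-9s, but correlated failures and imperfect failover erode this in practice, which is the gap the extra availability mechanisms mentioned in the abstract are meant to close.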

    Infrastructure sharing of 5G mobile core networks on an SDN/NFV platform

    When looking towards the deployment of 5G network architectures, mobile network operators will continue to face many challenges. With the number of customers approaching maximum market penetration, the number of devices per customer increasing, and the number of non-human-operated devices estimated to reach tens of billions, network operators have a formidable task ahead of them. The proliferation of cloud computing techniques has created a multitude of applications for network service deployments, and at the forefront is the adoption of Software-Defined Networking (SDN) and Network Functions Virtualisation (NFV). Mobile network operators (MNOs) have the opportunity to leverage these technologies so that they can enable the delivery of traditional networking functionality in cloud environments. The benefit of this is reductions in the capital and operational expenditures of network infrastructure. When adopting NFV, how a Virtualised Network Function (VNF) is designed, implemented, and placed over physical infrastructure can play a vital role in the performance metrics achieved by the network function. Not paying careful attention to this aspect could lead to drastically reduced performance of network functions, thus defeating the purpose of adopting virtualisation solutions. The success of mobile network operators in the 5G arena will depend heavily on their ability to shift from their old operational models and embrace new technologies, design principles and innovation in both the business and technical aspects of the environment. The primary goal of this thesis is to design, implement and evaluate the viability of the data centre and cloud network infrastructure sharing use case. More specifically, the core question addressed by this thesis is how virtualisation of network functions in a shared infrastructure environment can be achieved without adverse performance degradation.
5G should be operational with high penetration beyond the year 2020, with data traffic rates increasing exponentially and the number of connected devices expected to surpass tens of billions. Requirements for 5G mobile networks include higher flexibility, scalability, cost effectiveness and energy efficiency. Towards these goals, Software-Defined Networking (SDN) and Network Functions Virtualisation have been adopted in recent proposals for future mobile network architectures, because they are considered critical technologies for 5G. A Shared Infrastructure Management Framework was designed and implemented for this purpose. This framework was further enhanced for performance optimisation of network functions and the underlying physical infrastructure. The objective achieved was the identification of requirements for the design and development of an experimental testbed for future 5G mobile networks. This testbed deploys high-performance virtualised network functions (VNFs) while catering for the infrastructure sharing use case of multiple network operators. The management and orchestration of the VNFs allow automation, scalability, fault recovery, and security to be evaluated. The testbed developed is readily re-creatable and based on open-source software.

    Modelling the Costs of a Cloud-Based Radio Access Network (Pilvipohjaisen radioliityntäverkon kustannusten mallintaminen)

    The rapid growth of mobile data traffic is challenging the current way of building and operating radio access networks. The cloud-based radio access network is researched as a solution to provide the required capacity for rapidly growing traffic demand in a more economical manner. The scope of this thesis is to evaluate the costs of different existing and future radio access network architectures depending on the given network and traffic scenario. This is done by creating a cost model, based on expert interviews, to determine the most economical solution for the given network in terms of total cost of ownership. The results show that the cloud-based radio access network's cost benefits are dependent on the expected traffic growth. In the low-traffic-growth scenario, the cost benefits of the cloud-based radio access network are questionable, but in the high-traffic-growth scenario clear cost benefits are achieved.
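A minimal total-cost-of-ownership comparison in the spirit of this thesis (all figures are hypothetical placeholders, not values from the interview-based model): the cloud-based architecture trades higher up-front CAPEX for OPEX that grows more slowly with traffic.

```python
# Toy TCO model: OPEX scales with cumulative traffic growth by an
# architecture-specific factor; cloud RAN pools baseband processing,
# so its OPEX is assumed less sensitive to traffic than classic RAN's.

def tco(capex, opex_per_year, traffic_growth, opex_traffic_factor, years=5):
    """Total cost of ownership over `years` of annual traffic growth."""
    total = capex
    traffic = 1.0
    for _ in range(years):
        total += opex_per_year * (1 + opex_traffic_factor * (traffic - 1))
        traffic *= 1 + traffic_growth
    return total

for growth in (0.1, 0.5):
    classic = tco(capex=100, opex_per_year=30,
                  traffic_growth=growth, opex_traffic_factor=1.0)
    cloud = tco(capex=150, opex_per_year=25,
                traffic_growth=growth, opex_traffic_factor=0.4)
    print(f"growth {growth:.0%}: classic TCO {classic:.0f}, cloud TCO {cloud:.0f}")
```

Even with these made-up numbers, the qualitative result matches the abstract: at low traffic growth the cloud option never recoups its extra CAPEX within the horizon, while at high growth its flatter OPEX curve wins.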