
    Advanced Technologies Enabling Unlicensed Spectrum Utilization in Cellular Networks

    With the rapid progress of Internet-based services and the pleasant user experience they provide, there is an increasing demand for high data rates in wireless communication systems. Unlicensed spectrum utilization in Long Term Evolution (LTE) networks is a promising technique to meet the massive traffic demand. There are two effective methods to use unlicensed bands for delivering LTE traffic. One is offloading LTE traffic to Wi-Fi. An alternative method is LTE-unlicensed (LTE-U), which aims to use LTE protocols and infrastructure directly over the unlicensed spectrum. It has also been pointed out that combining the two methods could further improve system performance. However, avoiding severe performance degradation of the Wi-Fi network is a challenging issue when utilizing unlicensed spectrum in LTE networks. Specifically, first, inter-system spectrum sharing, or, more specifically, the coexistence of LTE and Wi-Fi in the same unlicensed spectrum, is the major challenge of implementing LTE-U. Second, to use the LTE and Wi-Fi integration approach, mobile operators have to manage two disparate networks in licensed and unlicensed spectrum. Third, optimizing joint data offloading to Wi-Fi and LTE-U in multi-cell scenarios poses further challenges because inter-cell interference must be addressed. This thesis focuses on solving problems related to these challenges. First, the effect of bursty traffic in an LTE and Wi-Fi aggregation (LWA)-enabled network is investigated. To enhance resource efficiency, the Wi-Fi access point (AP) is designed to operate in both the native mode and the LWA mode simultaneously. Specifically, the LWA-mode Wi-Fi AP cooperates with the LTE base station (BS) to transmit bearers to the LWA user, which aggregates packets from both LTE and Wi-Fi. The native-mode Wi-Fi AP transmits Wi-Fi packets to those native Wi-Fi users without LWA capability.
This thesis proposes a priority-based Wi-Fi transmission scheme with congestion control and studies the throughput of the native Wi-Fi network, as well as the LWA user delay when the native Wi-Fi user is under heavy traffic conditions. The results provide fundamental insights into the throughput and delay behavior of the considered network. Second, the above work is extended to larger topologies. A stochastic geometry model is used to model and analyze the performance of an MPTCP proxy-based LWA network with intra-tier and cross-tier dependence. Under the considered network model and the activation conditions of LWA-mode Wi-Fi, this thesis obtains three approximations for the density of active LWA-mode Wi-Fi APs through different approaches. Tractable analysis is provided for the downlink (DL) performance evaluation of large-scale LWA networks. The impact of different parameters on the network performance is analyzed, validating the significant gain of using LWA in terms of boosted data rate and improved spectrum reuse. Third, this thesis takes a significant step toward analyzing joint multi-cell LTE-U and Wi-Fi networks, while taking into account different LTE-U and Wi-Fi inter-working schemes. In particular, two technologies enabling data offloading from LTE to Wi-Fi are considered: LWA and Wi-Fi offloading in the context of the power gain-based user offloading scheme. The LTE cells in this work are subject to load coupling due to inter-cell interference. New system frameworks are proposed for maximizing the demand scaling factor for all users in both Wi-Fi and multi-cell LTE networks. The potential of networks with arbitrary topologies to achieve optimal capacity is explored, accounting for both resource limits and inter-cell interference. Theoretical analyses of the formulated optimization problems yield algorithms that achieve global optimality.
Numerical results show the algorithms' effectiveness and the benefits of the joint use of data offloading and the direct use of LTE over the unlicensed band. All the derived results in this thesis have been validated by Monte Carlo simulations in Matlab, and the conclusions drawn from the results can provide guidelines for future unlicensed spectrum utilization in LTE networks.
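The load-coupling model behind the multi-cell analysis can be sketched as a fixed-point iteration: each cell's load depends on the rates its users receive, which in turn depend on interference scaled by the other cells' loads. A minimal illustrative sketch (the gains, demands, bandwidth, and noise figure below are hypothetical values, not the thesis's actual parameters):

```python
import math

def cell_loads(g_serv, g_intf, demand, noise=1e-9, bw=1e7, iters=100):
    """Fixed-point iteration for load-coupled cells.
    g_serv[i]:    received power from cell i at its own user
    g_intf[i][k]: received power from interfering cell k at cell i's user
    demand[i]:    required rate (bit/s) in cell i
    Returns the per-cell loads (fraction of resources used, capped at 1)."""
    n = len(demand)
    loads = [0.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            # interference from cell k is scaled by its current load
            intf = sum(loads[k] * g_intf[i][k] for k in range(n) if k != i)
            rate = bw * math.log2(1.0 + g_serv[i] / (intf + noise))
            new.append(min(demand[i] / rate, 1.0))
        loads = new
    return loads
```

With a symmetric two-cell toy instance, the iteration converges to equal loads strictly between 0 and 1, which is the kind of operating point the demand-scaling optimization searches over.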

    LTE and Wi-Fi Coexistence in Unlicensed Spectrum with Application to Smart Grid: A Review

    Long Term Evolution (LTE) is expanding its utilization of the unlicensed band by deploying LTE-Unlicensed (LTE-U) and Licensed-Assisted Access LTE (LTE-LAA) technology. The Smart Grid can take advantage of unlicensed bands to achieve two-way communication between smart meters and utility data centers by using LTE-U/LTE-LAA. However, both schemes must coexist with the incumbent Wi-Fi system. In this paper, several coexistence schemes of Wi-Fi and LTE technology are comprehensively reviewed. The challenges of deploying LTE and Wi-Fi in the same band are clearly addressed based on the papers reviewed. Solution procedures and techniques to resolve the challenging issues are briefly discussed. The performance of various network architectures, such as listen-before-talk (LBT)-based LTE and carrier sense multiple access with collision avoidance (CSMA/CA)-based Wi-Fi, is briefly compared. Finally, an attempt is made to apply these proposed LTE-Wi-Fi models to smart grid technology. Comment: submitted in 2018 IEEE PES T&
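The LBT contention compared in such reviews can be illustrated with a short sketch of a Cat-4-style random-backoff procedure (the slot model, the `channel_busy` stub, and the wait cap are simplifications for illustration, not a standards-accurate implementation):

```python
import random

def listen_before_talk(channel_busy, max_backoff=15, max_wait=10_000):
    """Simplified Cat-4-style LBT sketch: draw a random backoff and
    count it down, decrementing only on idle CCA slots; on busy slots
    the counter freezes. channel_busy() stands in for an energy-
    detection measurement against the ED threshold. Returns the number
    of slots waited before transmitting (or None if the cap is hit)."""
    backoff = random.randint(0, max_backoff)
    for slot in range(max_wait):
        if backoff == 0:
            return slot  # counter exhausted: transmit now
        if not channel_busy():
            backoff -= 1  # idle slot: counter decrements
        # busy slot: counter freezes until the channel clears
    return None
```

The freeze-on-busy behavior is what makes LBT-based LTE defer to ongoing Wi-Fi transmissions, much like CSMA/CA's own backoff.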

    SDN-assisted efficient LTE-WiFi aggregation in next generation IoT networks

    Currently, the demand from user terminals has surged drastically, pulling global data traffic up along with it. According to 3GPP, offloading is one of the most beneficial options to handle this critical traffic bottleneck; however, Long Term Evolution (LTE) and Wireless Local Area Network (WLAN) are only loosely coupled. To relieve the User Equipment (UE) of latency issues during offloading and to integrate the LTE and WLAN radio networks more tightly, 3GPP introduced LTE-WLAN Aggregation (LWA), which is apparently well suited to Internet of Things (IoT) devices. However, LWA is not suitable for high-mobility scenarios, as the UE's information needs to be updated for every new environment because of frequent aggregation triggers, which are mostly non-optimal and demand a high-level controller. To resolve the disadvantage of non-optimal aggregation triggers, in this paper we propose a Software Defined Networking (SDN)-based approach for LWA, named LWA under SDN Assistance (LWA-SA). In this approach, the SDN controller initiates aggregation appropriately between LTE and an optimal WLAN Access Point (AP), which avoids frequent reconnections and degraded services. As multiple parameters are required for the selection of an optimal WLAN AP, we use a Genetic Algorithm (GA) that considers each parameter as a fitness value for the selection of the optimal WLAN AP. This maximizes the throughput of the UE and reduces the traffic pressure on the licensed spectrum. Further, a mathematical model is formulated that uses the Karush-Kuhn-Tucker (KKT) conditions to find the maximum attainable throughput of a UE. Using NS-3, we compared our approach with offloading scenarios and LWA. The simulation results clearly show that LWA-SA outperforms the existing schemes and achieves higher throughput.
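The GA-based AP selection can be illustrated with a toy sketch in which a chromosome assigns each UE to a WLAN AP and a caller-supplied fitness function scores an assignment. The operators below and the balance-based fitness in the usage example are illustrative assumptions, not the paper's exact formulation:

```python
import random

def ga_ap_selection(fitness, n_ues, n_aps, pop=30, gens=50, pmut=0.1):
    """Toy GA for UE->AP assignment. A chromosome is a list giving
    each UE an AP index; fitness(assign) returns a score to maximize
    (in the paper it would combine per-AP parameters such as signal
    strength and load). Elitist: the best half survives each round."""
    random.seed(1)  # deterministic for the sketch
    popn = [[random.randrange(n_aps) for _ in range(n_ues)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        survivors = popn[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_ues) if n_ues > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < pmut:          # mutate one gene
                child[random.randrange(n_ues)] = random.randrange(n_aps)
            children.append(child)
        popn = survivors + children
    return max(popn, key=fitness)
```

For example, with a fitness that penalizes load imbalance between two APs, the GA converges to a balanced assignment of the UEs.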

    An Innovative RAN Architecture for Emerging Heterogeneous Networks: The Road to the 5G Era

    The global demand for mobile-broadband data services has experienced phenomenal growth over the last few years, driven by the rapid proliferation of smart devices such as smartphones and tablets. This growth is expected to continue unabated, as mobile data traffic is predicted to grow anywhere from 20 to 50 times over the next 5 years. Exacerbating the problem is that such an unprecedented surge in smartphone usage, which is characterized by frequent short on/off connections and mobility, generates heavy signaling traffic load in the network (signaling storms). This consumes a disproportionate amount of network resources, compromising network throughput and efficiency, and in extreme cases can cause Third-Generation (3G) or 4G (Long-Term Evolution (LTE) and LTE-Advanced (LTE-A)) cellular networks to crash. As the conventional approaches of improving spectral efficiency and/or allocating additional spectrum are fast approaching their theoretical limits, there is a growing consensus that current 3G and 4G (LTE/LTE-A) cellular radio access technologies (RATs) won't be able to meet the anticipated growth in mobile traffic demand. To address these challenges, the wireless industry and standardization bodies have initiated a roadmap for the transition from 4G to 5G cellular technology, with a key objective to increase capacity by 1000x by 2020. Even though the technology hasn't been invented yet, the hype around 5G networks has begun to bubble. The emerging consensus is that 5G is not a single technology, but rather a synergistic collection of interworking technical innovations and solutions that collectively address the challenge of traffic growth. The core emerging ingredients that are widely considered the key enabling technologies to realize the envisioned 5G era, listed in order of importance, are: 1) heterogeneous networks (HetNets); 2) flexible backhauling; 3) efficient traffic offload techniques; and 4) Self-Organizing Networks (SONs).
The anticipated solutions delivered by efficient interworking/integration of these enabling technologies are not simply about throwing more resources and/or spectrum at the challenge. The envisioned solution, rather, requires radically different cellular RAN and mobile core architectures that efficiently and cost-effectively deploy and manage radio resources as well as offload mobile traffic from the overloaded core network. The main objective of this thesis is to address the key techno-economic challenges facing the transition from current Fourth-Generation (4G) cellular technology to the 5G era, in the context of proposing a novel high-risk, revolutionary direction for the design and implementation of the envisioned 5G cellular networks. The ultimate goal is to explore the potential and viability of cost-effectively meeting the 1000x capacity challenge while continuing to provide an adequate mobile broadband experience to users. Specifically, this work proposes and devises a novel PON-based HetNet mobile backhaul RAN architecture that: 1) holistically addresses the key techno-economic hurdles facing the implementation of the envisioned 5G cellular technology, specifically the backhauling and signaling challenges; and 2) enables, for the first time to the best of our knowledge, the support of efficient, ground-breaking mobile data and signaling offload techniques, which significantly enhance the performance of both the HetNet-based RAN and LTE-A's core network (Evolved Packet Core (EPC) per the 3GPP standard), ensure that core network equipment is used more productively, and moderate the evolving 5G signaling growth and optimize its impact. To address the backhauling challenge, we propose a cost-effective fiber-based small cell backhaul infrastructure, which leverages the existing fibered and powered facilities associated with a PON-based fiber-to-the-Node/Home (FTTN/FTTH) residential access network.
Owing to the sharing of existing valuable fiber assets, the proposed PON-based backhaul architecture, in which the small cells are collocated with existing FTTN remote terminals (optical network units (ONUs)), is much more economical than conventional point-to-point (PTP) fiber backhaul designs. A fully distributed ring-based EPON architecture is utilized here as the fiber-based HetNet backhaul. The techno-economic merits of the proposed PON-based FTTx access HetNet RAN architecture versus the traditional 4G LTE-A RAN are thoroughly examined and quantified. Specifically, we quantify the techno-economic merits of the proposed PON-based HetNet backhaul by comparing its performance against that of a conventional fiber-based PTP backhaul architecture as a benchmark. It is shown that the purposely selected ring-based PON architecture, along with the supporting distributed control plane, enables the proposed PON-based FTTx RAN architecture to support several salient networking features that collectively and significantly enhance the overall performance of both the HetNet-based RAN and the 4G LTE-A core (EPC) compared to the typical fiber-based PTP backhaul architecture, in terms of handoff capability, signaling overhead, overall network throughput and latency, and QoS support. It is also shown that the proposed HetNet-based RAN architecture is not only capable of providing the typical macro-cell offloading gain (RAN gain) but can also provide a ground-breaking EPC offloading gain. The simulation results indicate that the overall capacity of the proposed HetNet scales with the number of deployed small cells, thanks to LTE-A's advanced interference management techniques. For example, if 10 outdoor small cells are deployed for every macrocell in the network, the overall capacity shows approximately a 10-11x gain over a macro-only network.
To reach the 1000x capacity goal, numerous small cells, including 3G, 4G, and WiFi (femtos, picos, metros, relays, remote radio heads, and distributed antenna systems), need to be deployed indoors and outdoors, at all possible venues (residences and enterprises).
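The quoted ~10-11x figure follows from simple linear spectrum-reuse arithmetic; a back-of-envelope sketch (the reuse-effectiveness factor modeling residual inter-cell interference is an illustrative assumption):

```python
def hetnet_capacity_gain(n_small_cells, reuse_eff=1.0):
    """Each small cell reuses the macrocell's spectrum, so aggregate
    capacity grows roughly linearly with cell count; reuse_eff < 1
    discounts capacity lost to residual inter-cell interference."""
    return 1 + n_small_cells * reuse_eff

# 10 small cells per macrocell with near-ideal reuse lands in the 10-11x range
```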

    Towards More Efficient 5G Networks via Dynamic Traffic Scheduling

    Department of Electrical Engineering

    5G communications adopt various advanced technologies, such as mobile edge computing and unlicensed-band operation, to meet the goals of 5G services such as enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low-Latency Communications (URLLC). Specifically, by placing cloud resources at the edge of the radio access network, the so-called mobile edge cloud, mobile devices can be served with lower latency compared to traditional remote-cloud-based services. In addition, by utilizing unlicensed spectrum, 5G can mitigate the scarcity of spectrum resources and thereby realize higher-throughput services. To enhance user-experienced service quality, however, the aforementioned approaches should be fine-tuned by considering various network performance metrics together. For instance, the mechanisms for mobile edge computing, e.g., computation offloading to the edge cloud, should not be optimized from the perspective of a single metric like latency, since actual user satisfaction comes from multi-domain factors including latency, throughput, monetary cost, etc. Moreover, blindly combining unlicensed spectrum resources with licensed ones does not always guarantee performance enhancement, since it is crucial for unlicensed-band operation to achieve peaceful yet efficient coexistence with other competing technologies (e.g., Wi-Fi). This dissertation proposes a focused resource management framework for more efficient 5G network operation, as follows. First, Quality-of-Experience (QoE) is adopted to quantify user satisfaction in mobile edge computing, and an optimal transmission scheduling algorithm is derived to maximize user QoE in computation offloading scenarios. Next, regarding unlicensed-band operation, two efficient mechanisms are introduced to improve the coexistence performance between LTE-LAA and Wi-Fi networks.
In particular, we develop a dynamic energy-detection thresholding algorithm for LTE-LAA so that LTE-LAA devices can detect Wi-Fi frames in a lightweight way. In addition, we propose an AI-based network configuration for an LTE-LAA network, with which an LTE-LAA operator can fine-tune its coexistence parameters (e.g., the CCA threshold) to better protect coexisting Wi-Fi while achieving better performance than the legacy LTE-LAA in the standards. Via extensive evaluations using computer simulations and a USRP-based testbed, we have verified that the proposed framework can enhance the efficiency of 5G.
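The dynamic energy-detection idea can be sketched as a simple feedback controller on the ED threshold. The target collision rate, step size, and bounds below are illustrative values, not the dissertation's actual algorithm; -62 dBm is the regulatory ED level for LAA and -82 dBm the Wi-Fi preamble-detection level, used here only as plausible clamps:

```python
def adapt_ed_threshold(collision_rate, threshold_dbm,
                       target=0.1, step=2.0,
                       floor=-82.0, ceiling=-62.0):
    """One control step for a dynamic ED threshold (illustrative).
    Lowering the threshold makes LAA sensitive to weaker Wi-Fi frames,
    so we lower it when collisions with Wi-Fi exceed the target and
    relax it otherwise to reclaim airtime."""
    if collision_rate > target:
        threshold_dbm -= step   # more conservative: detect weaker Wi-Fi
    else:
        threshold_dbm += step   # channel looks clean: reclaim airtime
    return max(floor, min(ceiling, threshold_dbm))
```

Run once per measurement window, the controller settles at the most permissive threshold that still keeps Wi-Fi collisions under the target.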

    A Comprehensive Survey of the Tactile Internet: State of the art and Research Directions

    The Internet has made several giant leaps over the years, from a fixed to a mobile Internet, then to the Internet of Things, and now to a Tactile Internet. The Tactile Internet goes far beyond data, audio and video delivery over fixed and mobile networks, and even beyond allowing communication and collaboration among things. It is expected to enable haptic communication and allow skill-set delivery over networks. Some examples of potential applications are tele-surgery, vehicle fleets, augmented reality and industrial process automation. Several papers already cover many of the Tactile Internet-related concepts and technologies, such as haptic codecs, applications, and supporting technologies. However, none of them offers a comprehensive survey of the Tactile Internet, including its architectures and algorithms. Furthermore, none of them provides a systematic and critical review of the existing solutions. To address these lacunae, we provide a comprehensive survey of the architectures and algorithms proposed to date for the Tactile Internet. In addition, we critically review them using a well-defined set of requirements and discuss some of the lessons learned as well as the most promising research directions.