
    An Optimized Multi-Layer Resource Management in Mobile Edge Computing Networks: A Joint Computation Offloading and Caching Solution

    Nowadays, data caching is increasingly used as a high-speed data storage layer in mobile edge computing networks that employ flow control methodologies. This study shows how to find the best architecture for backhaul networks with caching capability using a distributed offloading technique. A continuous power flow analysis is used to obtain the optimal load constraints, where the power of macro base stations with various caching capacities is supplied either by a smart grid network or by renewable energy systems. This work proposes ubiquitous connectivity between users at the cell edge and offloading from the macro cells, so as to provide features the macro cell itself cannot deliver, such as extreme changes in the required user data rate and energy efficiency. The offloading framework is then reformulated as a neural weighted framework that accounts for the convergence and Lyapunov stability requirements of mobile edge computing under Karush-Kuhn-Tucker (KKT) optimization constraints in order to obtain accurate solutions. Cell-layer performance is analyzed both at the boundary and at the center of the cells. The analytical and simulation results show that the suggested method outperforms other energy-saving techniques. Also, compared to other solutions studied in the literature, the proposed approach achieves a two- to three-fold increase in both the throughput of cell-edge users and the aggregate throughput per cluster.
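    The Lyapunov-based offloading idea in this abstract can be illustrated with a minimal drift-plus-penalty sketch. Everything here (the candidate set, the quadratic power model, the weight V) is an assumption for illustration, not the paper's actual algorithm:

    ```python
    # Hedged sketch of a Lyapunov drift-plus-penalty offloading rule:
    # each slot, pick the offload fraction minimizing queue drift plus
    # a weighted energy penalty. Symbols are illustrative assumptions.
    def offload_decision(queue, arrival, candidates, power_cost, V=10.0):
        """Return the offload fraction minimizing drift + V * penalty."""
        best, best_val = None, float("inf")
        for x in candidates:                 # candidate offload fractions in [0, 1]
            service = x * queue              # load shipped to the edge this slot
            drift = queue * (arrival - service)   # Lyapunov queue-drift term
            penalty = V * power_cost(x)           # energy penalty, weight V
            val = drift + penalty
            if val < best_val:
                best, best_val = x, val
        return best
    ```

    With a large backlog, the drift term dominates and the rule offloads aggressively; raising V shifts the balance toward saving energy.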

    Advanced Technologies Enabling Unlicensed Spectrum Utilization in Cellular Networks

    With the rapid progress and growing popularity of Internet-based services, there is an increasing demand for high data rates in wireless communication systems. Unlicensed spectrum utilization in Long Term Evolution (LTE) networks is a promising technique to meet the massive traffic demand. There are two effective methods to use unlicensed bands for delivering LTE traffic. One is offloading LTE traffic to Wi-Fi. An alternative method is LTE-unlicensed (LTE-U), which aims to directly use LTE protocols and infrastructure over the unlicensed spectrum. It has also been pointed out that combining the above two methods could further improve system performance. However, avoiding severe performance degradation of the Wi-Fi network is a challenging issue when utilizing unlicensed spectrum in LTE networks. Specifically, first, inter-system spectrum sharing, or, more specifically, the coexistence of LTE and Wi-Fi in the same unlicensed spectrum, is the major challenge of implementing LTE-U. Second, to use the LTE and Wi-Fi integration approach, mobile operators have to manage two disparate networks in licensed and unlicensed spectrum. Third, optimization for joint data offloading to Wi-Fi and LTE-U in multi-cell scenarios poses more challenges because inter-cell interference must be addressed. This thesis focuses on solving problems related to these challenges. First, the effect of bursty traffic in an LTE and Wi-Fi aggregation (LWA)-enabled network has been investigated. To enhance resource efficiency, the Wi-Fi access point (AP) is designed to operate in both the native mode and the LWA mode simultaneously. Specifically, the LWA-mode Wi-Fi AP cooperates with the LTE base station (BS) to transmit bearers to the LWA user, which aggregates packets from both LTE and Wi-Fi. The native-mode Wi-Fi AP transmits Wi-Fi packets to those native Wi-Fi users without LWA capability. 
This thesis proposes a priority-based Wi-Fi transmission scheme with congestion control and studies the throughput of the native Wi-Fi network, as well as the LWA user delay when the native Wi-Fi user is under heavy traffic conditions. The results provide fundamental insights into the throughput and delay behavior of the considered network. Second, the above work has been extended to larger topologies. A stochastic geometry model has been used to model and analyze the performance of an MPTCP proxy-based LWA network with intra-tier and cross-tier dependence. Under the considered network model and the activation conditions of LWA-mode Wi-Fi, this thesis obtains three approximations for the density of active LWA-mode Wi-Fi APs through different approaches. Tractable analysis is provided for the downlink (DL) performance evaluation of large-scale LWA networks. The impact of different parameters on the network performance has been analyzed, validating the significant gain of using LWA in terms of boosted data rate and improved spectrum reuse. Third, this thesis also takes a significant step toward analyzing a joint multi-cell LTE-U and Wi-Fi network, while taking into account different LTE-U and Wi-Fi inter-working schemes. In particular, two technologies enabling data offloading from LTE to Wi-Fi are considered: LWA and Wi-Fi offloading in the context of the power gain-based user offloading scheme. The LTE cells in this work are subject to load coupling due to inter-cell interference. New system frameworks for maximizing the demand scaling factor for all users in both Wi-Fi and multi-cell LTE networks have been proposed. The potential of the networks to achieve optimal capacity with arbitrary topologies is explored, accounting for both resource limits and inter-cell interference. Theoretical analyses have been provided for the proposed optimization problems, resulting in algorithms that achieve global optimality. 
Numerical results show the algorithms' effectiveness and the benefits of jointly using data offloading and the direct use of LTE over the unlicensed band. All the derived results in this thesis have been validated by Monte Carlo simulations in MATLAB, and the conclusions drawn from the results can provide guidelines for future unlicensed spectrum utilization in LTE networks.
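The load coupling mentioned above, in which each LTE cell's load depends on the interference generated by every other cell's load, is commonly solved as a fixed point. The sketch below iterates the standard load-coupling mapping; the topology, gain matrix, and noise figures are stand-in assumptions, not values from the thesis:

```python
import math

# Hedged sketch of the classic LTE load-coupling fixed-point model:
# cell i's load is the resource share needed to meet user demands at
# SINRs that depend on the other cells' loads. Repeatedly applying
# the mapping drives the load vector toward its fixed point.
def cell_loads(demand, gain, power, noise, bandwidth, iters=100):
    n = len(power)                           # number of cells
    rho = [0.0] * n                          # start from zero load
    for _ in range(iters):
        new = []
        for i in range(n):
            load = 0.0
            for j, d in demand[i]:           # (user index, demand in bit/s)
                interf = sum(rho[k] * power[k] * gain[k][j]
                             for k in range(n) if k != i)
                sinr = power[i] * gain[i][j] / (interf + noise)
                load += d / (bandwidth * math.log2(1.0 + sinr))
            new.append(load)
        rho = new
    return rho
```

For a single isolated cell with unit SINR, a user demanding the full Shannon rate of the band yields a load of exactly 1, which serves as a quick sanity check on the mapping.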

    A review on green caching strategies for next generation communication networks

    © 2020 IEEE. In recent years, the ever-increasing demand for networking resources and energy, fueled by the unprecedented upsurge in Internet traffic, has been a cause for concern for many service providers. Content caching, which serves user requests locally, is deemed to be an enabling technology in addressing the challenges posed by the phenomenal growth in Internet traffic. Conventionally, content caching is considered a viable solution to alleviate backhaul pressure. However, recently, many studies have reported energy cost reductions achieved by content caching in cache-equipped networks. The hypothesis is that caching shortens the content delivery distance and thereby achieves a significant reduction in transmission energy consumption. This has motivated us to conduct this study, and in this article, a comprehensive survey of state-of-the-art green caching techniques is provided. This review paper extensively discusses the contributions of existing studies on green caching. In addition, the study explores different cache-equipped network types, solution methods, and application scenarios. We show that the optimal selection of caching nodes, smart resource management, popular content selection, and renewable energy integration can substantially improve the energy efficiency of cache-equipped systems. In addition, based on the comprehensive analysis, we highlight some potential research ideas relevant to green content caching.
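    The survey's central hypothesis, that serving popular content from a nearby cache cuts transmission energy, can be made concrete with a toy model. The Zipf popularity law and the per-hop energy figures below are illustrative assumptions, not numbers from the survey:

    ```python
    # Toy model: cache the most popular items under Zipf popularity,
    # then compare expected transmission energy with and without hits.
    def zipf_hit_ratio(catalog_size, cache_size, alpha=0.8):
        """Hit ratio when the cache stores the `cache_size` most popular items."""
        weights = [1.0 / (r ** alpha) for r in range(1, catalog_size + 1)]
        return sum(weights[:cache_size]) / sum(weights)

    def energy_per_request(hit_ratio, e_edge=1.0, e_core=5.0):
        """Expected energy: hits served locally, misses fetched via the core."""
        return hit_ratio * e_edge + (1.0 - hit_ratio) * e_core
    ```

    Because Zipf weights are heavily skewed, even a cache holding a small fraction of the catalog captures a disproportionate share of requests, which is exactly why popular content selection matters for energy efficiency.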

    Data-Driven Network Management for Next-Generation Wireless Networks

    With the commercialization and maturity of the fifth-generation (5G) wireless networks, the next-generation wireless network (NGWN) is envisioned to provide seamless connectivity for mobile user terminals (MUTs) and to support a wide range of new applications with stringent quality of service (QoS) requirements. In the NGWN, the network architecture will be highly heterogeneous due to the integration of terrestrial networks, satellite networks, and aerial networks formed by unmanned aerial vehicles (UAVs), and the network environment becomes highly dynamic because of the mobility of MUTs and the spatiotemporal variation of service demands. In order to provide high-quality services in such dynamic and heterogeneous networks, flexible, fine-grained, and adaptive network management will be essential. Recent advancements in deep learning (DL) and digital twins (DTs) have made it possible to enable data-driven solutions to support network management in the NGWN. DL methods can solve network management problems by leveraging data instead of explicit mathematical models, and DTs can facilitate DL methods by providing extensive data based on the full digital representations created for individual MUTs. Data-driven solutions that integrate DL and DT can address complicated network management problems and explore implicit network characteristics to adapt to dynamic network environments in the NGWN. 
    However, the design of data-driven network management solutions in the NGWN faces several technical challenges: 1) how the NGWN can be configured to support multiple services with different spatiotemporal service demands while simultaneously satisfying their different QoS requirements; 2) how multi-dimensional network resources can be proactively reserved to support MUTs with different mobility patterns in a resource-efficient manner; and 3) how the heterogeneous NGWN components, including base stations (BSs), satellites, and UAVs, can jointly coordinate their network resources to support dynamic service demands. In this thesis, we develop efficient data-driven network management strategies in two stages, i.e., long-term network planning and real-time network operation, to address the above challenges in the NGWN. Firstly, we investigate planning-stage network configuration to satisfy different service requirements for communication services. We consider a two-tier network with one macro BS and multiple small BSs, which supports communication services with different spatiotemporal data traffic distributions. The objective is to maximize the energy efficiency of BSs by jointly configuring downlink transmission power and communication coverage for each BS. To achieve this objective, we first design a network planning scheme with flexible binary slice zooming, dual time-scale planning, and grid-based network planning. The scheme allows flexibility to differentiate the communication coverage and downlink transmission power of the same BS for different services while improving the temporal and spatial granularity of network planning. We formulate a combinatorial optimization problem in which communication coverage management and power control are mutually dependent. 
To solve the problem, we propose a data-driven method with two steps: 1) we propose an unsupervised-learning-assisted approach to determine the communication coverage of BSs; and 2) we derive a closed-form solution for power control. Secondly, we investigate planning-stage resource reservation for a compute-intensive service to support MUTs with different mobility patterns. The MUTs can offload their computing tasks to the computing servers deployed at the core networks, gateways, and BSs. Each computing server requires both computing and storage resources to execute computing tasks. The objective is to optimize long-term resource reservation by jointly minimizing the usage of computing, storage, and communication resources and the cost from re-configuring resource reservation. To this end, we develop a data-driven network planning scheme with two elements, i.e., multi-resource reservation and resource reservation re-configuration. First, DTs are designed for collecting MUT status data, based on which MUTs are grouped according to their mobility patterns. Then, an optimization algorithm is proposed to customize resource reservation for different groups to satisfy their different resource demands. Last, a meta-learning-based approach is proposed to re-configure resource reservation for balancing the network resource usage and the re-configuration cost. Thirdly, we investigate operation-stage computing resource allocation in a space-air-ground integrated network (SAGIN). A UAV is deployed to fly around MUTs and collect their computing tasks, while scheduling the collected computing tasks to be processed at the UAV locally or offloaded to the nearby BSs or the remote satellite. The energy budget of the UAV, intermittent connectivity between the UAV and BSs, and dynamic computing task arrival pose challenges in computing task scheduling. 
    The objective is to design a real-time computing task scheduling policy for minimizing the delay of computing task offloading and processing in the SAGIN. To achieve the objective, we first formulate the online computing scheduling in the dynamic network environment as a constrained Markov decision process. Then, we develop a risk-sensitive reinforcement learning approach in which a risk value is used to represent energy consumption that exceeds the budget. By balancing the risk value and the reward from delay minimization, the UAV can explore the task scheduling policy to minimize task offloading and processing delay while satisfying the UAV energy constraint. Extensive simulations have been conducted to demonstrate that the proposed data-driven network management approach for the NGWN can achieve flexible BS configuration for multiple communication services, fine-grained multi-dimensional resource reservation for a compute-intensive service, and adaptive computing resource allocation in the dynamic SAGIN. The schemes developed in the thesis are valuable to data-driven network planning and operation in the NGWN.
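    The risk-sensitive idea described above, penalizing energy spending beyond the budget while rewarding low delay, can be sketched as a single Q-learning update. The state/action encoding, the linear risk term, and all parameter values are assumptions for illustration, not the thesis's model:

    ```python
    # Hedged sketch of a risk-sensitive Q-update for task scheduling:
    # the reward is negative delay minus a risk term that activates
    # only when energy consumption exceeds the budget.
    def risk_sensitive_q_update(Q, s, a, delay, energy, budget, s_next,
                                actions, beta=2.0, alpha=0.1, gamma=0.9):
        risk = beta * max(0.0, energy - budget)    # penalize over-budget energy
        reward = -delay - risk                     # balance delay and risk
        best_next = max(Q.get((s_next, b), 0.0) for b in actions)
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
            reward + gamma * best_next - Q[(s, a)] if (s, a) in Q
            else reward + gamma * best_next)
        return Q[(s, a)]
    ```

    Raising beta makes the scheduler more conservative about the energy budget at the cost of longer delays, which mirrors the risk/reward balance the thesis describes.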

    A Comprehensive Survey on Resource Allocation for CRAN in 5G and Beyond Networks

    The diverse service requirements coming with the advent of sophisticated applications, as well as a large number of connected devices, demand revolutionary changes in the traditional distributed radio access network (RAN). To this end, Cloud-RAN (CRAN) is considered an important paradigm to enhance the performance of the upcoming fifth generation (5G) and beyond wireless networks in terms of capacity, latency, and connectivity to a large number of devices. Out of several potential enablers, efficient resource allocation can mitigate various challenges related to user assignment, power allocation, and spectrum management in a CRAN, and is the focus of this paper. Herein, we provide a comprehensive review of resource allocation schemes in a CRAN along with a detailed optimization taxonomy on various aspects of resource allocation. More importantly, we identify and discuss the key elements for efficient resource allocation and management in CRAN, namely: user assignment, remote radio head (RRH) selection, throughput maximization, spectrum management, network utility, and power allocation. Furthermore, we present emerging use cases including heterogeneous CRAN, millimeter-wave CRAN, virtualized CRAN, Non-Orthogonal Multiple Access (NOMA)-based CRAN, and full-duplex-enabled CRAN to illustrate how their performance can be enhanced by adopting CRAN technology. We then classify and discuss objectives and constraints involved in CRAN-based 5G and beyond networks. Moreover, a detailed taxonomy of optimization methods and solution approaches with different objectives is presented and discussed. Finally, we conclude the paper with several open research issues and future directions.

    A Comprehensive Survey of the Tactile Internet: State of the art and Research Directions

    The Internet has made several giant leaps over the years, from a fixed to a mobile Internet, then to the Internet of Things, and now to a Tactile Internet. The Tactile Internet goes far beyond data, audio, and video delivery over fixed and mobile networks, and even beyond allowing communication and collaboration among things. It is expected to enable haptic communication and allow skill-set delivery over networks. Some examples of potential applications are tele-surgery, vehicle fleets, augmented reality, and industrial process automation. Several papers already cover many of the Tactile Internet-related concepts and technologies, such as haptic codecs, applications, and supporting technologies. However, none of them offers a comprehensive survey of the Tactile Internet, including its architectures and algorithms. Furthermore, none of them provides a systematic and critical review of the existing solutions. To address these lacunae, we provide a comprehensive survey of the architectures and algorithms proposed to date for the Tactile Internet. In addition, we critically review them using a well-defined set of requirements and discuss some of the lessons learned as well as the most promising research directions.