Energy-Efficient Approach for Beamforming Design and BBUs Aggregation in Heterogeneous C-RAN
Heterogeneous cloud radio access network (C-RAN), which combines the benefits of C-RAN and multi-tier heterogeneous networks (HetNets), has emerged as a novel network solution for improving both spectrum and energy efficiency. In this work, we propose an energy-efficient approach that considers both RRH beamforming design and BBU aggregation in heterogeneous C-RAN. First, the problem is formulated as an optimization problem that maximizes a weighted energy-efficiency utility function subject to BBU processing capability, per-user quality-of-service (QoS) requirement, and per-RRH total transmit power constraints. The optimization problem is then decomposed into two subproblems. The first subproblem optimizes the transmit beamforming vectors; it is transformed into a weighted sum mean square error (MSE) minimization problem and solved using a weighted minimum mean square error (WMMSE) algorithm. The second subproblem is the BBU aggregation problem; it is cast as a bin packing problem, which we solve with an algorithm based on the Best-Fit-Decreasing method. To show the effectiveness of the proposed energy-efficient approach, we compare it with different algorithms presented in the literature.
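For concreteness, here is a minimal Python sketch of the Best-Fit-Decreasing heuristic used for the BBU aggregation step, assuming each BBU contributes a scalar processing load and all servers share a single capacity; the names `loads` and `capacity` and the example values are illustrative, not taken from the paper.

```python
def best_fit_decreasing(loads, capacity):
    """Pack BBU processing loads onto the fewest active servers (bins)."""
    bins = []        # remaining capacity of each active server
    assignment = {}  # BBU index -> server index
    # Sort loads in decreasing order (the "Decreasing" in BFD).
    for idx in sorted(range(len(loads)), key=lambda i: -loads[i]):
        load = loads[idx]
        # Choose the open server that fits the load most tightly
        # (the "Best Fit"); open a new server only if none fits.
        best = min((b for b in range(len(bins)) if bins[b] >= load),
                   key=lambda b: bins[b] - load, default=None)
        if best is None:
            bins.append(capacity - load)
            best = len(bins) - 1
        else:
            bins[best] -= load
        assignment[idx] = best
    return assignment, len(bins)

# Example: aggregating five BBU loads onto servers of capacity 10
# activates fewer servers, which is where the energy saving comes from.
assignment, n_servers = best_fit_decreasing([7, 5, 4, 3, 1], capacity=10)
```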
User Association With Mode Selection in LWA-Based Multi-RAT HetNet
Constrained by the limited network capacity of Long Term Evolution (LTE), WiFi is considered a promising solution for offloading traffic and enhancing LTE capacity. The prospect of exploiting both licensed and unlicensed spectrum motivated 3GPP to standardize LTE-wireless local area network (WLAN) aggregation (LWA) in Release 13. In this paper, we consider the user association problem in an LWA-based Multiple Radio Access Technologies (Multi-RAT) Heterogeneous Network (HetNet) in which three transmission modes are available (LTE, WiFi, and aggregation mode), so each user must select not only the wireless node to associate with but also the transmission mode to use. To this end, a new user association algorithm that performs joint node and mode selection is proposed. The association process is formulated as an optimization problem with the aim of maximizing total network throughput. To solve this problem, a one-to-many matching game-based association algorithm is designed, where each user is matched to the best transmission mode/node according to a utility function that considers the achieved data rate of each user as well as the proportional fairness among users. Simulation results show that our proposed algorithm outperforms comparable association techniques such as WLAN first, LTE first, and LTE-W in terms of system throughput, outage probability, and fairness between users.
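To illustrate the one-to-many matching, the following Python sketch runs a deferred-acceptance loop over (node, mode) pairs; the log-rate utility and the per-node quotas are assumptions standing in for the paper's exact utility function and acceptance rule.

```python
import math

def lwa_matching(rates, quotas):
    """One-to-many deferred-acceptance sketch for joint node/mode selection.

    rates[u]     : dict mapping (node, mode) -> achievable rate of user u,
                   where mode is "LTE", "WiFi" or "LWA" (aggregation)
    quotas[node] : number of users the node can accept
    """
    # Each user ranks (node, mode) pairs by rate (log utility is monotone).
    prefs = {u: sorted(r, key=lambda nm: -r[nm]) for u, r in rates.items()}
    accepted = {n: [] for n in quotas}   # node -> [(utility, user, mode)]
    free = list(rates)
    while free:
        u = free.pop()
        if not prefs[u]:
            continue                      # user exhausted all candidates
        node, mode = prefs[u].pop(0)      # propose to best remaining pair
        accepted[node].append((math.log(rates[u][(node, mode)]), u, mode))
        accepted[node].sort(key=lambda t: t[0], reverse=True)
        if len(accepted[node]) > quotas[node]:
            _, rejected, _ = accepted[node].pop()  # drop lowest utility
            free.append(rejected)                  # rejected user retries
    return {u: (n, m) for n, lst in accepted.items() for _, u, m in lst}
```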
Backhaul Effect on User Association in Cellular and WiFi Networks
The huge growth of data traffic has led to a continuous increase in capacity demands in mobile networks. A cost-effective approach is to offload data traffic to WiFi networks to enhance network capacity and improve system performance. In this paper, we study the effect of the network backhaul on the network selection decision, considering single- and multi-hop wireless backhaul transmission under different circumstances and different numbers of associated users. A widely adopted multiple attribute decision making (MADM) algorithm is the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). We use TOPSIS to rank the available networks according to several metrics, based on the user's priorities and requirements, thereby supporting an optimal user-network association that improves system performance while satisfying user requirements. The analytical results show the importance of accounting for the backhaul during the user association process and how the backhaul link affects the overall performance.
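TOPSIS itself is easy to make concrete. Below is a minimal NumPy sketch that ranks candidate networks by relative closeness to the ideal solution; the criteria, weights, and example scores are purely illustrative.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with TOPSIS.

    matrix  : (n_alternatives, n_criteria) raw scores
    weights : criteria weights summing to 1 (user priorities)
    benefit : True where higher is better (e.g. backhaul capacity),
              False where lower is better (e.g. hop count, delay)
    Returns closeness scores in [0, 1]; higher means a better network.
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion, then apply the weights.
    v = weights * m / np.linalg.norm(m, axis=0)
    # Ideal (best) and anti-ideal (worst) virtual alternatives.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)

# Example: three networks scored on (throughput, backhaul hops, load).
scores = topsis(matrix=[[50, 1, 0.3], [30, 3, 0.1], [80, 2, 0.7]],
                weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, False, False]))
print(scores.argsort()[::-1])  # networks ranked best-first
```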
Optimal Radio Access Network Selection in Multi-RAT HetNets Using Matching Game Approach
Due to the dramatic growth in mobile data traffic, Multiple Radio Access Technologies (Multi-RAT) Heterogeneous Networks (HetNets) have been proposed as a promising solution to cope with the high traffic demand in mobile networks. In this work, we propose a User Equipment (UE) radio access network selection algorithm for a Wireless Local Area Network (WLAN) and LTE Multi-RAT HetNet, where a matching game approach is applied. In this algorithm, UEs propose to their best candidate based on a utility function formulated to maximize their achieved downlink data rate, and base stations then accept or reject the proposals based on their own utility. The performance of the proposed approach is investigated and compared to other models, and simulation results demonstrate its superior performance.
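A single proposal round of such a matching game might look like the Python sketch below, where the UE-side utility is the Shannon rate and the BS-side utility simply prefers higher-rate proposers up to a quota; both are assumed simplifications of the paper's utilities, and the full algorithm would iterate rounds until no UE is rejected.

```python
import math

def shannon_rate(bandwidth_hz, sinr_linear):
    """Achievable downlink rate (bit/s): the UE-side utility."""
    return bandwidth_hz * math.log2(1 + sinr_linear)

def proposal_round(ue_sinr, bs_bandwidth, bs_quota):
    """One matching round: every unassociated UE proposes to the BS
    giving it the highest rate; each BS keeps its best proposers up to
    quota and rejects the rest (who re-propose in later rounds).

    ue_sinr[u]      : dict BS -> measured linear SINR of UE u
    bs_bandwidth[b] : bandwidth BS b grants per accepted UE (Hz)
    bs_quota[b]     : number of UEs BS b can serve
    """
    proposals = {b: [] for b in bs_bandwidth}
    for u, sinrs in ue_sinr.items():
        b = max(sinrs, key=lambda x: shannon_rate(bs_bandwidth[x], sinrs[x]))
        proposals[b].append((shannon_rate(bs_bandwidth[b], sinrs[b]), u))
    accepted, rejected = {}, []
    for b, props in proposals.items():
        props.sort(key=lambda t: t[0], reverse=True)  # BS prefers high rate
        for rank, (_, u) in enumerate(props):
            if rank < bs_quota[b]:
                accepted[u] = b
            else:
                rejected.append(u)
    return accepted, rejected
```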
Optimal Placement of Heterogeneous Wireless Nodes in LTE/WiFi Integrated Networks
Small cell deployment is considered one of the key solutions for increasing capacity and improving coverage to meet the growth of mobile data traffic. LTE-Wi-Fi Aggregation (LWA) is a heterogeneous-network technology that aggregates LTE small cells and Wi-Fi Access Points (APs) at the radio network level. In this work, we investigate the problem of determining the optimal placement of different types of heterogeneous wireless small nodes deployed to cover a hotspot zone. The placement is optimized with the objectives of (i) maximizing the total system capacity subject to a minimum Signal-to-Interference-plus-Noise Ratio (SINR) constraint, and (ii) choosing the optimal number of wireless nodes that guarantees coverage. The objective is formulated as a Mixed Integer Non-Linear Programming (MINLP) problem and solved using a genetic algorithm. The proposed optimal deployment is compared to a uniform distribution placement and shows a significant increase in system throughput.
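As a rough illustration of the genetic-algorithm step, the Python sketch below evolves candidate node placements; the fitness is a deliberately crude stand-in (maximizing the minimum inter-node spacing) for the paper's capacity objective with SINR constraints, and all constants are invented.

```python
import random

# A chromosome is a list of (x, y) node positions inside the hotspot.
AREA, N_NODES, POP, GENS = 100.0, 4, 30, 200

def random_solution():
    return [(random.uniform(0, AREA), random.uniform(0, AREA))
            for _ in range(N_NODES)]

def fitness(sol):
    # Toy objective: spread nodes apart as a crude proxy for capacity
    # and SINR; the real fitness would evaluate user rates.
    return min((ax - bx) ** 2 + (ay - by) ** 2
               for i, (ax, ay) in enumerate(sol)
               for bx, by in sol[i + 1:])

def crossover(a, b):
    cut = random.randrange(1, N_NODES)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(sol, rate=0.1):
    return [(random.uniform(0, AREA), random.uniform(0, AREA))
            if random.random() < rate else pos for pos in sol]

pop = [random_solution() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]              # truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]
best = max(pop, key=fitness)
```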
Joint trajectory and CoMP clustering optimization in UAV-assisted cellular systems: a coalition formation game approach
In this paper, the flexibility of unmanned aerial vehicles (UAVs), as well as the benefits of coordinated multi-point (CoMP) transmission, are exploited to mitigate interference in cellular networks. Specifically, the joint problem of CoMP clustering and UAV trajectories is addressed for downlink transmission in a UAV-assisted cellular system. The problem is formulated as a non-convex optimization problem that aims to maximize the sum rate of the ground users while taking into account the clustering, UAV mobility, and backhaul capacity constraints. Since the formulated problem is NP-hard, we partition it into two sub-problems. First, using coalitional game theory, the CoMP clusters are obtained for given UAV trajectories; then, the UAV trajectories are optimized for given CoMP clusters using a successive convex approximation technique. Following the block coordinate descent method, the two sub-problems are solved alternately until convergence. Numerical results demonstrate the effectiveness of the proposed algorithm.
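The coalition formation step can be sketched as a switch rule in which a player changes clusters only if the move strictly improves a global utility. In the Python sketch below, the utility is a toy balance score standing in for the users' sum rate under CoMP and backhaul constraints; in the full algorithm this step would alternate with the SCA-based trajectory update until the sum rate converges.

```python
import random
from collections import Counter

def coalition_formation(players, n_clusters, utility, max_sweeps=100):
    """Switch-rule coalition formation: each player moves to the cluster
    that most improves the global utility; stop at a stable partition."""
    assign = {p: random.randrange(n_clusters) for p in players}
    for _ in range(max_sweeps):
        moved = False
        for p in players:
            best = max(range(n_clusters),
                       key=lambda c: utility({**assign, p: c}))
            # Switch only on strict improvement, so the loop cannot cycle.
            if best != assign[p] and utility({**assign, p: best}) > utility(assign):
                assign[p] = best
                moved = True
        if not moved:
            break
    return assign

# Toy utility standing in for the sum rate: prefer balanced clusters.
def balance_utility(assign):
    return -sum(s * s for s in Counter(assign.values()).values())

clusters = coalition_formation(players=list(range(6)), n_clusters=3,
                               utility=balance_utility)
```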
Ambient Backscatter Communication System Empowered by Matching Game and Machine Learning for Enabling Massive IoT Over 6G HetNets
Ambient backscatter communication (ABC) is considered a promising paradigm for meeting the massive Internet of Things (IoT) requirements of 6G, which is expected to revolutionize our world. In this paper, a new multimode matching game and machine learning-based IoT ambient backscatter communication scheme is proposed to maximize the ABC system rate and capacity over an LTE and Wi-Fi multi-RAT heterogeneous network, thereby supporting green massive IoT communication in 6G. The proposed algorithm is designed to support different rate and capacity requirements for different massive Machine Type Communication (mMTC) use cases, such as sensor networks, smart grid, and agriculture, as well as low-data-rate Ultra Reliable Low Latency Communication (URLLC) use cases such as tactile interaction. The proposed optimization algorithm runs in two phases. The first phase is a matching game-based algorithm that selects the optimum association between the IoT tags and the primary users' (PU) downlink signals from a specific base station, maximizing the IoT tag rate while minimizing the resulting interference to the PUs. Each IoT tag can ride the PU downlink signal using one of three riding modes, according to the required ABC system rate and capacity: in mode 1, multiple IoT tags ride the whole set of PU downlink resource blocks; in mode 2, each IoT tag rides only one subcarrier of the PU downlink resource blocks; and in mode 3, multiple IoT tags can ride the same subcarrier of the PU downlink resource blocks. In addition, unmanned aerial vehicle (UAV) flying HetNodes equipped with LTE and Wi-Fi receivers are used as backscatter receivers for the IoT tags' uplink backscattered signals. The second optimization phase is therefore formulated to maximize the total sum rate of the ABC system by dividing its service area into clusters using an enhanced unsupervised k-means algorithm; the enhanced k-means algorithm also finds the optimum location of each cluster's serving UAV flying HetNode, maximizing the channel gains between the IoT tags and the serving UAV and thereby the total system rate. The system model was implemented in MATLAB, where simulations across various scenarios were conducted to assess the effectiveness of the proposed algorithm. Simulation results and performance analysis demonstrate that the proposed algorithm can support the required rate for most mMTC and low-data-rate URLLC IoT applications, with average IoT tag rates in the range of 15 Kbps to 115 Kbps, and outperforms the algorithm-free riding technique in the case of massive IoT applications. The proposed mode 2 (first enhanced mode) achieves the best performance in terms of average IoT tag rate and total system rate with the lowest interference to the primary system users, while mode 3 (second enhanced mode) improves the system capacity with the maximum IoT tag satisfaction ratio. The capacity and satisfaction ratio of the proposed mode 3 outperform those of mode 1 by 300% and 138%, respectively, and those of mode 2 by 2,000% and 420%, respectively. The proposed algorithm also reduces the average interference power to the PUs relative to the algorithm-free riding technique.
From these results, we conclude that the proposed algorithm supports different IoT applications and achieves the required data rates with minimal effect on the primary system, keeping the PUs' data rate within the required range compared to the algorithm-free riding technique, at the cost of higher time complexity.
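The clustering phase can be illustrated with plain k-means over IoT tag positions, hovering one UAV HetNode at each cluster centroid; the paper's enhanced, channel-gain-aware variant is not reproduced here, and the distance-based assignment below is an assumed proxy for channel gain.

```python
import numpy as np

def kmeans_uav_placement(tag_positions, n_uavs, iters=100, seed=0):
    """Plain k-means sketch: cluster IoT tag positions and place one
    UAV flying HetNode at each centroid."""
    rng = np.random.default_rng(seed)
    tags = np.asarray(tag_positions, dtype=float)
    # Initialize UAV positions at randomly chosen tags.
    uavs = tags[rng.choice(len(tags), n_uavs, replace=False)]
    for _ in range(iters):
        # Assign each tag to its nearest UAV (strongest expected gain
        # under a simple distance-based path-loss assumption).
        d = np.linalg.norm(tags[:, None, :] - uavs[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        # Move each UAV to the centroid of its assigned tags.
        new = np.array([tags[nearest == k].mean(axis=0)
                        if np.any(nearest == k) else uavs[k]
                        for k in range(n_uavs)])
        if np.allclose(new, uavs):
            break
        uavs = new
    return uavs, nearest
```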
Joint Q-Learning Based Resource Allocation and Multi-Numerology B5G Network Slicing Exploiting LWA Technology
The emergence of the sixth generation (6G) era has highlighted the importance of Network Slicing (NS) technology as a promising solution for catering to the diverse service requests of users. With a large number of devices issuing different service requests, each with its own goals and requirements, efficiently allocating Resource Blocks (RBs) to each network slice while meeting the desired Quality of Service (QoS) standards is a challenging task. Moreover, the majority of research efforts have concentrated on cellular technologies, leaving behind the potential benefits of utilizing unlicensed bands to alleviate traffic congestion and enhance the capacity of existing LTE networks. In this paper, we propose exploiting LTE-WLAN Aggregation (LWA) technology in a Multi-Radio Access Technology (Multi-RAT) Heterogeneous Network (HetNet) to solve the radio resource allocation problem based on Radio Access Network (RAN) slicing and the 5G New Radio (NR) scalable numerology technique. A joint optimization problem is formulated to find an efficient resource allocation ratio for each slice in each Base Station (BS) together with the optimum scalable numerology value, with the objective of maximizing users' satisfaction. To solve this problem, a novel three-stage framework is proposed: a channel-state-information-based pre-association stage, a Reinforcement Learning (RL) algorithm that finds the optimum slice resource ratio and scalable numerology, and finally a Regret Learning Algorithm (RLA) for the users' re-association phase. Furthermore, a comprehensive performance evaluation is conducted against different baseline approaches. The simulation results show that our proposed model improves users' satisfaction by deploying the proposed Multi-RAT HetNet architecture that leverages LWA technology.
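The RL stage can be illustrated with tabular Q-learning over a discrete action set of (slice resource ratio, numerology) pairs; the state space, the `env_step` simulator, and the toy reward below are assumptions, with the real reward being the users' satisfaction.

```python
import random
from collections import defaultdict

RATIOS = [0.2, 0.4, 0.6, 0.8]   # RB share granted to a slice (illustrative)
NUMEROLOGIES = [0, 1, 2, 3]     # 5G NR scalable numerology index mu
ACTIONS = [(r, mu) for r in RATIOS for mu in NUMEROLOGIES]

def q_learning(env_step, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning: env_step(state, action) -> (next_state, reward)
    is assumed to simulate one scheduling interval and return the
    resulting users' satisfaction as the reward."""
    Q = defaultdict(float)
    state = 0
    for _ in range(episodes):
        if random.random() < eps:                          # explore
            action = random.choice(ACTIONS)
        else:                                              # exploit
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = env_step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt
    return Q

# Toy environment: satisfaction peaks at a 0.6 ratio with numerology 1.
def toy_env(state, action):
    ratio, mu = action
    return 0, 1.0 - abs(ratio - 0.6) - 0.1 * abs(mu - 1)

Q = q_learning(toy_env)
```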