Energy-efficient resource allocation in limited fronthaul capacity cloud-radio access networks
In recent years, cloud radio access networks (C-RANs) have emerged as a formidable candidate technology for addressing the challenges posed by the advent of fifth-generation (5G) mobile networks. In C-RANs, the modules that process data and handle radio signals are physically separated into two main functional groups: the baseband unit (BBU) pool, consisting of multiple BBUs in the cloud, and the radio access network, consisting of several low-power remote radio heads (RRHs) whose functionality is reduced to radio transmission and reception. Thanks to the centralized computation capability of cloud computing, C-RANs enable coordination among RRHs, significantly improving the achievable spectral efficiency to satisfy the explosive traffic demand from users. More importantly, this enhanced performance can be attained while operating in a power-saving mode, which gives rise to the energy-efficient C-RAN perspective. Note that such improvement is achievable under an ideal fronthaul condition of very high and stable capacity. In practice, however, the dedicated fronthaul links must be divided to connect a large number of RRHs to the cloud, leading to a non-ideal, limited fronthaul capacity for each RRH. This imposes an upper bound on each user's spectral efficiency, which limits the promised gains of C-RANs. To fully harness energy-efficient C-RANs while respecting their stringent fronthaul capacity limits, a more appropriate and efficient network design is essential.
The main scope of this thesis is to optimize the green performance of C-RANs in terms of energy efficiency under non-ideal fronthaul capacity conditions, namely energy-efficient design in limited fronthaul capacity C-RANs. By jointly determining the transmit beamforming, RRH selection, and RRH–user association, our study targets the following three vital design issues: the optimal trade-off between maximizing the achievable sum rate and minimizing the total power consumption; the maximum energy efficiency under an adaptive rate-dependent power model; and the optimal joint energy-efficient design of virtual computing along with radio resource allocation in virtualized C-RANs. The significant contributions and novelties of this work are elaborated in the following.
Firstly, Chapter 3 presents the joint design of transmit beamforming, RRH selection, and RRH–user association to optimize the trade-off between user sum rate maximization and total power consumption minimization in the downlink of C-RANs. We develop one powerful high-complexity algorithm and two novel, efficient low-complexity algorithms to obtain, respectively, a globally optimal solution and high-quality sub-optimal solutions. The findings in this chapter show that the proposed algorithms not only overcome the burden of solving difficult non-convex problems in polynomial time, but also outperform techniques in the literature in terms of convergence and achieved network performance.
Secondly, Chapter 4 proposes a novel model reflecting the dependence of consumed power on the user data rate and highlights its impact through various energy-efficiency metrics in C-RANs. The dominant performance of the results from Chapter 4, compared to conventional work without an adaptive rate-dependent power model, corroborates the importance of the newly proposed model in appropriately conserving system power to achieve the most energy-efficient C-RAN performance.
Finally, in Chapter 5 we propose a novel model of the cloud center that enables the virtualization and adaptive allocation of computing resources according to data traffic demand in order to conserve more power. We consider the problem of jointly designing the virtual computing resources together with the beamforming, RRH selection, and RRH–user association so as to maximize the virtualized C-RAN energy efficiency. To cope with the huge size of the formulated optimization problem, a novel, efficient algorithm with much lower complexity than previous work is developed to obtain the solution. The results from different evaluations demonstrate the superiority of the proposed designs over conventional work.
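The rate-power trade-off central to this thesis can be illustrated with a toy scalarization. The model below is entirely our own assumption (the names `RRH_POWER`, `sum_rate`, and the weight `eta` are illustrative, not the thesis's actual formulation): a weight trades achievable sum rate against total power, and turning RRHs off saves power at the cost of rate.

```python
import itertools
import math

RRH_POWER = 5.0  # watts consumed per active RRH (assumed toy value)

def sum_rate(active_rrhs):
    # Toy concave rate model: diminishing returns as more RRHs coordinate.
    return 10.0 * math.log2(1 + len(active_rrhs))

def tradeoff_objective(active_rrhs, eta):
    # Scalarized objective: sum rate minus weighted total power.
    return sum_rate(active_rrhs) - eta * RRH_POWER * len(active_rrhs)

# Exhaustive search over RRH subsets (tractable only at toy scale; the
# thesis develops low-complexity algorithms precisely to avoid this).
rrhs = range(4)
subsets = [s for r in range(len(rrhs) + 1) for s in itertools.combinations(rrhs, r)]
best = max(subsets, key=lambda s: tradeoff_objective(s, eta=1.0))
# With eta = 1.0 the objective peaks at two active RRHs.
```

Sweeping `eta` traces the trade-off curve: small weights favor activating every RRH for rate, large weights favor switching RRHs off for power.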
Network-Aided Intelligent Traffic Steering in 6G O-RAN: A Multi-Layer Optimization Framework
To enable an intelligent, programmable and multi-vendor radio access network
(RAN) for 6G networks, considerable efforts have been made in standardization
and development of open RAN (O-RAN). So far, however, the applicability of
O-RAN in controlling and optimizing RAN functions has not been widely
investigated. In this paper, we jointly optimize the flow-split distribution,
congestion control and scheduling (JFCS) to enable an intelligent traffic
steering application in O-RAN. Combining tools from network utility
maximization and stochastic optimization, we introduce a multi-layer
optimization framework that provides fast convergence, long-term
utility-optimality and significant delay reduction compared to the
state-of-the-art and baseline RAN approaches. Our main contributions are
three-fold: i) we propose the novel JFCS framework to efficiently and
adaptively direct traffic to appropriate radio units; ii) we develop
low-complexity algorithms based on reinforcement learning, inner
approximation and bisection search methods to effectively solve the JFCS
problem at different time scales; and iii) rigorous theoretical performance
results are analyzed to show that there exists a scaling factor to improve the
tradeoff between delay and utility-optimization. Collectively, the insights in
this work will open the door towards fully automated networks with enhanced
control and flexibility. Numerical results are provided to demonstrate the
effectiveness of the proposed algorithms in terms of the convergence rate,
long-term utility-optimality and delay reduction.
Comment: 15 pages, 10 figures. A short version will be submitted to IEEE GLOBECOM 202
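Among the solution tools this abstract names is bisection search. A minimal sketch of how bisection typically appears in network utility maximization (our assumed form, not the paper's exact procedure): with a logarithmic utility U(x) = log x, find the send rate where the marginal utility 1/x equals a congestion price q.

```python
def bisect_rate(price, lo=1e-6, hi=1e6, tol=1e-9):
    """Solve 1/x = price for x on [lo, hi] by bisection.

    1/x is strictly decreasing, so the root is unique on the interval.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 1.0 / mid > price:
            lo = mid  # marginal utility still above the price: rate can grow
        else:
            hi = mid  # marginal utility below the price: rate must shrink
    return 0.5 * (lo + hi)

rate = bisect_rate(price=0.25)  # analytic answer: x = 1/price = 4
```

The same monotonicity argument is what makes bisection attractive at fast time scales: each iteration halves the interval, so convergence to tolerance `tol` takes a logarithmic number of steps.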
Resource management with adaptive capacity in C-RAN
This work was supported in part by the Spanish ministry of science through the project RTI2018-099880-B-C32, with ERDF funds, and by the Grant FPI-UPC provided by the UPC. It has been done under the COST CA15104 IRACON EU project.
Efficient computational resource management in 5G Cloud Radio Access Network (C-RAN) environments is a challenging problem because it must account simultaneously for throughput, latency, power efficiency, and optimization trade-offs. This work proposes the use of a modified and improved version of the realistic Vienna Scenario, defined in COST action IC1004, to test C-RAN deployments at two different scales. First, a large-scale analysis with 628 macro-cells (Mcells) and 221 small-cells (Scells) is used to test different algorithms that optimize the network deployment by minimizing delays, balancing the load among the Base Band Unit (BBU) pools, or clustering the Remote Radio Heads (RRHs) efficiently to maximize the multiplexing gain. After planning, real-time resource allocation strategies with Quality of Service (QoS) constraints should be optimized as well. To do so, a realistic small-scale scenario for the metropolitan area is defined by modeling the individual time-variant traffic patterns of 7000 users (UEs) connected to different services. The distribution of resources among UEs and BBUs is optimized by algorithms, based on a realistic calculation of the UEs' Signal to Interference and Noise Ratios (SINRs), that account for the required computational capacity per cell, the QoS constraints, and the service priorities. However, the assumption of a fixed computational capacity at the BBU pools may result in underutilized or oversubscribed resources, thus affecting the overall QoS. As resources are virtualized at the BBU pools, they could be dynamically instantiated according to the required computational capacity (RCC).
For this reason, a new strategy for Dynamic Resource Management with Adaptive Computational capacity (DRM-AC) using machine learning (ML) techniques is proposed. Three ML algorithms have been tested to select the best predicting approach: support vector machine (SVM), time-delay neural network (TDNN), and long short-term memory (LSTM). DRM-AC reduces the average amount of unused resources by 96%, but QoS degradation still occurs when the RCC is higher than the predicted computational capacity (PCC). For this reason, two further strategies are proposed and tested: DRM-AC with pre-filtering (DRM-AC-PF) and DRM-AC with error shifting (DRM-AC-ES), which reduce the average amount of unsatisfied resources by 99.9% and 98% compared to DRM-AC, respectively.
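The "error shifting" idea can be sketched as follows. This is our reading of the mechanism, not the paper's code, and the function `shifted_capacity` and its window parameter are hypothetical: the predicted capacity (PCC) is shifted upward by a recent under-prediction margin so that the required capacity (RCC) is rarely exceeded.

```python
def shifted_capacity(pcc_history, rcc_history, pcc_next, window=3):
    """Shift the next PCC up by the worst recent under-prediction (RCC - PCC).

    Over-allocating slightly trades a little unused capacity for far fewer
    QoS violations, which is the DRM-AC-ES trade-off described above.
    """
    recent = list(zip(pcc_history, rcc_history))[-window:]
    worst_underestimate = max((rcc - pcc for pcc, rcc in recent), default=0.0)
    return pcc_next + max(0.0, worst_underestimate)

# Example: the predictor once underestimated by 8 units (110 vs 118),
# so the next allocation is padded by that margin: 107 + 8 = 115.
pcc = [100.0, 110.0, 105.0]
rcc = [98.0, 118.0, 104.0]
alloc = shifted_capacity(pcc, rcc, pcc_next=107.0)
```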
Algorithms and Experimentation for Future Wireless Networks: From Internet-of-Things to Full-Duplex
Future and next-generation wireless networks are driven by the rapidly growing wireless traffic stemming from diverse services and applications, such as the Internet-of-Things (IoT), virtual reality, autonomous vehicles, and smart intersections. Many of these applications require massive connectivity between IoT devices as well as wireless access links with ultra-high bandwidth (Gbps or above) and ultra-low latency (10ms or less). Therefore, realizing the vision of future wireless networks requires significant research efforts across all layers of the network stack. In this thesis, we use a cross-layer approach and focus on several critical components of future wireless networks including IoT systems and full-duplex (FD) wireless, and on experimentation with advanced wireless technologies in the NSF PAWR COSMOS testbed.
First, we study tracking and monitoring applications in the IoT and focus on ultra-low-power energy harvesting networks. Based on realistic hardware characteristics, we design and optimize Panda, a centralized probabilistic protocol for maximizing the neighbor discovery rate between energy harvesting nodes under a power budget. Via testbed evaluation using commercial off-the-shelf energy harvesting nodes, we show that Panda outperforms existing protocols by up to 3x in terms of the neighbor discovery rate. We further explore this problem and consider a general throughput maximization problem among a set of heterogeneous energy-constrained ultra-low-power nodes. We analytically identify the theoretical fundamental limits of the rate at which data can be exchanged between these nodes, and design the distributed probabilistic protocol, EconCast, which approaches the maximum throughput in the limiting sense. Performance evaluations of EconCast using both simulations and real-world experiments show that it achieves up to an order of magnitude higher throughput than Panda and other known protocols.
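The neighbor discovery rate that Panda and EconCast optimize can be illustrated with a generic slotted random-access model (a textbook abstraction, not either protocol's actual design): if each of n nodes transmits in a slot with probability p and listens otherwise, a discovery occurs when exactly one node transmits.

```python
def discovery_prob(n, p):
    """Per-slot probability that exactly one of n nodes transmits."""
    return n * p * (1 - p) ** (n - 1)

# Grid search over p confirms the classic result: the discovery rate
# is maximized at p = 1/n, approaching 1/e as n grows.
n = 10
best_p = max((k / 1000 for k in range(1, 1000)), key=lambda p: discovery_prob(n, p))
```

In energy-harvesting settings the transmit probability is additionally constrained by each node's power budget, which is what turns this simple calculation into the constrained optimization the thesis studies.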
We then study FD wireless, i.e., simultaneous transmission and reception at the same frequency, a key technology that can significantly improve the data rate and reduce communication latency by employing self-interference cancellation (SIC). In particular, we focus on enabling FD on small-form-factor devices leveraging the technique of frequency-domain equalization (FDE). We design, model, and optimize the FDE-based RF canceller, which can achieve >50dB RF SIC across 20MHz bandwidth, and experimentally show that our prototyped FD radios can achieve a link-level throughput gain of 1.85-1.91x. We also focus on combining FD with phased arrays, employing optimized transmit and receive beamforming, where the spatial degrees of freedom in multi-antenna systems are repurposed to achieve wideband RF SIC. Moving up in the network stack, we study heterogeneous networks with half-duplex and FD users, and develop the novel Hybrid-Greedy Maximum Scheduling (H-GMS) algorithm, which achieves throughput optimality in a distributed manner. Analytical and simulation results show that H-GMS achieves 5-10x better delay performance and improved fairness compared with state-of-the-art approaches.
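Why the measured gain is 1.85-1.91x rather than the ideal 2x follows from standard Shannon-rate reasoning (a rough sketch under assumed numbers, not the thesis's link-level model): full duplex doubles the number of simultaneous directions, but residual self-interference left after SIC degrades each direction's SNR.

```python
import math

def fd_gain(snr_db, residual_si_db):
    """Ratio of FD to half-duplex Shannon throughput with residual SI.

    Noise power is normalized to 1; residual_si_db is the leftover
    self-interference power after cancellation, in dB above noise.
    """
    snr = 10 ** (snr_db / 10)
    si = 10 ** (residual_si_db / 10)
    hd = math.log2(1 + snr)                 # half duplex: one direction at a time
    fd = 2 * math.log2(1 + snr / (1 + si))  # full duplex: two directions, extra SI
    return fd / hd

gain = fd_gain(snr_db=30.0, residual_si_db=5.0)  # strictly below the ideal 2x
```

Driving the residual self-interference toward the noise floor pushes the gain toward 2x, which is why the >50dB RF SIC figure above matters.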
Finally, we describe experimentation and measurements in the city-scale COSMOS testbed being deployed in West Harlem, New York City. COSMOS' key building blocks include software-defined radios, millimeter-wave radios, a programmable optical network, and edge cloud, and their convergence will enable researchers to remotely explore emerging technologies in a real-world environment. We provide a brief overview of the testbed and focus on experimentation with advanced technologies, including the integration of open-access FD radios in the testbed and a pilot study on converged optical-wireless x-haul networking for cloud radio access networks (C-RANs). We also present extensive 28GHz channel measurements in the testbed area, which is a representative dense urban canyon environment, and study the corresponding signal-to-noise ratio (SNR) coverage and achievable data rates. The results of this part helped drive and validate the design of the COSMOS testbed, and can inform further deployment and experimentation in the testbed.
In this thesis, we make several theoretical and experimental contributions to ultra-low-power energy harvesting networks and the IoT, and to FD wireless. We also contribute to the experimentation and measurements in the COSMOS advanced wireless testbed. We believe that these contributions are essential to connect fundamental theory to practical systems, and ultimately to real-world applications, in future wireless networks.