On the Statistical Multiplexing Gain of Virtual Base Station Pools
Facing the explosion of mobile data traffic, the cloud radio access network
(C-RAN) has recently been proposed to overcome the efficiency and flexibility
problems of the traditional RAN architecture by centralizing baseband
processing. However, there is no mathematical model for the statistical
multiplexing gain from the pooling of virtual base stations (VBSs) with which
the expenditure on fronthaul networks can be justified. In this paper,
we address this problem by capturing the session-level dynamics of VBS pools
with a multi-dimensional Markov model. This model reflects the constraints
imposed by both radio resources and computational resources. To evaluate the
pooling gain, we derive a product-form solution for the stationary distribution
and give a recursive method to calculate the blocking probabilities. For
comparison, we also derive the limit of the resource utilization ratio as the
pool size approaches infinity. Numerical results show that VBS pools can
readily obtain considerable pooling gain at medium sizes, but the convergence
to the large-pool limit is slow because of the quickly diminishing marginal
pooling gain. We
also find that parameters such as the traffic load and the desired Quality of
Service (QoS) have a significant influence on the performance of VBS pools.
Comment: Accepted by GlobeCom'1
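The blocking-probability recursion mentioned in the abstract can be illustrated with the classical single-resource Erlang-B recursion. This is only a simplified sketch of the pooling-gain idea, not the paper's multi-dimensional Markov model with joint radio and computational constraints, and all parameter values below are made up:

```python
# Sketch: pooling gain via the Erlang-B blocking recursion.
# B(A, 0) = 1;  B(A, c) = A * B(A, c-1) / (c + A * B(A, c-1))

def erlang_b(load, servers):
    """Blocking probability for offered load `load` and `servers` circuits."""
    b = 1.0
    for c in range(1, servers + 1):
        b = load * b / (c + load * b)
    return b

# Pooling gain: n separate base stations with c resource units each,
# versus one shared pool of n*c units carrying the aggregate load.
# (Illustrative numbers, not from the paper.)
per_bs_load, per_bs_servers, n = 8.0, 10, 20
b_separate = erlang_b(per_bs_load, per_bs_servers)
b_pooled = erlang_b(n * per_bs_load, n * per_bs_servers)
```

The shared pool sees a much lower blocking probability at the same total load, which is the statistical multiplexing gain the paper quantifies with its richer model.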
Fronthaul evolution: From CPRI to Ethernet
It is proposed that using Ethernet in the fronthaul, between base station baseband unit (BBU) pools and remote radio heads (RRHs), can bring a number of advantages: use of lower-cost equipment, shared use of infrastructure with fixed access networks, statistical multiplexing, and optimised performance through probe-based monitoring and software-defined networking. However, a number of challenges exist: ultra-high bit-rate requirements arising from the transport of increased-bandwidth radio streams for multiple antennas in future mobile networks, and low latency and jitter to meet delay requirements and the demands of joint processing. A new fronthaul functional division is proposed which can alleviate the most demanding bit-rate requirements by transporting baseband signals instead of sampled radio waveforms, and enable statistical multiplexing gains. Delay and synchronisation issues remain to be solved.
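The bit-rate pressure described above can be made concrete with a back-of-the-envelope CPRI line-rate calculation. This is a rough sketch: the control-word and 8b/10b overhead factors are the commonly cited CPRI figures, while the two scenarios compared are illustrative assumptions, not taken from the article:

```python
# Sketch: approximate CPRI line rate for transporting sampled I/Q waveforms.

def cpri_rate_gbps(msps, antennas, bits_per_sample=15,
                   cw_overhead=16 / 15, line_code=10 / 8):
    """Approximate CPRI line rate in Gb/s for I/Q sample transport."""
    iq_bits = 2 * bits_per_sample                  # I and Q components
    raw = msps * 1e6 * iq_bits * antennas          # bits/s before overheads
    return raw * cw_overhead * line_code / 1e9     # control words + 8b/10b

# A 20 MHz LTE carrier (30.72 Msps) with 2 antennas fits a standard
# 2.4576 Gb/s CPRI rate; a 100 MHz carrier with 64 antennas needs
# hundreds of Gb/s, motivating the proposed functional division that
# transports baseband signals instead of sampled waveforms.
lte = cpri_rate_gbps(30.72, 2)
mmimo = cpri_rate_gbps(122.88, 64)
```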
Statistical Multiplexing and Traffic Shaping Games for Network Slicing
Next generation wireless architectures are expected to enable slices of
shared wireless infrastructure which are customized to specific mobile
operators/services. Given infrastructure costs and the stochastic nature of
mobile services' spatial loads, it is highly desirable to achieve efficient
statistical multiplexing amongst such slices. We study a simple dynamic
resource sharing policy which allocates a 'share' of a pool of (distributed)
resources to each slice-Share Constrained Proportionally Fair (SCPF). We give a
characterization of SCPF's performance gains over static slicing and general
processor sharing. We show that higher gains are obtained when a slice's
spatial load is more 'imbalanced' than, and/or 'orthogonal' to, the aggregate
network load, and that the overall gain across slices is positive. We then
address the associated dimensioning problem. Under SCPF, traditional network
dimensioning translates to a coupled share dimensioning problem, which
characterizes the existence of a feasible share allocation given slices'
expected loads and performance requirements. We provide a solution to robust
share dimensioning for SCPF-based network slicing. Slices may wish to
unilaterally manage their users' performance via admission control which
maximizes their carried loads subject to performance requirements. We show this
can be modeled as a 'traffic shaping' game with an achievable Nash equilibrium.
Under high loads, the equilibrium is explicitly characterized, as are the gains
in the carried load under SCPF vs. static slicing. Detailed simulations of a
wireless infrastructure supporting multiple slices with heterogeneous mobile
loads show the fidelity of our models and the range of validity of our
high-load equilibrium analysis.
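The share-constrained weighting behind SCPF can be sketched as follows. This is a minimal illustration of the allocation rule as described in the abstract, with a hypothetical two-slice example; function names, the data layout, and all numbers are illustrative assumptions:

```python
# Sketch of Share Constrained Proportionally Fair (SCPF) allocation:
# each slice splits its fixed network-wide share equally among its
# active users, and each base station divides its capacity in
# proportion to the weights of the users it currently serves.

def scpf_allocation(shares, users_per_bs):
    """users_per_bs[b][v] = number of slice-v users at base station b.
    Returns alloc[b][v] = fraction of BS b's capacity given to slice v."""
    n_slices = len(shares)
    totals = [sum(bs[v] for bs in users_per_bs) for v in range(n_slices)]
    alloc = []
    for bs in users_per_bs:
        # per-slice weight at this BS: share * (local users / total users)
        w = [shares[v] * bs[v] / totals[v] if totals[v] else 0.0
             for v in range(n_slices)]
        tot = sum(w)
        alloc.append([x / tot if tot else 0.0 for x in w])
    return alloc

# Two slices with equal shares; slice 0's load is concentrated at BS 0
# ("imbalanced" relative to the aggregate), so SCPF gives it more of
# BS 0 than static 50/50 slicing would.
alloc = scpf_allocation([0.5, 0.5], [[4, 1], [0, 1]])
```

In this toy example slice 0 receives more than half of base station 0, which is the kind of gain over static slicing that the paper characterizes for imbalanced spatial loads.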
Resource management with adaptive capacity in C-RAN
This work was supported in part by the Spanish ministry of science through the project RTI2018-099880-B-C32, with ERDF funds, and by the grant FPI-UPC provided by the UPC. It has been done under the COST CA15104 IRACON EU project.
Efficient computational resource management in 5G Cloud Radio Access Network (C-RAN) environments is a challenging problem because it has to account simultaneously for throughput, latency, power efficiency, and optimization tradeoffs. This work proposes the use of a modified and improved version of the realistic Vienna Scenario that was defined in COST action IC1004 to test two C-RAN deployments of different scales. First, a large-scale analysis with 628 Macro-cells (Mcells) and 221 Small-cells (Scells) is used to test different algorithms oriented to optimize the network deployment by minimizing delays, balancing the load among the Base Band Unit (BBU) pools, or clustering the Remote Radio Heads (RRHs) efficiently to maximize the multiplexing gain. After planning, real-time resource allocation strategies with Quality of Service (QoS) constraints should be optimized as well. To do so, a realistic small-scale scenario for the metropolitan area is defined by modeling the individual time-variant traffic patterns of 7000 users (UEs) connected to different services. The distribution of resources among UEs and BBUs is optimized by algorithms, based on a realistic calculation of the UEs' Signal to Interference and Noise Ratios (SINRs), that account for the required computational capacity per cell, the QoS constraints, and the service priorities. However, the assumption of a fixed computational capacity at the BBU pools may result in underutilized or oversubscribed resources, thus affecting the overall QoS. As resources are virtualized at the BBU pools, they could be dynamically instantiated according to the required computational capacity (RCC).
For this reason, a new strategy for Dynamic Resource Management with Adaptive Computational capacity (DRM-AC) using machine learning (ML) techniques is proposed. Three ML algorithms have been tested to select the best predicting approach: support vector machine (SVM), time-delay neural network (TDNN), and long short-term memory (LSTM). DRM-AC reduces the average amount of unused resources by 96 %, but there is still QoS degradation when the RCC is higher than the predicted computational capacity (PCC). For this reason, two new strategies are proposed and tested: DRM-AC with pre-filtering (DRM-AC-PF) and DRM-AC with error shifting (DRM-AC-ES), reducing the average amount of unsatisfied resources by 99.9 % and 98 % compared to DRM-AC, respectively.
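The error-shifting idea can be sketched in a few lines. This is only an interpretation of the strategy named in the abstract: a naive moving average stands in for the paper's SVM/TDNN/LSTM predictors, the shift rule is an assumption, and the load numbers are made up:

```python
# Sketch of "error shifting": predict the required computational
# capacity (RCC), then shift the prediction upward by the largest
# recent under-prediction so that QoS-degrading under-provisioning
# (RCC > PCC) becomes rare.

def predict_with_error_shift(history, window=3):
    """Provisioned capacity for the next step: moving average + shift."""
    base = sum(history[-window:]) / window           # naive PCC
    # past prediction errors of the same moving-average predictor
    errors = [history[i] - sum(history[i - window:i]) / window
              for i in range(window, len(history))]
    # largest recent under-prediction becomes the safety margin
    shift = max([e for e in errors if e > 0], default=0.0)
    return base + shift

rcc_history = [10, 12, 11, 15, 14, 18, 16]  # illustrative per-slot RCC
pcc = predict_with_error_shift(rcc_history)
```

The shifted prediction sits above the plain moving average, trading a few unused resources for far fewer unsatisfied ones, which matches the direction of the reported DRM-AC-ES results.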
Statistical Multiplexing of Computations in C-RAN with Tradeoffs in Latency and Energy
In the Cloud Radio Access Network (C-RAN) architecture, the baseband signals
from multiple remote radio heads are processed in a centralized baseband unit
(BBU) pool. This architecture allows network operators to adapt the BBU's
computational resources to the aggregate access load experienced at the BBU,
which can change in every air-interface access frame. The savings that can be
achieved by adapting the resources involve a tradeoff among energy savings,
adaptation frequency, and increased queuing time. If the time scale for
adaptation of the resource multiplexing is greater than the access frame
duration, then this may result in additional access latency and limit the
energy savings. In this paper we investigate the tradeoff by considering two
extreme time-scales for the resource multiplexing: (i) long-term, where the
computational resources are adapted over periods much larger than the access
frame durations; (ii) short-term, where the adaption is below the access frame
duration. We develop a general C-RAN queuing model that describes the access
latency and show, for Poisson arrivals, that long-term multiplexing achieves
savings comparable to short-term multiplexing, while offering low
implementation complexity.
Comment: Accepted for presentation at the 3rd International Workshop on 5G RAN Design (ICC 2017)
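The accounting behind the long- versus short-term tradeoff can be sketched with a toy simulation. This ignores the paper's queuing model entirely and only illustrates how provisioned capacity differs across the two time scales; the load distribution and window length are illustrative assumptions:

```python
# Sketch: short-term adaptation provisions capacity per access frame,
# long-term adaptation provisions the peak load of each adaptation
# window, and both are compared against static peak provisioning.
import random

random.seed(1)
# per-frame computational load: number of active users in a frame
frames = [sum(random.random() < 0.3 for _ in range(50)) for _ in range(1000)]

static = max(frames) * len(frames)        # fixed provisioning for the peak
short_term = sum(frames)                  # adapt in every frame
window = 100                              # frames per long-term period
long_term = sum(max(frames[i:i + window]) * window
                for i in range(0, len(frames), window))

saving_short = 1 - short_term / static    # savings vs. static provisioning
saving_long = 1 - long_term / static
```

Short-term adaptation always provisions the least, but long-term adaptation still saves over static provisioning at a fraction of the reconfiguration effort; the paper's queuing analysis makes this comparison precise for Poisson arrivals.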
Cloud RAN for Mobile Networks - a Technology Overview
Cloud Radio Access Network (C-RAN) is a novel mobile network architecture which can address a number of challenges the operators face while trying to support growing end-user needs. The main idea behind C-RAN is to pool the Baseband Units (BBUs) from multiple base stations into a centralized BBU Pool for statistical multiplexing gain, while shifting the burden to the high-speed wireline transmission of In-phase and Quadrature (IQ) data. C-RAN enables energy-efficient network operation and possible cost savings on baseband resources. Furthermore, it improves network capacity by performing load balancing and cooperative processing of signals originating from several base stations. This article surveys the state-of-the-art literature on C-RAN. It can serve as a starting point for anyone willing to understand the C-RAN architecture and advance the research on C-RAN.
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these network-generated data and to take decisions pertaining
to the proper functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such complexity increase is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.