73 research outputs found

    Spatial Coordination Strategies in Future Ultra-Dense Wireless Networks

    Ultra-dense network densification is considered a major trend in the evolution of cellular networks, owing to its ability to bring the network closer to the user side and reuse resources to the maximum extent. In this paper we explore spatial resource coordination as a key enabling technology for next-generation (5G) ultra-dense networks. We propose an optimization framework for flexibly associating system users with a densely deployed network of access nodes, exploiting densification while controlling signaling overhead. Combined with spatial precoding strategies, we design network resource management strategies reflecting various features, namely local vs. global exploitation of channel state information, centralized vs. distributed implementation, and non-cooperative vs. joint multi-node data processing. We apply these strategies to future UDN setups and explore the impact of critical network parameters, that is, the densification levels of users and access nodes as well as the power budget constraints, on user performance. We demonstrate that spatial resource coordination is a key factor in capitalizing on the gains of ultra-dense network deployments. (Comment: an extended version of a paper submitted to ISWCS'14, Special Session on Empowering Technologies of 5G Wireless Communication.)
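    The spatial precoding strategies the abstract mentions include zero-forcing-style processing. As a minimal illustration (an assumed textbook construction, not the paper's exact scheme), a zero-forcing precoder can be computed directly from the multi-user channel matrix:

```python
import numpy as np

def zf_precoder(H):
    """Zero-forcing precoder W = H^H (H H^H)^{-1}, columns power-normalized.

    H is a K x M channel matrix (K users, M transmit antennas, M >= K).
    This is a generic illustration of spatial precoding, not the
    coordination framework proposed in the paper.
    """
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T)  # right pseudo-inverse
    W /= np.linalg.norm(W, axis=0, keepdims=True)   # unit-power beams
    return W

rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))) / np.sqrt(2)
W = zf_precoder(H)
E = H @ W  # effective channel: off-diagonal (inter-user interference) terms ~ 0
```

    The off-diagonal entries of `E` vanish up to numerical precision, which is precisely the "non-cooperative interference removal" behaviour that dense deployments rely on.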

    Wireless access network optimization for 5G


    Scheduling M2M traffic over LTE uplink of a dense small cell network

    We present an approach to schedule Long Term Evolution (LTE) uplink (UL) Machine-to-Machine (M2M) traffic in a densely deployed heterogeneous network, over the street lights of a big boulevard, for smart city applications. The small cells operate with frequency reuse 1, so inter-cell interference (ICI) is a critical issue to manage. We consider a 3rd Generation Partnership Project (3GPP) compliant scenario in which single-carrier frequency-division multiple access (SC-FDMA) is the multiple access scheme, which requires that all resource blocks (RBs) allocated to a single user be contiguous in frequency within each time slot. This adjacency constraint limits the flexibility of frequency-domain packet scheduling (FDPS) and inter-cell interference coordination (ICIC) when trying to maximize the scheduling objectives, and it makes the problem NP-hard. We aim to solve a multi-objective optimization problem: maximize the overall throughput, maximize the radio resource usage, and minimize the ICI. This can be modelled as a mixed-integer linear program (MILP) and solved through a heuristic implementable within the standards. We propose two models. The first allocates resources based on the three optimization criteria, while the second is more compact and is shown through numerical evaluation in CPLEX to be equivalent in complexity while performing better and executing faster. We present simulation results from a 3GPP-compliant network simulator implementing the overall protocol stack, which support the effectiveness of our algorithm for different M2M applications with respect to state-of-the-art approaches.
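    The SC-FDMA adjacency constraint described above can be made concrete with a small greedy sketch (an illustrative heuristic under assumed inputs, not the paper's MILP or its standards-compliant heuristic): each user may only receive a contiguous window of free resource blocks.

```python
def contiguous_rb_allocation(metric, demand):
    """Greedy FDPS respecting the SC-FDMA adjacency constraint.

    metric[u][r]: scheduling metric of user u on resource block r
    demand[u]:   number of contiguous RBs user u requests
    Users are served in order; each takes the contiguous free window
    that maximizes its summed metric. Purely illustrative.
    """
    n_rb = len(metric[0])
    free = [True] * n_rb
    alloc = {}
    for u, d in enumerate(demand):
        best, best_start = None, None
        for s in range(n_rb - d + 1):
            if all(free[s:s + d]):                 # window must be fully free
                score = sum(metric[u][s:s + d])
                if best is None or score > best:
                    best, best_start = score, s
        if best_start is not None:
            alloc[u] = list(range(best_start, best_start + d))
            for r in alloc[u]:
                free[r] = False
    return alloc

# toy instance: 2 users, 4 RBs, each user wants 2 contiguous RBs
alloc = contiguous_rb_allocation([[3, 1, 5, 5], [4, 4, 1, 1]], [2, 2])
```

    Here user 0 takes RBs 2-3 and user 1 takes RBs 0-1; the contiguity requirement is what keeps the exact version of this problem NP-hard.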

    Efficient energy management in ultra-dense wireless networks

    The increase in demand for more network capacity has led to the evolution of wireless networks from being largely heterogeneous (Het-Nets) to the now existing ultra-dense networks (UDNs). In UDNs, small cells are densely deployed with the goal of shortening the physical distance between the base stations (BSs) and the user equipment (UEs), so as to support more UEs at peak times while ensuring high data rates. Compared to Het-Nets, UDNs have many advantages: more network capacity, higher flexibility in network configuration, and greater suitability for load balancing, hence fewer blind spots as well as lower call blocking probability. In practice, however, due to the high density of deployed small cells, a number of issues come with this evolution from Het-Nets. These include efficient radio resource management, user-cell association, inter- and intra-cell interference management and, last but not least, efficient energy consumption. Some of these issues, which impact the overall network efficiency, are largely due to the use of obsolete algorithms, especially those whose resource allocation is based solely on received signal power (RSSP). In this work, the focus is solely on the efficient energy management dilemma and how to optimally reduce the overall network energy consumption. Through an extensive literature review, a detailed report on the growing concern of efficient energy management in UDNs is provided in Chapter 2. The literature review highlights the classification as well as the evolution of mobile wireless technologies and mobile wireless networks in general, gives reasons why energy consumption has become a serious concern in ultra-dense networks, and surveys the various techniques and measures taken to mitigate it.
It is shown that, due to the increasing carbon footprint of mobile wireless systems, which carries a serious negative environmental impact, and the general need for network operators to lower operating costs, the management of energy consumption is rising in priority. Using the architecture of a Fourth Generation Long Term Evolution (4G-LTE) ultra-dense network, the report further shows that more than 65% of the overall energy consumption is due to the access network, and base stations in particular. This explains why most attention in energy efficiency management in UDNs is centred on reducing the energy consumption of the deployed base stations more than any other network component, such as the data servers or backhauling features used. Furthermore, the report provides detailed information on the relevant methods and techniques, their classification and implementation, as well as a critical analysis of those implementations in the literature. This study proposes a sub-optimal Distributed Cell Resource Allocation with Base Station On/Off scheme that aims to reduce the overall base station power consumption in UDNs, while ensuring that the overall Quality of Service (QoS) for each User Equipment (UE), as specified in its service class, is met. The system model and the resulting Network Energy Efficiency (NEE) optimization problem are formulated using stochastic geometry. The network model comprises both evolved Node B (eNB) type macro and small cells operating on different frequency bands, and takes into account factors that impact NEE such as UE mobility, UE spatial distribution and small-cell spatial distribution. The channel model accounts for signal interference from all base stations, path loss, fading, log-normal shadowing, and the modulation and coding schemes used on each UE's communication channels when computing throughput.
The power consumption model used takes into account both static (site cooling, circuit power) and active (transmission or load based) base station power consumption. The formulation of the NEE optimization problem takes into consideration each user's Quality of Service (QoS), inter-cell interference, spectral efficiency and coverage/success probability. The formulated NEE optimization problem is NP-hard, due to the user-cell association. The proposed solution uses constraint relaxation to transform the NP-hard problem into a solvable, convex and linear optimization problem. This, combined with Lagrangian dual decomposition, is used to create a distributed solution. After the cell-association and resource-allocation phases, the proposed solution performs Cell On/Off to further reduce power consumption. Then, using computer simulation tools, the "Distributed Resource Allocation with Cell On/Off" scheme's performance is analysed and evaluated against four other resource allocation schemes in a number of different network scenarios. Finally, the statistical and mathematical results generated through the simulations indicate that the proposed scheme is the closest in NEE performance to the exhaustive search algorithm, and hence superior to the other sub-optimal algorithms it is compared to.
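    The Cell On/Off idea above can be sketched as a greedy switch-off rule (an assumed toy model with a static-plus-load-proportional power term, not the thesis's actual distributed algorithm): a lightly loaded cell is turned off whenever its neighbours have spare capacity to absorb its traffic.

```python
def greedy_cell_onoff(loads, capacity, p_static, p_per_load):
    """Greedy base-station switch-off sketch (illustrative only).

    Cells are considered lightest-first; a cell is switched off if the
    remaining active cells jointly have enough spare capacity to absorb
    its load, which is then redistributed proportionally to spare
    capacity. Power model: p_static per active cell plus p_per_load per
    unit of carried load (the static/active split described above).
    """
    active = set(range(len(loads)))
    loads = list(loads)
    for c in sorted(range(len(loads)), key=lambda i: loads[i]):
        others = [o for o in active if o != c]
        spare = sum(capacity - loads[o] for o in others)
        if others and spare >= loads[c]:
            for o in others:  # proportional offload of c's traffic
                loads[o] += loads[c] * (capacity - loads[o]) / spare
            loads[c] = 0.0
            active.discard(c)
    power = len(active) * p_static + p_per_load * sum(loads[c] for c in active)
    return active, power

active, power = greedy_cell_onoff([0.2, 0.5, 0.6], 1.0, 10.0, 5.0)
```

    In this toy run the lightest cell is switched off and its load is absorbed by the other two, saving one cell's static power at the cost of slightly higher load-dependent consumption.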

    Network Deployment for Maximal Energy Efficiency in Uplink with Multislope Path Loss

    This work aims to design the uplink (UL) of a cellular network for maximal energy efficiency (EE). Each base station (BS) is randomly deployed within a given area and is equipped with M antennas to serve K user equipments (UEs). A multislope (distance-dependent) path loss model is considered and linear processing is used, under the assumption that channel state information is acquired by using pilot sequences (reused across the network). Within this setting, a lower bound on the UL spectral efficiency and a realistic circuit power consumption model are used to evaluate the network EE. Numerical results are first used to compute the optimal BS density and pilot reuse factor for a Massive MIMO network with three different detection schemes, namely maximum ratio combining, zero-forcing (ZF) and multicell minimum mean-squared error. The numerical analysis shows that the EE is a unimodal function of BS density and achieves its maximum at a relatively small BS density, irrespective of the employed detection scheme. This is in contrast to the single-slope (distance-independent) path loss model, for which the EE is a monotonically non-decreasing function of BS density. Then, we concentrate on ZF and use stochastic geometry to compute a new lower bound on the spectral efficiency, which is then used to optimize, for a given BS density, the pilot reuse factor and the numbers of BS antennas and UEs. Closed-form expressions are computed, from which valuable insights into the interplay between optimization variables, hardware characteristics, and propagation environment are obtained. (Comment: 30 pages, 5 figures, 2 tables, https://ieeexplore.ieee.org/document/8362685)
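    The unimodality result is what makes the EE-optimal BS density easy to locate numerically: a one-dimensional unimodal function can be maximized by ternary search. The EE curve below is a toy stand-in (logarithmic rate gain vs. linear power cost), not the paper's bound.

```python
import math

def ternary_search_max(f, lo, hi, tol=1e-6):
    """Maximize a unimodal function f on [lo, hi] by ternary search."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):   # maximum lies right of m1
            lo = m1
        else:               # maximum lies left of m2
            hi = m2
    return (lo + hi) / 2

# toy EE curve: throughput grows ~ log with BS density d, power grows linearly
ee = lambda d: math.log(1 + 10 * d) / (1 + 2 * d)
d_opt = ternary_search_max(ee, 0.0, 10.0)
```

    As in the paper's finding, the toy optimum sits at a fairly small density: beyond it, the linear power cost outpaces the logarithmic rate gain.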

    Optimization Methods for Heterogeneous Wireless Communication Networks: Planning, Configuration and Operation

    With the fourth generation of wireless radio communication networks reaching maturity, the upcoming fifth generation (5G) is a major subject of current research. 5G networks are designed to achieve a multitude of performance gains and the ability to provide services dedicated to various application scenarios. These applications include those that require increased network throughput, low latency, high reliability and support for a very high number of connected devices. Since the achieved throughput on a single point-to-point transmission is already close to the theoretical optimum, more effort needs to be invested to enable further performance gains in 5G. Technology candidates for future wireless networks include using very large antenna arrays with hundreds of antenna elements or expanding the bandwidth used for transmission to the millimeter-wave spectrum. Both these and other envisioned approaches require significant changes to the network architecture and a high economic commitment from the network operator. An already well-established technology for expanding the throughput of a wireless communication network is densification of the cellular layout. This is achieved by supplementing the existing, usually high-power, macro cells with a larger number of low-power small cells, resulting in a so-called heterogeneous network (HetNet). This approach builds upon the existing network infrastructure and has been shown to support the aforementioned technologies requiring more sophisticated hardware. Network densification using small cells can therefore be considered a suitable bridging technology to pave the way for 5G and subsequent generations of mobile communication networks. The most significant challenge associated with HetNets is that densification is only beneficial for the overall network performance up to a certain density, and can be harmful beyond that point.
The network throughput is limited by the additional interference caused by the close proximity of cells, and the economic operability of the network is limited by the vastly increased energy consumption and hardware cost associated with dense cell deployment. This dissertation addresses the challenge of enabling reliable performance gains through network densification while guaranteeing quality-of-service conditions and economic operability. The proposed approach is to address the underlying problem vertically over multiple layers, which differ in the time horizon on which network optimization measures are initiated, necessary information is gathered, and optimized solutions are found. These time horizons are classified as the network planning phase, network configuration phase, and network operation phase. Optimization schemes for resource and energy consumption are developed that operate mostly in the network configuration phase. Since these approaches require a load-balanced network, schemes to achieve and maintain load balancing between cells are introduced for the network planning phase and operation phase, respectively. For the network planning phase, an approach is proposed for optimizing the locations of additional small cells in an existing wireless network architecture, and for scheduling their activity phases in advance according to data demand forecasts. Optimizing the locations of multiple cells jointly is shown to be superior to deploying them one-by-one based on greedy heuristic approaches. Furthermore, the cell activity scheduling obtains the highest load-balancing performance if the time schedule and the durations of activity periods are jointly optimized, an approach originating from process engineering. Simulation results show that the load levels of overloaded cells can be effectively decreased in the network planning phase by choosing optimized deployment locations and cell activity periods.
Operating the network with high resource efficiency while ensuring quality-of-service constraints is addressed using resource optimization in the network configuration phase. An optimization problem is designed to minimize the resource consumption of the network by operating multiple separated resource slices. The original problem, which is computationally intractable for large networks, is reformulated with a linear inner approximation, which is shown to achieve close-to-optimal performance. The interference is approximated with a dynamic model that achieves a closer approximation of the actual cell load than the static worst-case model established in comparable state-of-the-art approaches. In order to mitigate the increase in energy consumption associated with the increase in cell density, an energy minimization problem is proposed that jointly optimizes the transmit power and activity status of all cells in the network. An original problem formulation is designed and an inner approximation with better computational tractability is proposed. Energy consumption levels of a HetNet are simulated for multiple energy minimization approaches. The proposed method achieves lower energy consumption levels than approaches based on an exhaustive search over all cell activity configurations or heuristic power scaling. Additionally, in simulations, the likelihood of finding an energy-minimized solution that satisfies quality-of-service constraints is shown to be significantly higher for the proposed approach. Finally, the problem of maintaining load balancing while the network is in operation is addressed with a decentralized scheme based on a learning system using multi-class support vector machines. Established methods often require significant information exchange between network entities and a centralized optimization of the network to achieve load balancing.
In this dissertation, a decentralized learning system is proposed that globally balances the load levels close to the optimal solution while requiring only limited local information exchange.
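The decentralized idea can be sketched with a deliberately simplified rule (the dissertation learns the decision with multi-class SVMs; here an assumed threshold rule replaces the learned classifier so the example stays self-contained): each cell uses only locally exchanged load levels to shed a small load increment to its least-loaded neighbour.

```python
def decentralized_rebalance(loads, neighbors, threshold=0.8, step=0.05):
    """One round of local load balancing (illustrative sketch).

    loads:     dict cell -> normalized load in [0, 1]
    neighbors: dict cell -> list of neighbouring cells
    Each overloaded cell independently hands over a small load step to
    its least-loaded neighbour, provided the neighbour stays below the
    overload threshold. No central controller is involved.
    """
    new = dict(loads)
    for cell, load in loads.items():
        if load > threshold and neighbors.get(cell):
            target = min(neighbors[cell], key=lambda n: loads[n])
            if loads[target] + step <= threshold:
                new[cell] -= step
                new[target] += step
    return new

new = decentralized_rebalance({"a": 0.9, "b": 0.4, "c": 0.6}, {"a": ["b", "c"]})
```

Repeating such rounds drives overloaded cells back under the threshold using only neighbour-to-neighbour load reports, which is the "limited local information exchange" property claimed above.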

    Energy-Efficient Resource Allocation in Cloud and Fog Radio Access Networks

    PhD Thesis. With the development of cloud computing, radio access networks (RANs) are migrating to fully or partially centralised architectures, such as Cloud RAN (C-RAN) or Fog RAN (F-RAN). These novel architectures are able to support new applications with higher throughput, higher energy efficiency and better spectral efficiency. However, the more complex energy consumption features brought by these new architectures are challenging. In addition, the usage of Energy Harvesting (EH) technology and computation offloading in the novel architectures requires novel resource allocation designs. This thesis focuses on energy-efficient resource allocation for Cloud and Fog RANs. Firstly, a joint user association (UA) and power allocation scheme is proposed for Heterogeneous Cloud Radio Access Networks with hybrid energy sources, where Energy Harvesting technology is utilised. The optimisation problem is designed to maximise the utilisation of the renewable energy source. Through solving the proposed optimisation problem, the user association and power allocation policies are derived together to minimise the grid power consumption. Compared to the conventional UAs adopted in RANs, green power harvested by the renewable energy source can be better utilised, so that the grid power consumption can be greatly reduced with the proposed scheme. Secondly, a delay-aware energy-efficient computation offloading scheme is proposed for EH-enabled F-RANs, where fog access points (F-APs) are supported by renewable energy sources. The uneven distribution of the harvested energy brings dynamics into the offloading design and affects the delay experienced by users. The grid power minimisation problem is formulated and, based on the solutions derived, an energy-efficient offloading decision algorithm is designed.
Compared to an SINR-based offloading scheme, the total grid power consumption of all F-APs can be reduced significantly with the proposed offloading decision algorithm while meeting the latency constraint. Thirdly, energy-efficient computation offloading for mobile applications with shared data is investigated in a multi-user fog computing network. Taking advantage of the shared-data property of latency-critical applications such as virtual reality (VR) and augmented reality (AR), the energy minimisation problem is formulated. Then the optimal computation offloading and communication resource allocation policy is proposed, which is able to minimise the overall energy consumption of the mobile users and the cloudlet server. Performance analysis indicates that the proposed policy outperforms other offloading schemes in terms of energy efficiency. The research conducted in this thesis and the thorough performance analysis reveal insights into energy-efficient resource allocation design in Cloud and Fog RANs.
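The core trade-off behind a delay-aware offloading decision can be shown with a standard toy model (assumed energy/latency formulas with made-up parameter values, not the thesis's algorithm): execute locally if that is feasible and cheaper in user energy, otherwise offload, subject to a latency budget.

```python
def offload_decision(cycles, data_bits, f_local, p_tx, rate,
                     kappa=1e-27, t_max=0.1):
    """Binary offloading decision sketch for one user.

    Local:   E = kappa * f_local^2 * cycles,  T = cycles / f_local
    Offload: E = p_tx * data_bits / rate,     T = data_bits / rate
             (remote execution time and energy neglected for brevity)
    Returns 'local' or 'offload': the feasible option (T <= t_max) with
    the lower user-side energy, or None if neither meets the deadline.
    """
    e_loc, t_loc = kappa * f_local**2 * cycles, cycles / f_local
    e_off, t_off = p_tx * data_bits / rate, data_bits / rate
    options = [(e, name) for e, t, name in
               [(e_loc, t_loc, "local"), (e_off, t_off, "offload")]
               if t <= t_max]
    return min(options)[1] if options else None

d1 = offload_decision(1e9, 1e6, 1e9, 0.5, 1e8)  # local compute too slow
d2 = offload_decision(5e7, 1e7, 1e9, 0.5, 1e7)  # uplink too slow
```

The latency constraint, not just raw energy, decides the outcome in both examples, which is the delay-aware aspect the abstract emphasises.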

    Airborne Integrated Access and Backhaul Systems : Learning-Aided Modeling and Optimization

    The deployment of millimeter-wave (mmWave) 5G New Radio (NR) networks is hampered by the properties of the mmWave band, such as severe signal attenuation and dynamic link blockage, which together limit the cell range. To provide a cost-efficient and flexible solution for network densification, 3GPP has recently proposed integrated access and backhaul (IAB) technology. As an alternative approach to terrestrial deployments, the utilization of unmanned aerial vehicles (UAVs) as IAB-nodes may provide additional flexibility for topology configuration. The aims of this study are to (i) propose efficient optimization methods for airborne and conventional IAB systems and (ii) numerically quantify and compare their optimized performance. First, by assuming fixed locations of IAB-nodes, we formulate and solve the joint path selection and resource allocation problem as a network flow problem. Then, to better benefit from the utilization of UAVs, we relax this constraint for the airborne IAB system. To efficiently optimize the performance in this case, we propose to leverage a deep reinforcement learning (DRL) method for specifying airborne IAB-node locations. Our numerical results show that the capacity gains of airborne IAB systems are notable even in non-optimized conditions, but can be improved by up to 30% under joint path selection and resource allocation and, even further, when aerial IAB-node locations are considered as an additional optimization variable.
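    A minimal flavour of the path selection component can be given by a bottleneck-capacity (widest-path) search over a toy IAB topology (an assumed simplification: the paper solves a richer joint network-flow problem, and the node names below are made up):

```python
import heapq

def widest_path(links, src, dst):
    """Return (bottleneck, path) maximizing the minimum link capacity.

    links: dict {(u, v): capacity} of directed access/backhaul links.
    A Dijkstra-style search where the path 'width' is the smallest
    capacity along it; a common proxy for multi-hop path selection.
    """
    adj = {}
    for (u, v), c in links.items():
        adj.setdefault(u, []).append((v, c))
    best = {src: float("inf")}
    heap = [(-best[src], src, [src])]
    while heap:
        neg_w, u, path = heapq.heappop(heap)
        if u == dst:
            return -neg_w, path
        if -neg_w < best.get(u, 0):   # stale heap entry
            continue
        for v, c in adj.get(u, []):
            w = min(-neg_w, c)        # new bottleneck via u
            if w > best.get(v, 0):
                best[v] = w
                heapq.heappush(heap, (-w, v, path + [v]))
    return 0, []

# donor D, IAB-nodes A/B, user U: D->B->U wins with bottleneck 3
cap, path = widest_path({("D", "A"): 5, ("D", "B"): 3,
                         ("A", "U"): 2, ("B", "U"): 3}, "D", "U")
```

    Note that the widest path is not the one with the strongest first hop: the D-A link is the best single link, but its access hop throttles the route.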

    Resource Allocation for Multiple-Input and Multiple-Output Interference Networks

    To meet the exponentially increasing traffic demand driven by rapidly growing mobile subscriptions, both industry and academia are exploring the potential of a new generation (5G) of wireless technologies. An important 5G goal is to achieve high data rates. Small cells with spectrum sharing and multiple-input multiple-output (MIMO) techniques are among the most promising 5G technologies, since they increase the aggregate data rate by improving the spectral efficiency, node density and transmission bandwidth, respectively. However, the increased interference in the densified networks will in turn limit the achievable rate performance if not properly managed. The considered setup can be modeled as a MIMO interference network, which can be classified into the K-user MIMO interference channel (IC) and the K-cell MIMO interfering broadcast channel/multiple access channel (MIMO-IBC/IMAC) according to the number of mobile stations (MSs) simultaneously served by each base station (BS). The thesis considers two physical layer (PHY) resource allocation problems that deal with the interference in both models: 1) Pareto boundary computation for the achievable rate region in a K-user single-stream MIMO IC, and 2) grouping-based interference alignment (GIA) with optimized IA-Cell assignment in a MIMO-IMAC under limited feedback. In each problem, the thesis seeks to provide a deeper understanding of the system and novel mathematical results, along with supporting numerical examples. Some of the main contributions can be summarized as follows. Computing the Pareto boundary of the achievable rate region for a K-user single-stream MIMO IC is an open problem. The K-user single-stream MIMO IC models multiple transmitter-receiver pairs which operate over the same spectrum simultaneously. Each transmitter and each receiver is equipped with multiple antennas, and a single desired data stream is communicated over each transmitter-receiver link.
The individual achievable rates of the K users form a K-dimensional achievable rate region. To find efficient operating points in the achievable rate region, the Pareto boundary computation problem, which can be formulated as a multi-objective optimization problem, needs to be solved. The thesis transforms the multi-objective optimization problem into two single-objective optimization problems: a single-constraint rate maximization problem and an alternating rate profile optimization problem, based on the formulations of the ε-constraint optimization and the weighted Chebyshev optimization, respectively. The thesis proposes two alternating optimization algorithms to solve the two single-objective optimization problems. The convergence of both algorithms is guaranteed, and a heuristic initialization scheme is provided for each algorithm to achieve a high-quality solution. By varying the weights in each single-objective optimization problem, numerical results show that both algorithms provide an inner bound very close to the Pareto boundary. Furthermore, the thesis also computes some key points exactly on the Pareto boundary in closed form. A framework for interference alignment (IA) under limited feedback is proposed for a MIMO-IMAC. The MIMO-IMAC matches well the uplink scenario in a cellular system, where multiple cells share their spectrum and operate simultaneously. In each cell, a BS receives the desired signals from multiple MSs within its own cell, and each BS and each MS is equipped with multiple antennas. By allowing inter-cell coordination, the thesis develops a distributed IA framework under limited feedback from three aspects: the GIA, the IA-Cell assignment, and dynamic feedback bit allocation (DBA). Firstly, the thesis provides a complete study, along with some new improvements, of the GIA, which enables computing the exact IA precoders in closed form based on local channel state information at the receiver (CSIR).
Secondly, the concept of IA-Cell assignment is introduced and its effect on the achievable rate and degrees of freedom (DoF) performance is analyzed. Two distributed matching approaches and one centralized assignment approach are proposed to find a good IA-Cell assignment in three scenarios with different backhaul overhead. Thirdly, under limited feedback, the thesis derives an upper bound on the residual interference-to-noise ratio (RINR), and formulates and solves a corresponding DBA problem. Finally, numerical results show that the proposed GIA with optimized IA-Cell assignment and DBA greatly outperforms the traditional GIA algorithm.
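The weighted Chebyshev (rate-profile) idea behind the Pareto boundary computation can be demonstrated on a toy two-user region (an assumed scalar power-splitting region, not the thesis's MIMO IC): maximizing the worst weighted rate along a direction alpha returns the Pareto point in that direction, and sweeping alpha traces the boundary.

```python
import math

def chebyshev_point(alpha, g1=10.0, g2=10.0, grid=1000):
    """Weighted-Chebyshev scalarization on a toy 2-user rate region.

    Toy region: a unit power budget p in [0, 1] is split between users,
    R1 = log2(1 + p*g1), R2 = log2(1 + (1-p)*g2). Maximizing
    min_k R_k / alpha_k over the split yields the Pareto-optimal rate
    pair in direction alpha. Grid search stands in for the thesis's
    alternating optimization.
    """
    best = (-1.0, None)
    for i in range(grid + 1):
        p = i / grid
        r1 = math.log2(1 + p * g1)
        r2 = math.log2(1 + (1 - p) * g2)
        best = max(best, (min(r1 / alpha[0], r2 / alpha[1]), (r1, r2)))
    return best[1]

r1, r2 = chebyshev_point((1.0, 1.0))   # symmetric direction: equal rates
a1, a2 = chebyshev_point((2.0, 1.0))   # direction favouring user 1
```

With equal weights the solver lands on the symmetric boundary point; skewing alpha toward user 1 slides the solution along the boundary in that user's favour.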