
    A TLBO-BASED ENERGY EFFICIENT BASE STATION SWITCH OFF AND USER SUBCARRIER ALLOCATION ALGORITHM FOR OFDMA CELLULAR NETWORKS

    The downlink of a cellular network with orthogonal frequency-division multiple access (OFDMA) is considered. Joint base station switch-OFF and user subcarrier allocation with guaranteed user quality of service is shown to be a promising approach for reducing the network's total power consumption. However, solving this mixed-integer, nonlinear optimization problem requires robust and powerful optimization techniques. In this paper, a teaching-learning-based optimization (TLBO) algorithm is adopted to lower the network's total power consumption. The results show that the proposed technique reduces total power consumption by determining a near-optimum set of base stations to be switched OFF and near-optimum subcarrier-user assignments. The proposed scheme is shown to be superior to existing base station switch-OFF schemes, and the robustness of the proposed TLBO-based technique is verified
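The teacher and learner phases that drive TLBO can be sketched on a toy continuous problem. The thesis problem is mixed-integer (ON/OFF decisions plus subcarrier assignments); the sketch below is an illustrative continuous relaxation, with a made-up quadratic cost standing in for total network power.

```python
import random

def tlbo_minimize(cost, dim, bounds, pop_size=20, iters=100, seed=1):
    """Minimal TLBO sketch: greedy teacher and learner phases."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: max(lo, min(hi, v))
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        # Teacher phase: pull each learner toward the current best solution
        # (the "teacher"), relative to the class mean.
        teacher = min(pop, key=cost)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            tf = rng.choice([1, 2])  # teaching factor
            cand = [clip(x[d] + rng.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            if cost(cand) < cost(x):  # keep the move only if it improves
                pop[i] = cand
        # Learner phase: each learner moves toward a better random peer,
        # or away from a worse one.
        for i, x in enumerate(pop):
            j = rng.randrange(pop_size)
            if j == i:
                continue
            y = pop[j]
            sign = 1 if cost(y) < cost(x) else -1
            cand = [clip(x[d] + rng.random() * sign * (y[d] - x[d]))
                    for d in range(dim)]
            if cost(cand) < cost(x):
                pop[i] = cand
    return min(pop, key=cost)

# Toy stand-in for total network power: a bowl with its optimum at 2.0.
best = tlbo_minimize(lambda v: sum((p - 2.0) ** 2 for p in v),
                     dim=3, bounds=(0.0, 10.0))
```

The greedy acceptance in both phases makes each learner's cost monotonically non-increasing, which is what gives TLBO its robustness on rugged objectives.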

    Energy-efficient cooperative resource allocation for OFDMA

    Energy is increasingly becoming a costly commodity in next generation wireless communication systems; even in legacy systems, a mobile operator's operational expenditure is largely attributed to the energy bill. As the amount of mobile traffic is expected to double over the next decade as we enter the Next Generation communications era, addressing energy-efficient protocols will be a priority. We will therefore need to revisit the design of the mobile network in order to adopt a proactive stance towards reducing its energy consumption. Future emerging communication paradigms will evolve towards Next Generation mobile networks that will not only consider a new air interface for high broadband connectivity, but will also integrate legacy networks (LTE/LTE-A, IEEE 802.11x, among others) to provide a ubiquitous communication platform, one that can host a multitude of rich services and applications. In this context, the radio access network will predominantly be OFDMA based, providing the impetus for further research on how this technology can be optimized towards energy efficiency. In fact, advanced approaches to both energy- and spectral-efficient design will continue to dominate the research agenda. Taking a step in this direction, LTE/LTE-A (Long Term Evolution/LTE-Advanced) has already investigated cooperative paradigms such as SON (Self-Organizing Networks), Network Sharing, and CoMP (Coordinated Multipoint) transmission. Although these technologies have provided promising results, some are still in their infancy and lack an interdisciplinary design approach, limiting their potential gain. In this thesis, we aim to advance these future emerging paradigms from a resource allocation perspective on two accounts. 
In the first scenario, we address the challenge of load balancing (LB) in OFDMA networks, which is employed to redistribute the traffic load in the network so as to use spectral resources effectively throughout the day. We aim to reengineer the LB approach through interdisciplinary design to develop an integrated energy-efficient solution based on SON and network sharing, which we refer to as SO-LB (Self-Organizing Load Balancing). Simulation results show that by employing the SO-LB algorithm in a shared network, it is possible to achieve up to 15-20% savings in energy consumption compared to LTE-A non-shared networks. The second approach considers CoMP transmission, which is currently used to enhance cell coverage and capacity at the cell edge. Legacy approaches mainly consider fundamental scheduling policies for assigning users to CoMP transmission. We build on these scheduling approaches towards a cross-layer design that provides enhanced resource utilization, fairness, and energy saving whilst maintaining low complexity, in particular for broadband applications
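A rough intuition for why load balancing across a shared network saves energy is consolidation: the combined traffic is packed onto fewer cells so the rest can sleep. The sketch below is a hypothetical back-of-the-envelope model, not the thesis's SO-LB algorithm; the power figures and capacity units are illustrative assumptions.

```python
import math

def so_lb_energy(cell_loads, cell_capacity, p_active=130.0, p_sleep=10.0):
    """Hypothetical consolidation sketch: in a shared network the combined
    traffic is packed onto the fewest cells able to carry it, and the
    remaining cells sleep. Power figures (watts) are illustrative."""
    n = len(cell_loads)
    cells_on = max(1, math.ceil(sum(cell_loads) / cell_capacity))
    energy_unshared = n * p_active  # non-shared: every cell stays active
    energy_shared = cells_on * p_active + (n - cells_on) * p_sleep
    return cells_on, energy_shared, energy_unshared

# Four lightly loaded cells whose combined load fits in two cells.
on, e_shared, e_all = so_lb_energy([0.2, 0.5, 0.1, 0.3], cell_capacity=1.0)
saving = 1.0 - e_shared / e_all  # fractional energy saving from sharing
```

The saving grows with the gap between active and sleep power and with how far below capacity the off-peak load sits, which is why the gains concentrate in low-traffic hours.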

    Distributed radio resource management in LTE-advanced networks with type 1 relay

    Long Term Evolution (LTE)-Advanced is proposed as a candidate for the 4th generation (4G) mobile telecommunication systems. As an evolved version of LTE, LTE-Advanced is also based on Orthogonal Frequency Division Multiplexing (OFDM) and, in addition, adopts some emerging technologies, such as relaying. Type 1 relay nodes, defined in the LTE-Advanced standards, can control their cells with their own reference signals and have Radio Resource Management (RRM) functionalities. The rationale of RRM is to decide which resources are allocated to which users in order to optimise performance metrics such as throughput, fairness, power consumption and Quality of Service (QoS). RRM techniques in LTE-Advanced networks, including route selection, resource partitioning and resource scheduling, face new challenges brought by Type 1 relay nodes and have increasingly become a research focus in recent years. The research work presented in this thesis has made the following contributions. A service-aware adaptive bidirectional optimisation route selection strategy is proposed that considers both uplink and downlink optimisation according to service type. The load between different serving nodes, including eNBs and relay nodes, is rebalanced under fixed resource partitioning. Simulation results show that larger uplink and bidirectional throughputs can be achieved compared with existing route selection strategies. A distributed two-hop proportional fair resource allocation scheme is proposed in order to provide better two-hop end-to-end proportional fairness for all User Equipments (UEs), especially relay UEs. The resource partitioning is based on the cases of no Frequency Reuse (FR), full FR and partial FR patterns. Resource scheduling in the access links and backhaul links is considered jointly. 
A proportional fair joint route selection and resource partitioning algorithm is proposed to obtain an improved solution to the two-hop Adaptive Partial Frequency Reusing (APFR) problem with one relay node per cell. In addition, two special situations of APFR, full FR and no FR, are utilised to narrow the iterative search range of the proposed algorithm and reduce its complexity
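The proportional fair policy underlying these schemes can be sketched in its basic single-hop form (the thesis extends it to two-hop end-to-end fairness over access and backhaul links; the rates below are illustrative):

```python
def pf_schedule(inst_rates, avg_tp, beta=0.1):
    """Minimal proportional fair scheduler sketch: each resource block (RB)
    goes to the user with the highest ratio of instantaneous rate to
    long-term average throughput; averages follow an exponential average."""
    n_users = len(avg_tp)
    allocation = []
    for rates in inst_rates:  # rates[u] = achievable rate of user u on this RB
        winner = max(range(n_users), key=lambda u: rates[u] / avg_tp[u])
        allocation.append(winner)
        for u in range(n_users):
            served = rates[u] if u == winner else 0.0
            avg_tp[u] = (1.0 - beta) * avg_tp[u] + beta * served
    return allocation

# Two users, two RBs: user 1 is stronger on the first RB, user 0 on the second.
alloc = pf_schedule([[1.0, 2.0], [3.0, 1.0]], avg_tp=[1.0, 1.0])
```

Because the metric divides by each user's running average, a user who has been starved sees its ratio rise until it wins an RB, which is the fairness mechanism the two-hop extension has to preserve across both hops.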

    Practical interference mitigation for Wi-Fi systems

    Wi-Fi's popularity is also its Achilles' heel since in the dense deployments of multiple Wi-Fi networks typical in urban environments, concurrent transmissions interfere. The advent of networked devices with multiple antennas allows new ways to improve Wi-Fi's performance: a host can align the phases of the signals either received at or transmitted from its antennas so as to either maximize the power of the signal of interest through beamforming or minimize the power of interference through nulling. Theory predicts that these techniques should enable concurrent transmissions by proximal sender-receiver pairs, thus improving capacity. Yet practical challenges remain. Hardware platform limitations can prevent precise measurement of the wireless channel, or limit the accuracy of beamforming and nulling. The interaction between nulling and Wi-Fi's OFDM modulation, which transmits tranches of a packet's bits on distinct subcarriers, is subtle and can sacrifice the capacity gain expected from nulling. And in deployments where Wi-Fi networks are independently administered, APs must efficiently share channel measurements and coordinate their transmissions to null effectively. In this thesis, I design and experimentally evaluate beamforming and nulling techniques for use in Wi-Fi networks that address the aforementioned practical challenges. 
My contributions include:
- Cone of Silence (CoS): a system that allows a Wi-Fi AP equipped with a phased-array antenna but only a single 802.11g radio to mitigate interference from senders other than its intended one, thus boosting throughput;
- Cooperative Power Allocation (COPA): a system that efficiently shares channel measurements and coordinates transmissions between independent APs, and cooperatively allocates power so as to render received power across OFDM subcarriers flat at each AP's receiver, thus boosting throughput;
- Power Allocation for Distributed MIMO (PADM): a system that leverages intelligent power allocation to mitigate inter-stream interference in distributed MIMO wireless networks, thus boosting throughput
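The nulling idea can be illustrated in its simplest two-antenna receive form: choose combining weights orthogonal to the interferer's channel, so the interferer lands in the array's null while the desired signal is still received. The channel values below are arbitrary examples, not measurements from these systems.

```python
def null_weights(h_int):
    """Two-antenna receive nulling sketch: combining weights w chosen so
    that w^H h_int = 0, placing the interferer in the array's null."""
    return [h_int[1].conjugate(), -h_int[0].conjugate()]

def combine(w, x):
    """Receive combining y = w^H x."""
    return sum(wk.conjugate() * xk for wk, xk in zip(w, x))

h_sig = [1.0 + 0.5j, 0.2 - 1.0j]  # desired channel (arbitrary example values)
h_int = [0.7 - 0.3j, 1.1 + 0.4j]  # interfering channel (arbitrary)
w = null_weights(h_int)
interference_gain = abs(combine(w, h_int))  # zero by construction
signal_gain = abs(combine(w, h_sig))        # desired signal still received
```

In practice this construction must be repeated per OFDM subcarrier, since the channel (and hence the null direction) varies across subcarriers, which is exactly the OFDM interaction the thesis calls subtle.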

    Device-to-Device Communication and Multihop Transmission for Future Cellular Networks

    Next generation wireless networks, i.e. 5G, aim to provide multi-Gbps data traffic in order to satisfy the increasing demand for high-definition video, among other high data rate services, as well as the exponential growth in mobile subscribers. To achieve this dramatic increase in data rates, current research is focused on improving the capacity of current 4G network standards, based on Long Term Evolution (LTE), before radical changes are exploited, which could include acquiring additional/new spectrum. The LTE network has a reuse factor of one; hence neighbouring cells/sectors use the same spectrum, making cell edge users vulnerable to inter-cell interference. In addition, wireless transmission is commonly hindered by fading and pathloss. In this direction, this thesis focuses on improving the performance of cell edge users in LTE and LTE-Advanced (LTE-A) networks by initially implementing a new Coordinated Multi-Point (CoMP) algorithm to mitigate cell edge user interference. Subsequently, Device-to-Device (D2D) communication is investigated as the enabling technology for maximising Resource Block (RB) utilisation in current 4G and emerging 5G networks. It is demonstrated that, as an extension to the above, novel power control algorithms to reduce the required D2D transmit power, together with multihop transmission for relaying D2D traffic, can further enhance network performance. To be able to develop the aforementioned technologies and evaluate the performance of new algorithms in emerging network scenarios, a beyond-the-state-of-the-art LTE system-level simulator (SLS) was implemented. The new simulator includes Multiple-Input Multiple-Output (MIMO) antenna functionalities, comprehensive channel models (such as the Wireless World Initiative New Radio II, i.e. WINNER II, model) and adaptive modulation and coding schemes to accurately emulate the LTE and LTE-A network standards. 
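An adaptive modulation and coding lookup of the kind such simulators use can be sketched as a simple threshold table. The SINR thresholds and spectral efficiencies below are illustrative assumptions, not the 3GPP CQI/MCS tables.

```python
def select_mcs(sinr_db):
    """Toy adaptive modulation and coding sketch. Thresholds and
    efficiencies are illustrative, not the 3GPP CQI/MCS tables."""
    table = [  # (min SINR in dB, scheme, bits/symbol x code rate)
        (18.0, "64QAM r3/4", 4.5),
        (12.0, "16QAM r1/2", 2.0),
        (5.0,  "QPSK r1/2",  1.0),
    ]
    for threshold, scheme, efficiency in table:
        if sinr_db >= threshold:  # pick the highest-rate scheme supported
            return scheme, efficiency
    return "no transmission", 0.0
```

A system-level simulator applies a mapping like this per UE per scheduling interval, which is how per-UE SINR distributions translate into the throughput results quoted below.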
Additionally, a novel interference modelling scheme using the ‘wrap around’ technique was proposed and implemented that maintains the topology of flat-surfaced maps, allowing use with cell planning tools while obtaining accurate and timely results in the SLS compared to the few existing platforms. For the proposed CoMP algorithm, an adaptive beamforming technique was employed to reduce interference on cell edge UEs by applying Coordinated Scheduling (CoSH) between cooperating cells. Simulation results show up to a 2-fold improvement in throughput, and also show an SINR gain for the cell edge UEs in the cooperating cells. Furthermore, D2D communication underlaying the LTE network (and future generations of wireless networks) was investigated. The technology exploits the proximity of users in a network to achieve higher data rates with maximum RB utilisation (as the technology reuses the cellular RBs simultaneously), while taking some load off the Evolved Node B (eNB) through direct communication between User Equipment (UE). Simulation results show that the proximity and transmission power of D2D transmission yield high performance gains for a D2D receiver, demonstrated to be better than those of cellular UEs with better channel conditions or in close proximity to the eNB. The impact of interference from the simultaneous transmissions, however, impedes the achievable data rates of cellular UEs in the network, especially at the cell edge. Thus, a power control algorithm was proposed to mitigate the impact of interference in the hybrid network (a network consisting of both cellular and D2D UEs). It was implemented by setting a minimum SINR threshold so that the cellular UEs achieve a minimum performance, and equally a maximum SINR threshold to establish fairness for the D2D transmission as well. 
Simulation results show an increase in cell edge throughput and a notable improvement in the overall SINR distribution of UEs in the hybrid network. Additionally, multihop transmission for D2D UEs was investigated in the hybrid network: traditionally, the scheme is implemented to relay cellular traffic in a homogeneous network. Contrary to most current studies, where D2D UEs are employed to relay cellular traffic, the use of idle nodes to relay D2D traffic was implemented uniquely in this thesis. Simulation results show an improvement in D2D receiver throughput with multihop transmission, significantly better than the same UEs' performance with an equivalent distance between the D2D pair when using single hop transmission
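The dual-threshold power control idea described above can be sketched as a step-up/step-down rule applied each iteration. The thresholds, step size and power values below are illustrative assumptions, not the thesis's parameters.

```python
def d2d_power_step(p_tx_dbm, cell_sinr_db, d2d_sinr_db,
                   sinr_min=5.0, sinr_max=25.0, step_db=1.0):
    """One iteration of a toy dual-threshold rule (all values illustrative):
    back the D2D transmitter off when the cellular victim falls below its
    minimum SINR or the D2D link exceeds its fairness cap; otherwise power
    up while headroom remains on both thresholds."""
    if cell_sinr_db < sinr_min or d2d_sinr_db > sinr_max:
        return p_tx_dbm - step_db   # protect the cellular UE / enforce the cap
    if cell_sinr_db > sinr_min and d2d_sinr_db < sinr_max:
        return p_tx_dbm + step_db   # headroom available on both sides
    return p_tx_dbm                 # sitting on a threshold: hold power
```

The lower threshold guarantees the cellular UE a minimum performance, while the upper threshold stops the D2D pair from monopolising the shared RB, which is the fairness property the abstract describes.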

    Self-organization for 5G and beyond mobile networks using reinforcement learning

    The next generations of mobile networks, 5G and beyond, must overcome current networks' limitations as well as improve network performance. Some of the requirements envisioned for future mobile networks are: addressing the massive growth required in coverage, capacity and traffic; providing better quality of service and experience to end users; supporting ultra high data rates and reliability; and ensuring latency as low as one millisecond, among others. In order for future networks to meet all of these stringent requirements, a promising concept has emerged: Self-Organising Networks (SONs). SON consists of making mobile networks more adaptive and autonomous, and is divided into three main branches depending on the use-case, namely: self-configuration, self-optimisation, and self-healing. SON is a very promising and broad concept, and in order to enable it, more intelligence needs to be embedded in the mobile network. One possible solution is the utilisation of machine learning (ML) algorithms. ML has many branches, such as supervised learning, unsupervised learning and Reinforcement Learning (RL), and all can be used in different SON use-cases. The objectives of this thesis are to explore different RL techniques in the context of SONs, more specifically in self-optimisation use-cases. First, the use-case of user-cell association in future heterogeneous networks is analysed and optimised. This scenario considers not only Radio Access Network (RAN) constraints, but also backhaul constraints. Based on this, a distributed solution utilising RL is proposed and compared with other state-of-the-art methods. Results show that the proposed RL algorithm outperforms current ones and is able to achieve better user satisfaction, while minimising the number of users in outage. Another objective of this thesis is the evaluation of Unmanned Aerial Vehicles (UAVs) to optimise cellular networks. 
It is envisioned that UAVs can be utilised in different SON use-cases and integrated with RL algorithms to determine their optimal 3D positions in space according to network constraints. As such, two different mobile network scenarios are analysed: an emergency network and a pop-up network. The emergency scenario considers that a major natural disaster has destroyed most of the ground network infrastructure, and the goal is to provide coverage to the highest possible number of users using UAVs as access points. The second scenario simulates an event happening in a city where, because of ground network congestion, network capacity needs to be enhanced by the deployment of aerial base stations. For both scenarios different types of RL algorithms are considered, and their complexity and convergence are analysed. In both cases it is shown that UAVs coupled with RL are capable of solving network issues in an efficient and quick manner. Thus, due to its ability to learn from interaction with an environment and from previous experience, without knowing the dynamics of the environment or relying on previously collected data, RL is considered a promising solution to enable SON
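A tabular Q-learning loop of the kind used for such placement problems can be sketched on a hypothetical 1-D grid (the thesis works in 3D space; the coverage counts below are made up for illustration):

```python
import random

def learn_hover_point(coverage, episodes=500, alpha=0.5, gamma=0.9,
                      eps=0.2, seed=0):
    """Tabular Q-learning sketch on a hypothetical 1-D grid. State = grid
    position, actions = {left, stay, right}, reward = number of users
    covered at the resulting position."""
    rng = random.Random(seed)
    n = len(coverage)
    q = [[0.0, 0.0, 0.0] for _ in range(n)]
    for _ in range(episodes):
        s = rng.randrange(n)  # random restart each episode
        for _ in range(20):
            if rng.random() < eps:  # epsilon-greedy exploration
                a = rng.randrange(3)
            else:
                a = max(range(3), key=lambda k: q[s][k])
            s2 = min(n - 1, max(0, s + a - 1))  # a - 1 maps to {-1, 0, +1}
            q[s][a] += alpha * (coverage[s2] + gamma * max(q[s2]) - q[s][a])
            s = s2
    # Greedy rollout from one edge to read off the learned hover point.
    s = 0
    for _ in range(n):
        s = min(n - 1, max(0, s + max(range(3), key=lambda k: q[s][k]) - 1))
    return s

# Users covered at each candidate position; the peak is at index 3.
best = learn_hover_point([1, 2, 4, 9, 4, 2, 0])
```

Note that the update uses only sampled transitions, never a model of the environment, which is the property the abstract highlights: the UAV learns its hover point purely from interaction.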

    Cooperative Uplink Inter-Cell Interference (ICI) Mitigation in 5G Networks

    In order to support the new paradigm shift of fifth generation (5G) mobile communication, radically different network architectures, associated technologies and network operation algorithms need to be developed compared to existing fourth generation (4G) cellular solutions. The evolution toward 5G mobile networks will be characterised by an increasing number of wireless devices, increasing device and service complexity, and the requirement to access mobile services ubiquitously. To realise the dramatic increase in data rates in particular, research is focused on improving the capacity of current, Long Term Evolution (LTE)-based, 4G network standards, before radical changes are exploited, which could include acquiring additional spectrum. The LTE network has a reuse factor of one; hence neighbouring cells/sectors use the same spectrum, making cell-edge users vulnerable to heavy inter-cell interference in addition to other factors such as fading and path-loss. In this direction, this thesis focuses on improving the performance of cell-edge users in LTE and LTE-Advanced networks by initially implementing a new Coordinated Multi-Point (CoMP) technique to support future 5G networks, using smart antennas to mitigate cell-edge user interference in the uplink. Subsequently, a novel cooperative uplink inter-cell interference mitigation algorithm based on joint reception at the base station using receiver adaptive beamforming is investigated. Interference mitigation in a heterogeneous environment for Device-to-Device (D2D) communication underlaying the cellular network is then investigated as the enabling technology for maximising resource block (RB) utilisation in emerging 5G networks. Exploiting the proximity of users in a network to achieve higher data rates with maximum RB utilisation (as the technology reuses the cellular RBs simultaneously), while taking some load off the evolved Node B (eNodeB), i.e. 
by direct communication between User Equipment (UE), has been explored. Simulation results show that the proximity and transmission power of D2D transmission yield high performance gains for D2D receivers, demonstrated to be better than those of cellular UEs with better channel conditions or in close proximity to the eNodeB. It is finally demonstrated that the application, as an extension to the above, of a novel receiver beamforming technique to reduce interference from D2D users can further enhance network performance. To be able to develop the aforementioned technologies and evaluate the performance of new algorithms in emerging network scenarios, a beyond-the-state-of-the-art LTE system-level simulator (SLS) was implemented. The new simulator includes Multiple-Input Multiple-Output (MIMO) antenna functionalities, comprehensive channel models (such as the Wireless World Initiative New Radio II, i.e. WINNER II, model) and adaptive modulation and coding schemes to accurately emulate the LTE and LTE-A network standards