116 research outputs found

    Analysis of back propagation and radial basis function neural networks for handover decisions in wireless communication

    In mobile systems, handoff is a vital process: the transfer of an ongoing call from one base station (BS) to another. The handover technique is essential for maintaining quality of service (QoS), and handover algorithms based on neural networks, fuzzy logic and similar methods can be used to keep QoS as high as possible. In this paper, back propagation networks and radial basis function networks are proposed for making handover decisions in wireless communication networks. The performance of these classifiers is evaluated in terms of the number of neurons in the hidden layer, training time and classification accuracy. The proposed approach shows that the radial basis function neural network gives better results for making handover decisions in wireless heterogeneous networks, with a classification accuracy of 90%.
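
    As a concrete illustration of the classifier the paper favours, the sketch below trains a small radial basis function network on synthetic handover data: k-means picks the Gaussian centres and least squares fits the output weights. The input features, data distribution and decision rule are illustrative assumptions, not the paper's setup.

```python
# Minimal RBF-network handover classifier sketch; data is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Assumed features: [RSS_serving_dBm, RSS_target_dBm, speed_mps]
X = rng.normal([-85.0, -80.0, 10.0], [6.0, 6.0, 5.0], size=(500, 3))
y = (X[:, 1] - X[:, 0] > 3.0).astype(float)  # hand over if target is stronger

# Hidden layer: Gaussian units centred by k-means
k = 10
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = np.mean(np.linalg.norm(X[:, None] - centers, axis=2))

def rbf_features(X):
    d2 = np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

# Output layer: linear weights by least squares (classic RBF training)
Phi = rbf_features(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

pred = (rbf_features(X) @ w > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```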

    Comparison of vertical handover decision-based techniques in heterogeneous networks

    Industry leaders are currently setting out standards for 5G networks, projected for 2020 or even sooner. Future-generation networks will be heterogeneous in nature because no single network type can optimally meet all the rapid changes in customer demands. Heterogeneous networks are typically characterized by their network architecture: base stations of varying transmission power, diverse transmission solutions and the deployment of a mix of technologies (multiple radio access technologies). In heterogeneous networks, the process by which a mobile node switches from one radio access technology to another to preserve quality-of-service continuity is termed vertical handover, or vertical handoff. Dropped active calls and service discontinuity experienced by mobile users can be attributed to delayed handover or to an outright unsuccessful handover procedure. This dissertation analyses the performance of a fuzzy-based VHO algorithm scheme in an integrated Wi-Fi, WiMAX, UMTS and LTE network using the OMNeT++ discrete event simulator. A loose-coupling network architecture is adopted, and the simulation results are analysed and compared for the two major categories of handover decision basis: multiple-criteria and single-criteria based handover methods. The key performance indices from the simulations showed better overall throughput, a lower call drop rate and shorter handover duration for the multiple-criteria based decision method compared to the single-criteria based technique. This work also touches on current trends, challenges in the area of seamless handover and initiatives for future networks (Next-Generation Heterogeneous Networks).
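
    The multiple-criteria decision logic compared in the dissertation can be sketched as a fuzzy-style scoring of candidate RATs. The membership shapes, criterion weights and candidate values below are illustrative assumptions, not the thesis parameters.

```python
# Illustrative multi-criteria vertical-handover scorer: triangular fuzzy
# memberships per criterion, combined by assumed weights; highest score wins.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: 0 at a, peak 1 at b, back to 0 at c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Candidate RATs: (RSS dBm, available bandwidth Mbps, monetary cost 0-1)
candidates = {
    "Wi-Fi": (-70.0, 30.0, 0.1),
    "WiMAX": (-80.0, 20.0, 0.4),
    "UMTS":  (-85.0,  5.0, 0.5),
    "LTE":   (-75.0, 40.0, 0.7),
}
weights = {"rss": 0.4, "bw": 0.4, "cost": 0.2}  # assumed operator policy

def score(rss, bw, cost):
    good_rss = tri(rss, -100.0, -60.0, -20.0)    # "signal is good"
    good_bw  = tri(bw, 0.0, 50.0, 100.0)         # "bandwidth is high"
    low_cost = tri(1 - cost, 0.0, 1.0, 2.0)      # "cost is low"
    return (weights["rss"] * good_rss + weights["bw"] * good_bw
            + weights["cost"] * low_cost)

best = max(candidates, key=lambda n: score(*candidates[n]))
print({n: round(float(score(*candidates[n])), 3) for n in candidates}, "->", best)
```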

    An intelligent call admission control algorithm for load balancing in 5G-satellite networks

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. Cellular networks are projected to deal with an immense rise in data traffic, an enormous and diverse set of devices, and advanced use cases in the near future; hence, future 5G networks are being developed to integrate not only 5G but also other radio access technologies (RATs). In addition to 5G, the user's device (UD) will be able to connect to the network via LTE, WiMAX, Wi-Fi, Satellite and other technologies. Satellite, in particular, has been suggested as a preferred network to support 5G use cases. Satellite networks are among the most sophisticated communication technologies, offering specific benefits in geographically dispersed and dynamic networks. Exploiting their inherent advantages in broadcasting capability, global coverage, decreased dependency on terrestrial infrastructure and high security, they enable highly efficient, effective and rapid network deployments. Satellites are better suited to large-scale communications than terrestrial communication networks: owing to their extensive service coverage and strong multilink transmission capabilities, they offer global high-speed connectivity and adaptable access systems. The convergence of 5G technology and satellite networks therefore marks a significant milestone in the evolution of global connectivity. However, this integration introduces a complex resource management problem, particularly in Satellite-Terrestrial Integrated Networks (STINs). The key issue is the efficient allocation of resources in STINs to enhance Quality of Service (QoS) for users. This issue originates from the vast number of users sharing these resources, the dynamic nature of the generated traffic, the scarcity of wireless spectrum and the random allocation of wireless channels. Resource allocation is therefore critical to ensure user satisfaction, fair traffic distribution, maximised throughput and minimised congestion. Load balancing is essential to guarantee an even distribution of traffic between the different RATs in a heterogeneous wireless network; this enables optimal utilisation of the radio resources and lowers the likelihood of call blocking and dropping. This research addresses the challenge through the development and evaluation of an intelligent call admission control (CAC) algorithm based on Enhanced Particle Swarm Optimization (EPSO). The primary aim is to design an EPSO-based CAC algorithm tailored specifically for 5G-satellite heterogeneous wireless networks. The algorithm's objectives include maximising the number of admitted calls while maintaining QoS for existing users, improving network resource utilisation, reducing congestion, ensuring fairness and enhancing user satisfaction. To achieve these objectives, a detailed research methodology is outlined, encompassing algorithm development, numerical simulations and comparative analysis. The proposed EPSO algorithm is benchmarked against alternative artificial intelligence and machine learning algorithms: the Artificial Bee Colony algorithm, the Simulated Annealing algorithm and the Q-Learning algorithm. Performance metrics such as throughput, call blocking rate and fairness are employed to evaluate the algorithms' efficacy in achieving the load-balancing objectives. The experimental findings yield insights into the performance of the EPSO-based CAC algorithm and its comparative advantages over the alternative techniques. Through rigorous analysis, this research elucidates the EPSO algorithm's strengths in dynamically adapting to changing network conditions, optimising resource allocation and ensuring equitable distribution of traffic among different RATs. The results show that the EPSO algorithm outperforms the other three algorithms in all scenarios. The contributions of this thesis extend beyond academic research, with potential societal implications including enhanced connectivity, efficiency and user experience in 5G-satellite heterogeneous wireless networks. By advancing intelligent resource management techniques, this research paves the way for improved network performance and reliability in the evolving landscape of wireless communication.
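
    The load-balancing idea behind an EPSO-based CAC can be sketched with a standard particle swarm searching for a traffic split across RATs. The specific enhancements, RAT capacities and fitness terms of the thesis algorithm are not public here, so the values below are assumptions.

```python
# Toy PSO sketch: particles are candidate traffic splits across three RATs;
# fitness rewards admitted calls and penalises uneven utilisation.
import numpy as np

rng = np.random.default_rng(1)
capacity = np.array([100.0, 60.0, 40.0])    # e.g. 5G, LTE, satellite (calls)
demand = 150.0                              # offered calls to distribute

def fitness(x):
    """Higher is better: admit calls, avoid overload, keep RAT loads even."""
    load = x * demand
    admitted = np.minimum(load, capacity).sum()
    util = np.minimum(load / capacity, 1.0)
    return admitted - 50.0 * np.std(util)   # fairness penalty (assumed weight)

n, dim, iters = 30, 3, 200
pos = rng.dirichlet(np.ones(dim), n)        # particles: traffic split per RAT
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for t in range(iters):
    w = 0.9 - 0.5 * t / iters               # decaying inertia
    r1, r2 = rng.random((2, n, dim))
    vel = w * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1e-6, 1.0)
    pos /= pos.sum(axis=1, keepdims=True)   # keep splits on the simplex
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best split:", gbest.round(3), "fitness:", round(float(fitness(gbest)), 2))
```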

    Optimization of Mobility Parameters using Fuzzy Logic and Reinforcement Learning in Self-Organizing Networks

    In this thesis, several optimization techniques for next-generation wireless networks are proposed to solve different problems in the field of Self-Organizing Networks and heterogeneous networks. The common basis of these problems is that network parameters are automatically tuned to deal with the specific problem. As the set of network parameters is extremely large, this work focuses mainly on parameters involved in mobility management. The proposed self-tuning schemes are based on Fuzzy Logic Controllers (FLC), whose strength lies in their capability to express knowledge in a way similar to human perception and reasoning. In those cases in which a mathematical approach is required to optimize the behavior of the FLC, the selected solution is Reinforcement Learning, since this methodology is especially appropriate for learning from interaction, which is essential in complex systems such as wireless networks. On this basis, firstly, a new Mobility Load Balancing (MLB) scheme is proposed to solve persistent congestion problems in next-generation wireless networks, in particular those due to an uneven spatial traffic distribution, which typically leads to an inefficient usage of resources. A key feature of the proposed algorithm is that not only the parameters but also the parameter-tuning strategy are optimized. Secondly, a novel MLB algorithm for enterprise femtocell scenarios is proposed. Such scenarios are characterized by the lack of a thorough deployment of these low-cost nodes, meaning that a more efficient use of radio resources can be achieved by applying effective MLB schemes. As in the previous problem, the optimization of the self-tuning process is also studied in this case. Thirdly, a new self-tuning algorithm for Mobility Robustness Optimization (MRO) is proposed. This study includes the impact of context factors such as system load and user speed, as well as a proposal for coordination between the designed MLB and MRO functions. Fourthly, a novel self-tuning algorithm for Traffic Steering (TS) in heterogeneous networks is proposed. The main features of the proposed algorithm are its flexibility to support different operator policies and its capability to adapt to network variations. Finally, with the aim of validating the proposed techniques, a dynamic system-level simulator for Long-Term Evolution (LTE) networks has been designed.
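
    The core loop the thesis builds on, a self-tuning agent learning a mobility parameter from interaction, can be sketched with plain Q-learning adjusting a handover offset. The cell model, state buckets and reward below are illustrative assumptions, and the fuzzy inference stage is reduced to a tabular policy for brevity.

```python
# Q-learning sketch for Mobility Load Balancing: learn to nudge a handover
# offset so that the load of two neighbouring cells stays balanced.
import numpy as np

rng = np.random.default_rng(2)
offsets = np.arange(-6, 7, 2)               # candidate HO offsets (dB)
n_states, n_actions = 5, 3                  # imbalance bucket; {-2, 0, +2} dB
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(offset):
    """Toy network: a positive offset pushes edge users to the lighter cell."""
    imbalance = rng.normal(0.4, 0.1) - 0.05 * offset   # load(c1) - load(c2)
    state = int(np.clip((imbalance + 1) * 2.5, 0, n_states - 1))
    reward = -abs(imbalance)                # reward even load
    return state, reward

offset_idx = len(offsets) // 2
state, _ = step(offsets[offset_idx])
for t in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    offset_idx = int(np.clip(offset_idx + (a - 1), 0, len(offsets) - 1))
    nxt, r = step(offsets[offset_idx])
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

print("learned offset:", offsets[offset_idx], "dB")
```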

    Learning and Reasoning Strategies for User Association in Ultra-dense Small Cell Vehicular Networks

    Recent vehicular ad hoc network research has focused on providing intelligent transportation services by employing information and communication technologies on road transport. It is now understood that the advanced demands of these services, such as reliable connectivity, high user throughput and ultra-low latency, cannot be met using traditional communication technologies. Consequently, this thesis reports on the application of artificial intelligence to user association as a technology enabler in ultra-dense small cell vehicular networks. In particular, the work focuses on mitigating mobility-related concerns and networking issues at different mobility levels by employing diverse heuristic as well as reinforcement learning (RL) methods. Firstly, driven by rapid fluctuations in the network topology and the radio environment, a conventional three-step-sequence user association policy is designed to highlight and explore the impact of vehicle speed and different performance indicators on network quality of service (QoS) and user experience. Secondly, inspired by control-theoretic models and dynamic programming, a real-time controlled-feedback user association approach is proposed. The algorithm adapts to the changing vehicular environment by employing derived network performance information as a heuristic, resulting in improved network performance. Thirdly, a sequence of novel RL-based user association algorithms is developed that employs a variable learning rate, a variable reward function and an adaptation of the control feedback framework to improve initial and steady-state learning performance. Furthermore, to accelerate the learning process and enhance the adaptability and robustness of the developed RL algorithms, heuristically accelerated RL and case-based transfer learning methods are employed. A comprehensive two-tier, event-based, system-level simulator, integrating a dynamic vehicular network, a highway and an ultra-dense small cell network, is developed. The model has enabled the analysis of user mobility effects on network performance across different mobility levels and has served as a firm foundation for evaluating the empirical properties of the investigated approaches.
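
    One ingredient of the thesis, RL-based user association with a variable learning rate, can be sketched as a bandit-style cell selection problem. The candidate cell rates and reward model below are synthetic assumptions, not the simulator's values.

```python
# Sketch of RL user association: epsilon-greedy cell selection with a
# visit-count (variable) learning rate on noisy throughput rewards.
import numpy as np

rng = np.random.default_rng(3)
mean_rate = np.array([12.0, 20.0, 8.0])     # Mbps offered by 3 candidate cells
Q = np.zeros(3)                             # value estimate per cell
visits = np.zeros(3)
eps = 0.1

for t in range(2000):
    a = rng.integers(3) if rng.random() < eps else int(Q.argmax())
    reward = rng.normal(mean_rate[a], 4.0)  # noisy throughput sample
    visits[a] += 1
    alpha = 1.0 / visits[a]                 # variable learning rate
    Q[a] += alpha * (reward - Q[a])

print("value estimates:", Q.round(2), "-> associate with cell", int(Q.argmax()))
```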

    Efficient Discovery and Utilization of Radio Information in Ultra-Dense Heterogeneous 3D Wireless Networks

    The emergence of new applications, industrial automation and the explosive growth of smart concepts have led to an environment of rapidly increasing device densification and service diversification. This upward trend requires the upcoming 6th-Generation (6G) and beyond communication systems to be globally available communication, computing and intelligent systems seamlessly connecting devices, services and infrastructure facilities. In such an environment, the scarcity of radio resources will rise to an extremely high level, compelling them to be utilized very efficiently. Timely action is therefore taken to move away from approximate site-specific 2-Dimensional (2D) network concepts in radio resource utilization and network planning, replacing them with more accurate 3-Dimensional (3D) network concepts that exploit spatially distributed, location-specific radio characteristics. Empowering this initiative, a framework is first developed to accurately estimate location-specific path loss parameters under dynamic environmental conditions in 3D small cell (SC) heterogeneous networks (HetNets), facilitating efficient radio resource management schemes by combining the crowdsensing data collection principle with Linear Algebra (LA) and machine learning (ML) techniques. According to the results, the gradient descent technique achieves the highest path loss parameter estimation accuracy, at over 98%. At a later stage, received signal power is calculated at slightly extended 3D communication distances beyond the cluster boundaries, based on the already-estimated propagation parameters, with an accuracy of over 74% for certain distances. Coordination in both device-network and network-network interactions is also a critical factor in efficient radio resource utilization while meeting Quality of Service (QoS) requirements in heavily congested future 3D SC HetNets. Overall communication performance enhancement through better utilization of spatially distributed opportunistic radio resources in a 3D SC is therefore addressed with device and network coordination, ML and Slotted-ALOHA principles, together with scheduling, power control and access prioritization schemes. Within this solution, several communication-related factors, such as the 3D spatial positions and QoS requirements of the devices in two co-located networks operating in the licensed band (LB) and unlicensed band (UB), are considered. To overcome the challenge of maintaining QoS under ongoing network densification with limited radio resources, cellular network traffic is offloaded to the UB. Approximately 70% better overall coordination efficiency at initial network access is achieved by the device-network coordinated, weighting-factor based prioritization scheme powered by the Q-learning (QL) principle over conventional schemes. Subsequently, coverage information of nearby dense NR-Unlicensed (NR-U) base stations (BSs) is investigated for better allocation and utilization of common location-specific, spatially distributed radio resources in the UB. Firstly, the problem of determining the received signal power at a given location due to a transmission by a neighboring NR-U BS is addressed with a deep regression neural network, enabling prediction of the received signal or interference power of a neighboring BS at any given location of a 3D cell. Subsequently, the problem of efficient radio resource management while dynamically utilizing UB spectrum for NR-U transmissions is addressed through an algorithm based on the double Q-learning (DQL) principle and device collaboration. With the estimated path loss parameters, the DQL-based method achieves over 200% faster algorithm convergence than conventional solutions.
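
    The first stage of the framework, estimating log-distance path loss parameters from crowdsensed measurements by gradient descent, can be sketched as follows. The synthetic data and the feature-centring trick are implementation assumptions, not details from the thesis.

```python
# Fit the log-distance model PL(d) = PL0 + 10*n*log10(d) + shadowing
# by gradient descent on the mean squared error.
import numpy as np

rng = np.random.default_rng(4)
true_pl0, true_n = 40.0, 3.2                # dB at 1 m, path loss exponent
d = rng.uniform(5.0, 200.0, 1000)           # 3D link distances (m)
pl = true_pl0 + true_n * 10 * np.log10(d) + rng.normal(0, 2.0, d.size)

x = 10 * np.log10(d)
xm = x.mean()
xc = x - xm                                 # centre feature for stable GD
b, n, lr = 0.0, 0.0, 0.01                   # b absorbs PL0 + n*mean(x)
for _ in range(5000):
    err = b + n * xc - pl                   # residuals of the linear model
    b -= lr * 2 * err.mean()                # gradient of MSE w.r.t. b
    n -= lr * 2 * (err * xc).mean()         # gradient of MSE w.r.t. n
pl0 = b - n * xm                            # undo the centring

print(f"estimated PL0={pl0:.2f} dB, n={n:.2f} (true {true_pl0}, {true_n})")
```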

    A comprehensive survey on radio resource management in 5G HetNets: current solutions, future trends and open issues

    5G network technologies are intended to accommodate innovative services and a large influx of data traffic with lower energy consumption and increased quality-of-service and user quality-of-experience levels. In order to meet 5G expectations, heterogeneous networks (HetNets) have been introduced. They involve the deployment of additional low-power nodes within the coverage area of conventional high-power nodes, placed closer to users in underlay HetNets. Due to the increased density of small-cell networks and radio access technologies, radio resource management (RRM) for potential 5G HetNets has emerged as a critical research avenue. It plays a pivotal role in enhancing spectrum utilization, load balancing and network energy efficiency. In this paper, we summarize the key challenges emerging in 5G HetNets, i.e. cross-tier interference, co-tier interference and user association with resource and power allocation (UA-RA-PA), and highlight their significance. In addition, we present a comprehensive survey of RRM schemes based on interference management (IM), UA-RA-PA and combined approaches (UA-RA-PA + IM). We introduce a taxonomy for individual (IM, UA-RA-PA) and combined approaches as a framework for systematically studying the existing schemes. These schemes are also qualitatively analyzed and compared to each other. Finally, challenges and opportunities for RRM in 5G are outlined, and design guidelines along with possible solutions for advanced mechanisms are presented.

    On the Intersection of Communication and Machine Learning

    The intersection of communication and machine learning is attracting increasing interest from both communities. On the one hand, the development of modern communication systems brings large amounts of data and high performance requirements, which challenge the classic analytical-derivation based study philosophy and encourage researchers to explore data-driven methods, such as machine learning, to solve problems of high complexity and large scale. On the other hand, the use of distributed machine learning makes communication cost one of the basic considerations in the design of machine learning algorithms and systems. In this thesis, we first explore the application of machine learning to one of the classic problems in wireless networks, resource allocation, for heterogeneous millimeter wave networks in highly dynamic environments, addressing practical concerns by providing an efficient online and distributed framework. In the second part, sampling-based, communication-efficient distributed learning algorithms are proposed that exploit the trade-off between local computation and total communication cost and come with good theoretical bounds. In more detail, this thesis makes the following contributions. First, we introduce a reinforcement learning framework to solve resource allocation problems in heterogeneous millimeter wave networks: the large state/action space is decomposed according to the topology of the network and solved by an efficient distributed message passing algorithm, and the inference process is further sped up by an online updating procedure. Second, we propose a distributed coreset-based boosting framework: an efficient coreset construction algorithm is designed based on the prior knowledge provided by clustering, the coreset is then integrated with boosting with an improved convergence rate, and the framework is extended to the distributed setting, where the communication cost is reduced by the good approximation properties of the coreset. Third, we propose a selective sampling framework to construct a subset of samples that effectively represents the model space: based on the prior distribution of the model space, or on a large number of samples drawn from it, we derive a computationally efficient method to construct such a subset by minimizing the error of classifying a classifier.
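
    The clustering-guided coreset idea can be sketched with a lightweight-coreset style construction: points are importance-sampled around a clustering statistic (here simply the data mean) and reweighted so the small set approximates the full one. This is a generic construction, not the thesis' exact algorithm.

```python
# Lightweight-coreset style sampling: importance proportional to a uniform
# term plus squared distance from the mean, with unbiased reweighting.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(10000, 2)) + rng.choice([-4.0, 4.0], size=(10000, 1))

mu = X.mean(axis=0)
d2 = ((X - mu) ** 2).sum(axis=1)
q = 0.5 / len(X) + 0.5 * d2 / d2.sum()      # importance of each point
idx = rng.choice(len(X), size=200, p=q, replace=True)
w = 1.0 / (len(idx) * q[idx])               # unbiased importance weights

# Sanity check: the weighted coreset mean approximates the full-data mean
approx = (w[:, None] * X[idx]).sum(axis=0) / w.sum()
print("full mean:", mu.round(3), "coreset mean:", approx.round(3))
```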