    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. (Comment: 46 pages, 22 figures)
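
    As a concrete illustration of how reinforcement learning maps onto a wireless decision problem of the kind surveyed above, the sketch below applies tabular Q-learning to a toy cognitive-radio channel-selection task. The channel idle probabilities, learning rate and exploration rate are illustrative assumptions, not values taken from the article.

```python
import numpy as np

# Tabular Q-learning on a toy cognitive-radio channel-selection task:
# each slot the agent picks one of n_channels and gets reward 1 if the
# channel is idle. Idle probabilities and hyperparameters are assumptions.

rng = np.random.default_rng(0)
n_channels = 4
idle_prob = np.array([0.2, 0.5, 0.7, 0.9])   # hypothetical channel statistics

q = np.zeros(n_channels)                     # bandit-style value per channel
alpha, epsilon = 0.1, 0.1                    # learning rate, exploration rate

for t in range(5000):
    # epsilon-greedy channel choice
    a = rng.integers(n_channels) if rng.random() < epsilon else int(np.argmax(q))
    reward = float(rng.random() < idle_prob[a])   # 1 if the chosen channel was idle
    q[a] += alpha * (reward - q[a])               # incremental value update

print("learned channel values:", np.round(q, 2))  # should approach idle_prob
```

    Over repeated slots the learned values track the unknown idle probabilities, so the agent settles on the most available channel, which is the basic mechanism behind RL-based spectrum access.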

    Self-organised admission control for multi-tenant 5G networks

    The vision of future 5G is a highly heterogeneous network at multiple levels, including multiple Radio Access Technologies (RATs), multiple cell layers, multiple spectrum bands, and multiple types of devices and services. Consequently, the overall RAN planning and optimization processes, which are a key point for the success of the 5G concept, will exhibit tremendous complexity. In this direction, legacy 2G/3G/4G systems have already started down the path towards a higher degree of automation in planning and optimization through the introduction of Self-Organising Network (SON) functionalities. SON refers to a set of features and capabilities designed to reduce or remove the need for manual activities in the lifecycle of the network. With the introduction of SON, classical manual planning, deployment, optimization and maintenance activities can be replaced and/or supported by more autonomous and automated processes, operating costs can be reduced, and human errors minimized. In this work, a self-organizing admission control algorithm for multi-tenant 5G networks is proposed and developed with novel artificial intelligence techniques. A simulation-based analysis is presented to assess the improvements of the proposed approach with respect to a baseline scheme.
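
    To make the admission-control idea more tangible, the sketch below shows how a simple tabular Q-learner could decide whether to admit incoming sessions given a discretised cell load. The load model, reward shaping and hyperparameters are hypothetical placeholders; the paper's actual self-organising scheme is not reproduced here.

```python
import numpy as np

# Toy Q-learning admission controller: the state is the discretised cell load,
# the action is reject (0) or admit (1). Traffic model, rewards and
# hyperparameters are hypothetical, not the paper's scheme.

rng = np.random.default_rng(1)
capacity = 10                                # maximum simultaneous sessions
n_states, n_actions = capacity + 1, 2        # load levels 0..capacity
q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(load, action):
    """Admitting earns revenue unless the cell is full, which is penalised."""
    if action == 1:
        return 1.0 if load < capacity else -5.0
    return 0.0

load = 0
for t in range(20000):
    s = load
    a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(q[s]))
    r = reward(load, a)
    if a == 1 and load < capacity:
        load += 1                            # admitted session occupies a slot
    if load > 0 and rng.random() < 0.3:
        load -= 1                            # random session departure
    q[s, a] += alpha * (r + gamma * np.max(q[load]) - q[s, a])

# positive values favour admission at that load level, negative favour rejection
print("admit minus reject value per load:", np.round(q[:, 1] - q[:, 0], 2))
```

    A multi-tenant variant would extend the state with per-tenant occupancy and SLA information, but the admit/reject value comparison would work the same way.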

    On the Intersection of Communication and Machine Learning

    The intersection of communication and machine learning is attracting increasing interest from both communities. On the one hand, the development of modern communication systems brings large amounts of data and high performance requirements, which challenge the classic analytical-derivation-based study philosophy and encourage researchers to explore data-driven methods, such as machine learning, to solve problems of high complexity and large scale. On the other hand, the use of distributed machine learning introduces the communication cost as one of the basic considerations in the design of machine learning algorithms and systems. In this thesis, we first explore the application of machine learning to one of the classic problems in wireless networks, resource allocation, for heterogeneous millimeter-wave networks in highly dynamic environments. We address the practical concerns by providing an efficient online and distributed framework. In the second part, sampling-based communication-efficient distributed learning algorithms are proposed. We exploit the trade-off between local computation and total communication cost and propose algorithms with good theoretical bounds. In more detail, this thesis makes the following contributions. First, we introduce a reinforcement learning framework to solve resource allocation problems in heterogeneous millimeter-wave networks. The large state/action space is decomposed according to the topology of the network and solved by an efficient distributed message-passing algorithm. We further speed up the inference process with an online updating procedure. Second, we propose a distributed coreset-based boosting framework. An efficient coreset construction algorithm is proposed based on the prior knowledge provided by clustering, and the coreset is then integrated with boosting, yielding an improved convergence rate. We extend the proposed boosting framework to the distributed setting, where the communication cost is reduced thanks to the good approximation quality of the coreset. Third, we propose a selective sampling framework to construct a subset of samples that effectively represents the model space. Based on the prior distribution of the model space, or on a large number of samples drawn from it, we derive a computationally efficient method to construct such a subset by minimizing the error of classifying a classifier.
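
    The clustering-guided coreset construction mentioned in the second contribution can be illustrated with a generic sensitivity-sampling recipe: points far from their cluster centre (and points in small clusters) are sampled with higher probability and then reweighted so that weighted sums over the coreset approximate sums over the full data. The sketch below follows that generic recipe under assumed data and parameters; it is not necessarily the exact construction used in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

# Clustering-guided coreset via importance sampling: each point's sampling
# probability combines its distance to its cluster centre with a per-cluster
# floor; sampled points get weights 1/(m * p_i). Data and sizes are assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2)) + rng.choice([-4.0, 0.0, 4.0], size=(5000, 1))

k, m = 3, 200                                  # clusters / coreset size
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
d2 = np.sum((X - km.cluster_centers_[km.labels_]) ** 2, axis=1)

cluster_size = np.bincount(km.labels_, minlength=k)
sens = d2 / d2.sum() + 1.0 / (k * cluster_size[km.labels_])   # sensitivity proxy
p = sens / sens.sum()

idx = rng.choice(len(X), size=m, replace=True, p=p)
coreset, weights = X[idx], 1.0 / (m * p[idx])

# sanity check: weighted coreset statistics should approximate full-data ones
print("full mean   :", X.mean(axis=0))
print("coreset mean:", np.average(coreset, axis=0, weights=weights))
```

    In a distributed setting, each worker would build such a weighted summary locally and only the small coreset (rather than the raw data) would be communicated, which is where the communication savings come from.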

    Recent advances in radio resource management for heterogeneous LTE/LTE-A networks

    As heterogeneous networks (HetNets) emerge as one of the most promising developments toward realizing the target specifications of Long Term Evolution (LTE) and LTE-Advanced (LTE-A) networks, radio resource management (RRM) research for such networks has, in recent times, been intensively pursued. However, recent research mainly concentrates on interference mitigation, while other RRM aspects, such as radio resource utilization, fairness, complexity, and QoS, have not been given much attention. In this paper, we aim to provide an overview of the key challenges arising from HetNets and highlight their importance. Subsequently, we present a comprehensive survey of the RRM schemes that have been studied in recent years for LTE/LTE-A HetNets, with a particular focus on those for femtocells and relay nodes. Furthermore, we classify these RRM schemes according to their underlying approaches. In addition, these RRM schemes are qualitatively analyzed and compared to each other. We also identify a number of potential research directions for future RRM development. Finally, we discuss the shortcomings of current RRM research and the importance of multi-objective RRM studies.

    Learning Decentralized Wireless Resource Allocations with Graph Neural Networks

    We consider the broad class of decentralized optimal resource allocation problems in wireless networks, which can be formulated as constrained statistical learning problems with a localized information structure. We develop the use of Aggregation Graph Neural Networks (Agg-GNNs), which process a sequence of delayed and potentially asynchronous graph-aggregated state information obtained locally at each transmitter from multi-hop neighbors. We further utilize model-free primal-dual learning methods to optimize performance subject to constraints in the presence of the delay and asynchrony inherent to decentralized networks. We demonstrate a permutation equivariance property of the resulting resource allocation policy that can be shown to facilitate transference to dynamic network configurations. The proposed framework is validated with numerical simulations that exhibit superior performance to baseline strategies. (Comment: 13 pages, 13 figures)
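
    A minimal numerical sketch of the aggregation idea: each transmitter forms a sequence of multi-hop aggregated states by repeatedly applying a row-normalised graph shift operator to the local node states, and a filter with taps shared across all nodes maps that sequence to a transmit-power decision. The graph, aggregation depth, filter taps and sigmoid readout are illustrative assumptions rather than the paper's exact Agg-GNN architecture or its primal-dual training.

```python
import numpy as np

# Aggregation sequence + shared filter sketch: y_k = S^k x collects k-hop
# aggregated state at every transmitter, and shared taps theta map the
# sequence to a power level. All quantities are illustrative assumptions.

rng = np.random.default_rng(0)
n, K = 10, 3                                 # transmitters, aggregation depth
S = rng.random((n, n))
np.fill_diagonal(S, 0.0)                     # hypothetical interference graph
S /= S.sum(axis=1, keepdims=True)            # row-normalised graph shift operator
x = rng.random(n)                            # local states, e.g. queue lengths

# aggregation sequence: column k holds S^k x, obtainable via k local exchanges
Y = np.stack([np.linalg.matrix_power(S, k) @ x for k in range(K + 1)], axis=1)

theta = rng.normal(scale=0.5, size=K + 1)    # filter taps shared by all nodes
p_max = 1.0
power = p_max / (1.0 + np.exp(-(Y @ theta))) # sigmoid keeps 0 < p < p_max

print("per-transmitter power allocation:", np.round(power, 3))
```

    Because the taps are shared across nodes, relabelling the transmitters simply permutes the output powers, which is the flavour of permutation equivariance the paper exploits for transference to new network configurations.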

    Graph neural network-based cell switching for energy optimization in ultra-dense heterogeneous networks

    The development of ultra-dense heterogeneous networks (HetNets) will cause a significant rise in energy consumption with large-scale base station (BS) deployments, requiring cellular networks to be more energy efficient to reduce operational expense and promote sustainability. Cell switching is an effective method to achieve energy efficiency goals, but traditional heuristic cell switching algorithms are computationally demanding and have limited generalization abilities for ultra-dense HetNet applications, motivating the use of machine learning techniques for adaptive cell switching. Graph neural networks (GNNs) are powerful deep learning models with strong generalization abilities but have received little attention for cell switching. This paper proposes a GNN-based cell switching solution (GBCSS) that has a smaller computational complexity than existing heuristic algorithms. The presented performance evaluation uses the Milan telecommunication dataset based on real-world call detail records, comparing GBCSS with a traditional exhaustive search (ES) algorithm, a state-of-the-art learning-based algorithm, and a baseline without cell switching. Results indicate that GBCSS achieves a 10.41% energy efficiency gain compared with the baseline and reaches 75.76% of the optimal performance obtained with the ES algorithm. The results also demonstrate GBCSS's significant scalability and ability to generalize to differing load conditions and numbers of BSs, suggesting the approach is well suited to ultra-dense HetNet deployments.
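
    The sketch below conveys the flavour of GNN-based cell switching with a single round of mean-aggregation message passing over a base-station neighbourhood graph, followed by a shared readout that scores each cell for switch-off. The graph, the load/power features and the untrained random weights are toy assumptions; this is not the GBCSS model and does not use the Milan dataset.

```python
import numpy as np

# Toy GNN-style cell switching: one mean-aggregation message-passing layer
# over a BS neighbourhood graph, then a shared readout scoring each BS for
# switch-off. Graph, features and random weights are placeholder assumptions.

rng = np.random.default_rng(0)
n_bs = 6
A = (rng.random((n_bs, n_bs)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric adjacency, no self-loops
X = np.column_stack([rng.random(n_bs),             # normalised traffic load
                     rng.uniform(0.3, 1.0, n_bs)]) # normalised static power draw

W_nbr = rng.normal(scale=0.5, size=(2, 4))         # weights on aggregated neighbours
W_self = rng.normal(scale=0.5, size=(2, 4))        # weights on the BS's own features
w_out = rng.normal(scale=0.5, size=4)              # readout shared by all BSs

deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
H = np.tanh(((A @ X) / deg) @ W_nbr + X @ W_self)  # one message-passing layer
p_off = 1.0 / (1.0 + np.exp(-(H @ w_out)))         # switch-off probability per BS

print("switch-off decision per BS:", (p_off > 0.5).astype(int))
```

    Because the same weights are applied at every base station, the model's cost grows with the number of BSs and edges rather than with an exhaustive enumeration of on/off combinations, which is the source of the complexity advantage over exhaustive search.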