    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, a survey of the literature of the past fifteen years involving Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. For future networks to overcome the limitations of current cellular systems, more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each surveyed paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines are proposed on when to choose each ML algorithm for each SON function. Lastly, this work provides future research directions and new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.
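
    As a concrete illustration of the kind of learning solution such surveys classify, the following is a minimal, hypothetical sketch of tabular Q-learning applied to a toy SON self-optimization use case (tuning a cell's handover offset against a simulated load imbalance). The environment dynamics, states, actions, and reward below are invented for illustration and are not taken from the survey.

    import random

    # Toy SON self-optimization: learn a handover-offset action per load state.
    # States: discretized load-imbalance buckets (0 = balanced, 4 = very unbalanced).
    # Actions: decrease / keep / increase the handover offset.
    STATES = 5
    ACTIONS = 3
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    Q = [[0.0] * ACTIONS for _ in range(STATES)]

    def step(state, action):
        # Invented environment: raising the offset (action 2) sheds load,
        # lowering it (action 0) attracts load; plus random fluctuation.
        drift = action - 1                                    # -1, 0, +1
        nxt = min(STATES - 1, max(0, state - drift + random.choice([-1, 0, 1])))
        reward = -nxt                                         # prefer low imbalance
        return nxt, reward

    state = random.randrange(STATES)
    for _ in range(5000):
        # epsilon-greedy action selection
        if random.random() < EPS:
            action = random.randrange(ACTIONS)
        else:
            action = max(range(ACTIONS), key=lambda a: Q[state][a])
        nxt, reward = step(state, action)
        # standard Q-learning temporal-difference update
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

    print("learned greedy action per load state:",
          [max(range(ACTIONS), key=lambda a: Q[s][a]) for s in range(STATES)])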

    Edge and Central Cloud Computing: A Perfect Pairing for High Energy Efficiency and Low-latency

    In this paper, we study the coexistence and synergy between edge and central cloud computing in a heterogeneous cellular network (HetNet), which contains a multi-antenna macro base station (MBS), multiple multi-antenna small base stations (SBSs), and multiple single-antenna user equipment (UEs). The SBSs are empowered by edge clouds offering limited computing services for UEs, whereas the MBS provides high-performance central cloud computing services to UEs via a restricted multiple-input multiple-output (MIMO) backhaul to their associated SBSs. Under processing latency constraints at the central and edge networks, we aim to minimize the system energy consumption used for task offloading and computation. The problem is formulated by jointly optimizing the cloud selection, the UEs' transmit powers, the SBSs' receive beamformers, and the SBSs' transmit covariance matrices, which yields a mixed-integer, non-convex optimization problem. Based on methods such as the decomposition approach and the successive pseudoconvex approach, a tractable solution is proposed via an iterative algorithm. The simulation results show that our proposed solution achieves a significant performance gain over conventional schemes using the edge or central cloud alone. Moreover, with large-scale antennas at the MBS, the massive MIMO backhaul can significantly reduce the complexity of the proposed algorithm and obtain even better performance.
    Comment: Accepted in IEEE Transactions on Wireless Communications
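
    The joint design described above is mixed-integer and non-convex; as a much-simplified illustration of the cloud-selection component only, the toy sketch below greedily assigns each UE to the edge or central cloud by comparing the (invented) energy cost of meeting a latency deadline on each path. All rates, powers, and the greedy rule are assumptions made for illustration; the paper's actual algorithm additionally optimizes powers, beamformers, and covariance matrices via successive pseudoconvex approximation.

    import math

    # Toy task-offloading model (all parameters invented for illustration).
    T = 0.05                      # per-task latency budget (s)
    EDGE_CPU = 2e9                # edge cloud CPU rate (cycles/s)
    CENTRAL_CPU = 2e10            # central cloud CPU rate (cycles/s)
    BACKHAUL_RATE = 1e8           # MBS-SBS backhaul rate (bit/s)
    UPLINK_RATE = 4e7             # UE uplink rate (bit/s)
    TX_POWER = 0.2                # UE transmit power (W)
    CYCLES_PER_BIT = 100          # computation intensity

    tasks = [4e5, 6e5, 8e5, 3e5]  # task sizes in bits, one per UE

    def edge_energy(bits):
        # Edge path: uplink transmission, then computation at the SBS edge cloud.
        t_tx = bits / UPLINK_RATE
        t_cpu = bits * CYCLES_PER_BIT / EDGE_CPU
        if t_tx + t_cpu > T:
            return math.inf       # deadline missed: infeasible on this path
        return TX_POWER * t_tx

    def central_energy(bits):
        # Central path adds a backhaul hop but computes much faster.
        t_tx = bits / UPLINK_RATE + bits / BACKHAUL_RATE
        t_cpu = bits * CYCLES_PER_BIT / CENTRAL_CPU
        if t_tx + t_cpu > T:
            return math.inf
        return TX_POWER * t_tx    # backhaul energy ignored in this toy model

    for i, bits in enumerate(tasks):
        e, c = edge_energy(bits), central_energy(bits)
        choice = "edge" if e <= c else "central"
        print(f"UE {i}: edge={e:.4e} J, central={c:.4e} J -> {choice}")

    Running this, small tasks stay at the energy-cheaper edge while large tasks must go to the central cloud to meet the deadline, which is the coexistence tradeoff the paper studies.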

    Seeing the Unobservable: Channel Learning for Wireless Communication Networks

    Wireless communication networks rely heavily on channel state information (CSI) to make informed decisions for signal processing and network operations. However, traditional CSI acquisition methods face many difficulties: pilot-aided channel training consumes a great deal of channel resources and reduces the opportunities for energy saving, while location-aided channel estimation suffers from inaccurate and insufficient location information. In this paper, we propose a novel channel learning framework that tackles these difficulties by inferring unobservable CSI from the observable CSI. We formulate this framework theoretically and illustrate a special case in which the learnability of the unobservable CSI can be guaranteed. Possible applications of channel learning are then described, including cell selection in multi-tier networks, device discovery for device-to-device (D2D) communications, and end-to-end user association for load balancing. We also propose a neural-network-based algorithm for the cell selection problem in multi-tier networks. The performance of this algorithm is evaluated using a geometry-based stochastic channel model (GSCM). In settings with 5 small cells, the average cell-selection accuracy is 73%, only a 3.9% loss compared with a location-aided algorithm that requires genuine location information.
    Comment: 6 pages, 4 figures, accepted by GlobeCom'1
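
    A minimal sketch of the kind of neural-network cell-selection classifier the abstract describes: observable macro-tier CSI features are mapped to the index of the best small cell. The synthetic data below stands in for GSCM-generated channels, and the network size, features, and labels are all assumptions for illustration, not the authors' configuration.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    N_UES, N_SMALL_CELLS, N_FEATURES = 5000, 5, 8

    # Synthetic stand-in for GSCM data: observable macro-tier CSI features per UE.
    X = rng.normal(size=(N_UES, N_FEATURES))

    # Synthetic labels: index of the small cell with the best (unobserved) channel,
    # made correlated with the observable features via a random linear map.
    W = rng.normal(size=(N_FEATURES, N_SMALL_CELLS))
    y = np.argmax(X @ W + 0.5 * rng.normal(size=(N_UES, N_SMALL_CELLS)), axis=1)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # Small multilayer perceptron as the cell-selection classifier.
    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"cell-selection accuracy: {clf.score(X_te, y_te):.3f}")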

    An Overview on Application of Machine Learning Techniques in Optical Networks

    Today's telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users' behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and to take decisions pertaining to the proper functioning of the networks. Among these tools, Machine Learning (ML) is regarded as one of the most promising methodological approaches for performing network-data analysis and enabling automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth in network complexity that optical networks have faced in recent years. This complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes) enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy; to stimulate further work in this area, we conclude the paper by proposing new possible research directions.
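
    To make this kind of application concrete, below is a small, hypothetical example of one ML use case commonly discussed in this area: predicting whether a candidate lightpath will meet a quality-of-transmission (QoT) threshold from simple path features. The features, synthetic data, and model choice are invented for illustration and are not drawn from the survey.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    N = 4000

    # Invented lightpath features: total length (km), number of spans,
    # number of traversed links, and modulation order (bits/symbol).
    length = rng.uniform(50, 2000, N)
    spans = np.round(length / 80) + rng.integers(0, 3, N)
    links = rng.integers(1, 10, N)
    mod_order = rng.choice([2, 3, 4, 6], N)
    X = np.column_stack([length, spans, links, mod_order])

    # Invented ground truth: longer paths with higher-order modulation are
    # less likely to meet the QoT (e.g., BER) threshold.
    margin = 25 - 0.01 * length - 2.5 * mod_order - 0.3 * links
    y = (margin + rng.normal(0, 2, N) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
    clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
    print(f"QoT-classification accuracy: {clf.score(X_te, y_te):.3f}")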