
    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, a survey of the literature of the past fifteen years involving Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. In order for future networks to overcome the current limitations and address the issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self-Organizing Network (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each paper in terms of its learning solution, together with illustrative examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, a comparison between the most commonly found ML algorithms in terms of certain SON metrics is performed, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work also discusses future research directions and the new paradigms that the use of more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, to fully enable the concept of SON in the near future.

    Optimization Modeling and Machine Learning Techniques Towards Smarter Systems and Processes

    The continued penetration of technology into our daily lives has led to the emergence of the concept of Internet-of-Things (IoT) systems and networks. An increasing number of enterprises and businesses are adopting IoT-based initiatives, expecting that they will result in a higher return on investment (ROI) [1]. However, adopting such technologies poses many challenges. One challenge is improving the performance and efficiency of such systems by properly allocating the available and scarce resources [2, 3]. A second challenge is making use of the massive amount of data generated to help make smarter and more informed decisions [4]. A third challenge is protecting such devices and systems given the surge in security breaches and attacks in recent times [5]. To that end, this thesis proposes the use of various optimization modeling and machine learning techniques in three different systems, namely wireless communication systems, learning management systems (LMSs), and computer network systems. In particular, the first part of the thesis proposes optimization modeling techniques to improve the aggregate throughput and power efficiency of a wireless communication network. The second part of the thesis proposes the use of unsupervised machine learning clustering techniques, integrated into LMSs, to identify unengaged students based on their engagement with material in an e-learning environment. Lastly, the third part of the thesis suggests the use of exploratory data analytics, unsupervised machine learning clustering, and supervised machine learning classification techniques to identify malicious/suspicious domain names in a computer network setting. The main contributions of this thesis can be divided into three broad parts.
The first is developing optimal and heuristic scheduling algorithms that improve the performance of wireless systems in terms of throughput and power by combining wireless resource virtualization with device-to-device and machine-to-machine communications. The second is using unsupervised machine learning clustering and association algorithms to determine an appropriate engagement-level model for blended e-learning environments and to study the relationship between engagement and academic performance in such environments. The third is developing a supervised ensemble learning classifier to detect malicious/suspicious domain names that achieves high accuracy and precision.
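The unsupervised clustering idea in the second part can be sketched in a few lines. The engagement features (logins and minutes spent per week), the data, and the two-cluster choice below are all illustrative assumptions, not taken from the thesis:

```python
# Hypothetical sketch: cluster students by LMS engagement features to
# flag a candidate "unengaged" group. Features and data are invented.

def kmeans(points, k=2, iters=50):
    # Deterministic init: spread starting centers across the data order.
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as its cluster mean.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(col) / len(cl) for col in zip(*cl))
    return centers, clusters

# (logins/week, minutes on material/week) for ten students
students = [(1, 10), (2, 15), (1, 5), (2, 8),
            (9, 120), (8, 90), (10, 150), (7, 100), (9, 110), (8, 95)]
centers, clusters = kmeans(students, k=2)

# The cluster with the lower mean engagement is the group an
# instructor would follow up on.
unengaged = min(clusters, key=lambda cl: sum(sum(p) for p in cl) / max(len(cl), 1))
print(len(unengaged))
```

In practice a thesis-scale study would use a library implementation and validated engagement features; this sketch only shows the mechanic of separating low- from high-engagement students.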

    A survey of multi-access edge computing in 5G and beyond: fundamentals, technology integration, and state-of-the-art

    Driven by the emergence of new compute-intensive applications and the vision of the Internet of Things (IoT), it is foreseen that the emerging 5G network will face an unprecedented increase in traffic volume and computation demands. However, end users mostly have limited storage capacities and finite processing capabilities, so how to run compute-intensive applications on resource-constrained devices has recently become a natural concern. Mobile edge computing (MEC), a key technology in the emerging fifth generation (5G) network, can optimize mobile resources by hosting compute-intensive applications, process large volumes of data before sending them to the cloud, provide cloud-computing capabilities within the radio access network (RAN) in close proximity to mobile users, and offer context-aware services with the help of RAN information. Therefore, MEC enables a wide variety of applications where a real-time response is strictly required, e.g., driverless vehicles, augmented reality, robotics, and immersive media. Indeed, the paradigm shift from 4G to 5G could become a reality with the advent of new technological concepts. The successful realization of MEC in the 5G network is still in its infancy and demands constant effort from both the academic and industry communities. In this survey, we first provide a holistic overview of MEC technology and its potential use cases and applications. Then, we outline up-to-date research on the integration of MEC with the new technologies that will be deployed in 5G and beyond. We also summarize testbeds, experimental evaluations, and open-source activities for edge computing. We further summarize lessons learned from state-of-the-art research works, and discuss challenges and potential future directions for MEC research.
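The offloading trade-off that motivates MEC can be illustrated with a toy latency model: a task is offloaded when uplink transfer plus edge execution beats local execution. The single-user model and all constants below are assumptions for illustration, not from the survey:

```python
# Toy MEC offloading decision: compare local execution latency against
# uplink transfer plus edge execution. All numbers are illustrative.

def local_latency(cycles, device_hz):
    # Time to run the task on the device's own CPU.
    return cycles / device_hz

def offload_latency(input_bits, uplink_bps, cycles, edge_hz):
    # Uplink transfer time plus remote execution time; the (typically
    # small) result download is ignored for simplicity.
    return input_bits / uplink_bps + cycles / edge_hz

task_cycles = 2e9   # CPU cycles the task needs
task_bits = 4e6     # input size in bits
device_hz = 1e9     # 1 GHz handset CPU
edge_hz = 20e9      # 20 GHz aggregate edge-server capacity
uplink_bps = 50e6   # 50 Mbit/s uplink

t_local = local_latency(task_cycles, device_hz)                        # 2.0 s
t_edge = offload_latency(task_bits, uplink_bps, task_cycles, edge_hz)  # 0.18 s
decision = "offload" if t_edge < t_local else "local"
print(decision, round(t_local, 2), round(t_edge, 2))
```

Real MEC offloading decisions also weigh energy, server load, and channel dynamics, but the latency comparison above is the core of the trade-off.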

    A survey of online data-driven proactive 5G network optimisation using machine learning

    In the fifth-generation (5G) mobile networks, proactive network optimisation plays an important role in meeting exponential traffic growth and more stringent service requirements, and in reducing capital and operational expenditure. Proactive network optimisation is widely acknowledged as one of the most promising ways to transform the 5G network based on big data analysis and cloud-fog-edge computing, but there are many challenges. Proactive algorithms will require accurate forecasting of highly contextualised traffic demand and quantifying the uncertainty to drive decision making with performance guarantees. Context in Cyber-Physical-Social Systems (CPSS) is often challenging to uncover, unfolds over time, and is even more difficult to quantify and integrate into decision making. The first part of the review focuses on mining and inferring CPSS context from heterogeneous data sources, such as online user-generated content. It examines the state-of-the-art methods currently employed to infer location, social behaviour, and traffic demand through a cloud-edge computing framework, combining them to form the input to proactive algorithms. The second part of the review focuses on exploiting and integrating the demand knowledge for a range of proactive optimisation techniques, including the key aspects of load balancing, mobile edge caching, and interference management. In both parts, appropriate state-of-the-art machine learning techniques (including probabilistic uncertainty cascades in proactive optimisation), complexity-performance trade-offs, and demonstrative examples are presented to inspire readers. This survey couples the potential of online big data analytics, cloud-edge computing, statistical machine learning, and proactive network optimisation in a common cross-layer wireless framework.
The wider impact of this survey includes better cross-fertilising the academic fields of data analytics, mobile edge computing, AI, CPSS, and wireless communications, as well as informing the industry of the promising potential in this area.
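The "forecast demand and quantify uncertainty" step that proactive algorithms require can be sketched as follows: rather than acting on a point forecast, a proactive scheme provisions capacity at an upper quantile of observed demand, so the decision carries a probabilistic guarantee. The traffic samples and the 95th-percentile choice are invented for illustration:

```python
# Sketch: point forecast vs. quantile-based provisioning for one
# hour-of-day traffic slot. All data are illustrative.

def quantile(xs, q):
    # Linear-interpolation quantile on a sorted copy (0 <= q <= 1).
    s = sorted(xs)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

# Hourly cell-traffic samples (Mbit/s) for the same hour across days.
demand = [52, 48, 60, 55, 47, 63, 58, 50, 61, 49, 57, 54]

point_forecast = sum(demand) / len(demand)  # mean: a point estimate
provision = quantile(demand, 0.95)          # 95th-percentile reserve
print(round(point_forecast, 1), round(provision, 2))
```

Provisioning at the mean would under-serve roughly half the realizations; the upper quantile trades some spare capacity for a bounded outage probability, which is the uncertainty-aware style of decision the survey advocates.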

    Internet Of Things and Humans

    The never-ending demand for capacity and the need for ubiquitous radio coverage require attention to the design of new radio networks. Incoming paradigms (Industry 4.0, machine-to-machine communication, and the Internet of Things) will overburden cellular networks even further. Current (4G) and near-future (5G) architectures will not be able to support such a traffic increase. Moreover, the space-time and content heterogeneity of data should be exploited to improve network performance; at present, however, this heterogeneity degrades network performance. Pico- and femtocell networks, with cell densification, are proposed as a solution. A drawback is the need for high-speed backhaul to connect the cells to each other and to the core network. Current research trends assume that the density of cells will be comparable to user density; in such a situation, deploying high-speed backhaul will be expensive. Moreover, regardless of the deployment of cells, connectivity is a commodity taken for granted. Modern technologies and services rely on stable networks. Nonetheless, whenever even basic connectivity fails because of a disaster, not even a basic form of radio communication can be provided. Flexible networks that adapt to the environment "on the go" could reduce this problem. To alleviate the aforementioned problems, my work unfolds starting from a couple of intuitions. 1) Traffic demand is not just data to be processed, transmitted, and answered to; the kind of data producing the traffic matters, so we should treat different traffic streams accordingly. This facet of my work is treated from different points of view in the dissertation. 2) In current networks, users are seen as "passive", being just the source and/or destination of a traffic stream. There are reasons to envision that users could instead be "active" participants in the network itself, fostering its performance.
These considerations are accounted for in so-called Delay Tolerant Networks.

    Reinforcement Learning Based Resource Allocation for Energy-Harvesting-Aided D2D Communications in IoT Networks

    It is anticipated that mobile data traffic and the demand for higher data rates will increase dramatically as a result of the explosion of wireless devices, such as the Internet of Things (IoT) and machine-to-machine communication. There are numerous location-based peer-to-peer services available today that allow mobile users to communicate directly with one another, which can help offload traffic from congested cellular networks. In cellular networks, Device-to-Device (D2D) communication has been introduced to exploit direct links between devices instead of transmitting through the Base Station (BS). However, it is critical to note that D2D and IoT communications are hindered heavily by the high energy consumption of mobile and IoT devices, because their battery capacity is restricted. Energy-constrained wireless devices may extend their lifespan by drawing on reusable external sources of energy, such as solar, wind, vibration, thermoelectric, and radio frequency (RF) energy, in order to overcome the limited-battery problem. Such approaches are commonly referred to as Energy Harvesting (EH). A promising approach to energy harvesting is Simultaneous Wireless Information and Power Transfer (SWIPT). Because the number of wireless users is on the rise, it is imperative that resource allocation techniques be implemented in modern wireless networks. This will facilitate cooperation among users for limited resources, such as time and frequency bands. As well as ensuring that there is an adequate supply of energy for reliable and efficient communication, resource allocation also provides a roadmap for each individual user to follow in order to consume the right amount of energy. In D2D networks with time, frequency, and power constraints, significant computing power is generally required to achieve a joint resource management design.
Thus, the purpose of this study is to develop a resource allocation scheme that is based on spectrum sharing and enables low-cost computations for EH-assisted D2D and IoT communication. Until now, there has been no study examining resource allocation design for EH-enabled IoT networks with SWIPT-enabled D2D schemes that utilizes learning techniques and convex optimization. In most existing works, optimization and iterative approaches with a high level of computational complexity have been used, which is not feasible in many IoT applications. In order to overcome these obstacles, a learning-based resource allocation mechanism based on the SWIPT scheme in IoT networks is proposed, where users are able to harvest energy from different sources. The system model consists of multiple IoT users, one BS, and multiple D2D pairs in EH-based IoT networks. As a means of developing an energy-efficient system, we consider the SWIPT scheme with D2D pairs employing the time switching (TS) method to capture energy from the environment, whereas IoT users employ the power splitting (PS) method to harvest energy from the BS. A mixed-integer nonlinear programming (MINLP) approach is presented for the solution of the Energy Efficiency (EE) problem by jointly optimizing subchannel allocation, the power-splitting factor, power, and time. As part of the optimization approach, the original EE optimization problem is decomposed into three subproblems, namely: (a) subchannel assignment and power-splitting factor, (b) power allocation, and (c) time allocation. In order to solve the subchannel assignment subproblem, which involves discrete variables, the Q-learning approach is employed. Due to the large size of the overall problem and the continuous nature of certain variables, it is impractical to optimize all variables using the learning technique.
Instead, for the continuous-variable problems, namely power and time allocation, the original non-convex problem is first transformed into a convex one; then the Majorization-Minimization (MM) approach is applied together with the Dinkelbach method. The performance of the proposed joint Q-learning and optimization algorithm has been evaluated in detail. In particular, the solution was compared with a linear EH model, as well as two heuristic algorithms, namely the constrained allocation algorithm and the random allocation algorithm. The results indicate that the technique is superior to conventional approaches. For example, at a distance of d = 10 m, our proposed algorithm improves EE compared to the pre-matching, constrained allocation, and random allocation methods by about 5.26%, 110.52%, and 143.90%, respectively. Considering the simulation results, the proposed algorithm is superior to other methods in the literature. Using spectrum sharing and harvesting energy from D2D and IoT devices achieves impressive EE gains. This superior performance can be seen both in terms of the average and sum EEs, as well as in comparison with other baseline schemes.
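The Dinkelbach step used for the fractional EE objective can be illustrated on a single-link toy problem: maximize EE(p) = R(p) / (p + Pc) by repeatedly solving the parametric subtractive problem max_p R(p) - lam * (p + Pc) and updating lam until the optimal subtractive value reaches zero. The rate model, constants, and grid-search inner solver below are simplifying assumptions; the thesis solves a much richer MINLP:

```python
# Toy Dinkelbach iteration for energy efficiency EE(p) = R(p)/(p + Pc).
# Constants are illustrative, not from the thesis.
import math

G_OVER_N = 8.0   # channel gain over noise power (illustrative)
P_C = 0.5        # static circuit power (W)
P_MAX = 2.0      # transmit power budget (W)

def rate(p):
    # Shannon-style spectral efficiency for a single link.
    return math.log2(1.0 + G_OVER_N * p)

def dinkelbach(tol=1e-9, iters=100):
    lam = 0.0
    grid = [i * P_MAX / 10000 for i in range(10001)]
    p = 0.0
    for _ in range(iters):
        # Inner problem: parametric subtractive form at the current lam.
        p = max(grid, key=lambda x: rate(x) - lam * (x + P_C))
        f = rate(p) - lam * (p + P_C)
        lam = rate(p) / (p + P_C)  # update lam to the current EE
        if f < tol:                # converged: subtractive optimum ~ 0
            break
    return p, lam

p_opt, ee_opt = dinkelbach()
print(round(p_opt, 3), round(ee_opt, 3))
```

Dinkelbach's update converges to the maximal ratio because the subtractive optimum is positive while lam is below the best EE and zero exactly at it; in the thesis this inner step is where the MM-based convexification does the work.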