
    Machine Learning for Intelligent IoT Networks with Edge Computing

    The intelligent Internet of Things (IoT) network is envisioned to be the internet of intelligent things. In this paradigm, billions of end devices with internet connectivity will provide interactive intelligence and revolutionise current wireless communications. Intelligent IoT networks generate an unprecedented volume and variety of data, making centralized cloud computing inefficient or even infeasible due to network congestion, resource-limited IoT devices, ultra-low-latency applications and spectrum scarcity. Edge computing has been proposed to overcome these issues by pushing centralized communication and computation resources physically and logically closer to data providers and end users. However, compared with a cloud server, an edge server provides only finite computation and spectrum resources, making proper data processing and efficient resource allocation necessary. Machine learning techniques have been developed to solve the dynamic and complex problems and big data analysis in IoT networks. Specifically, Reinforcement Learning (RL) has been widely explored to address dynamic decision-making problems, which motivates the research on machine-learning-enabled computation offloading and resource management. In this thesis, several original contributions are presented to find solutions and address these challenges. First, efficient spectrum and power allocation are investigated for computation offloading in wireless powered IoT networks. The IoT users offload all the collected data to the central server for a better data processing experience. A matching-theory-based efficient channel allocation algorithm and an RL-based power allocation mechanism are then proposed. Second, the joint optimization problem of computation offloading and resource allocation is investigated for IoT edge computing networks via machine learning techniques. The IoT users offload intensive computation tasks to the edge server while keeping simple tasks for local execution. In this case, a centralized user clustering algorithm is first proposed as a pre-step to group the IoT users into different clusters according to user priorities for spectrum allocation. The joint computation offloading, computation resource and power allocation for each IoT user is then formulated as an RL framework and solved with a proposed deep Q-network based computation offloading algorithm. Finally, to solve the simultaneous multiuser computation offloading problem, a stochastic game is exploited to formulate the joint problem of the computation offloading decisions of multiple selfish users and the allocation of resources (spectrum, computation and radio access technology resources) as a non-cooperative multiuser computation offloading game. A multi-agent RL framework is then developed to solve the formulated game with a proposed independent-learners-based multi-agent Q-learning algorithm.
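    The thesis itself is not reproduced here, so the following is only a minimal sketch of the independent-learner, tabular flavour of Q-learning the abstract alludes to for the offloading decision. The binary action set (local execution vs. offloading), the discretized channel-quality state, and the toy reward and transition model are all illustrative assumptions; the actual work formulates a richer joint state and uses a deep Q-network rather than a table.

        import numpy as np

        # Tabular Q-learning sketch for one IoT user's binary offloading
        # decision: action 0 = execute locally, action 1 = offload to edge.
        # States model discretized channel quality (an assumption, not the
        # thesis's actual state space).
        rng = np.random.default_rng(0)
        N_STATES, N_ACTIONS = 4, 2
        ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration
        Q = np.zeros((N_STATES, N_ACTIONS))

        def step(state, action):
            """Toy environment: offloading pays off in good channel states,
            local execution in bad ones."""
            good_channel = state >= N_STATES // 2
            reward = 1.0 if (action == 1) == good_channel else -1.0
            return reward, int(rng.integers(N_STATES))  # i.i.d. channel (assumed)

        state = int(rng.integers(N_STATES))
        for _ in range(20000):
            # epsilon-greedy action selection
            if rng.random() < EPS:
                action = int(rng.integers(N_ACTIONS))
            else:
                action = int(Q[state].argmax())
            reward, next_state = step(state, action)
            # standard Q-learning temporal-difference update
            Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max()
                                         - Q[state, action])
            state = next_state

        print(Q)  # offloading should dominate in good-channel states

    In the multi-agent, independent-learners setting the abstract describes, each selfish user would maintain its own Q-table (or Q-network) and treat the other users as part of the environment, which is what makes the approach scale, at the cost of convergence guarantees.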

    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. The unique features of IoT include extreme heterogeneity, a massive number of devices, and unpredictable dynamics, partially due to human interaction. These call for foundational innovations in network design and management. Ideally, such a design should allow efficient adaptation to changing environments and low-cost implementation scalable to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network architecture vantage point, the unified framework leverages a promising fog architecture that enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable model-free implementation under limited feedback, which motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone toward systematic design and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on.
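    As a concrete illustration of the "bandit approaches" mentioned above, here is a hedged sketch of UCB1 applied to fog-node selection: a device picks one of several edge/fog nodes per task and observes only the latency of the node it actually used. The node count, latency model, and horizon are illustrative assumptions, not the paper's setup.

        import math
        import random

        random.seed(0)
        N_NODES = 3
        true_mean_latency = [0.9, 0.5, 0.7]   # unknown to the learner (assumed)

        counts = [0] * N_NODES
        mean_reward = [0.0] * N_NODES         # reward = -latency, so higher is better

        def pull(node):
            """Bandit feedback: noisy latency from the chosen node only."""
            return -(true_mean_latency[node] + random.gauss(0, 0.1))

        for t in range(1, 5001):
            if t <= N_NODES:                  # initialization: try each node once
                node = t - 1
            else:                             # UCB1: empirical mean + exploration bonus
                node = max(range(N_NODES),
                           key=lambda i: mean_reward[i]
                                         + math.sqrt(2 * math.log(t) / counts[i]))
            r = pull(node)
            counts[node] += 1
            mean_reward[node] += (r - mean_reward[node]) / counts[node]

        print(counts)  # the lowest-latency node should dominate the pull counts

    Handling the nonstationarity the paper emphasizes would need only a small change to this sketch, e.g. replacing the running mean with a discounted or sliding-window estimate.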

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks.