
    Mobile Edge Computation Offloading Using Game Theory and Reinforcement Learning

    Due to the ever-increasing popularity of resource-hungry and delay-constrained mobile applications, the computation and storage capabilities of the remote cloud have partially migrated towards the mobile edge, giving rise to the concept of Mobile Edge Computing (MEC). While MEC servers enjoy close proximity to end-users and can provide services at reduced latency and lower energy cost, they suffer from limited computational and radio resources, which calls for fair and efficient resource management at the MEC servers. The problem is challenging, however, due to the ultra-high density, distributed nature, and intrinsic randomness of next-generation wireless networks. In this article, we focus on the application of game theory and reinforcement learning to efficient distributed resource management in MEC, in particular for computation offloading. We briefly review the cutting-edge research and discuss future challenges. Furthermore, we develop a game-theoretic model for energy-efficient distributed edge-server activation and study several learning techniques. Numerical results are provided to illustrate the performance of these distributed learning techniques. Finally, open research issues in the context of resource management at MEC servers are discussed.
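    A server-activation game of this flavor lends itself to simple distributed learning dynamics. Below is a minimal sketch of asynchronous best-response dynamics under an assumed toy cost model, in which each server weighs its own activation energy against a shared penalty for uncovered load; the cost function, penalty weight, and coverage model are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                 # number of edge servers
energy = rng.uniform(1.0, 3.0, N)     # per-server activation cost (assumed)
demand = 4.0                          # load the active set should cover (assumed)

def cost(i, a):
    """Assumed local cost for server i: its own activation energy plus a
    shared penalty for demand left uncovered by the active set."""
    covered = a.sum()                 # each active server covers one unit of load
    return energy[i] * a[i] + 2.0 * max(demand - covered, 0.0)

a = np.zeros(N, dtype=int)            # all servers start inactive
for _ in range(200):                  # asynchronous best-response dynamics
    i = rng.integers(N)               # one randomly chosen server revises its decision
    a[i] = min((0, 1), key=lambda x: cost(i, np.r_[a[:i], x, a[i+1:]]))
print("active servers:", np.flatnonzero(a))
```

    When such a game admits a potential function, asynchronous best-response updates of this kind converge to a pure-strategy Nash equilibrium, which is the structural property distributed learning schemes in this line of work typically rely on.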

    Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence

    Along with the rapid development of communication technologies and the surge in the use of mobile devices, a brand-new computation paradigm, Edge Computing, is surging in popularity. Meanwhile, Artificial Intelligence (AI) applications are thriving thanks to breakthroughs in deep learning and improvements in hardware architectures. Billions of data bytes, generated at the network edge, put massive demands on data processing and structural optimization. Thus, there is a strong demand to integrate Edge Computing and AI, which gives birth to Edge Intelligence. In this paper, we divide Edge Intelligence into AI for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial Intelligence on Edge). The former focuses on providing better solutions to key problems in Edge Computing with the help of popular and effective AI technologies, while the latter studies how to carry out the entire process of building AI models, i.e., model training and inference, on the edge. This paper provides insights into this new interdisciplinary field from a broader perspective. It discusses the core concepts and the research road-map, which should provide the necessary background for potential future research initiatives in Edge Intelligence.
    Comment: 13 pages, 3 figures.

    Extracting and Exploiting Inherent Sparsity for Efficient IoT Support in 5G: Challenges and Potential Solutions

    Besides enabling enhanced mobile broadband, the next generation of mobile networks (5G) is envisioned to support massive connectivity of heterogeneous Internet of Things (IoT) devices. These IoT devices are envisioned for a large number of use cases, including smart cities, environment monitoring, smart vehicles, etc. Unfortunately, most IoT devices have very limited computing and storage capabilities and need cloud services. Hence, connecting these devices through 5G systems requires huge spectrum resources, in addition to handling the massive connectivity and improved security. This article discusses the challenges facing the support of IoT through 5G systems. The focus is on physical-layer limitations in terms of spectrum resources and radio access channel connectivity. We show how sparsity can be exploited to address these challenges, especially in terms of enabling wideband spectrum management and handling connectivity by exploiting device-to-device communications and the edge cloud. Moreover, we identify major open problems and research directions that need to be explored to enable the support of massive heterogeneous IoT through 5G systems.
    Comment: Accepted for publication in IEEE Wireless Communications Magazine.
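    Wideband spectrum occupancy is typically sparse, so sub-Nyquist measurements can be inverted with a sparse-recovery routine. The following is a minimal sketch of orthogonal matching pursuit (OMP) on a toy compressed-sensing model; OMP is one standard recovery method, chosen here for illustration, and the dimensions, sensing matrix, and sparsity level are assumptions rather than details from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 40, 3            # spectrum bins, measurements, active bins (assumed)
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)  # sparse occupancy
y = A @ x_true                  # sub-Nyquist measurements

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("recovered support:", sorted(support), "true:", sorted(np.flatnonzero(x_true)))
```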

    Computation Rate Maximization for Wireless Powered Mobile-Edge Computing with Binary Computation Offloading

    In this paper, we consider a multi-user mobile edge computing (MEC) network powered by wireless power transfer (WPT), where each energy-harvesting wireless device (WD) follows a binary computation offloading policy, i.e., the data set of a task has to be executed as a whole, either locally or remotely at the MEC server via task offloading. In particular, we are interested in maximizing the (weighted) sum computation rate of all the WDs in the network by jointly optimizing the individual computing mode selection (i.e., local computing or offloading) and the system transmission time allocation (between WPT and task offloading). The major difficulty lies in the combinatorial nature of multi-user computing mode selection and its strong coupling with the transmission time allocation. To tackle this problem, we first consider a decoupled optimization, where we assume the mode selection is given and propose a simple bisection search algorithm to obtain the conditionally optimal time allocation. On top of that, a coordinate descent method is devised to optimize the mode selection. The method is simple to implement but may suffer from high computational complexity in a large network. To address this, we further propose a joint optimization method based on the ADMM (alternating direction method of multipliers) decomposition technique, whose computational complexity grows much more slowly as the network size increases. Extensive simulations show that both proposed methods can efficiently achieve near-optimal performance under various network setups and significantly outperform the other representative benchmark methods considered.
    Comment: This paper has been accepted for publication in IEEE Transactions on Wireless Communications.
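    The decoupled structure is easy to prototype: fix the binary mode vector, solve a one-dimensional concave problem in the WPT time fraction by bisection, then flip one device's mode at a time. The sketch below uses a deliberately simplified harvest-then-offload rate model; the rate expressions, channel gains, and weights are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
h = rng.uniform(0.2, 1.0, N)        # channel gains (assumed)
w = rng.uniform(0.5, 1.5, N)        # rate weights (assumed)

def sum_rate(modes, tau):
    """Toy weighted sum computation rate: a fraction tau of the block is WPT,
    and offloading devices share the remaining 1 - tau equally."""
    off = np.flatnonzero(modes)     # devices in offloading mode
    loc = np.flatnonzero(1 - modes)
    rate = np.sum(w[loc] * (h[loc] * tau) ** (1 / 3))       # local computing
    if off.size:
        share = (1 - tau) / off.size
        rate += np.sum(w[off] * share * np.log2(1 + h[off] ** 2 * tau / share))
    return rate

def best_tau(modes, tol=1e-4):
    """Bisection on the derivative of the (assumed concave) 1-D objective."""
    lo, hi = 1e-3, 1 - 1e-3
    d = lambda t: (sum_rate(modes, t + 1e-5) - sum_rate(modes, t - 1e-5)) / 2e-5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if d(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

modes = np.ones(N, dtype=int)       # start with everyone offloading
improved = True
while improved:                     # coordinate descent over binary modes
    improved = False
    base = sum_rate(modes, best_tau(modes))
    for i in range(N):
        trial = modes.copy(); trial[i] ^= 1
        if sum_rate(trial, best_tau(trial)) > base + 1e-9:
            modes, improved = trial, True
            base = sum_rate(modes, best_tau(modes))
print("modes:", modes, "tau:", round(best_tau(modes), 3))
```

    Each coordinate-descent pass requires a fresh inner bisection per candidate flip, which is the per-iteration cost the paper's ADMM-based method is designed to avoid at scale.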

    Wireless Powered User Cooperative Computation in Mobile Edge Computing Systems

    This paper studies a wireless powered mobile edge computing (MEC) system, where a dedicated energy transmitter (ET) uses radio-frequency (RF) signal enabled wireless power transfer (WPT) to charge wireless devices for sustainable computation. In such a system, we present a new user cooperation approach to improve the computation performance of active devices, in which surrounding idle devices act as helpers, using the wireless energy they opportunistically harvest from the ET to help execute active users' computation tasks remotely. In particular, we consider a basic scenario with one user (with computation tasks to execute) and multiple helpers, in which the user can partition its computation tasks into parts for local execution and for computation offloading to the helpers, respectively. Both the user and the helpers are subject to so-called energy neutrality constraints, such that their energy consumption does not exceed the energy they respectively harvest from the ET. Under this setup, and considering a frequency division multiple access (FDMA) based computation offloading protocol, we maximize the computation rate (i.e., the number of computation bits over a particular time block) of the user by jointly optimizing the transmit energy beamforming at the ET and the communication and computation resource allocations at both the user and the helpers. By leveraging the Lagrange duality method, we present the optimal solution to this problem in a semi-closed form. Numerical results show that the proposed wireless powered user cooperative computation design significantly improves the computation rate at the user, compared to conventional schemes without such cooperation.
    Comment: 8 pages, 5 figures, accepted by Proc. IEEE GLOBECOM 2018 Workshops.
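    Energy neutrality simply caps each node's consumption at what it harvests during the WPT phase. A minimal numeric sketch under the standard MEC power model (harvested energy linear in received RF power, local computing power κf³ at CPU frequency f) is shown below; all constants are illustrative assumptions, not values from the paper.

```python
# Energy-neutrality check for one helper (illustrative constants throughout).
eta, P_et, g = 0.6, 3.0, 0.05        # harvest efficiency, ET transmit power (W), channel gain
T_wpt, T_cmp = 0.4, 0.6              # seconds spent harvesting / computing
kappa = 1e-27                        # effective switched-capacitance coefficient
C = 1e3                              # CPU cycles needed per bit of task data

E_harvested = eta * P_et * g * T_wpt                  # joules gathered during WPT
f_max = (E_harvested / (kappa * T_cmp)) ** (1 / 3)    # largest energy-neutral CPU speed
bits = f_max * T_cmp / C                              # bits the helper can execute
print(f"harvested {E_harvested:.3e} J, f_max {f_max:.3e} Hz, {bits:.3e} bits")
```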

    Decentralized Computation Offloading for Multi-User Mobile Edge Computing: A Deep Reinforcement Learning Approach

    Mobile edge computing (MEC) has recently emerged as a promising solution for relieving resource-limited mobile devices of computation-intensive tasks, enabling devices to offload workloads to nearby MEC servers and improve the quality of the computation experience. Nevertheless, for an MEC system consisting of multiple mobile users with stochastic task arrivals and wireless channels, as considered in this paper, designing computation offloading policies that minimize the long-term average computation cost, in terms of power consumption and buffering delay, is challenging. A deep reinforcement learning (DRL) based decentralized dynamic computation offloading strategy is investigated to build a scalable MEC system with limited feedback. Specifically, a continuous-action-space DRL approach, deep deterministic policy gradient (DDPG), is adopted to learn efficient computation offloading policies independently at each mobile user. Thus, the powers of both local execution and task offloading can be adaptively allocated by the learned policies based on each user's local observation of the MEC system. Numerical results demonstrate that efficient policies can be learned at each user, and that the proposed DDPG-based decentralized strategy outperforms the conventional deep Q-network (DQN) based discrete power control strategy and some other greedy strategies, with reduced computation cost. Moreover, the power-delay tradeoff is analyzed for both the DDPG-based and DQN-based strategies.
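    For readers unfamiliar with DDPG, its core is an actor that maps a state to a continuous action (here, the local-execution and offloading powers) and a critic trained against a bootstrapped target, with slowly updated target networks. Below is a minimal PyTorch sketch of one update step on randomly generated transitions; the network sizes, state/action dimensions, and hyperparameters are illustrative assumptions, not those used in the paper.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
S, A, GAMMA, TAU = 4, 2, 0.99, 0.005   # state dim (e.g., channel + queue), action dim (two powers)

actor = nn.Sequential(nn.Linear(S, 64), nn.ReLU(), nn.Linear(64, A), nn.Sigmoid())  # powers in [0, 1]
critic = nn.Sequential(nn.Linear(S + A, 64), nn.ReLU(), nn.Linear(64, 1))
actor_tgt, critic_tgt = copy.deepcopy(actor), copy.deepcopy(critic)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

# A batch of fake transitions standing in for a replay-buffer sample.
s = torch.randn(32, S); a = torch.rand(32, A)
r = -torch.rand(32, 1)                 # negative cost (power + delay) as reward
s2 = torch.randn(32, S)

# Critic update: regress Q(s, a) toward r + gamma * Q_tgt(s', actor_tgt(s')).
with torch.no_grad():
    y = r + GAMMA * critic_tgt(torch.cat([s2, actor_tgt(s2)], dim=1))
critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), y)
opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

# Actor update: ascend the critic's value of the actor's own action.
actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

# Soft (Polyak) update of both target networks.
for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
    for p, pt in zip(net.parameters(), tgt.parameters()):
        pt.data.mul_(1 - TAU).add_(TAU * p.data)
```

    In the decentralized setting described here, each user would run its own copy of this update loop on locally observed transitions, which is what removes the need for centralized state feedback.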

    Applications of Deep Reinforcement Learning in Communications and Networking: A Survey

    This paper presents a comprehensive literature review of applications of deep reinforcement learning in communications and networking. Modern networks, e.g., Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks, are becoming more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize network performance under uncertainty in the network environment. Reinforcement learning has been used efficiently to enable network entities to obtain the optimal policy, i.e., the decisions or actions to take given their states, when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and reinforcement learning may not be able to find the optimal policy in reasonable time. Therefore, deep reinforcement learning, a combination of reinforcement learning and deep learning, has been developed to overcome these shortcomings. In this survey, we first give a tutorial on deep reinforcement learning, from fundamental concepts to advanced models. Then, we review deep reinforcement learning approaches proposed to address emerging issues in communications and networking. The issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, all of which are important to next-generation networks such as 5G and beyond. Furthermore, we present applications of deep reinforcement learning for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions for applying deep reinforcement learning.
    Comment: 37 pages, 13 figures, 6 tables, 174 reference papers.
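    The scalability argument is easy to see in code: tabular Q-learning stores one value per state-action pair, which is exactly what stops working when those spaces grow large. A minimal sketch on a toy chain environment follows; the environment and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N_S, N_A = 5, 2                 # tiny chain: 5 states, actions move left/right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
Q = np.zeros((N_S, N_A))        # one entry per state-action pair: the scaling bottleneck

def step(s, a):
    """Toy dynamics: action 1 moves right, action 0 moves left; reward at the right end."""
    s2 = min(s + 1, N_S - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == N_S - 1 else 0.0

s = 0
for _ in range(5000):
    a = rng.integers(N_A) if rng.random() < EPS else int(np.argmax(Q[s]))  # epsilon-greedy
    s2, r = step(s, a)
    # Q-learning update: move Q(s, a) toward the bootstrapped target.
    Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
    s = 0 if s2 == N_S - 1 else s2      # restart the episode at the goal
print(np.argmax(Q, axis=1))             # learned policy: should move right everywhere
```

    Replacing the table Q with a neural function approximator is precisely the step from Q-learning to the DQN family that the survey's tutorial portion covers.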

    Computation Rate Maximization in UAV-Enabled Wireless Powered Mobile-Edge Computing Systems

    Mobile edge computing (MEC) and wireless power transfer (WPT) are two promising techniques for enhancing the computation capability and prolonging the operational time of the low-power wireless devices that are ubiquitous in the Internet of Things. However, the computation performance and the harvested energy are significantly impacted by severe propagation loss. To address this issue, an unmanned aerial vehicle (UAV)-enabled MEC wireless powered system is studied in this paper. The computation rate maximization problems in such a system are investigated under both partial and binary computation offloading modes, subject to the energy-harvesting causal constraint and the UAV's speed constraint. These problems are non-convex and challenging to solve. A two-stage algorithm and a three-stage alternating algorithm are proposed for solving the two formulated problems, respectively. Closed-form expressions for the optimal central processing unit frequencies, user offloading times, and user transmit powers are derived. An optimal selection scheme determining whether users compute locally or offload their computation tasks is proposed for the binary computation offloading mode. Simulation results show that the proposed resource allocation schemes outperform other benchmark schemes. The results also demonstrate that the proposed schemes converge quickly and have low computational complexity.
    Comment: This paper has been accepted by IEEE JSAC.
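    Binary-mode selection in such models ultimately reduces to comparing the bits a user can compute locally against the bits it can offload within the same block, given its harvested energy budget. A toy per-user comparison under the standard models (local bits f·t/C with energy κf³t; offload bits t·B·log2(1 + ph/σ²) with energy pt) is sketched below; all constants are illustrative assumptions, and the actual paper's selection rule is derived jointly with the UAV trajectory and resource allocation.

```python
import numpy as np

# Illustrative constants for one user and one time block of t seconds.
E, t = 0.02, 1.0                 # harvested energy budget (J), block length (s)
kappa, C = 1e-27, 1e3            # switched capacitance, CPU cycles per bit
B, h, sigma2 = 1e6, 1e-4, 1e-9   # bandwidth (Hz), channel gain, noise power (W)

# Local mode: spend the whole budget on the CPU -> f = (E / (kappa * t))^(1/3).
f = (E / (kappa * t)) ** (1 / 3)
bits_local = f * t / C

# Offload mode: spend the whole budget on transmission -> p = E / t.
p = E / t
bits_offload = t * B * np.log2(1 + p * h / sigma2)

print(f"local {bits_local:.3e} bits, offload {bits_offload:.3e} bits")
print("chosen mode:", "local" if bits_local > bits_offload else "offload")
```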

    A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications

    With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands, due to the heavy burden on the backhaul links and the long latency. Therefore, new architectures that bring network functions and content to the network edge, namely mobile edge computing and caching, have been proposed. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review of the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including their definition, architecture, and advantages. Next, comprehensive surveys of issues in computing, caching, and communication techniques at the network edge are presented. The applications and use cases of mobile edge networks are then discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices, are discussed. Finally, open research challenges and future directions are presented.

    Exploiting Non-Causal CPU-State Information for Energy-Efficient Mobile Cooperative Computing

    Scavenging the idle computation resources of the enormous number of mobile devices can provide a powerful platform for local mobile cloud computing. This vision can be realized by peer-to-peer cooperative computing between edge devices, referred to as co-computing. This paper considers a co-computing system where a user offloads computation of input data to a helper. The helper controls the offloading process with the objective of minimizing the user's energy consumption, based on a predicted helper CPU-idling profile that specifies the amount of computation resource available for co-computing. Consider the scenario where the user has a one-shot input-data arrival and the helper buffers offloaded bits. The problem of energy-efficient co-computing is decomposed into two sub-problems: a slave problem corresponding to adaptive offloading and a master problem corresponding to data partitioning. Given a fixed offloaded data size, adaptive offloading aims at minimizing the energy consumption for offloading by controlling the offloading rate under the deadline and buffer constraints. By deriving necessary and sufficient conditions for the optimal solution, we characterize the structure of the optimal policies and propose algorithms for computing them. Furthermore, we show that the problem of optimally partitioning data between offloading and local computing at the user is convex, admitting a simple solution using the sub-gradient method. Lastly, the developed design approach for co-computing is extended to the scenario of bursty data arrivals at the user, accounting for data-causality constraints. Simulation results verify the effectiveness of the proposed algorithms.
    Comment: Submitted to a possible journal.
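    The master problem has a one-dimensional convex structure: choose what fraction of the input data to offload so that offloading energy plus local-computing energy is minimized. A projected sub-gradient sketch with assumed convex energy curves follows; the two energy functions are illustrative stand-ins for the paper's models, and the gradient is taken numerically.

```python
import numpy as np

L = 1e6                                 # total input-data size in bits (assumed)
E_off = lambda l: 1e-12 * l ** 1.5      # assumed convex offloading energy (J)
E_loc = lambda l: 1e-15 * l ** 2        # assumed convex local-computing energy (J)
total = lambda x: E_off(x * L) + E_loc((1 - x) * L)   # x = offloaded fraction

x = 0.5                                 # start from an even split
for k in range(1, 301):                 # projected sub-gradient descent on [0, 1]
    eps = 1e-4
    g = (total(x + eps) - total(x - eps)) / (2 * eps)   # numeric (sub-)gradient
    x = min(max(x - (0.5 / k) * np.sign(g), 0.0), 1.0)  # diminishing step + projection
print(f"offload {x:.3f} of the data, energy {total(x):.3e} J")
```

    The diminishing step size is what guarantees convergence for a sub-gradient method on a convex objective, at the cost of a slow tail; for the smooth toy functions used here, bisection on the derivative would work equally well.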