
    Stacked Auto Encoder Based Deep Reinforcement Learning for Online Resource Scheduling in Large-Scale MEC Networks

    An online resource scheduling framework is proposed for minimizing the sum of weighted task latency for all Internet-of-Things (IoT) users, by optimizing the offloading decision, transmission power, and resource allocation in a large-scale mobile-edge computing (MEC) system. Toward this end, a deep reinforcement learning (DRL)-based solution is proposed, which includes the following components. First, a related and regularized stacked autoencoder (2r-SAE) with unsupervised learning is applied to perform data compression and representation for high-dimensional channel quality information (CQI) data, which reduces the state space for DRL. Second, we present an adaptive simulated annealing (ASA) approach as the action search method of DRL, in which an adaptive h-mutation is used to guide the search direction and an adaptive iteration is proposed to enhance search efficiency during the DRL process. Third, a preserved and prioritized experience replay (2p-ER) is introduced to assist the DRL in training the policy network and finding the optimal offloading policy. Numerical results demonstrate that the proposed algorithm achieves near-optimal performance while significantly reducing computational time compared with existing benchmarks.
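    The paper's ASA search (with its adaptive h-mutation and adaptive iteration) is not reproduced here, but a plain simulated-annealing search over binary offloading decisions conveys the action-search idea; the cost callable below is a placeholder for the weighted-latency objective, and all hyperparameter values are illustrative.

        import math
        import random

        def simulated_annealing(cost, n_users, iters=500, t0=1.0, alpha=0.99):
            """Minimal simulated-annealing search over 0/1 offloading decisions.

            cost: callable mapping a 0/1 offloading vector to weighted task latency.
            """
            x = [random.randint(0, 1) for _ in range(n_users)]   # random start
            best, best_cost, t = list(x), cost(x), t0
            for _ in range(iters):
                y = list(x)
                y[random.randrange(n_users)] ^= 1                # flip one user's decision
                delta = cost(y) - cost(x)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    x = y                                        # accept the move
                    if cost(x) < best_cost:
                        best, best_cost = list(x), cost(x)
                t *= alpha                                       # cool the temperature
            return best, best_cost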

    A Novel Cross Entropy Approach for Offloading Learning in Mobile Edge Computing

    In this letter, we propose a novel offloading learning approach to balance energy consumption and latency in a multi-tier network with mobile edge computing. To solve this integer programming problem, instead of using conventional optimization tools, we apply a cross-entropy approach that iteratively learns the probability distribution of elite solution samples. Compared to existing methods, the proposed approach permits a parallel computing architecture and is verified to be computationally very efficient. Specifically, it achieves near-optimal performance and performs well across different choices of hyperparameter values in the proposed learning approach.
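    As an illustration of the cross-entropy idea described above, the sketch below samples binary offloading vectors from per-user Bernoulli distributions and refits the probabilities to the elite (lowest-cost) samples each iteration; the cost callable and hyperparameter values are placeholders, not the letter's exact settings.

        import numpy as np

        def cross_entropy_offloading(cost, n_users, n_samples=200,
                                     elite_frac=0.1, iters=30, smooth=0.7):
            """Cross-entropy search for a binary offloading vector (illustrative).

            cost: callable mapping a 0/1 numpy vector to a scalar objective.
            """
            p = np.full(n_users, 0.5)                  # Bernoulli sampling probabilities
            n_elite = max(1, int(elite_frac * n_samples))
            for _ in range(iters):
                samples = (np.random.rand(n_samples, n_users) < p).astype(int)
                scores = np.array([cost(s) for s in samples])
                elite = samples[np.argsort(scores)[:n_elite]]       # lowest-cost samples
                p = smooth * p + (1 - smooth) * elite.mean(axis=0)  # refit probabilities
            return (p > 0.5).astype(int)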

    Stochastic Control of Computation Offloading to a Helper with a Dynamically Loaded CPU

    Due to the densification of wireless networks, there is an abundance of idle computation resources at edge devices. These resources can be scavenged by offloading heavy computation tasks from small IoT devices in proximity, thereby overcoming their limitations and lengthening their battery lives. However, unlike dedicated servers, the spare resources offered by edge helpers are random and intermittent. Thus, it is essential for a user to intelligently control the amounts of data for offloading and local computing so as to ensure a computation task is finished in time while consuming minimum energy. In this paper, we design energy-efficient control policies for a computation offloading system with a random channel and a helper with a dynamically loaded CPU. Specifically, the policy determines the sizes of offloaded and locally computed data for a given task in different slots such that the total energy consumption for transmission and the local CPU is minimized under a task-deadline constraint. As a result, the policies endow an offloading user with robustness against channel-and-helper randomness, besides balancing offloading and local computing. By modeling the channel and helper CPU as Markov chains, the offloading control problem is converted into a Markov decision process. Though dynamic programming (DP) for numerically solving the problem does not yield the optimal policies in closed form, we leverage the procedure to characterize the optimal policy structure and apply the result to design optimal or suboptimal policies. For cases ranging from zero to large buffers, the low complexity of the policies overcomes the "curse of dimensionality" in DP arising from the joint consideration of channel, helper-CPU, and buffer states. Comment: This ongoing work has been submitted to the IEEE for possible publication
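    A minimal sketch of the underlying Markov-decision machinery: backward dynamic programming over a discretized channel/helper-CPU state, assuming given transition matrices and per-slot energy costs. The paper's deadline and buffer state components are omitted for brevity, so this only illustrates the backward-induction step.

        import numpy as np

        def finite_horizon_dp(P, energy, T):
            """Backward induction over a discrete Markov state (channel x helper CPU).

            P[a]: n_states x n_states transition matrix under action a.
            energy[a, s]: per-slot energy cost of taking action a in state s.
            Returns value and policy tables (deadline/buffer constraints omitted).
            """
            n_actions, n_states = energy.shape
            V = np.zeros((T + 1, n_states))               # terminal cost is zero
            policy = np.zeros((T, n_states), dtype=int)
            for t in range(T - 1, -1, -1):                # backward in time
                for s in range(n_states):
                    q = [energy[a, s] + P[a][s] @ V[t + 1] for a in range(n_actions)]
                    policy[t, s] = int(np.argmin(q))
                    V[t, s] = q[policy[t, s]]
            return V, policy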

    Service Capacity Enhanced Task Offloading and Resource Allocation in Multi-Server Edge Computing Environment

    An edge computing environment features multiple edge servers and multiple service clients. In this environment, mobile service providers can offload client-side computation tasks from service clients' devices onto edge servers to reduce the service latency and power consumption experienced by the clients. A critical issue that has yet to be properly addressed is how to allocate edge computing resources to achieve two optimization objectives: 1) minimize the service cost, measured by the service latency and power consumption experienced by service clients; and 2) maximize the service capacity, measured by the number of service clients that can offload their computation tasks in the long term. This paper formulates this long-term problem as a stochastic optimization problem and solves it with an online algorithm based on Lyapunov optimization. The NP-hard problem is decomposed into three sub-problems, which are then solved with a suite of techniques. Experimental results show that our approach significantly outperforms two baseline approaches. Comment: This paper has been accepted by the Early Submission Phase of ICWS201
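    The per-slot core of a Lyapunov (drift-plus-penalty) online algorithm can be sketched as follows; the candidate decisions, cost function, and queue dynamics are placeholders, and the paper's actual decomposition into three sub-problems is not reproduced here.

        def drift_plus_penalty(queues, candidates, service_cost, arrivals, V=10.0):
            """One slot of a drift-plus-penalty rule (illustrative sketch).

            queues: current virtual-queue backlogs, one per long-term constraint.
            candidates: iterable of feasible per-slot decisions.
            service_cost(d): instantaneous service cost (latency + power) of decision d.
            arrivals(d): per-queue increment minus service under decision d.
            """
            best, best_val = None, float("inf")
            for d in candidates:
                # V-weighted penalty plus queue-weighted drift term
                val = V * service_cost(d) + sum(q * a for q, a in zip(queues, arrivals(d)))
                if val < best_val:
                    best, best_val = d, val
            new_queues = [max(q + a, 0.0) for q, a in zip(queues, arrivals(best))]
            return best, new_queues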

    Exploiting Massive D2D Collaboration for Energy-Efficient Mobile Edge Computing

    In this article, we propose a novel Device-to-Device (D2D) Crowd framework for 5G mobile edge computing, in which a massive crowd of devices at the network edge leverage network-assisted D2D collaboration for computation and communication resource sharing with each other. A key objective of this framework is to achieve energy-efficient collaborative task execution at the network edge for mobile users. Specifically, we first introduce the D2D Crowd system model in detail and then formulate the energy-efficient D2D Crowd task assignment problem, taking into account the necessary constraints. We next propose a graph-matching-based optimal task assignment policy and evaluate its performance through an extensive numerical study, which shows more than 50% energy consumption reduction over local task execution. Finally, we discuss directions for extending the D2D Crowd framework by taking into account a variety of application factors. Comment: Xu Chen, Lingjun Pu, Lin Gao, Weigang Wu, and Di Wu, "Exploiting Massive D2D Collaboration for Energy-Efficient Mobile Edge Computing," accepted by IEEE Wireless Communications, 201
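    To illustrate the graph-matching idea, the snippet below computes a minimum-energy bipartite assignment of tasks to collaborating devices with SciPy; the energy matrix is hypothetical, and the paper's matching model is richer than this one-to-one example.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Hypothetical cost matrix: energy[i, j] = energy to run task i on device j.
        energy = np.array([[3.0, 1.5, 2.2],
                           [2.1, 2.8, 1.0],
                           [1.2, 2.5, 3.3]])

        rows, cols = linear_sum_assignment(energy)   # min-cost bipartite matching
        total = energy[rows, cols].sum()
        print(dict(zip(rows.tolist(), cols.tolist())), total)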

    Joint Optimal Software Caching, Computation Offloading and Communications Resource Allocation for Mobile Edge Computing

    As software may be used by multiple users, caching popular software at the wireless edge has been considered to save computation and communications resources for mobile edge computing (MEC). However, fetching uncached software from the core network and multicasting popular software to users have so far been ignored, so existing designs are incomplete and less practical. In this paper, we propose a joint caching, computation, and communications mechanism which involves software fetching, caching, and multicasting, as well as task input data uploading, task execution (with non-negligible time duration), and computation result downloading, and we characterize it mathematically. We then optimize the joint caching, offloading, and time allocation policy to minimize the weighted sum energy consumption subject to the caching and deadline constraints. The problem is a challenging two-timescale mixed integer nonlinear programming (MINLP) problem and is NP-hard in general. We convert it into an equivalent convex MINLP problem using appropriate transformations and propose two low-complexity algorithms to obtain suboptimal solutions of the original non-convex MINLP problem. Specifically, the first suboptimal solution is obtained by solving a relaxed convex problem using the consensus alternating direction method of multipliers (ADMM) and then rounding its optimal solution properly. The second suboptimal solution is obtained by finding a stationary point of an equivalent difference-of-convex (DC) problem using the penalty convex-concave procedure (Penalty-CCP) and ADMM. Finally, numerical results show that the proposed solutions outperform existing schemes and reveal their advantages in efficiently utilizing storage, computation, and communications resources. Comment: To appear in IEEE Trans. Veh. Technol., 202
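    The first algorithm relaxes the binary caching variables, solves the convex problem with ADMM, and rounds the result; a generic greedy-rounding step (not the paper's exact rounding rule) might look like the sketch below, with the feasibility callable standing in for the remaining deadline constraints.

        import numpy as np

        def relax_and_round(x_relaxed, cache_capacity, feasible):
            """Round a relaxed caching vector to binary under a cache budget.

            x_relaxed: fractional caching indicators from the relaxed convex problem.
            cache_capacity: number of programs that fit in the edge cache.
            feasible: callable checking the remaining (deadline) constraints.
            """
            order = np.argsort(-x_relaxed)            # largest fractions first
            x = np.zeros_like(x_relaxed)
            for i in order[:cache_capacity]:
                x[i] = 1.0                            # greedily fix the top fractions
            return x if feasible(x) else None         # fall back if rounding infeasible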

    Energy Efficient Mobile Cloud Computing Powered by Wireless Energy Transfer (extended version)

    Achieving long battery life or even self-sustainability has been a long-standing challenge in designing mobile devices. This paper presents a novel solution that seamlessly integrates two technologies, mobile cloud computing and microwave power transfer (MPT), to enable computation in passive low-complexity devices such as sensors and wearable computing devices. Specifically, considering a single-user system, a base station (BS) either transfers power to a mobile or offloads computation from it to the cloud; the mobile uses harvested energy to compute given data either locally or by offloading. A framework for energy-efficient computing is proposed that comprises a set of policies for controlling CPU cycles in the local-computing mode, time division between MPT and offloading in the offloading mode, and mode selection. Given CPU-cycle statistics and channel state information (CSI), the policies aim at maximizing the probability of successfully computing given data, called the computing probability, under energy-harvesting and deadline constraints. The policy optimization is translated into the equivalent problems of minimizing the mobile energy consumption for local computing and maximizing the mobile energy savings for offloading, which are solved using convex optimization theory. The structures of the resultant policies are characterized in closed form. Furthermore, given non-causal CSI, the analytical framework is extended to support computation load allocation over multiple channel realizations, which further increases the computing probability. Finally, simulations demonstrate the feasibility of wirelessly powered mobile cloud computing and the gain of its optimal control. Comment: double column
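    Under the standard dynamic voltage scaling model often used in this line of work (energy per cycle proportional to the square of the CPU speed, which may differ in detail from this paper's model), running at the constant speed that just meets the deadline minimizes local-computing energy; the numbers below are purely illustrative.

        # Energy of local computing under a hard deadline, assuming energy per
        # cycle k * f^2: running N cycles at constant speed f = N / T gives
        #   E_local = k * N * f^2 = k * N^3 / T^2.
        k = 1e-27        # effective switched capacitance (illustrative value)
        N = 1e8          # CPU cycles required by the task (illustrative)
        T = 0.5          # deadline in seconds (illustrative)
        f = N / T        # constant speed is optimal since f^2 is convex in f
        E_local = k * N * f ** 2
        print(f"CPU speed {f:.2e} Hz, local energy {E_local:.3e} J")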

    Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing

    With the breakthroughs in deep learning, recent years have witnessed a boom in artificial intelligence (AI) applications and services, spanning from personal assistants to recommendation systems to video/audio surveillance. More recently, with the proliferation of mobile computing and the Internet of Things (IoT), billions of mobile and IoT devices are connected to the Internet, generating zillions of bytes of data at the network edge. Driven by this trend, there is an urgent need to push the AI frontier to the network edge so as to fully unleash the potential of edge big data. To meet this demand, edge computing, an emerging paradigm that pushes computing tasks and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new interdiscipline, edge AI or edge intelligence, is beginning to receive a tremendous amount of interest. However, research on edge intelligence is still in its infancy, and a dedicated venue for exchanging the recent advances in edge intelligence is highly desired by both the computer systems and artificial intelligence communities. To this end, we conduct a comprehensive survey of recent research efforts on edge intelligence. Specifically, we first review the background of and motivation for artificial intelligence running at the network edge. We then provide an overview of the overarching architectures, frameworks, and emerging key technologies for deep learning training and inference at the network edge. Finally, we discuss future research opportunities on edge intelligence. We believe that this survey will elicit escalating attention, stimulate fruitful discussions, and inspire further research ideas on edge intelligence. Comment: Zhi Zhou, Xu Chen, En Li, Liekang Zeng, Ke Luo, and Junshan Zhang, "Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing," Proceedings of the IEEE

    CloudAR: A Cloud-based Framework for Mobile Augmented Reality

    The computation capabilities of recent mobile devices enable natural feature processing for Augmented Reality (AR). However, mobile AR applications still face scalability and performance challenges. In this paper, we propose CloudAR, a mobile AR framework that exploits the advantages of cloud and edge computing through recognition task offloading. We explore the design space of cloud-based AR exhaustively and optimize the offloading pipeline to minimize time and energy consumption. We design an innovative tracking system for mobile devices that provides lightweight tracking in six degrees of freedom (6DoF) and hides the offloading latency from users' perception. We also design a multi-object image retrieval pipeline that executes fast and accurate image recognition tasks on servers. In our evaluations, a mobile AR application built with the CloudAR framework runs at 30 frames per second (FPS) on average, with precise tracking of only 1-2 pixel error and image recognition of at least 97% accuracy. Our results also show that CloudAR outperforms one of the leading commercial AR frameworks in several performance metrics.

    A Generic Framework for Task Offloading in mmWave MEC Backhaul Networks

    With the emergence of millimeter-wave (mmWave) communication technology, the capacity of mobile backhaul networks can be significantly increased. On the other hand, Mobile Edge Computing (MEC) provides an appropriate infrastructure to offload latency-sensitive tasks. However, the amount of resources in MEC servers is typically limited. Therefore, it is important to intelligently manage MEC task offloading by optimizing the backhaul bandwidth and edge server resource allocation in order to decrease the overall latency of the offloaded tasks. This paper investigates the task allocation problem in an MEC environment where mmWave technology is used in the backhaul network. We formulate a Mixed Integer NonLinear Programming (MINLP) problem with the goal of minimizing the total task serving time. Its objective is to determine an optimized network topology, identify which server processes a given offloaded task, find the path of each user task, and determine the bandwidth allocated to each task on mmWave backhaul links. Because the problem is difficult to solve, we develop a two-step approach. First, a Mixed Integer Linear Program (MILP) determining the network topology and routing paths is solved optimally. Then, the fractions of bandwidth allocated to each user task are optimized by solving a quasi-convex problem. Numerical results illustrate the obtained topology and routing paths for selected scenarios and show that optimizing the bandwidth allocation significantly improves the total serving time, particularly for bandwidth-intensive tasks.
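    Quasi-convex problems of this kind are commonly minimized by bisecting on the objective value and solving a convex feasibility problem at each step; the sketch below assumes such a feasibility oracle and is not the paper's exact procedure.

        def bisect_quasiconvex(feasible, lo, hi, tol=1e-6):
            """Bisection for quasi-convex minimization (illustrative sketch).

            feasible(t): True if some bandwidth allocation finishes all tasks
            by time t (a convex feasibility check in the allocation variables).
            lo, hi: bracket with feasible(hi) True and feasible(lo) False.
            """
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if feasible(mid):
                    hi = mid       # serving time mid is achievable; try smaller
                else:
                    lo = mid       # infeasible; more time is needed
            return hi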