
    A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications

    With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands because of the heavy burden it places on the backhaul links and the long latency it incurs. Therefore, new architectures that bring network functions and content to the network edge, namely mobile edge computing and caching, have been proposed. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we provide an exhaustive review of state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including their definition, architecture and advantages. Next, we present a comprehensive survey of issues in computing, caching and communication techniques at the network edge. The applications and use cases of mobile edge networks are then discussed, followed by the key enablers of mobile edge networks such as cloud technology, SDN/NFV and smart devices. Finally, open research challenges and future directions are presented.

    Decentralized Computation Offloading for Multi-User Mobile Edge Computing: A Deep Reinforcement Learning Approach

    Mobile edge computing (MEC) has recently emerged as a promising solution to relieve resource-limited mobile devices of computation-intensive tasks, enabling devices to offload workloads to nearby MEC servers and improve the quality of computation experience. Nevertheless, for a MEC system consisting of multiple mobile users with stochastic task arrivals and time-varying wireless channels, as considered in this paper, designing computation offloading policies that minimize the long-term average computation cost in terms of power consumption and buffering delay is challenging. A deep reinforcement learning (DRL) based decentralized dynamic computation offloading strategy is investigated to build a scalable MEC system with limited feedback. Specifically, a continuous-action-space DRL approach named deep deterministic policy gradient (DDPG) is adopted to learn efficient computation offloading policies independently at each mobile user. The powers of both local execution and task offloading can thus be adaptively allocated by the learned policies from each user's local observation of the MEC system. Numerical results demonstrate that efficient policies can be learned at each user, and that the proposed DDPG-based decentralized strategy outperforms the conventional deep Q-network (DQN) based discrete power control strategy and other greedy strategies with reduced computation cost. In addition, the power-delay tradeoff is analyzed for both the DDPG-based and DQN-based strategies.
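    The following is a minimal per-user DDPG sketch in PyTorch, illustrating the kind of continuous power allocation the abstract describes. It is not the authors' implementation; the dimensions, layer sizes, and constants (OBS_DIM, ACT_DIM, P_MAX, learning rates) are illustrative assumptions.

```python
# Minimal per-user DDPG sketch for continuous offloading power control.
# All dimensions and hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, P_MAX = 6, 2, 1.0   # obs: queue/channel state; act: [local, offload] power

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Sigmoid())
    def forward(self, obs):                       # powers scaled to [0, P_MAX]
        return P_MAX * self.net(obs)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

actor, critic = Actor(), Critic()
actor_t, critic_t = Actor(), Critic()             # target networks
actor_t.load_state_dict(actor.state_dict())
critic_t.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005

def ddpg_update(obs, act, cost, next_obs):
    """One DDPG step on a batch of transitions; reward = -cost (power + delay)."""
    with torch.no_grad():
        target_q = -cost + GAMMA * critic_t(next_obs, actor_t(next_obs))
    critic_loss = nn.functional.mse_loss(critic(obs, act), target_q)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(obs, actor(obs)).mean()  # deterministic policy gradient
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    for t, s in zip(list(actor_t.parameters()) + list(critic_t.parameters()),
                    list(actor.parameters()) + list(critic.parameters())):
        t.data.mul_(1 - TAU).add_(TAU * s.data)   # soft target update

# toy usage with random transitions of batch size 32
B = 32
ddpg_update(torch.rand(B, OBS_DIM), torch.rand(B, ACT_DIM) * P_MAX,
            torch.rand(B, 1), torch.rand(B, OBS_DIM))
```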

    Energy-Efficient Joint Offloading and Wireless Resource Allocation Strategy in Multi-MEC Server Systems

    Mobile edge computing (MEC) is an emerging paradigm in which mobile devices can offload computation-intensive or latency-critical tasks to nearby MEC servers so as to save energy and extend battery life. Unlike a cloud server, an MEC server is a small-scale data center deployed at a wireless access point, so it is highly sensitive to both radio and computing resources. In this paper, we consider an Orthogonal Frequency-Division Multiple Access (OFDMA) based multi-user, multi-MEC-server system, where the task offloading strategies and wireless resource allocation are jointly investigated. Aiming to minimize the total energy consumption, we propose a joint offloading and resource allocation strategy for latency-critical applications. Through a bi-level optimization approach, the original NP-hard problem is decoupled into a lower-level problem that seeks the allocation of power and subcarriers and an upper-level task offloading problem. Simulation results show that the proposed algorithm achieves excellent performance in energy saving and successful offloading probability (SOP) compared with conventional schemes. Comment: 6 pages, 5 figures, to appear in IEEE ICC 2018, May 20-2
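    A toy sketch of the bi-level idea described above, under simplifying assumptions that are not taken from the paper: the upper level exhaustively searches the binary offloading decisions, and the lower level assigns one dedicated subcarrier per offloaded user and inverts a Shannon-capacity rate model to get the minimum transmit energy. The numeric parameters are illustrative only.

```python
# Bi-level decomposition sketch: upper level searches offloading decisions,
# lower level prices each decision via a simplified power/subcarrier model.
import itertools

def lower_level_energy(offload_set, task_bits, channel_gain, bandwidth=180e3,
                       noise=1e-13, deadline=0.1):
    """Minimum transmit energy to ship each offloaded task within the deadline,
    assuming one dedicated subcarrier per offloaded user (simplifying assumption)."""
    energy = 0.0
    for u in offload_set:
        rate = task_bits[u] / deadline                       # required bits/s
        # invert Shannon capacity for the required transmit power
        power = (2 ** (rate / bandwidth) - 1) * noise / channel_gain[u]
        energy += power * deadline
    return energy

def upper_level(num_users, local_energy, task_bits, channel_gain):
    """Exhaustive upper-level search (fine for small N; the paper uses a smarter method)."""
    best = (float("inf"), None)
    for mask in itertools.product([0, 1], repeat=num_users):
        offload = [u for u in range(num_users) if mask[u]]
        total = sum(local_energy[u] for u in range(num_users) if not mask[u])
        total += lower_level_energy(offload, task_bits, channel_gain)
        best = min(best, (total, mask))
    return best

# toy instance: 3 users, local energies in J, task sizes in bits, channel gains
print(upper_level(3, [0.4, 0.2, 0.6], [2e5, 1e5, 3e5], [1e-7, 5e-8, 2e-7]))
```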

    Optimal Task Offloading and Resource Allocation in Mobile-Edge Computing with Inter-user Task Dependency

    Mobile-edge computing (MEC) has recently emerged as a cost-effective paradigm to enhance the computing capability of hardware-constrained wireless devices (WDs). In this paper, we first consider a two-user MEC network, where each WD has a sequence of tasks to execute. In particular, we consider task dependency between the two WDs, where the input of a task at one WD requires the final task output of the other WD. Under the considered task-dependency model, we study the optimal task offloading policy and resource allocation (e.g., offloading transmit power and local CPU frequencies) that minimize the weighted sum of the WDs' energy consumption and task execution time. The problem is challenging due to the combinatorial nature of the offloading decisions among all tasks and their strong coupling with resource allocation. To tackle this problem, we first assume that the offloading decisions are given and derive closed-form expressions for the optimal offloading transmit power and local CPU frequencies. An efficient bisection search method is then proposed to obtain the optimal solutions. Furthermore, we prove that the optimal offloading decisions follow a one-climb policy, based on which a reduced-complexity Gibbs sampling algorithm is proposed to obtain the optimal offloading decisions. We then extend the investigation to a general multi-user scenario, where the input of a task at one WD requires the final task outputs from multiple other WDs. Numerical results show that the proposed method significantly outperforms representative benchmarks and achieves low complexity with respect to the call graph size. Comment: This paper has been accepted for publication in IEEE Transactions on Wireless Communications
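    To give a feel for the sampling component, here is a generic Gibbs-sampling sketch over binary per-task offloading decisions. The cost function is a toy stand-in for the paper's weighted energy-plus-delay objective (which would be evaluated with the closed-form power/CPU-frequency solution), and the reduced-complexity one-climb restriction is not implemented here.

```python
# Illustrative Gibbs sampling over per-task offloading decisions.
# x[i] = 1 means "offload task i", 0 means "execute locally".
import math
import random

def gibbs_offloading(num_tasks, cost, iters=200, temperature=0.1):
    """Samples decision vectors approximately proportional to exp(-cost / T)."""
    x = [0] * num_tasks
    for _ in range(iters):
        i = random.randrange(num_tasks)
        costs = []
        for v in (0, 1):                 # evaluate both choices for task i
            x[i] = v
            costs.append(cost(x))
        # Boltzmann probability of offloading task i given all other decisions
        p1 = 1.0 / (1.0 + math.exp((costs[1] - costs[0]) / temperature))
        x[i] = 1 if random.random() < p1 else 0
    return x

# toy cost: offloading saves compute energy but incurs a fixed transmit cost
toy = lambda x: sum(0.3 if xi else 1.0 for xi in x) + 0.5 * any(x)
print(gibbs_offloading(5, toy))
```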

    All One Needs to Know about Fog Computing and Related Edge Computing Paradigms: A Complete Survey

    With the Internet of Things (IoT) becoming part of our daily life and our environment, we expect rapid growth in the number of connected devices. The IoT is expected to connect billions of devices and humans, bringing promising advantages for us. With this growth, fog computing, along with its related edge computing paradigms such as multi-access edge computing (MEC) and cloudlets, is seen as a promising solution for handling the large volume of security-critical and time-sensitive data being produced by the IoT. In this paper, we first provide a tutorial on fog computing and its related computing paradigms, including their similarities and differences. Next, we provide a taxonomy of research topics in fog computing, and through a comprehensive survey we summarize and categorize the efforts on fog computing and its related computing paradigms. Finally, we provide challenges and future directions for research in fog computing. Comment: 48 pages, 7 tables, 11 figures, 450 references. The data (categories and features/objectives of the papers) of this survey are now available publicly. Accepted by the Elsevier Journal of Systems Architecture

    Resource Sharing of a Computing Access Point for Multi-user Mobile Cloud Offloading with Delay Constraints

    We consider a mobile cloud computing system with multiple users, a remote cloud server, and a computing access point (CAP). The CAP serves both as the network access gateway and as a computation service provider for the mobile users: it can either process the tasks received from mobile users or offload them to the cloud. We jointly optimize the offloading decisions of all users, together with the allocation of computation and communication resources, to minimize the overall cost of energy consumption, computation, and the maximum delay among users. The joint optimization problem is formulated as a mixed-integer program. We show that the problem can be reformulated and transformed into a non-convex quadratically constrained quadratic program, which is NP-hard in general. We then propose an efficient solution to this problem via semidefinite relaxation and a novel randomization mapping method. Furthermore, when there is a strict delay constraint on processing each user's task, we propose a three-step algorithm to guarantee the feasibility and local optimality of the obtained solution. Our simulation results show that the proposed solutions give nearly optimal performance under a wide range of parameter settings, and that adding a CAP can significantly reduce the cost of multi-user task offloading compared with conventional mobile cloud computing, where only the remote cloud server is available. Comment: in IEEE Transactions on Mobile Computing, 201
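    A generic randomization-mapping step after semidefinite relaxation can be pictured as below. This is an illustrative sketch only, not the paper's three-step algorithm: it assumes the SDR solution is available as a covariance-like matrix X, and the cost and feasibility functions are hypothetical placeholders.

```python
# Randomized rounding of an SDR solution into binary offloading decisions:
# draw Gaussian samples with covariance X, threshold them, keep the cheapest
# sample that satisfies the (delay) feasibility check.
import numpy as np

def randomization_mapping(X, cost, feasible, num_samples=100, seed=None):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    samples = rng.multivariate_normal(np.zeros(n), X, size=num_samples)
    best_x, best_c = None, np.inf
    for s in samples:
        x = (s > 0).astype(int)          # round each entry to {0, 1}
        if feasible(x):
            c = cost(x)
            if c < best_c:
                best_x, best_c = x, c
    return best_x, best_c

# toy example with a 3-user "relaxed" solution and trivial cost/feasibility
X = np.eye(3) * 0.5
print(randomization_mapping(X, cost=lambda x: x.sum() + 1.0,
                            feasible=lambda x: True, num_samples=20))
```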

    Air-Ground Integrated Mobile Edge Networks: Architecture, Challenges and Opportunities

    Ever-increasing mobile data demands have posed significant challenges to current radio access networks, while emerging computation-heavy Internet of Things (IoT) applications with varied requirements demand more flexibility and resilience from the cloud/edge computing architecture. In this article, to address these issues, we propose a novel air-ground integrated mobile edge network (AGMEN), in which UAVs are flexibly deployed and scheduled to assist the communication, caching, and computing of the edge network. Specifically, we present the detailed architecture of AGMEN, and investigate the benefits and application scenarios of drone-cells and of UAV-assisted edge caching and computing. Furthermore, the challenging issues in AGMEN are discussed, and potential research directions are highlighted. Comment: Accepted by IEEE Communications Magazine. 5 figures

    DeepWear: Adaptive Local Offloading for On-Wearable Deep Learning

    Due to their on-body and ubiquitous nature, wearables can generate a wide range of unique sensor data, creating countless opportunities for deep learning tasks. We propose DeepWear, a deep learning (DL) framework for wearable devices that improves performance and reduces the energy footprint. DeepWear strategically offloads DL tasks from a wearable device to its paired handheld device over a local network. Compared with remote-cloud-based offloading, DeepWear requires no Internet connectivity, consumes less energy, and is more robust to privacy breaches. DeepWear provides several novel techniques, such as context-aware offloading, strategic model partitioning, and pipelining support, to efficiently utilize the processing capacity of nearby paired handhelds. Deployed as a user-space library, DeepWear offers developer-friendly APIs that are as simple as those in traditional DL libraries such as TensorFlow. We have implemented DeepWear on Android and evaluated it on COTS smartphones and smartwatches with real DL models. DeepWear brings up to 5.08x and 23.0x execution speedup, as well as 53.5% and 85.5% energy savings, compared with wearable-only and handheld-only strategies, respectively.
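    A simple sketch of the kind of partition-point selection that strategic model partitioning implies: run the first k layers on the wearable, ship the intermediate tensor over Bluetooth, and finish on the handheld, choosing the k with the lowest estimated latency. The per-layer timings, output sizes, and Bluetooth rate below are hypothetical, and the real system considers more context (battery, device load, pipelining).

```python
# Pick the best split layer between a wearable and its paired handheld.
def best_partition(wear_ms, hand_ms, xfer_bytes, bt_bytes_per_ms, handheld_available=True):
    """wear_ms[i]/hand_ms[i]: time of layer i on each device (ms);
    xfer_bytes[k]: bytes shipped if the first k layers stay on the wearable
    (xfer_bytes[0] is the raw input size); k = n means fully local execution."""
    n = len(wear_ms)
    if not handheld_available:                        # context: no paired handheld in reach
        return n, sum(wear_ms)
    candidates = []
    for k in range(n + 1):
        transfer = 0.0 if k == n else xfer_bytes[k] / bt_bytes_per_ms
        t = sum(wear_ms[:k]) + transfer + sum(hand_ms[k:])
        candidates.append((t, k))
    best_t, best_k = min(candidates)
    return best_k, best_t

# toy 3-layer model: layers are heavy on the watch, cheap on the phone;
# Bluetooth throughput of roughly 250 bytes/ms (~2 Mbps) is assumed.
print(best_partition(wear_ms=[120, 200, 40], hand_ms=[15, 25, 5],
                     xfer_bytes=[90_000, 30_000, 4_000], bt_bytes_per_ms=250))
```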

    Base Station ON-OFF Switching in 5G Wireless Networks: Approaches and Challenges

    To achieve the expected 1000x data rates under exponentially growing traffic demand, a large number of base stations (BSs) or access points (APs) will be deployed in fifth generation (5G) wireless systems to support high-data-rate services and provide seamless coverage. Although such BSs are expected to be small-scale and low-power, the aggregated energy consumption of all BSs would still be remarkable, raising environmental and economic concerns. In existing cellular networks, turning off under-utilized BSs is an efficient approach to conserve energy while preserving the quality of service (QoS) of mobile users. However, in 5G systems with new physical layer techniques and a highly heterogeneous network architecture, new challenges arise in the design of BS ON-OFF switching strategies. In this article, we begin with a discussion of the inherent technical challenges of BS ON-OFF switching. We then provide a comprehensive review of recent advances in switching mechanisms for different application scenarios. Finally, we present open research problems and conclude the paper. Comment: To appear in IEEE Wireless Communications, 201
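    As a toy illustration of the basic idea of switching off under-utilized BSs while preserving QoS, the greedy sketch below turns off the most lightly loaded BS whenever its traffic can be absorbed by active neighbours without exceeding a load threshold. This is a generic heuristic for intuition only, not any specific scheme surveyed in the article; the loads, neighbour lists, and threshold are made up.

```python
# Greedy BS switch-off heuristic: shed the lightest-loaded BS while
# every remaining neighbour stays under the QoS load threshold.
def greedy_switch_off(loads, neighbours, max_load=0.8):
    """loads: dict bs -> current load in [0, 1]; neighbours: dict bs -> list of BSs."""
    active = set(loads)
    changed = True
    while changed:
        changed = False
        for bs in sorted(active, key=lambda b: loads[b]):     # lightest first
            nbrs = [n for n in neighbours.get(bs, []) if n in active]
            if not nbrs:
                continue
            share = loads[bs] / len(nbrs)                     # spread its traffic evenly
            if all(loads[n] + share <= max_load for n in nbrs):
                for n in nbrs:
                    loads[n] += share
                active.remove(bs)
                changed = True
                break
    return active

print(greedy_switch_off({"A": 0.1, "B": 0.5, "C": 0.6},
                        {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}))
```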

    Optimized Computation Offloading Performance in Virtual Edge Computing Systems via Deep Reinforcement Learning

    To improve the quality of computation experience for mobile devices, mobile-edge computing (MEC) is a promising paradigm that provides computing capabilities in close proximity within a sliced radio access network (RAN), which supports both traditional communication and MEC services. Nevertheless, the design of computation offloading policies for a virtual MEC system remains challenging: whether to execute a computation task at the mobile device or to offload it to an MEC server for execution should adapt to the time-varying network dynamics. In this paper, we consider MEC for a representative mobile user (MU) in an ultra-dense sliced RAN, where multiple base stations (BSs) are available for computation offloading. The problem of finding an optimal computation offloading policy is modelled as a Markov decision process, where our objective is to maximize the long-term utility performance and an offloading decision is made based on the task queue state, the energy queue state, and the channel qualities between the MU and the BSs. To break the curse of high dimensionality in the state space, we first propose a double deep Q-network (DQN) based strategic computation offloading algorithm to learn the optimal policy without prior knowledge of the network dynamics. Then, motivated by the additive structure of the utility function, a Q-function decomposition technique is combined with the double DQN, leading to a novel learning algorithm for solving the stochastic computation offloading problem. Numerical experiments show that our proposed learning algorithms achieve a significant improvement in computation offloading performance compared with the baseline policies.
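    The sketch below illustrates, in PyTorch, how a double-DQN target can be formed with an additively decomposed Q-function in the spirit of this abstract: action selection uses the summed online networks, while evaluation uses the per-component target networks. The state dimension, action set, network sizes, and the two-component split are illustrative assumptions, not the authors' exact design.

```python
# Double-DQN targets with an additively decomposed Q-function.
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS, GAMMA = 8, 5, 0.9   # actions: local execution or offload to one of 4 BSs

def make_q():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_ACTIONS))

# one online/target pair per utility component (e.g., a delay term and an energy term)
online = nn.ModuleList([make_q(), make_q()])
target = nn.ModuleList([make_q(), make_q()])
target.load_state_dict(online.state_dict())

def double_dqn_targets(reward_parts, next_state, done):
    """reward_parts: [B, 2] per-component rewards; returns [B, 2] TD targets.
    Action selection uses the summed online Q (double DQN); evaluation uses the targets."""
    with torch.no_grad():
        q_online_sum = sum(q(next_state) for q in online)          # [B, A]
        a_star = q_online_sum.argmax(dim=1, keepdim=True)          # greedy w.r.t. online nets
        targets = []
        for k, q_t in enumerate(target):
            q_next = q_t(next_state).gather(1, a_star).squeeze(1)  # evaluate with target net k
            targets.append(reward_parts[:, k] + GAMMA * (1 - done) * q_next)
        return torch.stack(targets, dim=1)

# toy usage with a random batch
B = 4
print(double_dqn_targets(torch.rand(B, 2), torch.rand(B, STATE_DIM), torch.zeros(B)).shape)
```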