755 research outputs found

    Distributed and Application-aware Task Scheduling in Edge-clouds

    Edge computing is an emerging technology that places computing at the edge of the network to provide ultra-low latency. Computation offloading, a paradigm that migrates computing from mobile devices to remote servers, can now exploit edge computing by offloading computation to cloudlets in edge-clouds. However, task scheduling for computation offloading in edge-clouds faces a two-fold challenge. First, as cloudlets are geographically distributed, it is difficult for each cloudlet to perform load balancing without centralized control. Second, as offloaded tasks come in a wide variety of types, guaranteeing the user quality of experience (QoE) across task types is challenging. In this paper, we present Petrel, a distributed and application-aware task scheduling framework for edge-clouds. Petrel implements a sample-based load balancing technique and further adopts adaptive scheduling policies according to task type. This application-aware scheduling not only provides a QoE guarantee but also improves the overall scheduling performance. Trace-driven simulations show that Petrel achieves a significant improvement over existing scheduling strategies.
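
    The sample-based load balancing mentioned above is in the spirit of the classic power-of-d-choices technique. The sketch below is a minimal illustration of that idea, not Petrel's actual implementation: the Cloudlet class, the queue_length attribute and the task-type rule are illustrative assumptions.

        import random

        class Cloudlet:
            """Hypothetical cloudlet exposing its current queue length (illustrative only)."""
            def __init__(self, name):
                self.name = name
                self.queue = []

            def queue_length(self):
                return len(self.queue)

        def schedule_task(task, cloudlets, d=2):
            """Sample-based load balancing in the power-of-d-choices style:
            probe d random cloudlets and send the task to the least loaded one."""
            sampled = random.sample(cloudlets, min(d, len(cloudlets)))
            target = min(sampled, key=lambda c: c.queue_length())
            target.queue.append(task)
            return target

        def schedule_by_type(task, task_type, cloudlets):
            """Crude stand-in for application-aware policies: latency-sensitive
            tasks probe more cloudlets, so they land on lightly loaded servers
            with higher probability."""
            return schedule_task(task, cloudlets, d=4 if task_type == "latency-sensitive" else 2)

        # Example: dispatch ten tasks of mixed types across five cloudlets.
        cloudlets = [Cloudlet(f"cloudlet-{i}") for i in range(5)]
        for t in range(10):
            task_type = "latency-sensitive" if t % 2 == 0 else "throughput-oriented"
            schedule_by_type(f"task-{t}", task_type, cloudlets)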

    Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence

    With the rapid development of communication technologies and the surge in the use of mobile devices, a brand-new computation paradigm, Edge Computing, is quickly gaining popularity. Meanwhile, Artificial Intelligence (AI) applications are thriving with the breakthroughs in deep learning and the many improvements in hardware architectures. Billions of data bytes, generated at the network edge, put massive demands on data processing and structural optimization. Thus, there exists a strong demand to integrate Edge Computing and AI, which gives birth to Edge Intelligence. In this paper, we divide Edge Intelligence into AI for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial Intelligence on Edge). The former focuses on providing more optimal solutions to key problems in Edge Computing with the help of popular and effective AI technologies, while the latter studies how to carry out the entire process of building AI models, i.e., model training and inference, on the edge. This paper provides insights into this new inter-disciplinary field from a broader perspective. It discusses the core concepts and the research roadmap, which should provide the necessary background for potential future research initiatives in Edge Intelligence. Comment: 13 pages, 3 figures.

    Service Capacity Enhanced Task Offloading and Resource Allocation in Multi-Server Edge Computing Environment

    An edge computing environment features multiple edge servers and multiple service clients. In this environment, mobile service providers can offload client-side computation tasks from service clients' devices onto edge servers to reduce the service latency and power consumption experienced by the clients. A critical issue that has yet to be properly addressed is how to allocate edge computing resources to achieve two optimization objectives: 1) minimize the service cost measured by the service latency and the power consumption experienced by service clients; and 2) maximize the service capacity measured by the number of service clients that can offload their computation tasks in the long term. This paper formulates this long-term problem as a stochastic optimization problem and solves it with an online algorithm based on Lyapunov optimization. This NP-hard problem is decomposed into three sub-problems, which are then solved with a suite of techniques. The experimental results show that our approach significantly outperforms two baseline approaches. Comment: This paper has been accepted by the Early Submission Phase of ICWS201
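
    The Lyapunov-based online algorithm referred to above typically follows the generic drift-plus-penalty pattern: maintain a virtual queue for each long-term constraint and, in every time slot, greedily pick the action minimizing a weighted sum of instantaneous cost and queue backlog. The sketch below illustrates that generic pattern only; the cost model, the constraint and the parameter V are placeholders, not the paper's actual formulation.

        def drift_plus_penalty_step(Q, candidate_actions, cost, constraint_violation, V=10.0):
            """One slot of a generic Lyapunov drift-plus-penalty controller.

            Q: current virtual-queue backlog of a long-term constraint.
            candidate_actions: feasible actions in this slot.
            cost(a): instantaneous service cost of action a (e.g., latency plus power).
            constraint_violation(a): amount by which action a exceeds the per-slot budget.
            V: tradeoff knob between cost minimization and constraint satisfaction.
            """
            # Greedily pick the action minimizing V*cost + Q*violation in this slot.
            best = min(candidate_actions,
                       key=lambda a: V * cost(a) + Q * constraint_violation(a))
            # Virtual-queue update: backlog grows on violation and drains otherwise.
            Q_next = max(Q + constraint_violation(best), 0.0)
            return best, Q_next

        # Toy usage: choose how many tasks to offload (0..5) under an energy budget of 2 units per slot.
        Q = 0.0
        for slot in range(3):
            action, Q = drift_plus_penalty_step(
                Q,
                candidate_actions=range(6),
                cost=lambda a: 10.0 / (1 + a),                 # more offloading -> lower latency cost
                constraint_violation=lambda a: 0.5 * a - 2.0)  # energy spent minus per-slot budget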

    Application Management in Fog Computing Environments: A Taxonomy, Review and Future Directions

    The Internet of Things (IoT) paradigm is being rapidly adopted for the creation of smart environments in various domains. The IoT-enabled Cyber-Physical Systems (CPSs) associated with smart city, healthcare, Industry 4.0 and Agtech handle a huge volume of data and require real-time data processing services from different types of applications. The Cloud-centric execution of IoT applications barely meets such requirements, as the Cloud datacentres reside at a multi-hop distance from the IoT devices. Fog computing, an extension of Cloud at the edge network, can execute these applications closer to data sources. Thus, Fog computing can improve application service delivery time and resist network congestion. However, Fog nodes are highly distributed and heterogeneous, and most of them are constrained in resources and spatial sharing. Therefore, efficient management of applications is necessary to fully exploit the capabilities of Fog nodes. In this work, we investigate the existing application management strategies in Fog computing and review them in terms of architecture, placement and maintenance. Additionally, we propose a comprehensive taxonomy and highlight the research gaps in Fog-based application management. We also discuss a perspective model and provide future research directions for further improvement of application management in Fog computing.

    A Survey on Mobile Edge Networks: Convergence of Computing, Caching and Communications

    With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on backhaul links and long latency. Therefore, new architectures that bring network functions and content to the network edge have been proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we provide an exhaustive review of state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including their definition, architecture and advantages. Next, a comprehensive survey of computing, caching and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are then discussed, followed by the key enablers of mobile edge networks such as cloud technology, SDN/NFV and smart devices. Finally, open research challenges and future directions are presented.
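
    As a small illustration of the caching capability surveyed above, the snippet below sketches a basic LRU cache that an edge node could place in front of the backhaul. It is a generic textbook example, not tied to any specific architecture discussed in the survey.

        from collections import OrderedDict

        class EdgeCache:
            """Minimal LRU cache held at an edge node; requests that miss here
            would otherwise traverse the backhaul to the core network."""
            def __init__(self, capacity=3):
                self.capacity = capacity
                self.store = OrderedDict()

            def get(self, content_id, fetch_from_core):
                if content_id in self.store:
                    self.store.move_to_end(content_id)    # cache hit: refresh recency
                    return self.store[content_id]
                content = fetch_from_core(content_id)     # cache miss: fetch over the backhaul
                self.store[content_id] = content
                if len(self.store) > self.capacity:
                    self.store.popitem(last=False)        # evict the least recently used item
                return content

        cache = EdgeCache(capacity=2)
        video = cache.get("video-42", fetch_from_core=lambda cid: f"<bytes of {cid}>")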

    Aqua Computing: Coupling Computing and Communications

    The authors introduce a new vision for providing computing services for connected devices. It is based on the key concept that future computing resources will be coupled with communication resources, both for enhancing the experience of connected users and for optimising resources in the providers' infrastructures. Such coupling is achieved by joint/cooperative resource allocation algorithms, by integrating computing and communication services, and by integrating hardware in networks. This type of computing, in which computing services are not delivered independently but depend on networking services, is named Aqua Computing. The authors see Aqua Computing as a novel approach for delivering computing resources to end devices, where the computing power of the devices is enhanced automatically once they are connected to an Aqua Computing enabled network. The process of resource coupling is named computation dissolving. Then, an Aqua Computing architecture is proposed for mobile edge networks, in which computing and wireless networking resources are allocated jointly or cooperatively by a Mobile Cloud Controller, for the benefit of the end-users and/or the service providers. Finally, a working prototype of the system is presented and the gathered results demonstrate the performance of the Aqua Computing prototype. Comment: A shorter version of this paper will be submitted to an IEEE magazine.
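
    The coupling of computing and communication resources described above can be illustrated with a toy joint-allocation routine, in which a controller splits CPU capacity and wireless bandwidth across devices in proportion to their demands and reports the resulting end-to-end task latency. This is a hypothetical sketch under simplified assumptions, not the actual algorithm of the Mobile Cloud Controller.

        def joint_allocate(devices, total_cpu_ghz, total_bw_mbps):
            """Toy joint allocation: share CPU and bandwidth in proportion to each
            device's demand, then report the resulting end-to-end task latency.

            devices: list of dicts with 'cycles' (giga-cycles) and 'data_mb' per task.
            """
            total_cycles = sum(d["cycles"] for d in devices)
            total_data = sum(d["data_mb"] for d in devices)
            plan = []
            for d in devices:
                cpu = total_cpu_ghz * d["cycles"] / total_cycles     # GHz share
                bw = total_bw_mbps * d["data_mb"] / total_data       # Mbps share
                latency = d["data_mb"] * 8 / bw + d["cycles"] / cpu  # upload + compute time (s)
                plan.append({"cpu_ghz": cpu, "bw_mbps": bw, "latency_s": latency})
            return plan

        # Example: two devices with different compute and data demands.
        plan = joint_allocate(
            [{"cycles": 2.0, "data_mb": 5.0}, {"cycles": 1.0, "data_mb": 20.0}],
            total_cpu_ghz=10.0, total_bw_mbps=100.0)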

    Applications of Deep Reinforcement Learning in Communications and Networking: A Survey

    This paper presents a comprehensive literature review on applications of deep reinforcement learning in communications and networking. Modern networks, e.g., Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks, are becoming more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize the network performance under uncertainty in the network environment. Reinforcement learning has been used effectively to enable network entities to obtain the optimal policy, e.g., decisions or actions, given their states, when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and reinforcement learning may not be able to find the optimal policy in a reasonable time. Therefore, deep reinforcement learning, a combination of reinforcement learning and deep learning, has been developed to overcome this shortcoming. In this survey, we first give a tutorial on deep reinforcement learning, from fundamental concepts to advanced models. Then, we review deep reinforcement learning approaches proposed to address emerging issues in communications and networking. The issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, all of which are important to next-generation networks such as 5G and beyond. Furthermore, we present applications of deep reinforcement learning for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions of applying deep reinforcement learning. Comment: 37 pages, 13 figures, 6 tables, 174 reference papers.
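
    As a pointer to the tutorial material such a survey covers, the snippet below sketches the tabular Q-learning update that deep reinforcement learning generalizes by replacing the Q-table with a neural network (as in DQN). It is a generic textbook fragment with a toy environment, not code from the survey.

        import random
        from collections import defaultdict

        def q_learning(env_step, n_actions, episodes=200, alpha=0.1, gamma=0.99, eps=0.1):
            """Tabular Q-learning; DQN replaces the table Q with a neural network
            trained on the same temporal-difference target."""
            Q = defaultdict(float)  # Q[(state, action)] -> estimated return
            for _ in range(episodes):
                state, done = 0, False
                while not done:
                    # Epsilon-greedy action selection.
                    if random.random() < eps:
                        action = random.randrange(n_actions)
                    else:
                        action = max(range(n_actions), key=lambda a: Q[(state, a)])
                    next_state, reward, done = env_step(state, action)
                    # Temporal-difference target: r, plus the discounted best next value if not terminal.
                    target = reward if done else reward + gamma * max(
                        Q[(next_state, a)] for a in range(n_actions))
                    Q[(state, action)] += alpha * (target - Q[(state, action)])
                    state = next_state
            return Q

        # Toy environment: from state 0, action 1 reaches a terminal state with reward 1.
        def toy_step(state, action):
            return (1, 1.0, True) if action == 1 else (0, 0.0, False)

        Q = q_learning(toy_step, n_actions=2)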

    A Dynamic Service-Migration Mechanism in Edge Cognitive Computing

    Driven by the vision of edge computing and the success of rich cognitive services based on artificial intelligence, a new computing paradigm, edge cognitive computing (ECC), has emerged as a promising approach that applies cognitive computing at the edge of the network. ECC has the potential to provide cognition of users and network environment information, and further to provide elastic cognitive computing services that achieve higher energy efficiency and a higher Quality of Experience (QoE) compared to edge computing. This paper first introduces our ECC architecture and then describes its design issues in detail. Moreover, we propose an ECC-based dynamic service migration mechanism to provide insight into how cognitive computing is combined with edge computing. In order to evaluate the proposed mechanism, a practical platform for dynamic service migration is built, where services are migrated based on the behavioral cognition of a mobile user. The experimental results show that the proposed ECC architecture achieves ultra-low latency and a high user experience while providing better service to the user, saving computing resources, and achieving high energy efficiency.
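
    A service-migration decision of the kind described above can be sketched as a simple cost comparison: migrate the service to the edge node nearest the user's predicted location whenever the expected latency saving over the decision horizon outweighs the one-off migration overhead. The node coordinates, latency model and thresholds below are illustrative assumptions, not the paper's mechanism.

        def should_migrate(current_node, candidate_node, predicted_user_pos,
                           latency_per_km=2.0, migration_cost_ms=50.0, horizon_requests=100):
            """Decide whether to migrate a service toward a user's predicted position.

            latency_per_km: assumed extra round-trip latency (ms) per km of distance.
            migration_cost_ms: one-off cost of moving the service state.
            horizon_requests: expected number of requests before the next decision.
            """
            def dist(a, b):
                return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

            current_latency = latency_per_km * dist(current_node, predicted_user_pos)
            candidate_latency = latency_per_km * dist(candidate_node, predicted_user_pos)
            saving = (current_latency - candidate_latency) * horizon_requests
            return saving > migration_cost_ms

        # Example: the user is predicted to move near (9, 9) km; the service sits at (0, 0) km.
        migrate = should_migrate(current_node=(0.0, 0.0), candidate_node=(10.0, 10.0),
                                 predicted_user_pos=(9.0, 9.0))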

    Decentralized Computation Offloading for Multi-User Mobile Edge Computing: A Deep Reinforcement Learning Approach

    Mobile edge computing (MEC) has recently emerged as a promising solution to relieve resource-limited mobile devices of computation-intensive tasks, enabling devices to offload workloads to nearby MEC servers and improve the quality of the computation experience. Nevertheless, in the MEC system considered in this paper, which consists of multiple mobile users with stochastic task arrivals and wireless channels, designing computation offloading policies that minimize the long-term average computation cost in terms of power consumption and buffering delay is challenging. A deep reinforcement learning (DRL) based decentralized dynamic computation offloading strategy is investigated to build a scalable MEC system with limited feedback. Specifically, a continuous action space-based DRL approach named deep deterministic policy gradient (DDPG) is adopted to learn efficient computation offloading policies independently at each mobile user. Thus, the powers of both local execution and task offloading can be adaptively allocated by the learned policies based on each user's local observation of the MEC system. Numerical results demonstrate that efficient policies can be learned at each user, and that the proposed DDPG-based decentralized strategy outperforms the conventional deep Q-network (DQN) based discrete power control strategy and other greedy strategies, with reduced computation cost. In addition, the power-delay tradeoff is analyzed for both the DDPG-based and DQN-based strategies.
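
    To make the decentralized decision concrete, the sketch below shows the role a DDPG actor plays for a single user: mapping a local observation (buffered bits and channel gain) to continuous powers for local execution and offloading, followed by a toy task-buffer update. The linear "actor", the rate model and all constants are illustrative stand-ins, not the trained networks or system parameters used in the paper.

        import math
        import random

        P_MAX = 2.0  # maximum power per decision (W); illustrative value

        def actor(observation, weights):
            """Toy stand-in for a DDPG actor: maps a user's local observation to
            continuous powers for local execution and offloading, clipped to
            [0, P_MAX]. A real actor would be a neural network trained with the
            DDPG actor-critic updates."""
            queue_bits, channel_gain = observation
            p_local = min(max(weights[0] * queue_bits, 0.0), P_MAX)
            p_offload = min(max(weights[1] * queue_bits * channel_gain, 0.0), P_MAX)
            return p_local, p_offload

        def update_task_buffer(queue_bits, p_local, p_offload, channel_gain, arrival_bits):
            """Toy task-buffer dynamics: locally processed bits scale with CPU power,
            offloaded bits follow a log(1 + SNR) rate; both models are simplifications."""
            cycles_per_bit, hz_per_watt, bandwidth_hz = 1000.0, 1e9, 1e6
            local_bits = hz_per_watt * p_local / cycles_per_bit
            offload_bits = bandwidth_hz * math.log2(1.0 + channel_gain * p_offload)
            return max(queue_bits + arrival_bits - local_bits - offload_bits, 0.0)

        # One decision epoch for a single user with a random channel gain.
        obs = (5e5, random.uniform(0.5, 2.0))          # (buffered bits, channel gain)
        p_local, p_offload = actor(obs, weights=(1e-6, 1e-6))
        queue = update_task_buffer(obs[0], p_local, p_offload, obs[1], arrival_bits=2e5)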

    UAV-aided urban target tracking system based on edge computing

    Target tracking is an important issue in public security. To track a target, traditionally a large amount of surveillance video data needs to be uploaded to the cloud for processing and analysis, which puts tremendous bandwidth pressure on communication links in access networks and core networks. At the same time, the long delay in the wide area network is very likely to cause a tracking system to lose its target. Unmanned aerial vehicles (UAVs) have often been adopted for target tracking due to their flexibility, but their limited flight time under battery constraints and the blocking by various obstacles in the field pose two major challenges to the tracking task, both of which are also likely to result in the loss of the target. A novel target tracking model that coordinates tracking by a UAV and ground nodes in an edge computing environment is proposed in this study. The model can effectively reduce the communication cost and the long delay of the traditional surveillance camera system that relies on cloud computing, and it can improve the probability of finding a target again after a UAV loses track of that target. The proposed system is demonstrated to achieve significantly better performance in terms of low latency, high reliability, and quality of experience (QoE).
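
    The UAV-ground coordination described above can be caricatured as a simple reacquisition step: when the UAV loses the target, nearby ground edge nodes are queried for recent sightings and the UAV is redirected toward the freshest one. The GroundNode class and the freshness threshold are illustrative assumptions, not the paper's design.

        import time

        class GroundNode:
            """Hypothetical ground edge node that records recent target sightings."""
            def __init__(self, position):
                self.position = position
                self.sightings = []  # list of (timestamp, target_position)

            def report(self, target_position):
                self.sightings.append((time.time(), target_position))

        def reacquire(ground_nodes, max_age_s=10.0):
            """When the UAV loses the target, collect sufficiently fresh sightings
            from the ground nodes and return the most recent position (or None)."""
            now = time.time()
            recent = [(ts, pos) for node in ground_nodes for ts, pos in node.sightings
                      if now - ts <= max_age_s]
            return max(recent, key=lambda s: s[0])[1] if recent else None

        # Example: two ground nodes; the second has just spotted the target.
        nodes = [GroundNode((0, 0)), GroundNode((50, 50))]
        nodes[1].report((48.0, 52.0))
        next_waypoint = reacquire(nodes)  # position for the UAV to fly toward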