
    Joint Computation Offloading and Prioritized Scheduling in Mobile Edge Computing

    With the rapid development of smartphones, enormous amounts of data are generated that usually require intensive, real-time computation. Nevertheless, quality of service (QoS) is hard to meet due to the tension between resource-limited devices (battery, CPU power) and computation-intensive applications. Mobile-edge computing (MEC), emerging as a promising technique, can be used to cope with the stringent requirements of mobile applications. By offloading computationally intensive workloads to edge servers and applying efficient task scheduling, the energy cost of mobile devices can be significantly reduced, greatly improving QoS metrics such as latency. This paper proposes a joint computation offloading and prioritized task scheduling scheme for a multi-user mobile-edge computing system. We investigate an energy-minimizing task offloading strategy on mobile devices and develop an effective priority-based task scheduling algorithm on the edge server. Execution time, energy consumption, execution cost, and bonus score against both task data size and latency requirement are adopted as the performance metrics. Performance evaluation results show that the proposed algorithm significantly reduces task completion time and edge server VM usage cost, and improves QoS in terms of bonus score. Moreover, dynamic prioritized task scheduling is also discussed; the results show that setting dynamic thresholds realizes optimal task scheduling. We believe that this work is significant to the emerging mobile-edge computing paradigm and can be applied to other Internet of Things (IoT)-edge applications.
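
    A minimal sketch of how such a scheme might look in code is given below: an energy-based offload decision on the device and a priority queue on the edge server. The energy model, thresholds, and all names and parameter values (Task, should_offload, kappa, p_tx, rate_bps, threshold_s) are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: energy-aware offloading plus priority scheduling.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: float                            # lower value = served earlier
    task_id: int = field(compare=False)
    cycles: float = field(compare=False)       # required CPU cycles
    data_bits: float = field(compare=False)    # input data size
    deadline_s: float = field(compare=False)   # latency requirement

def local_energy(task, f_local=1e9, kappa=1e-27):
    """Simplified dynamic-power model: E = kappa * cycles * f^2 (assumed)."""
    return kappa * task.cycles * f_local ** 2

def offload_energy(task, p_tx=0.5, rate_bps=5e6):
    """Energy to transmit the task input to the edge server (assumed model)."""
    return p_tx * task.data_bits / rate_bps

def should_offload(task):
    """Offload when transmitting costs less energy than computing locally."""
    return offload_energy(task) < local_energy(task)

def schedule_on_edge(tasks, threshold_s=0.5):
    """Priority scheduling: tasks with tight deadlines jump the queue.

    The dynamic-threshold idea from the abstract is modelled here as a
    single cut-off; tasks under the threshold get a boosted priority.
    """
    queue = []
    for t in tasks:
        t.priority = t.deadline_s if t.deadline_s < threshold_s else t.deadline_s + 10.0
        heapq.heappush(queue, t)
    return [heapq.heappop(queue).task_id for _ in range(len(queue))]

tasks = [Task(0, 1, cycles=2e9, data_bits=4e6, deadline_s=0.2),
         Task(0, 2, cycles=5e8, data_bits=1e7, deadline_s=1.5)]
offloaded = [t for t in tasks if should_offload(t)]
print(schedule_on_edge(offloaded))   # -> [1]: only task 1 is cheaper to offload
```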

    JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution

    Recent years have witnessed rapid growth in deep-network-based services and applications. A practical and critical problem has thus emerged: how to deploy deep neural network models so that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data center servers, causing large latency because a significant amount of data has to be transferred from the edge of the network to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework that decouples a deep neural network so that one part runs on edge devices and the other part in the conventional cloud, while only a minimal amount of data has to be transferred between them. Though the idea seems straightforward, we face several challenges: i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that has only limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD, including 1) a normalization-based in-layer data compression strategy that jointly considers compression rate and model accuracy; 2) a latency-aware deep decoupling strategy to minimize the overall execution latency; and 3) an edge-cloud structure adaptation strategy that dynamically changes the decoupling under different network conditions. Experiments demonstrate that our solution can significantly reduce execution latency: it speeds up overall inference with a guaranteed bound on model accuracy loss.
    Comment: conference paper; copyright transferred to IEEE
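
    As a rough illustration of what a latency-aware decoupling step could look like, the sketch below enumerates candidate split points and picks the one with the lowest estimated end-to-end latency. The layer timings, tensor sizes, bandwidth, and compression factor are made-up placeholders, not values or APIs from JALAD itself.

```python
# Illustrative split-point search for edge-cloud execution of a layered model.
def best_split(edge_ms, cloud_ms, out_mbits, bandwidth_mbps, compression=8.0):
    """Return (split_index, latency_ms).

    split_index = k means layers [0, k) run on the edge device and
    layers [k, n) run in the cloud; the tensor crossing the network at
    split k (index 0 = the raw input) is sent after in-layer compression.
    """
    n = len(edge_ms)
    best = (None, float("inf"))
    for k in range(n + 1):
        edge_part = sum(edge_ms[:k])
        cloud_part = sum(cloud_ms[k:])
        transfer = (out_mbits[k] / compression) / bandwidth_mbps * 1000.0
        total = edge_part + transfer + cloud_part
        if total < best[1]:
            best = (k, total)
    return best

# Per-layer compute time on a weak edge device vs. a cloud GPU, plus the
# size of each candidate hand-off tensor (all numbers invented).
edge_ms   = [12.0, 30.0, 55.0, 80.0]
cloud_ms  = [1.0, 2.5, 4.0, 6.0]
out_mbits = [25.0, 16.0, 8.0, 2.0, 0.1]    # n+1 candidate hand-off points
print(best_split(edge_ms, cloud_ms, out_mbits, bandwidth_mbps=20.0))  # -> (2, 102.0)
```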

    DFCV: A Novel Approach for Message Dissemination in Connected Vehicles using Dynamic Fog

    Vehicular Ad-hoc Networks (VANETs) have emerged as a promising solution for enhancing road safety. Routing of messages in a VANET is challenging due to packet delays arising from the high mobility of vehicles, frequently changing topology, and high vehicle density, leading to frequent route breakages and packet losses. Previous researchers have used either mobility in vehicular fog computing or cloud computing to address the routing issue, but these approaches suffer from large packet delays and frequent packet losses. We propose Dynamic Fog for Connected Vehicles (DFCV), a fog-computing-based scheme that dynamically creates, grows, and destroys fog nodes depending on communication needs. The novelty of DFCV lies in providing lower delays and guaranteed message delivery at high vehicular densities. Simulations were conducted using a hybrid setup consisting of ns-2, SUMO, and CloudSim. Results show that DFCV ensures efficient resource utilization and lower packet delays and losses at high vehicle densities.
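
    The following toy sketch illustrates the general idea of resizing a fog pool as demand changes. The class, the per-node capacity, and the densities fed to it are invented for illustration and do not reflect DFCV's actual protocol or simulation setup.

```python
# Toy model of creating, growing, and destroying fog nodes with demand.
import math

class FogPool:
    def __init__(self, vehicles_per_node=50):
        self.vehicles_per_node = vehicles_per_node  # assumed node capacity
        self.nodes = 0

    def adapt(self, vehicle_count):
        """Resize the fog to match the currently observed vehicle density."""
        needed = math.ceil(vehicle_count / self.vehicles_per_node) if vehicle_count else 0
        if needed > self.nodes:
            print(f"creating {needed - self.nodes} fog node(s)")
        elif needed < self.nodes:
            print(f"destroying {self.nodes - needed} fog node(s)")
        self.nodes = needed
        return self.nodes

pool = FogPool()
for density in (30, 140, 260, 20, 0):   # vehicles observed over time
    pool.adapt(density)
```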

    Wearable Communications in 5G: Challenges and Enabling Technologies

    As wearable devices become more ingrained in our daily lives, traditional communication networks, primarily designed for human-oriented applications, are facing tremendous challenges. The upcoming 5G wireless system aims to support unprecedentedly high capacity, low latency, and massive connectivity. In this article, we evaluate key challenges in wearable communications. A cloud/edge communication architecture that integrates the cloud radio access network, software-defined networking, device-to-device (D2D) communications, and cloud/edge technologies is presented. Computation offloading enabled by this multi-layer communications architecture can offload computation-excessive and latency-stringent applications to nearby devices through D2D communications, or to nearby edge nodes through cellular or other wireless technologies. Critical issues faced by wearable communications, such as short battery life, limited computing capability, and stringent latency, can be greatly alleviated by this cloud/edge architecture. Together with the presented architecture, current transmission and networking technologies, including non-orthogonal multiple access, mobile edge computing, and energy harvesting, can greatly enhance the performance of wearable communications in terms of spectral efficiency, energy efficiency, latency, and connectivity.
    Comment: This work has been accepted by IEEE Vehicular Technology Magazine
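
    In a highly simplified form, the offloading decision described here might look like the sketch below, which picks among local execution, a D2D peer, and an edge node for a given task. Every rate, power, and speed value, as well as the function name pick_target, is a placeholder assumption rather than part of the proposed architecture.

```python
# Assumed decision rule: meet the deadline at the lowest energy cost.
def pick_target(cycles, data_bits, deadline_s):
    options = {
        # name: (compute_speed_hz, link_rate_bps, tx_power_w, cpu_power_w)
        "local": (2e8, None, 0.0, 0.8),
        "d2d":   (1e9, 2e7,  0.1, 0.0),
        "edge":  (5e9, 1e7,  0.3, 0.0),
    }
    best = None
    for name, (speed, rate, p_tx, p_cpu) in options.items():
        tx_time = 0.0 if rate is None else data_bits / rate
        latency = tx_time + cycles / speed
        energy = p_tx * tx_time + (p_cpu * cycles / speed if name == "local" else 0.0)
        if latency <= deadline_s and (best is None or energy < best[1]):
            best = (name, energy, latency)
    return best

print(pick_target(cycles=4e8, data_bits=2e6, deadline_s=0.5))  # -> D2D peer wins here
```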

    Next Generation Cloud Computing: New Trends and Research Directions

    The landscape of cloud computing has changed significantly over the last decade. Not only have more providers and service offerings crowded the space, but cloud infrastructure that was traditionally limited to single-provider data centers is now evolving. In this paper, we first discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefits of decentralising computing away from data centers. These trends have resulted in the need for a variety of new computing architectures that will be offered by future cloud infrastructure. These architectures are anticipated to impact areas such as connecting people and devices, data-intensive computing, the service space, and self-learning systems. Finally, we lay out a roadmap of challenges that will need to be addressed to realise the potential of next-generation cloud systems.
    Comment: Accepted to Future Generation Computer Systems, 07 September 201