
    DYVERSE: DYnamic VERtical Scaling in Multi-tenant Edge Environments

    Multi-tenancy in resource-constrained environments is a key challenge in Edge computing. In this paper, we develop DYVERSE (DYnamic VERtical Scaling in Edge environments), the first lightweight, dynamic vertical-scaling mechanism for managing the resources allocated to applications, thereby facilitating multi-tenancy in Edge environments. To enable dynamic vertical scaling, one static and three dynamic priority-management approaches, which are workload-aware, community-aware and system-aware, respectively, are proposed. This research advocates that dynamic vertical scaling and priority management reduce Service Level Objective (SLO) violation rates. An online game and a face-detection workload in a Cloud-Edge test-bed are used to validate the research. A merit of DYVERSE is that it incurs only a sub-second overhead per Edge server when 32 Edge servers are deployed on a single Edge node. When compared to executing applications on Edge servers without dynamic vertical scaling, static and dynamic priorities reduce the SLO violation rates of requests by up to 4% and 12%, respectively, for the online game, and by 6% in both cases for the face-detection workload. Moreover, for both workloads, the system-aware dynamic vertical-scaling method effectively reduces the latency of non-violated requests when compared to the other methods.
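    Priority-driven vertical scaling of the kind the abstract describes can be illustrated with a minimal sketch. The tenant names, priority weights, `slo_violated` signal, and fixed step size below are illustrative assumptions, not DYVERSE's actual policy:

    ```python
    def rescale(allocations, priorities, slo_violated, total_cpu=100, step=5):
        """Shift CPU shares toward tenants that violated their SLO,
        taking shares from the lowest-priority compliant tenants.
        A hypothetical sketch of priority-driven vertical scaling."""
        alloc = dict(allocations)
        violators = [t for t in alloc if slo_violated[t]]
        # Compliant tenants donate shares, lowest priority first.
        donors = sorted((t for t in alloc if not slo_violated[t]),
                        key=lambda t: priorities[t])
        # Highest-priority violators are served first.
        for v in sorted(violators, key=lambda t: -priorities[t]):
            for d in donors:
                if alloc[d] > step:   # never drive a donor to zero
                    alloc[d] -= step
                    alloc[v] += step
                    break
        assert sum(alloc.values()) == total_cpu  # total capacity is conserved
        return alloc

    # Example: the online-game tenant violates its SLO, so it gains
    # shares from the lowest-priority compliant tenant ("batch").
    new_alloc = rescale({"game": 40, "face": 40, "batch": 20},
                        {"game": 3, "face": 2, "batch": 1},
                        {"game": True, "face": False, "batch": False})
    ```

    A real controller would run this on a monitoring loop per Edge server and derive the priorities from the workload-aware, community-aware, or system-aware signals the paper proposes.
    
    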

    Quantifying the latency benefits of near-edge and in-network FPGA acceleration

    Transmitting data to cloud data centers in distributed IoT applications introduces significant communication latency, but is often the only feasible solution when source nodes are computationally limited. To address latency concerns, cloudlets, in-network computing, and more capable edge nodes are all being explored as ways of moving processing capability towards the edge of the network. Hardware acceleration using Field Programmable Gate Arrays (FPGAs) is also seeing increased interest due to its reduced computation latency and improved efficiency. This paper evaluates the implications of these offloading approaches using a case-study neural-network-based image classification application, quantifying both the computation and communication latency resulting from different platform choices. We consider communication latency including the ingestion of packets for processing on the target platform, showing that this varies significantly with the choice of platform. We demonstrate that emerging in-network accelerator approaches offer much improved and more predictable performance, as well as better scaling to support multiple data sources.
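    The decomposition the abstract quantifies, end-to-end latency as network transit plus packet ingestion plus computation, can be sketched as follows. The platform names and all numbers are illustrative placeholders, not measurements from the paper:

    ```python
    # Hypothetical per-platform latency components, in milliseconds:
    # (network round trip, packet ingestion, computation)
    platforms = {
        "cloud_gpu":       (40.0, 1.5, 2.0),
        "edge_cpu":        (5.0,  0.8, 25.0),
        "edge_fpga":       (5.0,  0.8, 3.0),
        "in_network_fpga": (2.0,  0.1, 3.0),
    }

    def total_latency(net_ms, ingest_ms, compute_ms):
        """End-to-end latency for one inference request."""
        return net_ms + ingest_ms + compute_ms

    # Rank platforms by end-to-end latency, best first.
    for name, parts in sorted(platforms.items(),
                              key=lambda kv: total_latency(*kv[1])):
        print(f"{name:16s} {total_latency(*parts):6.1f} ms")
    ```

    Even with a fast cloud accelerator, the network term dominates in this toy breakdown, which is why the paper's in-network placement wins despite identical compute latency to the edge FPGA.
    
    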

    AI-Based Sustainable and Intelligent Offloading Framework for IIoT in Collaborative Cloud-Fog Environments

    The cloud paradigm is one of the most trending areas in today’s era due to its rich profusion of services. However, it fails to serve latency-sensitive Industrial Internet of Things (IIoT) applications associated with automotives, robotics, oil and gas, smart communications, Industry 5.0, etc. Hence, to strengthen the capabilities of IIoT, fog computing has emerged as a promising solution for latency-aware IIoT tasks. However, the resource-constrained nature of fog nodes raises another substantial issue: offloading decisions in resource management. Therefore, we propose an Artificial Intelligence (AI)-enabled intelligent and sustainable framework for an optimized multi-layered integrated cloud-fog environment, in which real-time offloading decisions are made according to the demands of IIoT applications and analyzed by a fuzzy-based offloading controller. Moreover, an AI-based Whale Optimization Algorithm (WOA) has been incorporated into the framework, which searches for the best possible resources and makes accurate decisions to ameliorate various Quality-of-Service (QoS) parameters. The experimental results show improvements of up to 37.17% in makespan time, 27.32% in energy consumption, and 13.36% in execution cost in comparison to benchmark offloading and allocation schemes.
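    The Whale Optimization Algorithm mentioned above (Mirjalili and Lewis, 2016) alternates between encircling the current best solution, a spiral "bubble-net" update, and exploration toward a random whale. A minimal sketch follows; the population size, iteration count, and the toy sphere objective standing in for a makespan-style cost are assumptions for illustration:

    ```python
    import math
    import random

    def woa_minimize(cost, dim, bounds, n_whales=20, iters=100, seed=42):
        """Minimal Whale Optimization Algorithm sketch: minimize `cost`
        over `dim` dimensions, each clamped to bounds = (lo, hi)."""
        rng = random.Random(seed)
        lo, hi = bounds
        whales = [[rng.uniform(lo, hi) for _ in range(dim)]
                  for _ in range(n_whales)]
        best = min(whales, key=cost)[:]
        best_cost = cost(best)
        for t in range(iters):
            a = 2 - 2 * t / iters              # decreases linearly 2 -> 0
            for w in whales:
                A = 2 * a * rng.random() - a   # exploitation vs exploration
                C = 2 * rng.random()
                p = rng.random()
                l = rng.uniform(-1, 1)         # spiral shape parameter
                for j in range(dim):
                    if p < 0.5:
                        if abs(A) < 1:         # encircle the best solution
                            D = abs(C * best[j] - w[j])
                            w[j] = best[j] - A * D
                        else:                  # explore toward a random whale
                            rand = whales[rng.randrange(n_whales)]
                            D = abs(C * rand[j] - w[j])
                            w[j] = rand[j] - A * D
                    else:                      # spiral (bubble-net) update
                        D = abs(best[j] - w[j])
                        w[j] = D * math.exp(l) * math.cos(2 * math.pi * l) + best[j]
                    w[j] = min(max(w[j], lo), hi)   # keep within bounds
                c = cost(w)
                if c < best_cost:              # track the global best
                    best, best_cost = w[:], c
        return best, best_cost

    # Toy stand-in for a makespan objective: sphere function, optimum at 0.
    best, val = woa_minimize(lambda x: sum(v * v for v in x),
                             dim=5, bounds=(-10, 10))
    ```

    In an offloading framework, each whale would encode a candidate task-to-node assignment and the cost would combine makespan, energy, and execution-cost terms rather than this toy function.
    
    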