
    Quantifying the latency benefits of near-edge and in-network FPGA acceleration

    Transmitting data to cloud datacenters in distributed IoT applications introduces significant communication latency, but is often the only feasible solution when source nodes are computationally limited. To address latency concerns, cloudlets, in-network computing, and more capable edge nodes are all being explored as ways of moving processing capability towards the edge of the network. Hardware acceleration using Field Programmable Gate Arrays (FPGAs) is also seeing increased interest due to reduced computation latency and improved efficiency. This paper evaluates the implications of these offloading approaches using a neural-network-based image classification application as a case study, quantifying both the computation and communication latency resulting from different platform choices. We consider communication latency including the ingestion of packets for processing on the target platform, showing that this varies significantly with the choice of platform. We demonstrate that emerging in-network accelerator approaches offer much improved and predictable performance as well as better scaling to support multiple data sources.
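    The comparison the abstract describes can be pictured with a minimal end-to-end latency model; the sketch below is illustrative only, and the platform profiles and figures are hypothetical assumptions rather than measurements from the paper.

    ```python
    # Minimal sketch of an end-to-end offload latency model (all figures hypothetical).

    def end_to_end_latency_ms(payload_kb, link_mbps, rtt_ms, ingest_ms, compute_ms):
        """Total latency = link transfer time + round-trip + packet ingestion + compute."""
        transfer_ms = payload_kb * 8 / (link_mbps * 1000) * 1000  # KB over the link, in ms
        return transfer_ms + rtt_ms + ingest_ms + compute_ms

    # Hypothetical platform profiles for a small image-classification request.
    platforms = {
        "cloud datacenter":         dict(link_mbps=50,  rtt_ms=40.0, ingest_ms=2.0, compute_ms=5.0),
        "near-edge FPGA":           dict(link_mbps=100, rtt_ms=5.0,  ingest_ms=1.0, compute_ms=3.0),
        "in-network FPGA (NIC)":    dict(link_mbps=100, rtt_ms=1.0,  ingest_ms=0.1, compute_ms=3.0),
    }

    for name, profile in platforms.items():
        print(f"{name:24s} {end_to_end_latency_ms(200, **profile):6.2f} ms")
    ```

    Even with identical compute times, the model makes the abstract's point visible: once transfer and ingestion are counted, the offload target dominates the end-to-end figure.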

    Multisite adaptive computation offloading for mobile cloud applications

    The sheer number of mobile devices and their fast adaptability have contributed to the proliferation of modern advanced mobile applications. These applications are latency-critical and demand high availability. They also often require intensive computation resources and consume considerable energy, while a mobile device has limited computation and energy capacity because of physical size constraints. The heterogeneous mobile cloud environment consists of different computing resources such as remote cloud servers in faraway data centres, cloudlets whose goal is to bring the cloud closer to the users, and nearby mobile devices that can be utilised to offload mobile tasks. Heterogeneity across mobile devices and these sites includes software, hardware, and technology variations. Resource-constrained mobile devices can leverage this shared resource environment to offload their intensive tasks, conserving battery life and improving overall application performance. However, such a loosely coupled network dominated by mobile devices raises new challenges: how to seamlessly leverage mobile devices alongside all the offloading sites, how to simplify deploying the runtime environment for serving offloading requests from mobile devices, how to identify which parts of the mobile application to offload, whether to offload them at all, and how to select the best candidate offloading site. To overcome these challenges, this research work contributes the design and implementation of MAMoC, a loosely coupled end-to-end mobile computation offloading framework. Mobile applications can be adapted to the client library of the framework, while the server components are deployed to the offloading sites for serving offloading requests. The evaluation of the offloading decision engine demonstrates the viability of the proposed solution for managing seamless and transparent offloading in distributed and dynamic mobile cloud environments. All the implemented components of this work are publicly available at the following URL: https://github.com/mamoc-repo
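    The kind of site-selection decision the abstract refers to can be sketched as a weighted time-energy cost over candidate sites; the scoring weights, site profiles, and helper names below are hypothetical assumptions, not MAMoC's actual decision model.

    ```python
    # Illustrative multisite offloading decision: score each candidate site by
    # estimated completion time and transmission energy, then pick the cheapest.
    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        mips: float            # available processing rate (million instructions/s)
        bandwidth_mbps: float
        rtt_ms: float

    def offload_cost(task_mi, data_mb, site, w_time=0.7, w_energy=0.3, tx_j_per_mb=0.1):
        exec_s = task_mi / site.mips
        transfer_s = data_mb * 8 / site.bandwidth_mbps + site.rtt_ms / 1000
        energy_j = data_mb * tx_j_per_mb            # radio energy for shipping the input
        return w_time * (exec_s + transfer_s) + w_energy * energy_j

    sites = [
        Site("remote cloud",  mips=20000, bandwidth_mbps=20,  rtt_ms=80),
        Site("cloudlet",      mips=8000,  bandwidth_mbps=100, rtt_ms=5),
        Site("nearby device", mips=2000,  bandwidth_mbps=50,  rtt_ms=2),
    ]

    best = min(sites, key=lambda s: offload_cost(task_mi=4000, data_mb=2, site=s))
    print("offload to:", best.name)
    ```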

    VirtFogSim: A parallel toolbox for dynamic energy-delay performance testing and optimization of 5G Mobile-Fog-Cloud virtualized platforms

    It is expected that the pervasive deployment of multi-tier 5G-supported Mobile-Fog-Cloud technological computing platforms will constitute an effective means to support the real-time execution of future Internet applications by resource- and energy-limited mobile devices. Increasing interest in this emerging networking-computing technology demands the optimization and performance evaluation of several parts of the underlying infrastructures. However, field trials are challenging due to their operational costs, and in any case the obtained results can be difficult to repeat and customize. These emerging Mobile-Fog-Cloud ecosystems still lack customizable software tools for the performance simulation of their computing-networking building blocks. Motivated by these considerations, in this contribution we present VirtFogSim, a MATLAB-supported software toolbox that allows the dynamic joint optimization and tracking of the energy and delay performance of Mobile-Fog-Cloud systems for the execution of applications described by general Directed Application Graphs (DAGs). In a nutshell, the main distinctive features of the proposed VirtFogSim toolbox are that: (i) it allows the joint dynamic energy-aware optimization of the placement of the application tasks and the allocation of the needed computing-networking resources under hard constraints on acceptable overall execution times; (ii) it allows the repeatable and customizable simulation of the resulting energy-delay performance of the overall system; (iii) it allows the dynamic tracking of the performed resource allocation under time-varying operational environments, such as those typically featuring mobile applications; (iv) it is equipped with a user-friendly Graphical User Interface (GUI) that supports a number of graphic formats for data rendering; and (v) its MATLAB code is optimized for running atop multi-core parallel execution platforms. To check both the actual optimization and scalability capabilities of the VirtFogSim toolbox, a number of experimental setups featuring different use cases and operational environments are simulated, and their performances are compared.
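    A toy version of the energy-delay placement problem the toolbox targets can be written as an exhaustive search over tier assignments for a small task graph; the tiers, work units, and delay/energy figures below are hypothetical, and the sketch is not VirtFogSim's MATLAB implementation.

    ```python
    # Toy placement of a chain DAG over mobile/fog/cloud tiers: minimize energy
    # subject to a hard deadline on total execution time (all numbers hypothetical).
    from itertools import product

    tasks = ["capture", "preprocess", "classify", "report"]              # simple chain DAG
    work = {"capture": 1, "preprocess": 4, "classify": 10, "report": 1}  # abstract work units

    tiers = {
        #          (speed in units/s, energy per unit in J)
        "mobile": (2.0, 1.0),
        "fog":    (8.0, 0.4),
        "cloud":  (25.0, 0.1),
    }
    hop_delay_s = {
        frozenset({"mobile", "fog"}): 0.02,
        frozenset({"fog", "cloud"}): 0.05,
        frozenset({"mobile", "cloud"}): 0.07,
    }

    def cost(placement, deadline_s=2.0):
        delay = energy = 0.0
        for i, task in enumerate(tasks):
            speed, e_unit = tiers[placement[i]]
            delay += work[task] / speed
            energy += work[task] * e_unit
            if i > 0 and placement[i] != placement[i - 1]:   # network hop between tiers
                delay += hop_delay_s[frozenset({placement[i - 1], placement[i]})]
        return (energy, delay) if delay <= deadline_s else (float("inf"), delay)

    best = min(product(tiers, repeat=len(tasks)), key=lambda p: cost(p)[0])
    print("placement:", dict(zip(tasks, best)), "->", cost(best))
    ```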