7 research outputs found

    Function-as-a-Service for the Cloud-to-Thing Continuum: A Systematic Mapping Study

    Until recently, Internet of Things applications were mainly seen as a means to gather sensor data for further processing in the Cloud. Nowadays, with the advent of Edge and Fog Computing, digital services are drawn closer to the physical world, with data processing and storage tasks distributed across the whole Cloud-to-Thing continuum. Function-as-a-Service (FaaS) is gaining momentum as a promising programming model for such digital services. This work investigates the current research landscape of applying FaaS over the Cloud-to-Thing continuum. In particular, we investigate the support offered by existing FaaS platforms for the deployment, placement, orchestration, and execution of functions across the whole continuum, using the Systematic Mapping Study methodology. We selected 33 primary studies and analyzed their data, providing a broad view of the current research landscape in the area.
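The FaaS programming model the study surveys centers on small, stateless, event-triggered functions. The sketch below is a hypothetical illustration of that model, not code from any platform in the study; the `handler`/`event` names follow common FaaS conventions, and the sensor payload fields are assumptions.

```python
# Hypothetical sketch of a stateless FaaS function handling an IoT
# sensor event. Field names (temperature_c) are illustrative only.

def handler(event: dict) -> dict:
    """Process one sensor reading event; stateless by design."""
    reading = event.get("temperature_c")
    if reading is None:
        # Events without the expected field are acknowledged but skipped.
        return {"status": "ignored"}
    # Simple derived value; real functions would do arbitrary processing.
    return {"status": "ok", "temperature_f": reading * 9 / 5 + 32}
```

Because the function keeps no local state, a platform can place and scale it anywhere along the Cloud-to-Thing continuum without coordination.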

    Network-Aware Task Scheduling for Edge Computing

    Edge computing promises low-latency computation by moving data processing closer to the source. Tasks executed at the edge of the network have seen a significant increase in their complexity, and the demand for low-latency computation for delay-sensitive applications at the edge is also increasing. To meet this computational demand, task offloading has become a go-to solution, where edge devices offload tasks in part or in whole to edge servers via the network. However, performance fluctuations of the network largely influence data transfer performance between edge devices and edge servers, which negatively impacts overall task execution performance. Hence, monitoring the state of the network is desirable to improve the performance of task offloading at the edge. However, networks are usually dynamic and unpredictable in nature, particularly when the network is shared by multiple other devices and applications simultaneously, resulting in data flows competing with each other for resources. In this study, we leverage In-band Network Telemetry (INT) to collect fine-grained network information and introduce network awareness into task scheduling for edge computing. Legacy methods of network monitoring that rely on flow-level and port-level statistics are often limited by their collection frequency, which is typically on the order of tens of seconds. In contrast, INT can improve both the collection frequency, by working at line rate, and the granularity of information, by capturing network telemetry at packet level directly from the data plane. Such capabilities enable the detection of subtle changes and congestion events in the network, thereby increasing network visibility while making it more accurate. We implemented a network-aware task scheduler for edge computing that uses high-precision network telemetry for task scheduling.
    We experimented with different workloads under various congestion scenarios to assess the impact of our network-aware scheduler on task offloading performance. We observed up to a 40% reduction in data transfer time and up to a 30% reduction in overall task execution time by favoring edge servers in uncongested or relatively less congested areas of the network when scheduling tasks. Our study shows that network visibility is an important factor that can improve task offloading performance. These results support our motivation to use INT for obtaining fine-grained, high-precision network telemetry to create a network-aware task scheduler for edge computing.
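The core scheduling decision described above can be sketched as choosing the edge server on the least congested path according to telemetry. This is an illustrative stand-in, not the paper's implementation; the `queue_delay_us` field is a hypothetical stand-in for per-packet INT metadata such as hop queueing delay.

```python
# Illustrative sketch of network-aware scheduling: favor the edge server
# whose path shows the lowest telemetry-reported queueing delay.
# Field names are hypothetical, not from the paper's scheduler.

def pick_server(servers: list) -> str:
    """Return the name of the server on the least congested path.

    Each entry: {"name": str, "queue_delay_us": float}, where
    queue_delay_us would come from per-packet INT metadata.
    """
    return min(servers, key=lambda s: s["queue_delay_us"])["name"]

telemetry = [
    {"name": "edge-a", "queue_delay_us": 850.0},  # congested path
    {"name": "edge-b", "queue_delay_us": 120.0},  # lightly loaded
    {"name": "edge-c", "queue_delay_us": 430.0},
]
```

Because INT reports arrive at line rate, such a scheduler can react to congestion events that tens-of-seconds flow statistics would miss entirely.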

    Secure FaaS orchestration in the fog: how far are we?

    Function-as-a-Service (FaaS) allows developers to define, orchestrate and run modular event-based pieces of code on virtualised resources, without the burden of managing the underlying infrastructure or the life-cycle of such pieces of code. Indeed, FaaS providers offer resource auto-provisioning, auto-scaling and pay-per-use billing at no cost for idle time. This makes it easy to scale running code, and it represents an effective and increasingly adopted way to deliver software. This article aims at offering an overview of the existing literature in the field of next-gen FaaS from three different perspectives: (i) the definition of FaaS orchestrations, (ii) the execution of FaaS orchestrations in Fog computing environments, and (iii) the security of FaaS orchestrations. Our analysis identifies trends and gaps in the literature, paving the way to further research on securing FaaS orchestrations in Fog computing landscapes.
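A FaaS orchestration, in the sense the survey uses the term, composes modular functions into a larger workflow. The sketch below is a minimal assumed illustration of one common orchestration shape, a sequential pipeline; the `compose` helper and the step functions are invented for this example, not an API from any surveyed platform.

```python
# Minimal sketch of a sequential FaaS orchestration: each function's
# output feeds the next. The compose helper is illustrative only.

from functools import reduce

def compose(*steps):
    """Chain functions left to right over a payload."""
    return lambda payload: reduce(lambda acc, f: f(acc), steps, payload)

def validate(data):
    # A security-minded orchestration would also check provenance here.
    if "value" not in data:
        raise ValueError("missing value")
    return data

def enrich(data):
    return {**data, "source": "sensor-1"}

pipeline = compose(validate, enrich)
```

Real orchestration definitions additionally cover branching, fan-out, and error handling, which is where the security concerns the article reviews (e.g. untrusted intermediate nodes in the Fog) become acute.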

    Latency and resource consumption analysis for serverless edge analytics

    The serverless computing model, implemented by Function as a Service (FaaS) platforms, can offer several advantages for the deployment of data analytics solutions in IoT environments, such as agile and on-demand resource provisioning, automatic scaling, high elasticity, infrastructure management abstraction, and a fine-grained cost model. Nonetheless, for applications with strict latency requirements, the cold start problem in FaaS platforms can represent an important drawback. The most common techniques to alleviate this problem, mainly based on instance pre-warming and instance reusing mechanisms, are usually not well adapted to different application profiles and, in general, can entail an extra expense of resources. In this work, we analyze the effect of instance pre-warming and instance reusing on both application latency (response time) and resource consumption, for a typical data analytics use case (a machine learning application for image classification) with different input data patterns. Furthermore, we propose to extend the classical centralized cloud-based serverless FaaS platform to a two-tier distributed edge-cloud platform to bring the platform closer to the data source and reduce network latencies.
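The instance-reuse mechanism analyzed above can be modeled with a simple warm pool: an invocation pays the cold-start penalty only when no idle instance is available. The simulation below is a hedged sketch with hypothetical latency numbers, only to show why reuse lowers response time at the cost of resources held idle.

```python
# Toy model of instance reuse mitigating cold starts.
# Latency constants are hypothetical, not measurements from the paper.

COLD_START_MS = 800  # assumed container/runtime spin-up cost
EXEC_MS = 50         # assumed function execution time

class WarmPool:
    """Reuse idle instances; spin up a new one only when the pool is empty."""

    def __init__(self):
        self.idle = []

    def invoke(self) -> int:
        """Return the simulated response time in ms for one invocation."""
        cold = not self.idle
        instance = self.idle.pop() if self.idle else object()
        latency = EXEC_MS + (COLD_START_MS if cold else 0)
        self.idle.append(instance)  # keep the instance warm for reuse
        return latency
```

The first call pays 850 ms while every subsequent sequential call pays 50 ms, which mirrors the latency asymmetry the paper quantifies; the resource cost is the warm instance occupying memory between calls.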

    17th SC@RUG 2020 proceedings 2019-2020


    Towards a serverless platform for edge computing

    The emergence of real-time and data-intensive applications empowered by mobile computing and IoT devices is challenging the success of centralized data centers, and fostering the adoption of the paradigm of fog/edge computing. Unlike cloud data centers, fog nodes are geographically distributed in proximity to data prosumers, taking advantage of emerging wireless communication technologies and mobile networks. The limited resources of densely distributed fog nodes call for their efficient use by hosted applications and services. To address this challenge, and the needs of different application scenarios, this paper proposes a serverless platform for edge computing. It starts by motivating the adoption of a serverless architecture. Then, it presents the services and mechanisms that are the building blocks of a Serverless Edge Platform. The paper also presents a prototype platform and its assessment. The obtained results demonstrate the feasibility of the proposed solution for satisfying different application requirements in diverse deployment configurations of heterogeneous fog nodes.
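The resource constraint highlighted above, many small fog nodes with limited capacity, makes placement the central mechanism of such a platform. The sketch below is an assumed first-fit placement policy, a simple stand-in for the platform's actual placement services, with the memory figures and node names invented for illustration.

```python
# Illustrative first-fit placement of a function on resource-limited
# fog nodes. Not the paper's mechanism; a minimal sketch by free memory.

def place(function_mb: int, nodes: dict):
    """Return the first node with enough free memory (MB), else None.

    nodes maps node name -> free memory in MB.
    """
    for name, free_mb in nodes.items():
        if free_mb >= function_mb:
            return name
    return None
```

A production platform would also weigh proximity to the data prosumer and current load, but even this minimal policy shows why dense fog deployments need placement logic that cloud data centers can largely ignore.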