420 research outputs found

    funcX: A Federated Function Serving Fabric for Science

    Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX, a distributed function-as-a-service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers.
    Comment: Accepted to ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC 2020). arXiv admin note: substantial text overlap with arXiv:1908.0490
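
    A minimal sketch of the execution model described above, based on the funcX Python SDK (the import path and method names reflect the funcX documentation of that period and may have changed; the endpoint UUID is a placeholder, not a real endpoint):

        from funcx.sdk.client import FuncXClient

        def add(a, b):
            # An ordinary Python function to be shipped to a remote endpoint.
            return a + b

        fxc = FuncXClient()                      # authenticates with the cloud-hosted funcX service
        func_id = fxc.register_function(add)     # register once, then reuse the returned identifier
        endpoint_id = "<endpoint-uuid>"          # placeholder UUID of a cluster or cloud endpoint
        task_id = fxc.run(1, 2, endpoint_id=endpoint_id, function_id=func_id)
        # get_result raises while the task is still pending, so in practice it is polled/retried.
        print(fxc.get_result(task_id))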

    Secure FaaS orchestration in the fog: how far are we?

    Function-as-a-Service (FaaS) allows developers to define, orchestrate, and run modular event-based pieces of code on virtualised resources, without the burden of managing the underlying infrastructure or the life-cycle of such pieces of code. Indeed, FaaS providers offer resource auto-provisioning, auto-scaling, and pay-per-use billing with no cost for idle time. This makes it easy to scale running code and represents an effective and increasingly adopted way to deliver software. This article aims at offering an overview of the existing literature in the field of next-gen FaaS from three different perspectives: (i) the definition of FaaS orchestrations, (ii) the execution of FaaS orchestrations in Fog computing environments, and (iii) the security of FaaS orchestrations. Our analysis identifies trends and gaps in the literature, paving the way for further research on securing FaaS orchestrations in Fog computing landscapes.

    RADON: Rational decomposition and orchestration for serverless computing

    Emerging serverless computing technologies, such as function as a service (FaaS), enable developers to virtualize the internal logic of an application, simplifying the management of cloud-native services and allowing cost savings through billing and scaling at the level of individual functions. Serverless computing is therefore rapidly shifting the attention of software vendors to the challenge of developing cloud applications deployable on FaaS platforms. In this vision paper, we present the research agenda of the RADON project (http://radon-h2020.eu), which aims to develop a model-driven DevOps framework for creating and managing applications based on serverless computing. RADON applications will consist of fine-grained and independent microservices that can efficiently and optimally exploit FaaS and container technologies. Our methodology strives to tackle complexity in designing such applications, including the solution of optimal decomposition, the reuse of serverless functions, and the abstraction and actuation of event processing chains, while avoiding cloud vendor lock-in through models.

    A Novel Approach for Triggering the Serverless Function in Serverless Environment

    Serverless computing has gained significant popularity in recent years due to its scalability, cost efficiency, and simplified development process. In a serverless environment, functions are the basic units of computation and are executed on demand, without the need for provisioning and managing servers. However, efficiently triggering serverless functions remains a challenge: traditional methodologies often suffer from latency, time-limit, and scalability issues, and the efficient execution and management of serverless functions rely heavily on effective triggering mechanisms. This research paper explores various design considerations and proposes a novel approach for designing efficient triggering mechanisms in serverless environments. By leveraging the proposed methodology, developers can efficiently trigger serverless functions in a variety of scenarios, including event-driven architectures, data processing pipelines, and web application backends.
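
    The abstract does not detail the proposed triggering mechanism; for context, a conventional event-driven trigger on today's FaaS platforms looks roughly like the following AWS Lambda-style handler (the event shape follows the documented S3 object-created notification; names and payloads are illustrative only):

        import json

        def handler(event, context):
            # The platform invokes this entry point when the configured trigger fires,
            # e.g. an object-created notification from an S3 bucket.
            records = event.get("Records", [])
            for record in records:
                bucket = record["s3"]["bucket"]["name"]
                key = record["s3"]["object"]["key"]
                # ... process the newly arrived object here ...
                print(f"triggered for s3://{bucket}/{key}")
            return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}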

    Exploring the cost and performance benefits of AWS Step Functions using a data processing pipeline

    In traditional cloud computing, dedicated hardware is substituted by dynamically allocated, utility-oriented resources such as virtualized servers. While cloud services follow the pay-as-you-go pricing model, resources are billed based on instance allocation and not on actual usage, leading customers to be charged needlessly. In serverless computing, as exemplified by the Function-as-a-Service (FaaS) model where functions are the basic resources, functions are typically not allocated or charged until invoked or triggered. Functions are not applications, however, and to build compelling serverless applications they frequently need to be orchestrated with some kind of application logic. A major issue arising from the use of orchestration is that it further complicates the already complex billing model used by FaaS providers, which, combined with the lack of granular billing and execution details offered by the providers, makes the development and evaluation of serverless applications challenging. To shed some light on this matter, in this work we extensively evaluate the state-of-the-art function orchestrator AWS Step Functions (ASF) with respect to its performance and cost. For this purpose, we conduct a series of experiments using a serverless data processing pipeline application developed as both ASF Standard and Express workflows. Our results show that Step Functions using Express workflows are economical when running short-lived tasks with many state transitions. In contrast, Standard workflows are better suited for long-running tasks, offering in addition detailed debugging and logging information. However, even though the behavior of the orchestrated AWS Lambda functions influences both types of workflows, Step Functions realized as Express workflows are impacted the most by the phenomena affecting Lambda functions.
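
    As a concrete point of reference for the kind of orchestration being evaluated, a Standard workflow execution can be started from Python with boto3 roughly as follows (the state-machine ARN and input payload are placeholders; Express workflows can additionally be invoked synchronously via start_sync_execution):

        import json
        import boto3

        sfn = boto3.client("stepfunctions")

        # Placeholder ARN of an already-deployed Standard workflow for the data processing pipeline.
        state_machine_arn = "arn:aws:states:us-east-1:123456789012:stateMachine:pipeline-standard"

        response = sfn.start_execution(
            stateMachineArn=state_machine_arn,
            input=json.dumps({"dataset": "example-input"}),
        )
        # The execution runs asynchronously; its progress can be inspected with describe_execution.
        print(response["executionArn"])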