    No more, no less - A formal model for serverless computing

    Serverless computing, also known as Functions-as-a-Service, is a recent paradigm aimed at simplifying the programming of cloud applications. The idea is that developers design applications in terms of functions, which are then deployed on a cloud infrastructure. The infrastructure takes care of executing the functions whenever requested by remote clients, dealing automatically with distribution and scaling with respect to inbound traffic. While vendors already support a variety of programming languages for serverless computing (e.g. Go, Java, JavaScript, Python), as far as we know there is no reference model yet to formally reason on this paradigm. In this paper, we propose the first formal programming model for serverless computing, which combines ideas from both the λ-calculus (for functions) and the π-calculus (for communication). To illustrate our proposal, we model a real-world serverless system. Thanks to our model, we are also able to capture and pinpoint the limitations of current vendor technologies, proposing possible amendments.
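
    To make the programming model described above concrete, here is a minimal, vendor-neutral sketch in TypeScript of the kind of stateless function a developer would deploy. The FaasEvent/FaasResponse types and the handler shape are illustrative assumptions, not the paper's formalism or any specific vendor's API.

        // A stateless serverless function: the platform invokes it once per inbound
        // request and handles distribution and scaling; all state enters via the
        // event and leaves via the response. Types here are illustrative only.
        interface FaasEvent {
          body: string;
        }

        interface FaasResponse {
          statusCode: number;
          body: string;
        }

        export async function handler(event: FaasEvent): Promise<FaasResponse> {
          const name = event.body.trim() || "world";
          return { statusCode: 200, body: `Hello, ${name}!` };
        }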

    The Servers of Serverless Computing: A Formal Revisitation of Functions as a Service

    Serverless computing is a paradigm for programming cloud applications in terms of stateless functions, executed and scaled in proportion to inbound requests. Here, we revisit SKC, a calculus capturing the essential features of serverless programming. By exploring the design space of the language, we refined the integration between the fundamental features of the two calculi that inspire SKC: the λ-calculus and the π-calculus. That investigation led us to a revised syntax and semantics, which increase the expressiveness of the language. In particular, function names are now first-class citizens and can be passed around. To illustrate the new features, we present step-by-step examples and two non-trivial use cases from artificial intelligence, which model, respectively, a perceptron and an image tagging system as compositions of serverless functions. We also illustrate how SKC supports reasoning on serverless implementations, i.e., the underlying network of communicating, concurrent, and mobile processes which execute serverless functions in the cloud. To that aim, we show an encoding from SKC to the asynchronous π-calculus and prove it correct in terms of an operational correspondence.
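
    The perceptron use case can be pictured as a composition of two functions: a weighted sum followed by a step activation. The TypeScript sketch below only illustrates that decomposition with local async calls; the function names and the in-process composition are assumptions and do not reproduce SKC's syntax or semantics.

        // Illustrative decomposition of a perceptron into two composable functions.
        // In a serverless setting each would be an independently deployed function;
        // here they are plain local async functions for clarity.

        // Step 1: weighted sum of the inputs plus a bias term.
        async function weightedSum(inputs: number[], weights: number[], bias: number): Promise<number> {
          return inputs.reduce((acc, x, i) => acc + x * weights[i], bias);
        }

        // Step 2: step activation over the weighted sum.
        async function stepActivation(z: number): Promise<number> {
          return z >= 0 ? 1 : 0;
        }

        // Composition: the output of the first function feeds the second.
        async function perceptron(inputs: number[], weights: number[], bias: number): Promise<number> {
          return stepActivation(await weightedSum(inputs, weights, bias));
        }

        // Example: logical AND over binary inputs.
        perceptron([1, 1], [1, 1], -1.5).then(console.log); // prints 1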

    This is not the End: Rethinking Serverless Function Termination

    Elastic scaling is one of the central benefits provided by serverless platforms, and requires that they scale resources up and down in response to changing workloads. Serverless platforms scale down resources by terminating previously launched instances (which are containers or processes). The serverless programming model ensures that terminating instances is safe assuming all application code running on the instance has either completed or timed out. Safety thus depends on the serverless platform correctly determining that application processing is complete. In this paper, we start with the observation that current serverless platforms do not account for pending asynchronous I/O operations when determining whether application processing is complete. These platforms are thus unsafe when executing programs that use asynchronous I/O, and incorrectly deciding that application processing has terminated can result in data inconsistency when these platforms are used. We show that the reason for this problem is that current serverless semantics couple termination and response generation in serverless applications. We address this problem by proposing an extension to current semantics that decouples response generation from termination, and demonstrate the efficacy and benefits of our proposal by extending OpenWhisk, an open source serverless platform.
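
    The hazard described above can be illustrated with a Node.js-style handler in TypeScript that starts an asynchronous write but returns its response without awaiting it. writeToDatabase is a hypothetical call standing in for pending asynchronous I/O, not a real platform or library API.

        // Sketch of the unsafe pattern: a pending asynchronous I/O operation outlives
        // the response. writeToDatabase is hypothetical, declared only for the example.
        declare function writeToDatabase(record: object): Promise<void>;

        export async function handler(event: { body: string }): Promise<{ statusCode: number }> {
          // The write is started but deliberately not awaited.
          writeToDatabase({ payload: event.body }); // pending asynchronous I/O

          // Returning here generates the response. A platform that treats the response
          // as the end of application processing may suspend or terminate the instance
          // before the write completes, producing the data inconsistency described above.
          return { statusCode: 202 };
        }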

    Secure FaaS orchestration in the fog: how far are we?

    Function-as-a-Service (FaaS) allows developers to define, orchestrate and run modular event-based pieces of code on virtualised resources, without the burden of managing the underlying infrastructure or the life-cycle of such pieces of code. Indeed, FaaS providers offer resource auto-provisioning, auto-scaling and pay-per-use billing with no cost for idle time. This makes it easy to scale running code and represents an effective and increasingly adopted way to deliver software. This article aims at offering an overview of the existing literature in the field of next-gen FaaS from three different perspectives: (i) the definition of FaaS orchestrations, (ii) the execution of FaaS orchestrations in Fog computing environments, and (iii) the security of FaaS orchestrations. Our analysis identifies trends and gaps in the literature, paving the way to further research on securing FaaS orchestrations in Fog computing landscapes.
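
    As a rough illustration of what a FaaS orchestration can amount to, the TypeScript sketch below composes two functions into a sequential pipeline. The Step type, the sequence combinator, and the validate/enrich steps are illustrative assumptions, not any orchestrator's actual API; real orchestrators also add triggers, retries, placement, and security policies.

        // A minimal notion of orchestration as sequential composition of functions.
        type Step<A, B> = (input: A) => Promise<B>;

        // Compose two steps so the output of the first feeds the second.
        function sequence<A, B, C>(first: Step<A, B>, second: Step<B, C>): Step<A, C> {
          return async (input: A) => second(await first(input));
        }

        // Hypothetical steps: validate a raw payload, then enrich it.
        const validate: Step<string, { ok: boolean; payload: string }> =
          async (raw) => ({ ok: raw.length > 0, payload: raw });

        const enrich: Step<{ ok: boolean; payload: string }, string> =
          async (v) => (v.ok ? `processed:${v.payload}` : "rejected");

        const pipeline = sequence(validate, enrich);
        pipeline("order-42").then(console.log); // prints "processed:order-42"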