Beyond Microbenchmarks: The SPEC-RG Vision for a Comprehensive Serverless Benchmark
Serverless computing services, such as Function-as-a-Service (FaaS), hold the attractive promise of a high level of abstraction and high performance, combined with the minimization of operational logic. Several large ecosystems of serverless platforms, both open- and closed-source, aim to realize this promise. Consequently, a lucrative market has emerged. However, the performance trade-offs of these systems are not well understood. Moreover, it is exactly the high level of abstraction and the opaqueness of the operational side that make performance evaluation studies of serverless platforms challenging. Learning from the history of IT platforms, we argue that a benchmark for serverless platforms could help address this challenge. We envision a comprehensive serverless benchmark, which we contrast with the narrow focus of prior work in this area. We argue that a comprehensive benchmark will need to take into account more than just runtime overhead, and include notions of cost, realistic workloads, more (open-source) platforms, and cloud integrations. Finally, we show through preliminary real-world experiments how such a benchmark can help compare the performance overhead of running a serverless workload on state-of-the-art platforms.
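A minimal sketch of the kind of measurement such a benchmark builds on: timing repeated invocations of a deployed function and separating the first (likely cold) call from warm calls. The endpoint URL is hypothetical, and this captures only end-to-end latency, one of several dimensions (cost, workload realism, cloud integrations) the vision calls for.

```python
# Minimal latency-overhead probe for a deployed FaaS function.
# ENDPOINT is a hypothetical URL; point it at a function deployed on
# the platform under test.
import statistics
import time
import urllib.request

ENDPOINT = "https://example.com/function/echo"  # hypothetical

def invoke(url: str) -> float:
    """Return the wall-clock latency of one invocation, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def benchmark(url: str, n: int = 50) -> None:
    cold = invoke(url)  # first call often includes cold-start overhead
    warm = sorted(invoke(url) for _ in range(n))
    print(f"first (likely cold) call: {cold * 1000:.1f} ms")
    print(f"warm median: {statistics.median(warm) * 1000:.1f} ms")
    print(f"warm p95: {warm[int(0.95 * (n - 1))] * 1000:.1f} ms")

if __name__ == "__main__":
    benchmark(ENDPOINT)
```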
Quantifying cloud performance and dependability: Taxonomy, metric design, and emerging challenges
In only a decade, cloud computing has grown from a pursuit of service-driven information and communication technology (ICT) into a significant fraction of the ICT market. Responding to the growth of the market, many alternative cloud services and their underlying systems are currently vying for the attention of cloud users and providers. To make informed choices between competing cloud service providers, permit the cost-benefit analysis of cloud-based systems, and enable system DevOps to evaluate and tune the performance of these complex ecosystems, appropriate performance metrics, benchmarks, tools, and methodologies are necessary. This requires re-examining old system properties and considering new system properties, possibly leading to the re-design of classic benchmarking metrics such as expressing performance as throughput and latency (response time). In this work, we address these requirements by focusing on four system properties: (i) elasticity of the cloud service, to accommodate large variations in the amount of service requested, (ii) performance isolation between the tenants of shared cloud systems and resulting performance variability, (iii) availability of cloud services and systems, and (iv) the operational risk of running a production system in a cloud environment. Focusing on key metrics for each of these properties, we review the state-of-the-art, then select or propose new metrics together with measurement approaches. We see the presented metrics as a foundation toward upcoming, future industry-standard cloud benchmarks.
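As an illustration of the kind of metric the work selects, here is a sketch of one simple elasticity measure: average under- and over-provisioning of a cloud service against a varying demand trace. The traces below are made up for illustration and are not from the paper.

```python
# Sketch of a simple elasticity metric: average under- and over-provisioning,
# comparing demanded versus supplied resource units per time interval.
# Both traces are illustrative values, not measurements from the paper.
demand = [2, 4, 8, 8, 6, 3, 2]  # resource units needed per interval
supply = [2, 3, 6, 8, 8, 5, 2]  # resource units provisioned per interval

under = sum(max(d - s, 0) for d, s in zip(demand, supply)) / len(demand)
over = sum(max(s - d, 0) for d, s in zip(demand, supply)) / len(demand)
print(f"avg under-provisioning: {under:.2f} units/interval")
print(f"avg over-provisioning:  {over:.2f} units/interval")
```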
Initial recommendations for performing, benchmarking, and reporting single-cell proteomics experiments
Analyzing proteins from single cells by tandem mass spectrometry (MS) has become technically feasible. While such analysis has the potential to accurately quantify thousands of proteins across thousands of single cells, the accuracy and reproducibility of the results may be undermined by numerous factors affecting experimental design, sample preparation, data acquisition, and data analysis. Broadly accepted community guidelines and standardized metrics will enhance rigor, data quality, and alignment between laboratories. Here we propose best practices, quality controls, and data reporting recommendations to assist in the broad adoption of reliable quantitative workflows for single-cell proteomics.
Supporting website: https://single-cell.net/guideline
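To make the notion of a standardized quality-control metric concrete, here is a small sketch computing the coefficient of variation (CV) of each protein's quantity across single cells, a common way to gauge quantitative reproducibility. The protein names and abundance values are illustrative, not from the paper.

```python
# Sketch of a basic QC metric: per-protein coefficient of variation (CV)
# across single cells. All values below are illustrative.
import statistics

# protein -> measured abundance in each single cell (hypothetical data)
quant = {
    "TP53": [10.2, 9.8, 10.5, 10.1],
    "ACTB": [50.0, 48.7, 51.2, 49.5],
}

for protein, values in quant.items():
    cv = statistics.stdev(values) / statistics.mean(values)
    print(f"{protein}: CV = {cv:.1%}")
```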
The Design, Productization, and Evaluation of a Serverless Workflow-Management System
The need for accessible and cost-effective IT resources has led to the near-universal adoption of cloud computing. Within cloud computing, serverless computing has emerged as a model that further abstracts away the operational complexity of heterogeneous cloud resources. Central to this form of computing is Function-as-a-Service (FaaS): a cloud model that enables users to express applications as functions, further decoupling the application logic from the hardware and other operational concerns. Although FaaS has seen rapid adoption for simple use cases, several issues impede its use for more complex use cases. A key issue is the lack of systems that facilitate the reuse of existing functions to create more complex, composed functions. Current approaches for serverless function composition are either proprietary, resource-inefficient, unreliable, or do not scale. To address this issue, we propose an approach to orchestrate composed functions reliably and efficiently using workflows. As a prototype, we design and implement Fission Workflows: an open-source serverless workflow system which leverages the characteristics of serverless functions to improve the (re)usability, performance, and reliability of function compositions. We evaluate our prototype using both synthetic and real-world experiments, which show that the system is comparable with or better than state-of-the-art workflow systems, while costing significantly less. Based on the experimental evaluation and the industry interest in the Fission Workflows product, we believe that serverless workflow orchestration will enable the use of serverless applications for more complex use cases.
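An illustrative sketch (not the Fission Workflows API) of the core idea: modeling a composition of functions as a small DAG of tasks and executing each task once its dependencies have completed. All names here are hypothetical.

```python
# Illustrative model of workflow-based function composition: tasks form a
# DAG and each task runs once all of its dependencies have produced output.
# This is a toy sketch, not the Fission Workflows implementation.
from typing import Callable, Dict, List, Sequence

class Workflow:
    def __init__(self) -> None:
        self.tasks: Dict[str, Callable[[dict], dict]] = {}
        self.deps: Dict[str, List[str]] = {}

    def task(self, name: str, fn: Callable[[dict], dict],
             after: Sequence[str] = ()) -> None:
        self.tasks[name] = fn
        self.deps[name] = list(after)

    def run(self, inputs: dict) -> Dict[str, dict]:
        done: Dict[str, dict] = {}
        pending = dict(self.deps)
        while pending:
            # run every task whose dependencies have all completed
            ready = [t for t, d in pending.items() if all(x in done for x in d)]
            if not ready:
                raise ValueError("cycle detected in workflow")
            for t in ready:
                ctx = dict(inputs)
                for dep in pending.pop(t):
                    ctx.update(done[dep])  # feed dependency outputs forward
                done[t] = self.tasks[t](ctx)
        return done

wf = Workflow()
wf.task("fetch", lambda ctx: {"data": [1, 2, 3]})
wf.task("total", lambda ctx: {"sum": sum(ctx["data"])}, after=["fetch"])
wf.task("report", lambda ctx: {"msg": f"total={ctx['sum']}"}, after=["total"])
print(wf.run({})["report"]["msg"])  # prints: total=6
```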
An analysis of workflow formalisms for workflows with complex non-functional requirements
Cloud and datacenter operators offer progressively more sophisticated service level agreements to customers. The Quality-of-Service guarantees by these operators have started to cover non-functional requirements that customers have for their applications. At the same time, expressing applications as workflows in datacenters is increasingly common. Currently, non-functional requirements (NFRs) can only be defined on entire workflows and cannot be changed at runtime, possibly wasting valuable resources. To move towards modifiable NFRs at the task level, there is a need for a formalism capable of expressing them. Existing formalisms do not support this level of granularity or are restricted to a subset of NFRs. In this work, we investigate the current support for NFRs in existing formalisms. Using a library containing workflows with and without NFRs, we inspect the capability of existing formalisms to express these requirements. Additionally, we create and evaluate five metrics to qualitatively and quantitatively compare each formalism. Our main findings are that although current formalisms do not support arbitrary NFRs per task, the Directed Acyclic Graph (DAG) formalism is the most suitable to extend.
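A hypothetical sketch of the extension the paper argues for: attaching NFRs to individual tasks in a DAG workflow, modifiable at runtime, rather than fixing them for the workflow as a whole. The field names and values are illustrative, not a proposed standard.

```python
# Hypothetical data model: per-task NFR annotations on a DAG workflow.
# Field names and values are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class NFR:
    deadline_ms: Optional[int] = None         # per-task latency bound
    min_availability: Optional[float] = None  # e.g., 0.999
    max_cost: Optional[float] = None          # budget per task execution

@dataclass
class Task:
    name: str
    depends_on: List[str] = field(default_factory=list)
    nfr: NFR = field(default_factory=NFR)

# Two-task DAG: only the latency-critical task carries a deadline.
tasks: Dict[str, Task] = {
    "ingest": Task("ingest"),
    "serve": Task("serve", depends_on=["ingest"],
                  nfr=NFR(deadline_ms=100, min_availability=0.999)),
}

# Per-task NFRs can be modified at runtime, e.g., relaxing a deadline.
tasks["serve"].nfr.deadline_ms = 250
```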
The SPEC Cloud Group's research vision on FaaS and serverless architectures
Cloud computing enables an entire ecosystem of developing, composing, and providing IT services. An emerging class of cloud-based software architectures, serverless, focuses on giving software architects the ability to execute arbitrary functions with small overhead in server management, as Function-as-a-Service (FaaS). However useful, serverless and FaaS suffer from a community problem that faces every emerging technology, and which also hampered cloud computing a decade ago: a lack of clear terminology, and a scattered vision of the field. In this work, we address this community problem. We clarify the term serverless, by reducing it to cloud functions as programming units, and a model of executing simple and complex (e.g., workflows of) functions with operations managed primarily by the cloud provider. We propose a research vision in which 4 key directions (perspectives) present 17 technical opportunities and challenges.