Understanding Cost Dynamics of Serverless Computing: An Empirical Study
The advent of serverless computing has revolutionized the landscape of cloud
computing, offering a new paradigm that enables developers to focus solely on
their applications rather than managing and provisioning the underlying
infrastructure. These applications involve integrating individual functions
into a cohesive workflow for complex tasks. The pay-per-use model and
nontransparent reporting by cloud providers make it difficult to estimate
serverless costs, impeding informed business decisions. Existing research
studies on serverless computing focus on performance optimization and state
management, both from empirical and technical perspectives. However, the
state-of-the-art shows a lack of empirical investigations on the understanding
of the cost dynamics of serverless computing over traditional cloud computing.
Therefore, this study delves into how organizations anticipate the costs of
adopting serverless. It also aims to comprehend workload suitability and
identify best practices for cost optimization of serverless applications. To
this end, we conducted a qualitative (interview-based) study with 15 experts from 8
companies involved in the migration and development of serverless systems. The
findings revealed that, while serverless computing is highly suitable for
unpredictable workloads, it may not be cost-effective for certain high-scale
applications. The study also introduces a taxonomy for comparing the cost of
adopting serverless versus traditional cloud computing.
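The pay-per-use dynamics the study examines can be made concrete with a minimal cost sketch. This is an illustration only, not the study's taxonomy: the two pricing dimensions (per-request fee and per-GB-second compute fee) mirror common FaaS billing, but the default rates below are placeholder values, not any provider's actual prices.

```python
# Illustrative pay-per-use cost model for a single serverless function.
# Price constants are placeholders, NOT actual provider rates.
def monthly_function_cost(invocations: int,
                          avg_duration_ms: float,
                          memory_mb: int,
                          price_per_million_requests: float = 0.20,
                          price_per_gb_second: float = 0.0000166667) -> float:
    """Estimate a monthly bill from request count and compute time.

    Compute is billed in GB-seconds: allocated memory (in GB) multiplied
    by execution time (in seconds), summed over all invocations.
    """
    request_cost = invocations / 1_000_000 * price_per_million_requests
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * price_per_gb_second
    return request_cost + compute_cost
```

A model like this makes the abstract's core point visible: with zero traffic the bill is zero, which favors unpredictable workloads, while at sustained high scale the linear growth in invocations can overtake a flat-rate alternative.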
The Journey to Serverless Migration: An Empirical Analysis of Intentions, Strategies, and Challenges
Serverless is an emerging cloud computing paradigm that enables
developers to focus solely on application logic rather than provisioning
and managing the underlying infrastructure. Inherent characteristics of
serverless computing, such as scalability, flexibility, and cost efficiency,
have attracted many companies to migrate their legacy applications toward this
paradigm. However, the stateless nature of serverless requires careful
migration planning, consideration of its subsequent implications, and potential
challenges. To this end, this study investigates the intentions, strategies,
and technical and organizational challenges while migrating to a serverless
architecture. We investigated the migration processes of 11 systems across
diverse domains by conducting 15 in-depth interviews with professionals from 11
organizations. We also present a detailed discussion of each migration case.
Our findings reveal that large enterprises primarily migrate to enhance
scalability and operational efficiency, while smaller organizations intend to
reduce the cost. Furthermore, organizations use a domain-driven design approach
to identify the use case and gradually migrate to serverless using a strangler
pattern. However, migration encounters prominent technical challenges, i.e.,
testing event-driven architectures, integrating with legacy systems, and a lack
of standardization, as well as organizational challenges, i.e., the required
mindset change and hiring skilled serverless developers. The findings of this
study provide a comprehensive understanding that can guide future
implementations and advancements in the context of serverless migration.
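The strangler pattern the study observes can be sketched as a routing layer: migrated request paths are sent to the new serverless endpoint while everything else still reaches the legacy system, and the migrated set grows over time. This is a minimal illustration; the endpoint URLs and path prefixes are hypothetical, not taken from any of the studied systems.

```python
# Minimal sketch of strangler-pattern routing during a gradual migration.
# Paths in MIGRATED_PREFIXES go to the new serverless API; all other
# traffic continues to hit the legacy system. URLs are hypothetical.
MIGRATED_PREFIXES = {"/orders", "/invoices"}  # grows as migration proceeds

def route(path: str) -> str:
    """Return the backend URL that should serve this request path."""
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return "https://api.example.com/serverless" + path
    return "https://legacy.example.com" + path
```

The design choice matches the gradual approach the interviewees describe: the legacy system is never switched off wholesale, and each domain-driven slice can be moved and validated independently before the next prefix is added.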
Open-source Serverless Architectures: an Evaluation of Apache OpenWhisk
The serverless computing paradigm ushers in new concepts for running applications and services in the cloud. Currently, commercial solutions dominate the market, though open-source solutions do exist. As a consequence, there is little research detailing how well the different open-source solutions perform. In this paper, one such open-source solution, Apache OpenWhisk, is investigated to shed light on the capabilities and limitations inherent in such a serverless computing architecture, and principally to provide further research on this particular solution's performance. This is accomplished through an extensive evaluation of OpenWhisk, involving a variety of experiments and benchmarks.
Function-as-a-Service Performance Evaluation: A Multivocal Literature Review
Function-as-a-Service (FaaS) is one form of the serverless cloud computing
paradigm and is defined through FaaS platforms (e.g., AWS Lambda) executing
event-triggered code snippets (i.e., functions). Many studies that empirically
evaluate the performance of such FaaS platforms have started to appear but we
are currently lacking a comprehensive understanding of the overall domain. To
address this gap, we conducted a multivocal literature review (MLR) covering
112 studies from academic (51) and grey (61) literature. We find that existing
work mainly studies the AWS Lambda platform and focuses on micro-benchmarks
using simple functions to measure CPU speed and FaaS platform overhead (i.e.,
container cold starts). Further, we discover a mismatch between academic and
industrial sources on tested platform configurations, find that function
triggers remain insufficiently studied, and identify HTTP API gateways and
cloud storages as the most used external service integrations. Following
existing guidelines on experimentation in cloud systems, we discover many flaws
threatening the reproducibility of experiments presented in the surveyed
studies. We conclude with a discussion of gaps in literature and highlight
methodological suggestions that may serve to improve future FaaS performance
evaluation studies.
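The cold-start overhead that dominates the surveyed micro-benchmarks can be demonstrated with a toy handler: the first invocation pays a one-time initialization cost, while later invocations reuse the warmed state. This is a self-contained simulation, not a real FaaS platform measurement; the 50 ms initialization delay is a synthetic stand-in for container and runtime startup.

```python
import time

# Toy illustration of the cold-start effect measured by FaaS micro-benchmarks:
# the first invocation includes one-time initialization ("cold start"),
# subsequent invocations reuse it ("warm"). The delay below is synthetic.
_initialized = False

def handler(event):
    global _initialized
    if not _initialized:
        time.sleep(0.05)  # stand-in for container/runtime initialization
        _initialized = True
    return {"status": 200, "echo": event}

start = time.perf_counter()
handler({"n": 1})
cold_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
handler({"n": 2})
warm_ms = (time.perf_counter() - start) * 1000
```

Real micro-benchmarks repeat this comparison across memory sizes, runtimes, and idle intervals; even this toy version shows why the review treats platform overhead separately from function execution time.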
Adapting Microservices in the Cloud with FaaS
This project involves benchmarking microservices and Function-as-a-Service (FaaS) across the dimensions of performance and cost. To enable this comparison, this paper proposes a benchmark framework.
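One cost dimension such a comparison framework might evaluate is the break-even point between a per-invocation FaaS bill and a flat always-on microservice deployment. The sketch below is an assumption-laden illustration, not the paper's framework: both price constants are placeholders chosen only to show the crossover behavior.

```python
# Hedged sketch of a cost comparison one benchmark dimension could cover:
# per-invocation FaaS billing versus a flat always-on microservice (VM).
# Both prices are illustrative placeholders, not real rates.
def faas_cost(requests_per_month: int,
              cost_per_request: float = 0.0000005) -> float:
    """FaaS bill scales linearly with traffic."""
    return requests_per_month * cost_per_request

def vm_cost(requests_per_month: int, flat_monthly: float = 30.0) -> float:
    """An always-on microservice host bills the same regardless of load."""
    return flat_monthly

def cheaper_option(requests_per_month: int) -> str:
    """Pick the cheaper deployment at a given monthly request volume."""
    if faas_cost(requests_per_month) < vm_cost(requests_per_month):
        return "faas"
    return "vm"
```

Under these placeholder rates, low-traffic services favor FaaS while sustained high-volume traffic favors the flat-rate host, which is exactly the trade-off a performance-and-cost benchmark would quantify with measured numbers.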