
    Towards Measuring and Understanding Performance in Infrastructure- and Function-as-a-Service Clouds

    Context. Cloud computing has become the de facto standard for deploying modern software systems, which makes its performance crucial to the efficient functioning of many applications. However, the unabated growth of established cloud services, such as Infrastructure-as-a-Service (IaaS), and the emergence of new services, such as Function-as-a-Service (FaaS), have led to an unprecedented diversity of cloud services with different performance characteristics. Objective. The goal of this licentiate thesis is to measure and understand performance in IaaS and FaaS clouds. My PhD thesis will extend and leverage this understanding to propose solutions for building performance-optimized FaaS cloud applications. Method. To achieve this goal, quantitative and qualitative research methods are used, including experimental research, artifact analysis, and literature review. Findings. The thesis proposes a cloud benchmarking methodology to estimate application performance in IaaS clouds, characterizes typical FaaS applications, identifies gaps in the literature on FaaS performance evaluations, and examines the reproducibility of reported FaaS performance experiments. The evaluation of the benchmarking methodology yielded promising results for benchmark-based application performance estimation under selected conditions. Characterizing 89 FaaS applications revealed that they are most commonly used for short-running tasks with low data volume and bursty workloads. The review of 112 FaaS performance studies from academic and industrial sources found a strong focus on a single cloud platform using artificial micro-benchmarks and discovered that the majority of studies do not follow reproducibility principles for cloud experimentation. Future Work. Future work will propose a suite of application performance benchmarks for FaaS, which is instrumental for evaluating candidate solutions towards building performance-optimized FaaS applications.
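
    As a rough illustration of what benchmark-based performance estimation can look like, the sketch below scales an application measurement from a baseline instance type to a candidate type using the ratio of their micro-benchmark scores. The instance names, scores, and the simple ratio model are illustrative assumptions, not the methodology or results of the thesis.

```python
# Minimal sketch of benchmark-based application performance estimation.
# All numbers and names below are illustrative placeholders, not results
# from the thesis: the idea is to extrapolate an application's duration on
# an untested instance type from micro-benchmark scores measured on both
# a baseline type and the target type.

# Hypothetical micro-benchmark scores (higher = faster), e.g. a CPU benchmark.
micro_scores = {
    "baseline-instance": 100.0,
    "candidate-instance": 140.0,
}

# Application duration actually measured on the baseline instance (seconds).
measured_app_duration = {"baseline-instance": 42.0}


def estimate_duration(target: str, baseline: str = "baseline-instance") -> float:
    """Estimate application duration on `target` by scaling the baseline
    measurement with the ratio of micro-benchmark scores."""
    ratio = micro_scores[baseline] / micro_scores[target]
    return measured_app_duration[baseline] * ratio


if __name__ == "__main__":
    est = estimate_duration("candidate-instance")
    print(f"Estimated application duration: {est:.1f} s")
```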

    Implementation of DevOps pipeline for Serverless Applications

    Serverless computing is a cloud computing execution model where server-side logic runs in stateless compute containers that are event-triggered and usually fully managed by vendor hosts such as AWS Lambda. This approach is also called Function as a Service (FaaS), and applications that rely on it are called serverless applications. Serverless usage promises infrastructure cost reduction and automatic scalability. Another important benefit of serverless is that it simplifies the operations part of the DevOps process: it reduces the time spent on server management and maintenance and sometimes makes servers entirely unnecessary. Despite this, applications using the serverless computing model require a new look at DevOps automation practices, since it is a new approach to software architecture design and the software development workflow. The goal of this thesis is to implement a DevOps pipeline for a serverless application within a single case organization and evaluate the results of the implementation. This is done through design science research, where the resulting artifact is a release pipeline designed and implemented according to the requirements of a new project in the case organization. The result of the study is an automated DevOps pipeline with the Continuous Integration (CI), Continuous Delivery (CD), and Monitoring practices required for the case project. The research shows that the architecture of serverless applications affects many DevOps automation practices, such as test execution, deployment, and monitoring of the application. It also affects decisions about source code repository structure, mocking libraries, and Infrastructure as Code (IaC) tools.
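
    To illustrate one of the practices such a pipeline depends on, the sketch below shows a FaaS-style handler whose cloud dependency is injected so that a CI stage can unit-test it with a mock instead of real infrastructure. The handler, bucket name, and test are hypothetical examples, not artifacts from the case project.

```python
# Illustrative sketch (not the case project's actual pipeline code): a
# FaaS-style handler with its cloud dependency injected, so a CI stage can
# unit-test it with a mock. All names here are hypothetical.
from unittest.mock import MagicMock


def handle_event(event, storage_client):
    """Store the incoming payload and return a simple status response."""
    key = event["id"]
    storage_client.put_object(Bucket="example-bucket", Key=key, Body=event["body"])
    return {"statusCode": 200, "stored": key}


def test_handle_event():
    # The storage client is mocked, so the test runs in CI without cloud access.
    storage = MagicMock()
    response = handle_event({"id": "42", "body": b"payload"}, storage)
    storage.put_object.assert_called_once()
    assert response["statusCode"] == 200


if __name__ == "__main__":
    test_handle_event()
    print("handler test passed")
```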

    Performance Evaluation of Serverless Applications and Infrastructures

    Context. Cloud computing has become the de facto standard for deploying modern web-based software systems, which makes its performance crucial to the efficient functioning of many applications. However, the unabated growth of established cloud services, such as Infrastructure-as-a-Service (IaaS), and the emergence of new serverless services, such as Function-as-a-Service (FaaS), have led to an unprecedented diversity of cloud services with different performance characteristics. Measuring these characteristics is difficult in dynamic cloud environments due to performance variability in large-scale distributed systems with limited observability. Objective. This thesis aims to enable reproducible performance evaluation of serverless applications and their underlying cloud infrastructure. Method. A combination of literature review and empirical research established a consolidated view on serverless applications and their performance. New solutions were developed through engineering research and used to conduct performance benchmarking field experiments in cloud environments. Findings. The review of 112 FaaS performance studies from academic and industrial sources found a strong focus on a single cloud platform using artificial micro-benchmarks and discovered that most studies do not follow reproducibility principles for cloud experimentation. Characterizing 89 serverless applications revealed that they are most commonly used for short-running tasks with low data volume and bursty workloads. A novel trace-based serverless application benchmark shows that external service calls often dominate the median end-to-end latency and cause long tail latency. The latency breakdown analysis further identifies performance challenges of serverless applications, such as long delays through asynchronous function triggers, substantial runtime initialization for cold starts, increased performance variability under bursty workloads, and heavily provider-dependent performance characteristics. The evaluation of different cloud benchmarking methodologies showed that only selected micro-benchmarks are suitable for estimating application performance, that performance variability depends on the resource type, and that batch testing on the same instance with repetitions should be used for reliable performance testing. Conclusions. The insights of this thesis can guide practitioners in building performance-optimized serverless applications and researchers in reproducibly evaluating cloud performance using suitable execution methodologies and different benchmark types.
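
    The sketch below gives a minimal example of the kind of latency breakdown such a trace-based analysis performs, computing median and tail end-to-end latency and per-component medians. The trace structure and all numbers are invented for illustration and are not data from the thesis.

```python
# Minimal sketch of a trace-based latency breakdown. The span names and
# numbers are invented for illustration; real traces would come from a
# provider's tracing service.
from statistics import median

# Each trace: per-component durations in milliseconds.
traces = [
    {"trigger": 12, "cold_start_init": 0, "function": 35, "external_call": 180},
    {"trigger": 15, "cold_start_init": 850, "function": 40, "external_call": 210},
    {"trigger": 11, "cold_start_init": 0, "function": 33, "external_call": 1900},
    {"trigger": 13, "cold_start_init": 0, "function": 36, "external_call": 190},
]

end_to_end = sorted(sum(t.values()) for t in traces)
print("median end-to-end latency:", median(end_to_end), "ms")
print("tail (max observed) latency:", end_to_end[-1], "ms")

# Per-component medians show which part dominates a typical request.
for component in traces[0]:
    print(component, "median:", median(t[component] for t in traces), "ms")
```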

    Function-as-a-Service for the Cloud-to-Thing Continuum: A Systematic Mapping Study

    Until recently, Internet of Things applications were mainly seen as a means to gather sensor data for further processing in the Cloud. Nowadays, with the advent of Edge and Fog Computing, digital services are drawn closer to the physical world, with data processing and storage tasks distributed across the whole Cloud-to-Thing continuum. Function-as-a-Service (FaaS) is gaining momentum as one of the promising programming models for such digital services. This work investigates the current research landscape of applying FaaS over the Cloud-to-Thing continuum. In particular, we investigate the support offered by existing FaaS platforms for the deployment, placement, orchestration, and execution of functions across the whole continuum using the Systematic Mapping Study methodology. We selected 33 primary studies and analyzed their data, providing a broad view of the current research landscape in the area.

    Edge Offloading in Smart Grid

    The energy transition supports the shift towards more sustainable energy alternatives, paving the way towards decentralized smart grids, where energy is generated closer to the point of use. Decentralized smart grids foresee novel data-driven low-latency applications for improving resilience and responsiveness, such as peer-to-peer energy trading, microgrid control, fault detection, or demand response. However, traditional cloud-based smart grid architectures are unable to meet the requirements of these emerging applications, such as low latency and high reliability; thus, alternative architectures such as edge, fog, or hybrid ones need to be adopted. Moreover, edge offloading can play a pivotal role in next-generation smart grid AI applications because it enables the efficient utilization of computing resources and addresses the challenges of the increasing data generated by IoT devices, optimizing response time, energy consumption, and network performance. However, a comprehensive overview of the current state of research is needed to support sound decisions regarding offloading energy-related applications from the cloud to the fog or edge, with a focus on smart grid open challenges and potential impacts. In this paper, we delve into smart grid and computational distribution architectures, including edge-fog-cloud models, orchestration architecture, and serverless computing, and analyze the decision-making variables and optimization algorithms to assess the efficiency of edge offloading. Finally, the work contributes to a comprehensive understanding of edge offloading in the smart grid, providing a SWOT analysis to support decision making.
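
    As a simple illustration of the kind of decision-making variables involved, the sketch below compares local execution time against offloading time (transmission plus edge execution) using a standard latency model. All parameter values are hypothetical placeholders, not figures from the paper.

```python
# Illustrative offloading decision sketch using a standard latency model:
# execute a task locally, or ship its input over the network and run it on
# an edge node. All parameters are placeholders, not values from the paper.

def local_latency(cycles: float, local_freq_hz: float) -> float:
    """Time to execute the task on the local (IoT) device."""
    return cycles / local_freq_hz


def offload_latency(cycles: float, edge_freq_hz: float,
                    input_bits: float, bandwidth_bps: float) -> float:
    """Transmission time plus execution time on the edge node."""
    return input_bits / bandwidth_bps + cycles / edge_freq_hz


if __name__ == "__main__":
    cycles = 2e9  # task size in CPU cycles (hypothetical)
    t_local = local_latency(cycles, local_freq_hz=0.5e9)
    t_edge = offload_latency(cycles, edge_freq_hz=4e9,
                             input_bits=8e6, bandwidth_bps=20e6)
    decision = "offload to edge" if t_edge < t_local else "execute locally"
    print(f"local: {t_local:.2f}s, edge: {t_edge:.2f}s -> {decision}")
```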

    Rise of the Planet of Serverless Computing: A Systematic Review

    Serverless computing is an emerging cloud computing paradigm that is being adopted to develop a wide range of software applications. It allows developers to focus on the application logic at the granularity of a function, thereby freeing them from tedious and error-prone infrastructure management. Meanwhile, its unique characteristics pose new challenges to the development and deployment of serverless-based applications. To tackle these challenges, enormous research efforts have been devoted. This paper provides a comprehensive literature review to characterize the current research state of serverless computing. Specifically, it covers 164 papers on 17 research directions of serverless computing, including performance optimization, programming frameworks, application migration, multi-cloud development, testing and debugging, etc. It also derives research trends, focus areas, and commonly used platforms for serverless computing, as well as promising research opportunities.

    Towards Secure and Scalable Tag Search approaches for Current and Next Generation RFID Systems

    The technology behind Radio Frequency Identification (RFID) has been around for a while, but dropping tag prices and standardization efforts are finally facilitating the expansion of RFID systems. The massive adoption of this technology is taking us closer to well-known ubiquitous computing scenarios. However, the widespread deployment of RFID technology also gives rise to significant user security issues. One possible solution to these challenges is the use of secure authentication protocols to protect RFID communications. A natural extension of RFID authentication is RFID tag searching, where a reader needs to search for a particular RFID tag out of a large collection of tags. As the number of tags in the system increases, the ability to search for tags is invaluable when the reader requires data from a few tags rather than all the tags in the system. Authenticating each tag one at a time until the desired tag is found is a time-consuming process. Surprisingly, RFID search has not been widely addressed in the literature despite the availability of search capabilities in typical RFID tags. In this thesis, we examine the challenges of extending security and scalability to RFID tag search and suggest several solutions. This thesis aims to design RFID tag search protocols that ensure security and scalability using lightweight cryptographic primitives. We identify the security and performance requirements for RFID systems. We also point out and explain the major attacks that are typically launched against an RFID system. This thesis makes four main contributions. First, we propose a serverless (without a central server) and untraceable search protocol that is secure against the major attacks we identified earlier. The unique feature of this protocol is that it provides the same security protection and search capability as an RFID system with a central server. In addition, this approach is no longer vulnerable to a single point of failure. Second, we propose a scalable tag search protocol that provides most of the identified security and performance features. The highly scalable nature of this protocol allows it to be deployed in large-scale RFID systems. Third, we propose a hexagonal-cell-based distributed architecture for efficient RFID tag searching in an emergency evacuation system. Finally, we introduce tag monitoring as a new dimension of tag searching and propose a Slotted Aloha based scalable tag monitoring protocol for next-generation WISP (Wireless Identification and Sensing Platform) tags.
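
    The sketch below shows a generic hash-based challenge-response round in the spirit of lightweight RFID search schemes: the reader broadcasts a keyed digest bound to the wanted tag ID and a fresh nonce, and only the matching tag answers. It is a simplified illustration (a single shared secret, full SHA-256 instead of a tag-friendly primitive) and not one of the protocols proposed in the thesis.

```python
# Generic sketch of a hash-based tag search round, in the spirit of
# lightweight RFID search protocols (not the exact protocols of this thesis).
# The reader broadcasts a keyed digest of the wanted tag ID and a fresh nonce;
# only the tag holding that ID (and the shared secret) recognizes the query
# and answers, so eavesdroppers cannot link queries to a particular tag.
import hmac
import hashlib
import os


def reader_query(secret: bytes, wanted_id: bytes, nonce: bytes) -> bytes:
    return hmac.new(secret, b"query" + wanted_id + nonce, hashlib.sha256).digest()


def tag_respond(secret: bytes, tag_id: bytes, nonce: bytes, query: bytes):
    """The tag answers only if the query targets its own ID."""
    if hmac.compare_digest(query, reader_query(secret, tag_id, nonce)):
        return hmac.new(secret, b"reply" + tag_id + nonce, hashlib.sha256).digest()
    return None  # query is for some other tag; stay silent


if __name__ == "__main__":
    secret, tag_id = b"shared-key", b"TAG-007"
    nonce = os.urandom(16)
    q = reader_query(secret, tag_id, nonce)
    assert tag_respond(secret, tag_id, nonce, q) is not None
    assert tag_respond(secret, b"TAG-008", nonce, q) is None
    print("targeted tag responded; other tags stayed silent")
```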

    Function-as-a-Service Performance Evaluation: A Multivocal Literature Review

    Function-as-a-Service (FaaS) is one form of the serverless cloud computing paradigm and is defined through FaaS platforms (e.g., AWS Lambda) executing event-triggered code snippets (i.e., functions). Many studies that empirically evaluate the performance of such FaaS platforms have started to appear, but we currently lack a comprehensive understanding of the overall domain. To address this gap, we conducted a multivocal literature review (MLR) covering 112 studies from academic (51) and grey (61) literature. We find that existing work mainly studies the AWS Lambda platform and focuses on micro-benchmarks using simple functions to measure CPU speed and FaaS platform overhead (i.e., container cold starts). Further, we discover a mismatch between academic and industrial sources on tested platform configurations, find that function triggers remain insufficiently studied, and identify HTTP API gateways and cloud storage services as the most used external service integrations. Following existing guidelines on experimentation in cloud systems, we discover many flaws threatening the reproducibility of the experiments presented in the surveyed studies. We conclude with a discussion of gaps in the literature and highlight methodological suggestions that may serve to improve future FaaS performance evaluation studies.
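
    To make the notion of such micro-benchmarks concrete, the sketch below shows an AWS-Lambda-style handler that reports whether an invocation hit a cold start (a fresh container, detected via module-level state) and times a small CPU-bound task. The handler and its fields are illustrative, not taken from any surveyed study.

```python
# Sketch of a typical FaaS micro-benchmark of the kind surveyed above:
# an AWS-Lambda-style handler that reports whether the invocation hit a
# cold start (fresh container) and how long a small CPU-bound task took.
# Function and field names are illustrative, not from a specific study.
import time

_container_start = time.time()
_invocations = 0


def handler(event, context=None):
    global _invocations
    _invocations += 1
    cold_start = _invocations == 1  # first call in this container

    t0 = time.perf_counter()
    total = sum(i * i for i in range(200_000))  # simple CPU-bound workload
    cpu_ms = (time.perf_counter() - t0) * 1000

    return {
        "cold_start": cold_start,
        "container_age_s": round(time.time() - _container_start, 3),
        "cpu_benchmark_ms": round(cpu_ms, 3),
        "checksum": total,
    }


if __name__ == "__main__":
    print(handler({}))  # simulated cold invocation
    print(handler({}))  # simulated warm invocation
```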