139 research outputs found

    Function-as-a-Service Performance Evaluation: A Multivocal Literature Review

    Function-as-a-Service (FaaS) is one form of the serverless cloud computing paradigm and is defined through FaaS platforms (e.g., AWS Lambda) executing event-triggered code snippets (i.e., functions). Many studies that empirically evaluate the performance of such FaaS platforms have started to appear, but we currently lack a comprehensive understanding of the overall domain. To address this gap, we conducted a multivocal literature review (MLR) covering 112 studies from academic (51) and grey (61) literature. We find that existing work mainly studies the AWS Lambda platform and focuses on micro-benchmarks using simple functions to measure CPU speed and FaaS platform overhead (i.e., container cold starts). Further, we discover a mismatch between academic and industrial sources on tested platform configurations, find that function triggers remain insufficiently studied, and identify HTTP API gateways and cloud storage as the most used external service integrations. Following existing guidelines on experimentation in cloud systems, we discover many flaws threatening the reproducibility of the experiments presented in the surveyed studies. We conclude with a discussion of gaps in the literature and highlight methodological suggestions that may serve to improve future FaaS performance evaluation studies.
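    A minimal sketch of the kind of micro-benchmark these studies use: a module-level flag distinguishes container cold starts from warm invocations, and a small CPU-bound loop is timed. The handler signature follows the AWS Lambda Python convention; the payload shape and workload size are illustrative assumptions, not taken from any surveyed study.

```python
import time

# Module-level state survives across warm invocations of the same
# container; a freshly started container re-imports the module, so a
# False flag marks a cold start.
_warm = False

def handler(event, context=None):
    """Minimal FaaS micro-benchmark: report cold-start status and the
    time taken by a small CPU-bound workload."""
    global _warm
    cold = not _warm
    _warm = True

    start = time.perf_counter()
    checksum = sum(i * i for i in range(100_000))  # illustrative CPU work
    cpu_ms = (time.perf_counter() - start) * 1000.0

    return {"cold_start": cold, "cpu_ms": cpu_ms, "checksum": checksum}
```

    Invoking the function twice in the same container shows the pattern the studies measure: only the first call reports a cold start, so repeated invocations separate platform overhead from steady-state CPU performance.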

    Performance Evaluation of Serverless Applications and Infrastructures

    Context. Cloud computing has become the de facto standard for deploying modern web-based software systems, which makes its performance crucial to the efficient functioning of many applications. However, the unabated growth of established cloud services, such as Infrastructure-as-a-Service (IaaS), and the emergence of new serverless services, such as Function-as-a-Service (FaaS), has led to an unprecedented diversity of cloud services with different performance characteristics. Measuring these characteristics is difficult in dynamic cloud environments due to performance variability in large-scale distributed systems with limited observability. Objective. This thesis aims to enable reproducible performance evaluation of serverless applications and their underlying cloud infrastructure. Method. A combination of literature review and empirical research established a consolidated view on serverless applications and their performance. New solutions were developed through engineering research and used to conduct performance benchmarking field experiments in cloud environments. Findings. The review of 112 FaaS performance studies from academic and industrial sources found a strong focus on a single cloud platform using artificial micro-benchmarks and discovered that most studies do not follow reproducibility principles on cloud experimentation. Characterizing 89 serverless applications revealed that they are most commonly used for short-running tasks with low data volume and bursty workloads. A novel trace-based serverless application benchmark shows that external service calls often dominate the median end-to-end latency and cause long tail latency. The latency breakdown analysis further identifies performance challenges of serverless applications, such as long delays through asynchronous function triggers, substantial runtime initialization for cold starts, increased performance variability under bursty workloads, and heavily provider-dependent performance characteristics. The evaluation of different cloud benchmarking methodologies has shown that only selected micro-benchmarks are suitable for estimating application performance, that performance variability depends on the resource type, and that batch testing on the same instance with repetitions should be used for reliable performance testing. Conclusions. The insights of this thesis can guide practitioners in building performance-optimized serverless applications and researchers in reproducibly evaluating cloud performance using suitable execution methodologies and different benchmark types.
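    The median and tail-latency findings above can be computed from request traces with a short sketch. The trace schema (`total_ms` for end-to-end latency, `external_ms` for time inside external service calls) is a hypothetical simplification of real per-request span data, not the thesis's actual trace format.

```python
import statistics

def latency_breakdown(traces):
    """Summarize end-to-end latency and the share of each request's time
    spent in external service calls (storage, queues, API gateways).

    Each trace is a dict with `total_ms` (end-to-end latency) and
    `external_ms` (time spent in external service calls).
    """
    ext_share = [t["external_ms"] / t["total_ms"] for t in traces]
    e2e = sorted(t["total_ms"] for t in traces)
    # Nearest-rank p99: adequate for a sketch, not a full quantile estimator.
    p99 = e2e[min(len(e2e) - 1, int(0.99 * len(e2e)))]
    return {
        "p50_ms": statistics.median(e2e),
        "p99_ms": p99,
        "median_external_share": statistics.median(ext_share),
    }
```

    On a workload where most requests spend 80% of their time in external calls and a single request stalls, the p50/p99 gap and the high median external share reproduce exactly the pattern the abstract describes: external calls dominating median latency while a few requests form a long tail.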

    Smart Tagging System for Diving Equipment

    The use of Near Field Communication (NFC) has revolutionized many industries through digitalization. This process of digital immersion has been further accelerated by the mainstream availability of NFC-enabled devices and the substantial decline in the cost of NFC smart tags. The purpose of this thesis was to design and implement an end-to-end smart tagging solution for diving equipment. The project involved an Android application, an AngularJS web application, and a backend developed using Amazon Web Services (AWS). A serverless architecture using AWS microservices was employed in the project. The Android application is used to register NFC tags by writing data to and reading data from the tags and by communicating with the backend through a RESTful API. The AngularJS application provides access to the corresponding data. In addition, user authentication is achieved by using Google as an Identity Provider (IdP). This document provides an overview of the steps necessary to implement and integrate applications running on different platforms with AWS services in a cost-effective and scalable manner. Even though this document addresses topics relevant to a specific project, most of the implementation and design instructions can be used to serve other use cases, particularly by startups. Since the project involves applications developed on different platforms, only the most important aspects of the process are presented throughout this document.

    Towards Measuring and Understanding Performance in Infrastructure- and Function-as-a-Service Clouds

    Context. Cloud computing has become the de facto standard for deploying modern software systems, which makes its performance crucial to the efficient functioning of many applications. However, the unabated growth of established cloud services, such as Infrastructure-as-a-Service (IaaS), and the emergence of new services, such as Function-as-a-Service (FaaS), has led to an unprecedented diversity of cloud services with different performance characteristics. Objective. The goal of this licentiate thesis is to measure and understand performance in IaaS and FaaS clouds. My PhD thesis will extend and leverage this understanding to propose solutions for building performance-optimized FaaS cloud applications. Method. To achieve this goal, quantitative and qualitative research methods are used, including experimental research, artifact analysis, and literature review. Findings. The thesis proposes a cloud benchmarking methodology to estimate application performance in IaaS clouds, characterizes typical FaaS applications, identifies gaps in literature on FaaS performance evaluations, and examines the reproducibility of reported FaaS performance experiments. The evaluation of the benchmarking methodology yielded promising results for benchmark-based application performance estimation under selected conditions. Characterizing 89 FaaS applications revealed that they are most commonly used for short-running tasks with low data volume and bursty workloads. The review of 112 FaaS performance studies from academic and industrial sources found a strong focus on a single cloud platform using artificial micro-benchmarks and discovered that the majority of studies do not follow reproducibility principles on cloud experimentation. Future Work. Future work will propose a suite of application performance benchmarks for FaaS, which is instrumental for evaluating candidate solutions towards building performance-optimized FaaS applications.

    Model-Driven Machine Learning for Predictive Cloud Auto-scaling

    Cloud provisioning of resources requires continuous monitoring and analysis of the workload on virtual computing resources. However, cloud providers typically offer only rule-based and schedule-based auto-scaling services. Auto-scaling is a cloud mechanism that reacts to real-time metrics and adjusts service instances based on predefined scaling policies. The challenge of this reactive approach is to cope with fluctuating load changes. For data management applications, the workload changes over time and must be forecast from historical trends and integrated with the auto-scaling service. We aim to discover changes and patterns across multiple resource-usage metrics: CPU, memory, and networking. To address this problem, learning-and-inference-based prediction is adopted to predict resource needs prior to the provisioning action. First, we develop a novel machine-learning-based auto-scaling process that covers the technique of learning multiple metrics for cloud auto-scaling decisions. This technique is used for continuous model training and workload forecasting. Furthermore, the result of workload forecasting triggers the auto-scaling process automatically. We also build the components of this machine-learning-based process (monitoring, machine learning, model selection, and scheduling) as serverless microservices and orchestrate these independent services through platform- and language-agnostic APIs. We demonstrate this architectural implementation on AWS and Microsoft Azure and show the prediction results from machine learning on the fly. Results show significant cost reductions by our proposed solution compared to a general threshold-based auto-scaling. Still, the machine learning prediction must be integrated with the auto-scaling system, which increases the deployment effort of devising the additional machine learning components. 
    We therefore present a model-driven framework that defines first-class entities to represent machine learning algorithm types, inputs, outputs, parameters, and evaluation scores. We set up rules for validating machine learning entities. The connection between the machine learning and auto-scaling systems is represented by two levels of abstraction, namely a cloud-platform-independent model and a cloud-platform-specific model. We automate the model-to-model transformation and the model-to-deployment transformation. We integrate the model-driven approach with DevOps to make models deployable and executable on a target cloud platform. We demonstrate our method with the scaling configuration and deployment of two open-source benchmark applications, Dell DVD Store and Netflix NDBench, on three cloud platforms: AWS, Azure, and Rackspace. The evaluation shows that our model-driven, inference-based auto-scaling reduces deployment effort by approximately 27% compared to ordinary auto-scaling.

    Web-IDE for Low-Code Development in OutSystems

    Due to the growing popularity of cloud computing and its numerous benefits, many desktop applications have been, and will continue to be, migrated into the cloud and made available through the web. These applications can then be accessed through any device that has a browser and an internet connection, eliminating the need for installation or managing dependencies. Moreover, the process of introduction to the product is much simpler and faster, and collaboration aspects are facilitated. OutSystems is a company that provides software that enables users, through an Integrated Development Environment (IDE) and a specific Low-Code language, to securely and rapidly build robust applications. However, only desktop versions of this IDE are available. For this reason, the objective of the proposed thesis is to understand what would be the best path for developing a Web-based version of the IDE. To achieve this, it is important not only to understand the OutSystems Platform and, more specifically, the architecture of the Service Studio IDE, which is the component IDE provided by the product, but also to explore the state-of-the-art technologies that could prove beneficial for the development of the project. The goal of this work is to debate different architectural possibilities for implementing the project in question and present a conclusion as to what is the adequate course of action, given the context of the problem. After distinguishing the biggest uncertainties and relevant points, a proof of concept is presented, accompanied by the respective implementation details. Finally, this work intends to determine a viable technological architecture for building a Web-based IDE that is capable of maintaining an acceptable performance, similar to the Service Studio IDE, while also ensuring that the system is scalable, in order to be able to provide the service to a large number of users. That is to say, to present a conclusion regarding the feasibility of the proposed project.

    Comparative study of Infrastructure as Code tools for Amazon Web Services

    Cloud computing has become an integral part of modern software development. Infrastructure as Code (IaC) is an approach to managing infrastructure through code instead of manual processes. This thesis presents a comparative study of two popular IaC tools, AWS Cloud Development Kit (AWS CDK) and Terraform, for managing cloud resources on Amazon Web Services (AWS). The study investigates the key features, functionality, and benefits of each tool, as well as their strengths and weaknesses for AWS development. The research methodology involved a literature review, a practical implementation with both tools, and a comparison using software quality metrics. The main qualities compared were performance, maintainability, and developer experience. The results show that both tools can define cloud infrastructure, have features that support maintainability, and offer a good developer experience. Terraform performed better in the performance comparison, with faster infrastructure deployment and update operations. However, AWS CDK offers a higher level of abstraction, better integration with IDE tools, and allows developers to use their preferred programming language. The study concludes that AWS CDK is the preferred IaC tool for AWS but recommends Terraform for multi-cloud environments or use cases where more mature tools are required.

    Evaluating and Enabling Scalable High Performance Computing Workloads on Commercial Clouds

    Performance, usability, and accessibility are critical components of high performance computing (HPC). Usability and performance are especially important to academic researchers, as they generally have little time to learn a new technology and demand a certain level of performance to ensure the quality and quantity of their research results. We have observed that while not all workloads run well in the cloud, some workloads perform well. We have also observed that although commercial cloud adoption by industry has been growing at a rapid pace, its use by academic researchers has not grown as quickly. We aim to help close this gap and enable researchers to utilize the commercial cloud more efficiently and effectively. We present our results on architecting and benchmarking an HPC environment on Amazon Web Services (AWS), where we observe that there are particular types of applications that are and are not suited for the commercial cloud. Then, we present our results on architecting and building a provisioning and workflow management tool (PAW), an application that enables a user to launch an HPC environment in the cloud, execute a customizable workflow, and automatically delete the HPC environment after the workflow has completed. We then present our results on the scalability of PAW and the commercial cloud for compute-intensive workloads by deploying a 1.1 million vCPU cluster. We then discuss our research into the feasibility of utilizing commercial cloud infrastructure to help tackle the large spikes and data-intensive characteristics of Transportation Cyberphysical Systems (TCPS) workloads. Then, we present our research in utilizing the commercial cloud for urgent HPC applications by deploying a 1.5 million vCPU cluster to process 211 TB of traffic video data to be utilized by first responders during an evacuation situation. Lastly, we present the contributions and conclusions drawn from this work.
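    The launch-execute-delete lifecycle that PAW automates follows a generic provision/teardown pattern that can be sketched as a context manager; `hpc_environment` and the callback names are hypothetical illustrations of the pattern, not PAW's actual API.

```python
import contextlib

@contextlib.contextmanager
def hpc_environment(provision, teardown):
    """Launch a cluster, hand it to the workflow, and always delete it.

    `provision` creates the environment and returns a handle; `teardown`
    deletes it. Running teardown in `finally` guarantees the (expensive)
    cloud environment is removed even when a workflow step fails.
    """
    cluster = provision()
    try:
        yield cluster
    finally:
        teardown(cluster)
```

    The `finally` clause is the key design choice: at the scale of a million-vCPU cluster, forgetting to delete the environment after a failed workflow would be very costly, so deletion must not depend on the workflow succeeding.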