
    Implementation of DevOps pipeline for Serverless Applications

    Serverless computing is a cloud computing execution model in which server-side logic runs in stateless compute containers that are event-triggered and usually fully managed by a vendor, as with AWS Lambda. This approach is also called Function as a Service (FaaS), and applications that rely on it are called serverless applications. Serverless promises reduced infrastructure costs and automatic scalability. Another important benefit is that it simplifies the operations side of the DevOps process: it reduces the time spent managing and maintaining servers and sometimes eliminates that work entirely. Even so, applications built on the serverless computing model require a fresh look at DevOps automation practices, since serverless is a new approach to software architecture design and to the software development workflow. The goal of this thesis is to implement a DevOps pipeline for a serverless application within a single case organization and to evaluate the results of the implementation. This is done through design science research, where the resulting artifact is a release pipeline designed and implemented according to the requirements of a new project in the case organization. The result of the study is an automated DevOps pipeline with the Continuous Integration (CI), Continuous Delivery (CD), and monitoring practices required for the case project. The research shows that the architecture of serverless applications affects many DevOps automation practices, such as test execution, deployment, and monitoring of the application. It also affects decisions about source code repository structure, mocking libraries, and Infrastructure as Code (IaC) tools.
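
    As a minimal sketch of the FaaS model the abstract describes, the handler below shows the shape of an event-triggered, stateless AWS Lambda function in Python. The function name and event fields are illustrative assumptions, not taken from the thesis.

        import json

        def handler(event, context):
            # The platform invokes this function once per event; there is no
            # server for the developer to provision, patch, or scale.
            # "queryStringParameters" follows the API Gateway proxy event
            # shape and may be null when no query string is present.
            params = event.get("queryStringParameters") or {}
            name = params.get("name", "world")
            return {
                "statusCode": 200,
                "body": json.dumps({"message": f"hello, {name}"}),
            }

    Because the handler keeps no state between invocations, the platform can scale it out automatically, which is exactly what complicates traditional DevOps practices such as deployment and monitoring.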

    Serving deep learning models in a serverless platform

    Serverless computing has emerged as a compelling paradigm for the development and deployment of a wide range of event-based cloud applications. At the same time, cloud providers and enterprise companies are heavily adopting machine learning and artificial intelligence, either to differentiate themselves or to provide their customers with value-added services. In this work we evaluate the suitability of a serverless computing environment for the inference of large neural network models. Our experimental evaluations are executed on the AWS Lambda environment using the MxNet deep learning framework. The results show that while inference latency can be within an acceptable range, longer delays due to cold starts can skew the latency distribution and hence risk violating more stringent service-level agreements (SLAs).
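
    A minimal sketch, assuming Python and illustrative checkpoint names (not the paper's benchmark code), of the serving pattern the abstract evaluates: the MxNet model is loaded at module scope, so the cost is paid once per container (the cold start the abstract refers to) and warm invocations reuse the loaded model.

        import json
        import mxnet as mx

        # Hypothetical checkpoint files ("resnet-18-symbol.json",
        # "resnet-18-0000.params"); a real deployment would bundle them
        # with the function or fetch them from object storage.
        MODEL_PREFIX = "resnet-18"
        INPUT_SHAPE = (1, 3, 224, 224)

        # Module-scope initialization runs once per container lifetime,
        # i.e. during the cold start; warm invocations skip it.
        sym, arg_params, aux_params = mx.model.load_checkpoint(MODEL_PREFIX, 0)
        mod = mx.mod.Module(symbol=sym, label_names=None)
        mod.bind(for_training=False, data_shapes=[("data", INPUT_SHAPE)])
        mod.set_params(arg_params, aux_params, allow_missing=True)

        def handler(event, context):
            # Expect a pre-processed image tensor in the request body.
            pixels = json.loads(event["body"])["pixels"]
            data = mx.nd.array(pixels).reshape(INPUT_SHAPE)
            mod.forward(mx.io.DataBatch([data]), is_train=False)
            probs = mod.get_outputs()[0].asnumpy()[0]
            top = int(probs.argmax())
            return {
                "statusCode": 200,
                "body": json.dumps({"class": top, "probability": float(probs[top])}),
            }

    Under this pattern, per-request inference latency can stay low once a container is warm, but any request that lands on a fresh container pays the full model-loading cost, which is the long tail in the latency distribution the paper reports.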