8 research outputs found

    Cloud native computing for Industry 4.0: Challenges and opportunities

    Get PDF
    Proceedings of: 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), 7-10 Sept. 2021, Västerås, Sweden.

    Cloud-based architectures are advantageous in aspects such as scalability, reliability and resource-utilization efficiency, to name just a few, and are thus considered one of the pillars of Industry 4.0. However, in this domain, cloud computing platforms are subject to specific requirements, namely concerning real-time performance, determinism and fault tolerance. This paper focuses on cloud native computing, an emerging and promising cloud-computing paradigm, and specifically addresses its applicability to real-time systems. First, it introduces the architecture of cloud native applications, discussing their principles, potential advantages and challenges. It then addresses the opportunities and constraints of such technologies when applied to industrial real-time systems.

    This work has been supported by the EC H2020 5GPPP 5Growth project (Grant 856709).

    A Deep Reinforcement Learning based Algorithm for Time and Cost Optimized Scaling of Serverless Applications

    Full text link
    Serverless computing has gained strong traction in the cloud computing community in recent years. Among the many benefits of this computing model, the rapid auto-scaling capability of user applications takes prominence. However, the ad hoc scaling of user deployments at the function level introduces many complications to serverless systems. One very prevalent shortcoming is the cold-start delay: the added latency and failures in function request executions caused by the time consumed dynamically creating new resources to suit function workloads. Maintaining idle resource pools to alleviate this issue often results in wasted resources from the cloud provider's perspective. Existing solutions to address this limitation mostly focus on predicting and understanding function load levels in order to proactively create the required resources. Although these solutions improve function performance, the lack of understanding of the overall system characteristics when making these scaling decisions often leads to sub-optimal usage of system resources. Further, the multi-tenant nature of serverless systems requires a scalable solution adaptable to multiple co-existing applications, a limitation seen in most current solutions. In this paper, we introduce a novel multi-agent Deep Reinforcement Learning based intelligent solution for both horizontal and vertical scaling of function resources, based on a comprehensive understanding of both function and system requirements. Our solution improves function performance by reducing cold starts, while also offering service providers the flexibility to optimize resource maintenance costs. Experiments conducted under varying workload scenarios show improvements of up to 23% and 34% in terms of application latency and request failures, respectively, while also saving up to 45% in infrastructure cost for the service providers.

    Comment: 15 pages, 22 figures
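The reinforcement-learning scaling loop described in this abstract can be sketched in miniature. This is only an illustrative toy, not the paper's multi-agent DRL solution: it uses tabular Q-learning over a discretized (load, replicas) state, and the class names, parameters and reward shape are all assumptions made for the sketch.

```python
import random

# Toy tabular Q-learning agent for horizontal scaling decisions.
# State: (load_level, replica_count); actions: scale down, hold, scale up.
# Names and the reward shape are illustrative assumptions, not the
# paper's actual formulation.

ACTIONS = (-1, 0, +1)  # remove a replica, keep, add a replica

class ScalingAgent:
    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)        # explore
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

def reward(load, replicas):
    # Penalize both under-provisioning (cold-start risk) and idle cost.
    if replicas < load:
        return -2.0 * (load - replicas)   # requests queue / cold starts
    return -0.5 * (replicas - load)       # wasted idle resources

def simulate(episodes=2000):
    agent = ScalingAgent()
    replicas = 1
    for _ in range(episodes):
        load = random.randint(0, 4)
        state = (load, replicas)
        action = agent.act(state)
        replicas = min(5, max(0, replicas + action))
        agent.learn(state, action, reward(load, replicas), (load, replicas))
    return agent
```

The asymmetric reward mirrors the trade-off the abstract describes: under-provisioning is penalized more heavily than idle resources, so the learned policy leans toward avoiding cold starts while still discouraging waste.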

    Topology-based Scheduling in Serverless Computing Platforms

    Get PDF
    In the past few years, Function as a Service (FaaS) solutions, and Serverless computing in general, have become a significant topic both in terms of general interest and research effort. These approaches allow users to run stateless code in the cloud without worrying about the underlying infrastructure for scheduling, management and scaling, but their ease of use still comes with various trade-offs and challenges. This thesis addresses the issue of data locality, using an extension of the Apache OpenWhisk framework that gives users the ability to select the node on which some of their functions are scheduled, allowing the code to run closer to the data it manipulates. Additionally, a topology-based scheduling approach is implemented for the framework, in which load balancers are instructed to prioritize nodes in their own topological zone; this way, users can specify a preferred load balancer for different functions, with no need to know the position and name of all other nodes in the cluster. This modified version of the OpenWhisk framework is then compared with the standard OpenWhisk implementation, along with two other serverless frameworks, Fission and OpenFaaS, using a test suite composed of different use cases, drawing on both existing projects from the Wonderless dataset and custom-built functions targeting different aspects of the paradigm. The role of data locality considerations and topology-based policies is analyzed, showing their importance in a multi-zone cluster with nodes in various geographical locations, where latency between them and the remote data used by the functions can be significant.
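The zone-priority policy this abstract describes can be sketched as follows. The node and zone names are illustrative assumptions, and this is not OpenWhisk's actual scheduler logic, only a minimal stand-in for the idea of preferring same-zone nodes and falling back when local capacity runs out.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    zone: str        # topological zone, e.g. a region or data center
    free_slots: int  # remaining invocation capacity

def schedule(nodes, balancer_zone):
    """Pick a node for the next invocation, preferring the local zone."""
    local = [n for n in nodes if n.zone == balancer_zone and n.free_slots > 0]
    remote = [n for n in nodes if n.zone != balancer_zone and n.free_slots > 0]
    # Within each tier, pick the least-loaded node (most free slots).
    for tier in (local, remote):
        if tier:
            chosen = max(tier, key=lambda n: n.free_slots)
            chosen.free_slots -= 1
            return chosen.name
    return None  # no capacity anywhere
```

A local node is chosen even when a remote node has more free capacity, which is exactly the trade-off that pays off when cross-zone latency dominates function runtime.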

    "Serverless Computing": Functions as a Service (FaaS) to support computational workloads in the cloud

    Full text link
    The project has consisted of the creation of a Function as a Service (FaaS) execution platform, as this constitutes the central piece of the serverless computing paradigm. The platform prioritizes isolation between different function invocations, in order to ensure deterministic results as well as the security of users' data. The project progressed as follows. Existing open source FaaS platforms were studied in order to analyze them and identify aspects for improvement. Taking those aspects as a reference, specifically isolation, together with others inherent to the fundamental concept of FaaS, such as performance and high availability, an architecture for the platform was designed. This design served as a guide for the development of a prototype that fulfills the functionality expected of a FaaS system and is configurable, allowing new heuristics for function loading and execution to be programmed. Lastly, the system developed in this project was tested, validating its correct behavior and measuring its performance, comparing the proposed heuristics against each other.

    Rodriguez Dominguez, J. (2020). "Serverless Computing", Funciones como Servicio (FaaS) para el soporte de cargas computacionales en la nube. Universitat Politècnica de València. http://hdl.handle.net/10251/159165
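The per-invocation isolation this abstract prioritizes can be illustrated with a minimal sketch: each call runs the handler in a fresh interpreter process, so no in-memory state can leak between invocations. Real FaaS platforms use containers or microVMs for this; the subprocess here is only a stand-in, and the `invoke` helper is hypothetical.

```python
import json
import subprocess
import sys
import textwrap

def invoke(handler_source, event):
    """Run `handler(event)` from handler_source in a clean interpreter."""
    # Append a small driver that feeds the event in and prints the result.
    program = handler_source + textwrap.dedent("""
        import json, sys
        event = json.load(sys.stdin)
        print(json.dumps(handler(event)))
    """)
    proc = subprocess.run(
        [sys.executable, "-c", program],
        input=json.dumps(event), capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)
```

Because every invocation starts from a blank process, a handler that mutates module-level state still produces the same result on every call, which is the determinism property the platform design aims for.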

    Practical aspects of FaaS applications' migration

    Get PDF
    With the wide variety of FaaS platforms available in cloud and self-hosted environments, the idea of migrating function applications from one provider to another is becoming an important consideration. This work investigates the challenges developers encounter when manually migrating applications between Amazon Web Services, Microsoft Azure and IBM Cloud, with regard to the effort needed to migrate the functions and the services. It also proposes a simple approach to reduce the coupling between the function application and the cloud provider by externalizing the business logic into a separate, completely vendor-independent package. We see that this approach reduces the effort needed to migrate the source code to another provider, but it does not reduce the effort of migrating the functions' configuration and services. We also see that the migration effort is affected not only by the migration of the source code but also by the migration of the services, especially in self-hosted environments, where developers additionally have to find a suitable substitute for the service in their use case.
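The decoupling approach this work proposes can be sketched as follows: the business logic lives in a provider-independent function, and each provider gets only a thin adapter that maps its event format onto that function. The event shapes below are simplified illustrations, not the exact AWS Lambda or Azure Functions schemas, and the function names are hypothetical.

```python
def process_order(order_id, quantity):
    """Vendor-independent business logic: the part that migrates for free."""
    return {"order_id": order_id, "total_items": quantity, "status": "accepted"}

# --- AWS Lambda-style adapter (handler signature: event, context) ---
def aws_handler(event, context):
    return process_order(event["orderId"], event["quantity"])

# --- Azure Functions-style adapter (simplified: takes a parsed request body) ---
def azure_handler(req_body):
    return process_order(req_body["orderId"], req_body["quantity"])
```

Moving to a new provider then means writing one new adapter rather than rewriting the application, which matches the finding above: the source-code migration effort shrinks, while configuration and service migration remain.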