
    RAYGO: Reserve As You GO

    Predicting the precise resource requirements of a microservice-based application is an important problem for cloud services. Allocating abundant resources guarantees an excellent quality of experience (QoE) for the hosted services, but it can translate into unnecessary costs for the cloud customer due to reserved but unused resources. On the other hand, under-provisioning can lead to poor performance during unexpected peaks in demand. This paper proposes RAYGO, a novel approach for dynamic resource provisioning to microservices in Kubernetes that (i) relieves customers from defining appropriate execution boundaries, (ii) ensures the right amount of resources at any time, according to past and predicted usage, and (iii) operates at the application level, acknowledging the dependencies between multiple correlated microservices.
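    The abstract does not include code; purely as an illustration of dynamic resource provisioning in Kubernetes, the sketch below adjusts a deployment's CPU request from recently observed usage via the official Python client. The namespace, deployment and container names, and the fixed headroom rule are assumptions for the example, not RAYGO's actual algorithm.

```python
# Illustrative sketch only: adjust a deployment's CPU request from observed usage.
# Names (namespace, deployment, container) and the 20% headroom rule are assumptions,
# not the provisioning logic proposed in the RAYGO paper.
from kubernetes import client, config


def resize_cpu_request(namespace: str, deployment: str, container: str,
                       observed_millicores: int, headroom: float = 0.2) -> None:
    """Patch the container's CPU request to observed usage plus a safety headroom."""
    config.load_kube_config()                      # or config.load_incluster_config()
    new_request = int(observed_millicores * (1 + headroom))
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{
                        "name": container,
                        "resources": {"requests": {"cpu": f"{new_request}m"}},
                    }]
                }
            }
        }
    }
    client.AppsV1Api().patch_namespaced_deployment(deployment, namespace, patch)


if __name__ == "__main__":
    # e.g. the service has averaged 430 millicores over the last observation window
    resize_cpu_request("demo", "orders-service", "orders", observed_millicores=430)
```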

    Distributed microservices evaluation in edge computing

    Current Internet of Things applications rely on centralized cloud computing for data processing. Future applications, such as smart cities, homes, and vehicles, however, generate so much data that cloud computing is unable to provide the required Quality of Service. Thus, edge computing, which pulls data and related computation from distant data centers to the network edge, is seen as the way forward in the evolution of the Internet of Things. Traditional cloud applications, implemented as centralized server-side monoliths, may prove unfavorable for edge systems due to the distributed nature of the network edge. On the other hand, the recent development practices of containerization and microservices seem like an attractive choice for edge application development. Containerization enables edge computing to use lightweight virtualized resources. Microservices modularize applications at the functional level into small, independent packages. This thesis studies the impact of containers and distributed microservices on edge computing, based on service execution latency and energy consumption. Evaluation is done by developing a monolithic and a distributed microservice version of a user mobility analysis service. Both services are containerized with Docker and deployed on resource-constrained edge devices to conduct measurements in real-world settings. The collected results show that centralized monoliths provide lower latencies for small amounts of data, while distributed microservices are faster for large amounts of data. Partitioning services onto multiple edge devices is shown to increase energy consumption significantly.
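    The thesis compares service execution latency between the two deployments; as a rough illustration of that kind of measurement (the endpoints and payload below are placeholders, not the mobility-analysis service itself), one could compare the end-to-end latency of two containerized HTTP services like this:

```python
# Illustrative sketch only: compare end-to-end latency of a monolithic endpoint and the
# entry point of a microservice chain. URLs and payload are placeholders, not the
# thesis's actual services.
import statistics
import time

import requests

ENDPOINTS = {
    "monolith": "http://edge-node-1:8080/analyze",       # assumed single-container service
    "microservices": "http://edge-node-1:8081/ingest",   # assumed entry of the service chain
}
PAYLOAD = {"user_id": 42, "samples": list(range(1000))}  # stand-in for mobility data


def measure(url: str, runs: int = 20) -> float:
    """Return the median request latency in milliseconds over `runs` requests."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        requests.post(url, json=PAYLOAD, timeout=30).raise_for_status()
        latencies.append((time.perf_counter() - start) * 1000)
    return statistics.median(latencies)


if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        print(f"{name}: {measure(url):.1f} ms median")
```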

    From Monolithic Systems to Microservices: An Assessment Framework

    Context. Re-architecting monolithic systems with a Microservices-based architecture is a common trend. Various companies are migrating to Microservices for different reasons. However, such an important decision as re-architecting an entire system must be based on real facts, not only on gut feeling. Objective. The goal of this work is to propose an evidence-based decision support framework for companies that need to migrate to Microservices, based on the analysis of a set of characteristics and metrics they should collect before re-architecting their monolithic system. Method. We designed this study with a mixed-methods approach, combining a Systematic Mapping Study with a survey conducted as interviews with professionals, to derive the assessment framework using Grounded Theory. Results. We identified a set of information and metrics that companies can use to decide whether or not to migrate to Microservices. The proposed assessment framework, based on these metrics, can help companies that need to migrate to Microservices avoid overlooking important information.
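    The abstract does not enumerate the metrics themselves; purely as an illustration of the kind of pre-migration data collection it describes, the snippet below gathers a few commonly cited indicators (size, coupling, delivery cadence) into a simple report. The metric names, fields, and example numbers are assumptions, not the framework derived in the paper.

```python
# Illustrative sketch only: record a few baseline indicators of a monolith before a
# Microservices migration decision. Metric names and values are assumptions, not the
# assessment framework proposed in the paper.
from dataclasses import dataclass


@dataclass
class MonolithSnapshot:
    lines_of_code: int
    modules: int
    cross_module_dependencies: int
    deployments_per_month: float
    mean_build_minutes: float


def coupling_ratio(s: MonolithSnapshot) -> float:
    """Average number of cross-module dependencies per module."""
    return s.cross_module_dependencies / max(s.modules, 1)


def report(s: MonolithSnapshot) -> str:
    return "\n".join([
        f"size: {s.lines_of_code} LOC across {s.modules} modules",
        f"coupling: {coupling_ratio(s):.1f} cross-module dependencies per module",
        f"delivery: {s.deployments_per_month:.1f} deployments/month, "
        f"builds take {s.mean_build_minutes:.0f} min on average",
    ])


if __name__ == "__main__":
    print(report(MonolithSnapshot(850_000, 64, 410, 2.5, 38)))
```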

    Scalable Multi-cloud Platform to Support Industry and Scientific Applications

    Cloud computing offers resources on demand and without large capital investments. As such, it is attractive to many industry and scientific application areas that require large computation and storage facilities. Although Infrastructure as a Service (IaaS) clouds provide elasticity and on-demand resource access, the challenges of multi-cloud capabilities and application-level scalability remain largely unsolved. The CloudSME Simulation Platform (CSSP), extended with the Microservices-based Cloud Application-level Dynamic Orchestrator (MiCADO), addresses these issues. CSSP is a generic multi-cloud access platform for the development and execution of large-scale industry and scientific simulations on heterogeneous cloud resources. MiCADO provides application-level scalability to optimise execution time and cost. This paper outlines how these technologies have been developed in various European research projects and showcases several application case studies from manufacturing, engineering and the life sciences where these tools have been successfully used to execute large-scale simulations in an optimised way on heterogeneous cloud infrastructures.
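    MiCADO's actual interface is not described in the abstract; as a generic illustration of application-level scaling that trades execution time against cost, the sketch below picks an instance count for a batch of simulation jobs under a deadline and a per-instance price. The policy and the numbers are assumptions for the example, not MiCADO's orchestration logic.

```python
# Illustrative sketch only: choose how many cloud worker instances to run so that a batch
# of simulation jobs finishes before a deadline at minimum cost. A generic policy for
# illustration, not MiCADO's actual scaling logic.
import math


def pick_instance_count(jobs: int, minutes_per_job: float, deadline_minutes: float,
                        cost_per_instance_hour: float, max_instances: int = 64):
    """Return (instances, estimated_cost) for the cheapest plan that meets the deadline."""
    best = None
    for n in range(1, max_instances + 1):
        runtime = math.ceil(jobs / n) * minutes_per_job      # jobs run in parallel waves
        if runtime > deadline_minutes:
            continue
        cost = n * (runtime / 60) * cost_per_instance_hour
        if best is None or cost < best[1]:
            best = (n, cost)
    return best


if __name__ == "__main__":
    plan = pick_instance_count(jobs=500, minutes_per_job=12,
                               deadline_minutes=240, cost_per_instance_hour=0.45)
    print(plan)  # (25, 45.0) with these example numbers
```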

    Scalability Benchmarking of Cloud-Native Applications Applied to Event-Driven Microservices

    Cloud-native applications constitute a recent trend for designing large-scale software systems. This thesis introduces the Theodolite benchmarking method, allowing researchers and practitioners to conduct empirical scalability evaluations of cloud-native applications, their frameworks, configurations, and deployments. The benchmarking method is applied to event-driven microservices, a specific type of cloud-native application that employs distributed stream processing frameworks to scale with massive data volumes. Extensive experimental evaluations benchmark and compare the scalability of various stream processing frameworks under different configurations and deployments, including different public and private cloud environments. These experiments show that the presented benchmarking method provides statistically sound results in an adequate amount of time. In addition, three case studies demonstrate that the Theodolite benchmarking method can be applied to a wide range of applications beyond stream processing.
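    In essence, a demand-based scalability benchmark of this kind determines, for each tested load intensity, the smallest number of instances that still satisfies a service-level objective; the sketch below shows that search loop in outline. The `deploy_and_check_slo` callable is a placeholder for the real experiment step (deploying the stream processing application and monitoring it, e.g. via record lag), and the loop here is a naive linear search rather than Theodolite's actual search strategy.

```python
# Illustrative sketch only: map each load intensity to the minimal instance count that
# meets an SLO. `deploy_and_check_slo` stands in for the real deploy-and-measure step.
from typing import Callable, Dict, List


def benchmark_scalability(loads: List[int], max_instances: int,
                          deploy_and_check_slo: Callable[[int, int], bool]) -> Dict[int, int]:
    """Return {load: minimal instances meeting the SLO}, or -1 if no tested count suffices."""
    demand: Dict[int, int] = {}
    for load in loads:
        demand[load] = -1
        for instances in range(1, max_instances + 1):
            if deploy_and_check_slo(load, instances):   # run experiment, evaluate SLO
                demand[load] = instances
                break
    return demand


if __name__ == "__main__":
    # Toy stand-in: pretend each instance can sustain 50,000 messages per second.
    def fake_experiment(load: int, instances: int) -> bool:
        return instances * 50_000 >= load

    print(benchmark_scalability([100_000, 400_000, 800_000], max_instances=10,
                                deploy_and_check_slo=fake_experiment))
```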

    Enterprise Composition Architecture for Micro-Granular Digital Services and Products

    The digitization of our society changes the way we live, work, learn, communicate, and collaborate. This defines the strategic context for composing resilient enterprise architectures for micro-granular digital services and products. The shift from a closed-world modeling perspective to more flexible open-world composition and evolution of system architectures defines the moving context for adaptable systems, which are essential to enable the digital transformation. Enterprises are presently transforming their strategy and culture, together with their processes and information systems, to become more digital. The digital transformation deeply disrupts existing enterprises and economies. For years, new business opportunities have been emerging that exploit the potential of the Internet and related digital technologies, such as the Internet of Things, services computing, cloud computing, big data with analytics, mobile systems, collaboration networks, and cyber-physical systems. Digitization fosters the development of IT systems with many rather small and distributed structures, such as the Internet of Things or mobile systems. In this paper, we focus on the continuous bottom-up integration of micro-granular architectures for a large number of dynamically growing systems and services, such as the Internet of Things and Microservices, as part of a new digital enterprise architecture. To integrate micro-granular architecture models into living architectural model versions, we extend traditional enterprise architecture reference models with state-of-the-art elements for agile architectural engineering, supporting the digitalization of services, their related products, and their processes.