Multi-level monitoring and rule based reasoning in the adaptation of time-critical cloud applications
Nowadays, different types of online services are often deployed and operated on the cloud, since it offers a convenient on-demand model for renting resources and easy-to-use elastic infrastructures. Moreover, modern software engineering provides the means to design time-critical services as sets of components running in containers. Container technologies such as Docker, Kubernetes, CoreOS, Swarm and OpenShift Origin enable highly dynamic cloud-based services capable of addressing continuously varying workloads. Due to their lightweight nature, containers can be instantiated, terminated and managed very dynamically. Container-based cloud applications therefore require sophisticated auto-scaling methods in order to operate under different workload conditions, such as drastically changing workload scenarios.
Imagine a cloud-based social media website on which a piece of news suddenly goes viral. On the one hand, to preserve the users' experience, enough computational resources must be allocated before the workload intensity surges at runtime. On the other hand, renting expensive cloud-based resources can be unaffordable over a prolonged period of time. The choice of an auto-scaling method may therefore significantly affect important service quality parameters, such as response time and resource utilisation. Current cloud providers, such as Amazon EC2, and container orchestration systems, such as Kubernetes, employ auto-scaling rules with static thresholds and rely mainly on infrastructure-level monitoring data, such as CPU and memory utilisation.
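To make the static-threshold behaviour concrete, the sketch below mirrors the publicly documented Kubernetes HPA scaling formula (desired replicas proportional to the ratio of observed to target utilisation); the concrete utilisation figures and the 50% target are illustrative assumptions, not values taken from the thesis.

```python
import math

def desired_replicas(current_replicas: int,
                     current_cpu_utilisation: float,
                     target_cpu_utilisation: float = 0.5) -> int:
    """Static-threshold rule in the style of Kubernetes HPA:
    scale the replica count by the ratio of observed to target utilisation."""
    ratio = current_cpu_utilisation / target_cpu_utilisation
    return max(1, math.ceil(current_replicas * ratio))

# Illustrative values only: 4 containers running at 80% CPU against a 50% target
# are scaled out to ceil(4 * 0.8 / 0.5) = 7 replicas.
print(desired_replicas(current_replicas=4, current_cpu_utilisation=0.8))
```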
This thesis presents a new Dynamic Multi-Level (DM) auto-scaling method whose auto-scaling rules use dynamically changing thresholds and exploit not only infrastructure-level but also application-level monitoring data. The DM method is implemented within our proposed architecture for auto-scaling containerised applications and is compared with seven existing auto-scaling methods under different synthetic and real-world workload scenarios. These approaches comprise Kubernetes Horizontal Pod Auto-scaling (HPA), two Step Scaling methods (SS1, SS2), two Target Tracking Scaling methods (TTS1, TTS2), and two static threshold-based scaling methods (THRES1, THRES2). All of the investigated auto-scaling methods are currently considered advanced approaches and are used in production systems such as Kubernetes and Amazon EC2. The workload scenarios examined in this work comprise a slowly rising/falling pattern, a drastically changing pattern, an on-off pattern, a gently shaking pattern, and a real-world pattern.
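The thesis's actual DM rules are not reproduced in this abstract; the following minimal sketch only illustrates the general idea of thresholds recomputed from a sliding window of both infrastructure-level and application-level observations. All metric names, window contents and coefficients are assumptions.

```python
from statistics import mean, pstdev

def dynamic_threshold(recent_values, k=1.0):
    """Recompute a threshold from a sliding window of observations
    (mean plus k standard deviations) instead of using a fixed value."""
    return mean(recent_values) + k * pstdev(recent_values)

def scale_decision(cpu_window, response_time_window, replicas, rt_slo=0.2):
    """Combine an infrastructure-level signal (CPU) with an application-level
    signal (response time): scale out if either exceeds its dynamically
    derived threshold, scale in if both are well below."""
    cpu_thr = dynamic_threshold(cpu_window)
    rt_thr = min(rt_slo, dynamic_threshold(response_time_window))
    cpu_now, rt_now = cpu_window[-1], response_time_window[-1]
    if cpu_now > cpu_thr or rt_now > rt_thr:
        return replicas + 1          # scale out
    if cpu_now < 0.5 * cpu_thr and rt_now < 0.5 * rt_thr:
        return max(1, replicas - 1)  # scale in
    return replicas                  # hold
```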
Based on the experimental results achieved for each workload pattern, all eight auto-scaling methods are compared according to response time and the number of instantiated containers. Taken as a whole, the results show that the proposed DM method has better overall performance under varying workloads than the other auto-scaling methods. Owing to these satisfactory results, the DM method has been implemented in the SWITCH software engineering system for time-critical cloud-based applications. Auto-scaling rules, along with other properties such as characteristics of virtualisation platforms, the current workload, periodic QoS fluctuations and the like, are continuously stored as Resource Description Framework (RDF) triples in a Knowledge Base (KB), which is included in the proposed architecture. The primary reason for maintaining the KB is to address the different requirements of the SWITCH solution stakeholders, such as cloud-based service providers, by allowing seamless information integration that can be used for long-term trend analysis and support for strategic planning.
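As an illustration of how such rules and properties can be persisted as RDF triples, the sketch below uses the rdflib library; the namespace and property names are hypothetical and do not reproduce the SWITCH Knowledge Base vocabulary.

```python
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

# Hypothetical namespace; the actual SWITCH vocabulary is not reproduced here.
EX = Namespace("http://example.org/switch#")

g = Graph()
rule = EX["rule-cpu-scale-out"]
g.add((rule, RDF.type, EX.AutoScalingRule))
g.add((rule, EX.metric, Literal("cpu_utilisation")))
g.add((rule, EX.threshold, Literal(0.8, datatype=XSD.double)))
g.add((rule, EX.action, Literal("scale_out")))

# Serialised as Turtle, ready to be pushed into a triple store / Knowledge Base.
print(g.serialize(format="turtle"))
```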
A Capillary Computing Architecture for Dynamic Internet of Things: Orchestration of Microservices from Edge Devices to Fog and Cloud Providers
The adoption of advanced Internet of Things (IoT) technologies has improved impressively in recent years by placing such services at the extreme Edge of the network. There are, however, specific Quality of Service (QoS) trade-offs that must be considered, particularly in situations where workloads vary over time or where IoT devices dynamically change their geographic position. This article proposes an innovative capillary computing architecture, which benefits from mainstream Fog and Cloud computing approaches and relies on a set of new services, including an Edge/Fog/Cloud Monitoring System and a Capillary Container Orchestrator. All necessary microservices are implemented as Docker containers, and their orchestration is performed from the Edge computing nodes up to Fog and Cloud servers in the geographic vicinity of moving IoT devices. A car equipped with a Motorhome Artificial Intelligence Communication Hardware (MACH) system as an Edge node, connected to several Fog and Cloud computing servers, was used for testing. Compared with using a fixed centralised Cloud provider, the service response time achieved by the proposed capillary computing architecture was almost four times shorter according to the 99th percentile value, with a significantly smaller standard deviation, which represents a high QoS.
Document type: Article
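Since the reported comparison is made at the 99th percentile of the service response time, a brief sketch of how that tail statistic is computed may help; the response-time samples below are synthetic stand-ins, not measurements from the article.

```python
import random
import statistics

# Illustrative only: synthetic response-time samples (seconds).  The 99th
# percentile is the value below which 99% of observed response times fall;
# reporting it (rather than the mean) highlights tail latency.
random.seed(0)
edge_samples = [random.gauss(0.05, 0.01) for _ in range(10_000)]
cloud_samples = [random.gauss(0.20, 0.05) for _ in range(10_000)]

def p99(samples):
    return statistics.quantiles(samples, n=100)[98]   # 99th percentile

print(f"edge  p99: {p99(edge_samples):.3f} s")
print(f"cloud p99: {p99(cloud_samples):.3f} s")
```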
Towards an Environment Supporting Resilience, High-Availability, Reproducibility and Reliability for Cloud Applications
Cloud Challenge at Utility and Cloud Computing 2015 (UCC 2015), Proceedings of the UCC 2015, Limassol, Cyprus
Monitoring self-adaptive applications within edge computing frameworks: A state-of-the-art review
Recently, a promising trend has evolved from previous centralized computation to decentralized edge
computing in the proximity of end-users to provide cloud applications. To ensure the Quality of Service
(QoS) of such applications and Quality of Experience (QoE) for the end-users, it is necessary to employ
a comprehensive monitoring approach. Requirement analysis is a key software engineering task in the
whole lifecycle of applications; however, the requirements for monitoring systems within edge computing
scenarios are not yet fully established. The goal of the present survey study is therefore threefold:
to identify the main challenges in the field of monitoring edge computing applications that are as yet
not fully solved; to present a new taxonomy of monitoring requirements for adaptive applications orchestrated
upon edge computing frameworks; and to discuss and compare the use of widely-used cloud
monitoring technologies to assure the performance of these applications. Our analysis shows that none of
existing widely-used cloud monitoring tools yet provides an integrated monitoring solution within edge
computing frameworks. Moreover, some monitoring requirements have not been thoroughly met by any
of them
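As a minimal illustration of multi-level monitoring (an infrastructure-level and an application-level metric exported side by side), the sketch below uses Prometheus-style instrumentation; whether Prometheus is among the tools compared in the review is not stated here, and the metric names are assumptions.

```python
import time
import psutil                                   # infrastructure-level probe
from prometheus_client import start_http_server, Gauge, Histogram

# Infrastructure-level metric (host CPU) and application-level metric
# (request latency) exported side by side; metric names are illustrative.
cpu_gauge = Gauge("node_cpu_utilisation_percent", "Host CPU utilisation")
latency = Histogram("app_request_latency_seconds", "Per-request latency")

def handle_request():
    with latency.time():                        # application-level observation
        time.sleep(0.05)                        # stand-in for real work

if __name__ == "__main__":
    start_http_server(9100)                     # scrape endpoint for a collector
    while True:
        cpu_gauge.set(psutil.cpu_percent(interval=1))
        handle_request()
```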
A Semantic Model for Interchangeable Microservices in Cloud Continuum Computing
The rapid growth of new computing models that exploit the cloud continuum has a big impact on the adoption of microservices, especially in dynamic environments where the amount of workload varies over time or where Internet of Things (IoT) devices dynamically change their geographic location. To exploit the true potential of cloud continuum computing applications, it is essential to use a comprehensive set of intricate technologies together. This complex blend of technologies currently raises data interoperability problems in such modern computing frameworks. A semantic model is therefore required to unambiguously specify the notions of the various concepts employed in cloud applications. The goal of the present paper is therefore twofold: (i) to offer a new model that allows an easier understanding of microservices within adaptive fog computing frameworks, and (ii) to present the latest open standards and tools that are now widely used to implement each class defined in our proposed model.
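As a rough illustration of what a shared semantic vocabulary for microservices enables, the sketch below declares a hypothetical class and instance with rdflib and queries them with SPARQL; the class and property names are assumptions and do not reproduce the model proposed in the paper.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

# Hypothetical vocabulary for illustration only; not the paper's model.
MS = Namespace("http://example.org/microservice#")

g = Graph()
g.add((MS.Microservice, RDF.type, RDFS.Class))

svc = MS["image-resizer"]
g.add((svc, RDF.type, MS.Microservice))
g.add((svc, MS.deployedOn, Literal("fog-node-eu-west")))
g.add((svc, RDFS.label, Literal("Image resizing microservice")))

# A shared vocabulary lets different orchestration tools ask the same question.
query = """
PREFIX ms: <http://example.org/microservice#>
SELECT ?svc ?node WHERE { ?svc a ms:Microservice ; ms:deployedOn ?node . }
"""
for row in g.query(query):
    print(row.svc, row.node)
```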