    Enabling Distributed Applications Optimization in Cloud Environment

    The past few years have seen dramatic growth in the popularity of public clouds, such as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Container-as-a-Service (CaaS). In both commercial and scientific fields, quick environment setup and application deployment have become mandatory requirements. As a result, more and more organizations choose cloud environments instead of building their environments from scratch. Cloud computing resources such as server engines, orchestration, and the underlying server hardware are delivered to users as a service by a cloud provider. Most applications that run in public clouds are distributed applications, also called multi-tier applications, which require a set of servers, a service ensemble, that cooperate and communicate to jointly provide a certain service or accomplish a task. However, few research efforts have been devoted to providing an overall solution for distributed application optimization in the public cloud. In this dissertation, we present three systems that enable distributed application optimization: (1) the first part introduces DocMan, a toolset for detecting containerized applications' dependencies in CaaS clouds; (2) the second part introduces a system to deal with hot/cold blocks in distributed applications; (3) the third part introduces FP4S, a novel fragment-based parallel state recovery mechanism that can handle many simultaneous failures for a large number of concurrently running stream applications.

    Containerization in Cloud Computing: performance analysis of virtualization architectures

    The growing adoption of the cloud is strongly influenced by the emergence of technologies that aim to improve the development and deployment processes of enterprise-level applications. The goal of this thesis is to analyze one of these solutions, called "containerization", and to assess in detail how this technology can be adopted in cloud infrastructures as an alternative to complementary solutions such as virtual machines. Until now, the traditional virtual machine model has been the predominant solution on the market. The significant architectural difference that containers offer has driven the rapid adoption of this technology, since it greatly improves resource management and sharing and guarantees significant improvements in the provisioning of individual instances. In this thesis, containerization is examined from both the infrastructure and the application point of view. Regarding the first aspect, performance is analyzed by comparing LXD, Docker, and KVM as hypervisors for the OpenStack cloud infrastructure, while the second concerns the development of enterprise-level applications that must be deployed on a set of distributed servers. In that case, high-level services such as orchestration are needed. Therefore, the performance of the following solutions is compared: Kubernetes, Docker Swarm, Apache Mesos, and Cattle.
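
    As an illustration of the kind of provisioning measurement this thesis performs, the sketch below times repeated container cold starts through the Docker CLI. This is a minimal sketch only: the image name and run count are illustrative, not the thesis's actual benchmark setup.

        # Minimal provisioning micro-benchmark: time N cold starts of a container.
        # Assumes a local Docker daemon and the "alpine" image; both are illustrative.
        import subprocess
        import time

        def time_cold_start(image="alpine", runs=5):
            samples = []
            for _ in range(runs):
                start = time.monotonic()
                # Start a throwaway container that exits immediately; --rm cleans it up.
                subprocess.run(["docker", "run", "--rm", image, "true"],
                               check=True, capture_output=True)
                samples.append(time.monotonic() - start)
            return sum(samples) / len(samples)

        if __name__ == "__main__":
            print(f"mean cold start: {time_cold_start():.3f}s")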

    Resource management in a containerized cloud: status and challenges

    Cloud computing heavily relies on virtualization, as with cloud computing virtual resources are typically leased to the consumer, for example as virtual machines. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises how these strategies can be adapted for the management of a containerized cloud. Apart from this, the cloud is also no longer limited to the centrally hosted data center infrastructure. New deployment models have gained maturity, such as fog and mobile edge computing, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art regarding resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to the recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model. Furthermore, we identify several challenges and possible opportunities for future research.
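
    For readers unfamiliar with the allocation strategies such surveys cover, the following is a minimal first-fit placement sketch: each container is assigned to the first node with enough spare CPU and memory. Node capacities, demands, and names are illustrative; real schedulers weigh many more constraints (affinity, network, migration cost).

        # First-fit placement of containers onto nodes by CPU and memory demand,
        # a baseline strategy often contrasted with VM-oriented approaches.
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            cpu: float            # free CPU cores
            mem: float            # free memory (GiB)
            placed: list = field(default_factory=list)

        def first_fit(containers, nodes):
            """Assign each (name, cpu, mem) container to the first node that fits."""
            for cname, cpu, mem in containers:
                for node in nodes:
                    if node.cpu >= cpu and node.mem >= mem:
                        node.cpu -= cpu
                        node.mem -= mem
                        node.placed.append(cname)
                        break
                else:
                    raise RuntimeError(f"no capacity for {cname}")
            return nodes

        nodes = [Node("edge-1", cpu=2.0, mem=4.0), Node("edge-2", cpu=4.0, mem=8.0)]
        for n in first_fit([("web", 0.5, 0.5), ("db", 2.0, 4.0)], nodes):
            print(n.name, n.placed)   # edge-1 ['web'] / edge-2 ['db']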

    Active Disaster Recovery Strategy for Applications Deployed Across Multiple Kubernetes Clusters, Using Service Mesh and Serverless Workloads

    Cloud computing has gained significant popularity in recent years. There would be no cloud computing without virtualization technologies: virtualization is the foundation of cloud computing, and containerization is the next generation. Kubernetes is one of the most widely used container orchestration solutions available. It provides clusters with a set of control planes and workers to manage the containers' lifecycles. Deploying an application across multiple clusters provides features such as high availability, isolation, and scalability to the system. Kubernetes is a great tool for managing a single cluster; however, it has limitations in multi-cluster management. One of the fundamental approaches to multi-cluster Kubernetes is utilizing a network service mesh solution, so that all clusters are meshed across the network. However, another big challenge is architecting an application deployment across geographically separated clusters. Any failure in one cluster or a running application service can impact other clusters, causing a disaster in the whole system. In this thesis, we propose and design an active disaster recovery strategy for applications that are spread across multiple Kubernetes clusters, eliminating the failure points. Meanwhile, part of the application runs on a serverless platform hosted on one of the clusters to provide higher performance and optimize resource utilization. Example use cases are clusters running at the edge of the cloud, or backup clusters running in the same region in case there is a burst of unpredictable incoming traffic to the system. The performance and resource utilization of the designed solution were evaluated by running several experiments. The experiments simulate several failure scenarios, and the availability of the designed architecture proved promising and practical to implement.
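
    A minimal sketch of the active failover idea follows, assuming each cluster exposes a health endpoint behind its ingress. The cluster names and URLs are hypothetical, not the thesis's actual setup; a real deployment would do this inside the mesh's routing layer rather than in application code.

        # Probe each cluster's ingress and route traffic away from unhealthy ones.
        import urllib.request

        CLUSTERS = {
            "primary":   "https://cluster-a.example.com/healthz",   # hypothetical
            "secondary": "https://cluster-b.example.com/healthz",   # hypothetical
        }

        def healthy(url, timeout=2.0):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.status == 200
            except OSError:
                return False

        def pick_target():
            """Prefer the primary cluster; fail over to any healthy secondary."""
            for name, url in CLUSTERS.items():
                if healthy(url):
                    return name
            raise RuntimeError("all clusters down: trigger disaster recovery")

        print("routing traffic to:", pick_target())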

    A Ring to Rule Them All - Revising OpenStack Internals to Operate Massively Distributed Clouds: The Discovery Initiative - Where Do We Stand?

    The deployment of micro/nano data centers in network points of presence offers an opportunity to deliver a more sustainable and efficient infrastructure for cloud computing. Among the different challenges we need to address to favor the adoption of such a model, the development of a system in charge of turning such a complex and diverse network of resources into a collection of abstracted computing facilities that are convenient to administrate and use is critical. In this report, we introduce the premises of such a system. The novelty of our work is that instead of developing a system from scratch, we revised the OpenStack solution in order to operate such an infrastructure in a distributed manner, leveraging P2P mechanisms. More precisely, we describe how we revised the Nova service by leveraging a distributed key/value store instead of the centralized SQL backend. We present experiments that validated the correct behavior of our prototype, with promising performance, using several clusters composed of servers of the Grid'5000 testbed. We believe that such a strategy is promising and paves the way to a first large-scale and WAN-wide IaaS manager.

    The current trend for supporting the growing demand for utility computing consists of building ever-larger data centers in a limited number of strategic locations. This approach undoubtedly satisfies current demand while preserving a centralized model for managing these resources, but it remains far from able to deliver infrastructures that meet current and future constraints in terms of efficiency, jurisdiction, or sustainability. The goal of the DISCOVERY initiative is to design the LUC OS, a distributed resource management system that will make it possible to leverage any network node forming the Internet backbone, in order to deliver a new generation of utility computing better suited to the geographic dispersion of users and their ever-growing demand. After recalling the objectives of the DISCOVERY initiative and explaining why federation-style approaches are not suitable for operating a utility computing infrastructure integrated into the network, we present the premises of our system. In particular, we explain why and how we chose to start work on revisiting the design of the OpenStack solution. From our point of view, building our work on this solution is a sound strategy given the complexity of IaaS management systems and the velocity of open-source solutions.
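
    A toy sketch of the report's core idea, replacing a centralized SQL backend with a flat key/value layout that a distributed store can serve, is given below. A plain dict stands in for the distributed store, and the key naming is an assumption for illustration, not the prototype's actual schema.

        # Instance records keyed individually, plus a per-host index key so that
        # "list instances on host" queries need no SQL join.
        import json

        kv = {}  # stand-in for a distributed key/value store

        def put_instance(instance_id, record):
            kv[f"instance/{instance_id}"] = json.dumps(record)
            ids = json.loads(kv.setdefault(f"host/{record['host']}", "[]"))
            if instance_id not in ids:
                ids.append(instance_id)
            kv[f"host/{record['host']}"] = json.dumps(ids)

        def instances_on_host(host):
            return [json.loads(kv[f"instance/{i}"])
                    for i in json.loads(kv.get(f"host/{host}", "[]"))]

        put_instance("vm-1", {"host": "node-7", "state": "ACTIVE"})
        print(instances_on_host("node-7"))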

    An Analysis Of Standardized Data For Fog Computing Storage Capacity Using Non-Relational Database

    Computer applications nowadays rely on physical storage, either for the computation information or for the application itself. As applications become more complex and are used day after day, the data keeps growing, causing a lack of free space, especially in fog computing where storage resources are limited. Although increasing disk storage or migrating the data to the cloud would resolve this issue, it would also increase the overall cost. Thus, the objective of this paper is to analyse the difference between a non-relational database and a relational database in terms of storage capacity. First, the data from the relational database is taken and converted into a standard format, and later turned into a non-relational database. The result of the analysis provides motivation for proposing a database storage to be implemented in a fog computing environment.
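
    A minimal sketch of the conversion step described above: rows from two relational tables are flattened into self-contained JSON documents, after which the two layouts' storage footprints can be compared. The table and field names are illustrative, not the paper's dataset.

        import json

        customers = [{"id": 1, "name": "Alice"}]
        orders = [{"id": 10, "customer_id": 1, "total": 25.0},
                  {"id": 11, "customer_id": 1, "total": 40.0}]

        def to_documents(customers, orders):
            """Embed each customer's orders, removing the join and its key columns."""
            docs = []
            for c in customers:
                docs.append({
                    "name": c["name"],
                    "orders": [{"id": o["id"], "total": o["total"]}
                               for o in orders if o["customer_id"] == c["id"]],
                })
            return docs

        # Serialized size of each layout is one simple capacity measure.
        print(json.dumps(to_documents(customers, orders), indent=2))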

    Network Service Availability and Continuity Management in the Context of Network Function Virtualization

    In legacy computer systems, network functions (e.g., routers, firewalls) have been provided by specialized hardware appliances to realize Network Services (NS). In recent years, the rise of Network Function Virtualization (NFV) has changed how we realize NSs. With NFV, commercial off-the-shelf hardware and virtualization technologies are used to create Virtual Network Functions (VNF). In the context of NFV, an NS is realized by interconnecting VNFs using Virtual Links (VL). Service availability and continuity are among the important non-functional characteristics of NSs. Availability is defined as the fraction of time the NS functionality is provided in a given period. Current work on NS availability in the NFV context focuses on determining the appropriate number of redundant VNFs and their deployment in the virtualized environment, and on the redundancy of network paths. Such solutions are necessary but insufficient, because redundancy does not guarantee that the overall service outage time for an NS functionality remains below a certain threshold. Moreover, service disruption, which impacts service continuity, is not addressed quantitatively in current work. In addition, NS and VNF elasticity and the dynamicity of virtualized infrastructures, both of which can impact the availability of NS functionalities, are not considered in the current state of the art. In this thesis, we propose a framework for NS availability and continuity management, which consists of two approaches: one for design time and another for runtime adaptation. For this, we define the service disruption time for an NS functionality as the amount of time for which the service data is lost due to service outages in a given period. We also define the service data disruption for an NS functionality as the maximum amount of data lost due to a service outage. The design-time approach includes analytical methods which take the tenant's acceptable service disruption and availability requirements, a designed NS, and a given infrastructure as inputs, adjust the NS design, and map these requirements to constraints on low-level configuration parameters. The design-time approach guarantees that the service availability and continuity requirements will be met as long as the availability characteristics of the infrastructure resources used by the NS constituents do not change at runtime. However, changes in the supporting infrastructure may happen at runtime for multiple reasons, such as failovers, upgrades, and aging. Therefore, we propose a runtime adaptation approach that reacts to changes at runtime and adjusts the configuration parameters accordingly to satisfy the same service availability and continuity requirements. The runtime approach uses machine learning models, created at design time, to determine the required adjustments at runtime. To demonstrate the feasibility of the proposed solutions and to experiment with them, we present a proof of concept, including prototypes of our approaches and their application in a small NFV cloud environment created for validation purposes. We conduct multiple experiments for two case studies with different service availability and continuity requirements. The results from the conducted experiments show that our approaches can guarantee the fulfillment of the service availability and continuity requirements.
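
    The availability and disruption-time definitions above can be captured in a few lines: availability is the fraction of a period the NS functionality is up, and the disruption time is the total outage time in that period. The sketch below computes both from a list of outage intervals; the outage log itself is illustrative.

        def availability(period_s, outages):
            """outages: list of (start_s, end_s) intervals within the period."""
            downtime = sum(end - start for start, end in outages)
            return (period_s - downtime) / period_s, downtime

        period = 30 * 24 * 3600                      # one month, in seconds
        a, disruption = availability(period, [(1000.0, 1020.0), (5000.0, 5003.5)])
        print(f"availability = {a:.6f}, disruption time = {disruption:.1f}s")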

    Serverless Cloud Computing: A Comparative Analysis of Performance, Cost, and Developer Experiences in Container-Level Services

    Serverless cloud computing is a subset of cloud computing widely adopted to build modern web applications, in which the underlying server and infrastructure management duties are shifted from customers to the cloud vendors. In serverless computing, customers pay for the runtime consumed by their services but are exempt from paying for idle time. Prior to serverless containers, customers needed to provision, scale, and manage servers, which was a bottleneck for rapidly growing customer-facing applications where latency and scaling were a concern. This thesis studies the viability of adopting a serverless platform for a web application in terms of performance, cost, and developer experience. Three serverless container-level services from AWS and GCP are employed in this study: GCP Cloud Run, GKE Autopilot, and AWS EKS with AWS Fargate. Platform as a Service (PaaS) underpins the first, and Container as a Service (CaaS) the other two. A single-page web application was created to perform incremental and spike load tests on those services to assess the performance differences. Furthermore, the cost differences are compared and analyzed. Lastly, developer experience is evaluated in terms of the complexity of using the services during the project implementation. Based on the results of this research, it was determined that PaaS-based solutions are a high-performing, affordable alternative to CaaS-based solutions in circumstances where high levels of traffic are periodically anticipated but sporadic latency is never a concern. Given that this study has limitations, the author recommends additional research to strengthen it.
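
    As an illustration of the spike tests described above, the sketch below fires a burst of concurrent requests at a deployed service and reports a latency percentile. The endpoint URL and burst size are hypothetical; the thesis's actual experiments used its own application and load profiles.

        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "https://my-service.example.com/"   # hypothetical endpoint
        BURST = 50                                 # illustrative spike size

        def one_request(_):
            start = time.monotonic()
            try:
                urllib.request.urlopen(URL, timeout=10).read()
            except OSError:
                return None                        # failed request, excluded below
            return time.monotonic() - start

        with ThreadPoolExecutor(max_workers=BURST) as pool:
            latencies = sorted(t for t in pool.map(one_request, range(BURST)) if t)

        if latencies:
            p95 = latencies[int(0.95 * (len(latencies) - 1))]
            print(f"requests ok: {len(latencies)}/{BURST}, p95 latency: {p95:.3f}s")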