
    A study on performance measures for auto-scaling CPU-intensive containerized applications

    Autoscaling of containers can leverage performance measures from the different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure for activating auto-scaling actions aimed at guaranteeing QoS constraints. First, the correlation between absolute and relative usage measures, and how a resource allocation decision can be influenced by them, is analyzed in different workload scenarios. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for the share that each container has of the resources used. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses absolute usage measures to scale containers in/out, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state of the art is presented.
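    The distinction between relative and absolute measures drives the scaling decision. As a minimal sketch (not the paper's algorithm), the Python fragment below applies the well-known Kubernetes-style target-tracking rule, desired = ceil(current × usage / target), to the same workload described both ways; the host size, container limit, and 0.7 target are illustrative assumptions, not figures from the paper.

```python
import math

def desired_replicas(current_replicas, usage, target):
    """Kubernetes-style target-tracking rule: desired = ceil(current * usage / target)."""
    return max(1, math.ceil(current_replicas * usage / target))

# Illustrative numbers, not from the paper.
host_cores = 8.0        # physical cores on the host
container_limit = 0.5   # CPU limit per container (cores)
measured = 0.5          # cores one container actually consumes

relative = measured / container_limit   # 1.0    -> container looks saturated
absolute = measured / host_cores        # 0.0625 -> host looks nearly idle

print(desired_replicas(4, relative, target=0.7))  # 6 -> scale out
print(desired_replicas(4, absolute, target=0.7))  # 1 -> scale in
```

    The same load thus triggers opposite actions depending on which measure feeds the controller, which is exactly the selection problem the paper studies.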

    Elastic Allocation of Docker Containers in Cloud Environments

    Docker containers wrap up a piece of software together with everything it needs for execution and make it easy to run on any machine. For their execution in the Cloud, we need to identify an elastic set of virtual machines that can accommodate those containers while considering the diversity of their requirements. In this paper, we briefly describe our formulation of the Elastic provisioning of Virtual machines for Container Deployment (EVCD), which explicitly takes into account the heterogeneity of container requirements and virtual machine resources. Afterwards, we evaluate the EVCD formulation with the aim of demonstrating its flexibility in optimizing multiple QoS metrics.
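    The paper formulates EVCD as an optimization over heterogeneous containers and VMs. As rough intuition for what such a solver must decide, here is a hedged first-fit-decreasing heuristic in Python; the VM catalogue, resource model, and container names are invented for illustration and are a stand-in, not the paper's formulation.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    cpu: float                              # remaining CPU capacity (cores)
    mem: float                              # remaining memory (GB)
    containers: list = field(default_factory=list)

@dataclass
class Container:
    name: str
    cpu: float
    mem: float

# Hypothetical VM catalogue (type -> capacity).
VM_TYPES = {"small": (2, 4.0), "large": (8, 16.0)}

def place(containers, vm_type="large"):
    cpu_cap, mem_cap = VM_TYPES[vm_type]
    vms = []
    # Place the containers with the largest dominant resource demand first.
    for c in sorted(containers,
                    key=lambda c: max(c.cpu / cpu_cap, c.mem / mem_cap),
                    reverse=True):
        for vm in vms:
            if vm.cpu >= c.cpu and vm.mem >= c.mem:
                vm.cpu -= c.cpu; vm.mem -= c.mem
                vm.containers.append(c.name)
                break
        else:                               # no existing VM fits: provision a new one
            vms.append(VM(cpu_cap - c.cpu, mem_cap - c.mem, [c.name]))
    return vms

demo = [Container("web", 2, 4), Container("db", 4, 8), Container("cache", 1, 2)]
for i, vm in enumerate(place(demo)):
    print(f"vm{i}: {vm.containers}")        # all three fit on one "large" VM
```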

    Quality of Service Assurance for Internet of Things Time-Critical Cloud Applications: Experience with the Switch and Entice Projects


    Orchestration and scheduling of containers in highly available environments

    We are in a period of accelerated adoption of containers - every month more and more companies are introducing and exploring the possibility of using this technology in their environment. Despite the fact that the technology and the tools associated with its use are not yet fully standardized, they already bring many benefits. They enable simplified development, facilitate the use of microservices, and provide a better platform for adjusting resource capacity based on traffic. Nevertheless, containers by themselves are not enough for production environments with strict requirements such as high availability, fault tolerance, and self-healing. That is why we also examined the current orchestration tools that place containers at the center of the technology. Based on the needs of our company, we picked the most suitable orchestration tools, compared them with our existing system hosted in the cloud, and justified the decision to switch to containers. In this thesis, we first reviewed container technology, mostly focusing on Docker as the most popular implementation at the moment. Then we presented and analyzed the current orchestration tools - Mesos, Swarm and Kubernetes. After that we ran performance tests and compared the results with the performance of our existing system. We also compared the orchestration tools based on their functionality. Finally, we suggested an architectural solution suitable for the company and presented a pilot implementation. We show that the pilot meets the functional and performance requirements.
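    As a toy illustration of the self-healing requirement the thesis names (and not how any of the surveyed orchestrators is implemented), the loop below restarts containers whose Docker healthcheck reports them unhealthy. Kubernetes, Swarm, and Mesos provide this natively, plus far more, such as rescheduling across nodes.

```python
import subprocess
import time

def unhealthy_containers():
    """List names of containers whose Docker healthcheck reports 'unhealthy'."""
    out = subprocess.run(
        ["docker", "ps", "--filter", "health=unhealthy", "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True).stdout
    return [name for name in out.splitlines() if name]

def heal_forever(interval=10):
    """Naive self-healing loop: restart anything Docker flags as unhealthy."""
    while True:
        for name in unhealthy_containers():
            print(f"restarting {name}")
            subprocess.run(["docker", "restart", name], check=False)
        time.sleep(interval)

if __name__ == "__main__":
    heal_forever()
```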

    Cloud Computing cost and energy optimization through Federated Cloud SoS

    The two most significant differentiators amongst contemporary Cloud Computing service providers are increased green energy use and datacenter resource utilization. This work addresses these two issues from a systems-architecture optimization viewpoint. The approach proposed herein allows multiple cloud providers to utilize their individual computing resources in three ways: (1) cutting the number of datacenters needed, (2) scheduling available datacenter grid energy via aggregators to reduce costs and power outages, and (3) utilizing, where appropriate, more renewable and carbon-free energy sources. Altogether, our proposed approach creates an alternative paradigm for a Federated Cloud SoS. The proposed paradigm employs a novel control methodology that is tuned to obtain both financial and environmental advantages. It also supports dynamic expansion and contraction of computing capabilities, both for handling sudden variations in service demand and for maximizing the usage of time-varying green energy supplies. We analyze the core SoS requirements, concept synthesis, and functional architecture with an eye on avoiding inadvertent cascading conditions, and we suggest a physical architecture that simulates the primary SoS emergent behavior to diminish unwanted outcomes while encouraging desirable results. In our approach, the constituent cloud services retain their independent ownership, objectives, funding, and sustainability means. The report also analyzes optimal computing generation methods, optimal energy utilization for computing generation, and a procedure for building optimal datacenters using a unique hardware computing system design based on the openCompute community as an illustrative collaboration platform. Finally, the research concludes with the security features a cloud federation must support to protect its constituents, its constituents' tenants, and itself from security risks.
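    The dissertation's control methodology is only summarized in this abstract, so the following Python sketch merely illustrates the underlying trade-off it describes: routing a load to the federated site with the best blend of grid cost and carbon intensity, consuming renewable supply first. All prices, intensities, capacities, and the weighting are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Datacenter:
    name: str
    grid_price: float   # $/kWh from the aggregator
    carbon: float       # gCO2/kWh of the current grid mix
    green_kw: float     # renewable supply available right now
    free_kw: float      # spare compute capacity, expressed in kW

def pick_site(sites, load_kw, carbon_weight=0.5):
    """Route a load to the site with the cheapest blended cost/carbon score.
    Green supply is treated as zero-carbon and consumed first; only the
    remainder is priced at the site's grid rate and carbon intensity."""
    def score(s):
        grid_kw = max(0.0, load_kw - s.green_kw)   # portion not covered by renewables
        cost = grid_kw * s.grid_price
        emissions_kg = grid_kw * s.carbon / 1000.0
        return (1 - carbon_weight) * cost + carbon_weight * emissions_kg
    feasible = [s for s in sites if s.free_kw >= load_kw]
    return min(feasible, key=score) if feasible else None

sites = [
    Datacenter("dc-hydro", 0.09, 30.0, green_kw=400, free_kw=500),
    Datacenter("dc-coal",  0.06, 800.0, green_kw=0,  free_kw=900),
]
print(pick_site(sites, load_kw=450).name)   # dc-hydro: only 50 kW hits the grid
```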

    Middleware for large scale in situ analytics workflows

    The trend towards exascale is causing researchers to rethink the entire computational science stack, as future generation machines will contain both diverse hardware environments and the runtimes that manage them. Additionally, science applications themselves are stepping away from the traditional bulk-synchronous model and moving towards a more dynamic and decoupled environment where analysis routines run in situ alongside the large scale simulations. This thesis presents CoApps, a middleware that allows in situ science analytics applications to operate in a location-flexible manner. Additionally, CoApps explores methods to extract information from, and issue management operations to, the lower-level runtimes that manage the diverse hardware expected on next generation exascale machines. This work leverages experience with several extremely scalable applications in materials science and fusion, and has been evaluated on machines ranging from local Linux clusters to the supercomputer Titan.
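    CoApps itself is not reproduced here; the toy below only shows the in situ pattern the abstract describes, with analysis callbacks running on the in-memory field each timestep instead of a write-to-disk/post-process cycle. The stencil update and callback names are illustrative assumptions.

```python
import numpy as np

def simulate(steps, grid=256, analyses=()):
    """Toy solver with in situ analysis hooks: registered callbacks run on
    the in-memory field alongside the solver, avoiding an I/O round-trip."""
    field = np.random.rand(grid, grid)
    for step in range(steps):
        # Stand-in for the real physics update (simple averaging stencil).
        field = 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                        np.roll(field, 1, 1) + np.roll(field, -1, 1))
        for analyze in analyses:        # in situ: analysis sees live data
            analyze(step, field)

def mean_tracker(step, field):
    if step % 10 == 0:
        print(f"step {step}: mean={field.mean():.4f}")

simulate(steps=50, analyses=[mean_tracker])
```

    What CoApps adds on top of this pattern is location flexibility: the analysis need not run in the same process or node as the simulation.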

    A comparative study of software delivery infrastructures

    Software development is becoming ever faster due to the adoption of agile practices and methodologies, and infrastructure is evolving to support this demand from development teams. This evolution is driven by two movements: the DevOps culture, which promotes autonomy and trust between development and infrastructure teams; and agile infrastructure, which automates infrastructure through code. Alongside this, container technology is proving disruptive for building infrastructure that supports this new, more agile development life cycle. In this context, the need arose to build an application called Reditus to promote education in Brazil through student scholarships funded by donations collected on the platform. Deciding whether to adopt a container-based infrastructure architecture or to stay with the familiar model based on virtual machines requires analyzing several aspects of both the infrastructure architecture and the application. Simulations and tests were carried out comparing metrics for the time to create the infrastructure, to deliver a new version of the application, and to recover from a failed application delivery. In addition, interviews with practitioners in the field complemented the analysis of which architecture should be adopted. The virtual machine architecture proves more appropriate for teams with a less frequent software delivery flow, and for teams with little infrastructure knowledge, owing to its simplicity. The container architecture is better suited to development teams that deliver software frequently, need scalability, and seek higher application availability. Based on this analysis, the infrastructure recommended for the Reditus application was the container-based one, given the team's need to deliver new application versions frequently.
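    A sketch of how the thesis' three timing metrics could be collected: wall-clock timing of each pipeline's provisioning and delivery commands. Every command string below (terraform/ansible for the VM path, kubectl and the deployment/reditus name for the container path) is a hypothetical placeholder, not the thesis' actual setup.

```python
import subprocess
import time

# Hypothetical commands for the two pipelines under comparison; substitute
# the real provisioning/deployment commands used by each infrastructure.
PIPELINES = {
    "vm":        {"provision": "terraform apply -auto-approve",
                  "deploy":    "ansible-playbook deploy.yml"},
    "container": {"provision": "kubectl apply -f cluster/",
                  "deploy":    "kubectl rollout restart deployment/reditus"},
}

def timed(cmd):
    """Run a shell command and return its wall-clock duration in seconds."""
    start = time.monotonic()
    subprocess.run(cmd, shell=True, check=True)
    return time.monotonic() - start

for name, steps in PIPELINES.items():
    for step, cmd in steps.items():
        print(f"{name}/{step}: {timed(cmd):.1f}s")
```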