322 research outputs found

    MicroFog: A Framework for Scalable Placement of Microservices-based IoT Applications in Federated Fog Environments

    MicroService Architecture (MSA) is rapidly gaining popularity for developing large-scale IoT applications deployed within distributed, resource-constrained Fog computing environments. As a cloud-native application architecture, the true power of microservices comes from their loosely coupled, independently deployable and scalable nature, which enables distributed placement and dynamic composition across federated Fog and Cloud clusters. It is therefore necessary to develop novel microservice placement algorithms that exploit these characteristics to improve application performance. However, existing Fog computing frameworks lack support for integrating such placement policies due to shortcomings in multiple areas, including MSA application placement and deployment across multi-fog multi-cloud environments, dynamic microservice composition across multiple distributed clusters, framework scalability, and support for deploying heterogeneous microservice applications. To this end, we design and implement MicroFog, a Fog computing framework providing a scalable, easy-to-configure control engine that executes placement algorithms and deploys applications across federated Fog environments. Furthermore, MicroFog provides a sufficient abstraction over container orchestration and dynamic microservice composition. The framework is evaluated using multiple use cases. The results demonstrate that MicroFog is a scalable, extensible and easy-to-configure framework that can integrate and evaluate novel placement policies for deploying microservice-based applications within multi-fog multi-cloud environments. We integrate multiple microservice placement policies to demonstrate MicroFog's support for horizontally scaled placement, which reduces application service response time by up to 54%.
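The kind of placement policy the abstract describes can be sketched with a toy heuristic. The cluster model, field names, and scoring rule below are purely illustrative assumptions, not MicroFog's actual interfaces, which the abstract does not specify:

```go
package main

import "fmt"

// cluster is an illustrative model of a fog or cloud cluster; MicroFog's
// real data model and policy interface are not described in the abstract.
type cluster struct {
	name      string
	freeMilli int // free CPU in millicores
	latencyMs int // network latency from the IoT data source
}

// place returns the lowest-latency cluster that can fit the requested CPU,
// or "" when no cluster has enough free capacity.
func place(clusters []cluster, cpuMilli int) string {
	best, bestLat := "", 1<<31-1
	for _, c := range clusters {
		if c.freeMilli >= cpuMilli && c.latencyMs < bestLat {
			best, bestLat = c.name, c.latencyMs
		}
	}
	return best
}

func main() {
	clusters := []cluster{
		{"fog-a", 500, 5},    // closest, but too little free CPU
		{"fog-b", 2000, 8},   // nearby fog cluster with room
		{"cloud", 16000, 40}, // distant cloud fallback
	}
	fmt.Println(place(clusters, 1000)) // prints "fog-b"
}
```

A real policy engine would also weigh microservice interdependencies and dynamic composition across clusters, which is exactly the extension point the framework claims to expose.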

    Kubernetes Cluster Management for Cloud Computing Platform: A Systematic Literature Review

    Kubernetes is designed to automate the deployment, scaling, and operation of containerized applications. With its scalability features, container automation can follow the number of concurrent users accessing an application. This research therefore focuses on how Kubernetes is implemented as cluster management on several cloud computing platforms. A standard literature review method was employed, based on a manual search of several journals and conference proceedings. Of 15 relevant studies, 5 addressed Kubernetes performance and scalability, 7 addressed Kubernetes deployments, 2 compared Kubernetes with alternatives, and the rest addressed Kubernetes in IoT. Regarding the cluster management challenges that must be overcome with Kubernetes: it is necessary to ensure that all configuration and management required for Docker containers is successfully set up before deploying, whether to the cloud or on-premises. Data from Kubernetes deployments can be leveraged to support capacity planning and to design Kubernetes-based elastic applications.
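The scaling-to-concurrent-users behaviour mentioned above is governed, in stock Kubernetes, by the HorizontalPodAutoscaler rule desired = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch of that calculation (the function name is ours, not a Kubernetes API):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the scaling rule documented for the Kubernetes
// HorizontalPodAutoscaler: desired = ceil(current * metric / target).
func desiredReplicas(current int, metric, target float64) int {
	return int(math.Ceil(float64(current) * metric / target))
}

func main() {
	// 4 pods averaging 900m CPU against a 500m target scale out to 8 pods.
	fmt.Println(desiredReplicas(4, 900, 500)) // prints 8
}
```

The ceiling makes the controller scale up eagerly and scale down conservatively, which is why capacity-planning data from real deployments, as the review suggests, is useful for choosing the target value.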

    Exploring Kubernetes and DevOps in an IoT Context

    Containerized solutions and container orchestration technologies have recently been of great interest to organizations as a way of accelerating both software development and delivery processes. However, adopting them is a rather complex shift that may impact an organization and its established teams. This is where development cultures such as DevOps emerge to ease such a shift, promoting collaboration and automation of development and deployment processes throughout. The purpose of this dissertation is to illustrate the path that led to the use of DevOps and containerization as means to support the development and deployment of a proof-of-concept system, Firefighter Sync, an Internet of Things based solution applied to a firefighting monitoring scenario. The goal, besides implementing Firefighter Sync, was to propose and deploy a development and operations ecosystem based on DevOps practices, achieving a fully automated pipeline for both the development and operations processes. Firefighter Sync enabled the exploration of state-of-the-art solutions such as Kubernetes, to support container-based deployment, and Jenkins, for a fully automated CI/CD pipeline. Firefighter Sync clearly illustrates that addressing the development of a system from a DevOps perspective from the very beginning, although it requires a steep learning curve due to the large range of concepts and technologies involved, effectively improves the development process and eases future evolution of the solution.
A good example is the automation pipeline: while it allows easy integration of new features within a DevOps process (which implies addressing development and operations as a whole), it abstracts specific technological concerns, making them transversal to the traditional stages from development to deployment.
MSc in Informatics Engineering

    Enabling 5G Edge Native Applications


    Go microservices runtime optimization in Kubernetes environment: the importance of garbage collection tuning

    Many modern web applications are structured with a microservices architecture to allow easier maintenance and greater horizontal scalability. Go is a language that puts these concerns, together with parallelism and concurrency, at the center of its design through a set of convenient abstractions. To facilitate the deployment and orchestration of a container-based microservices architecture, Kubernetes is often used to manage all the services and their connections. The aim of this work is therefore to study a benchmark application, developed as microservices in Go, and to analyze its performance as parameters change, both in the language runtime and in the Kubernetes environment. Particular attention is paid to the definition and collection of metrics from the language runtime and from the execution environment, namely the resources used by the containers and the overall resources of the node where the application runs. The work was carried out as an internship at Akamas, a company that develops an application aimed at automating and speeding up this parameter-tuning process, searching for the configuration that best optimizes the defined objectives while meeting the required constraints.