289 research outputs found

    PREvant (Preview Servant): Composing Microservices into Reviewable and Testable Applications


    GitOps for Configuration Drift Management In Kubernetes Environments

    Modern software development and deployment are intricately intertwined with the paradigm shift towards containerization, orchestration, and cloud computing. As organizations increasingly migrate towards microservices architectures, managing containerized applications in dynamic and distributed environments becomes both a critical necessity and a formidable challenge. Within this context, configuration drift stands out as a pervasive issue: the actual state of deployed applications diverges unintentionally from the intended state, leading to operational inefficiencies that can compromise security and system reliability. GitOps emerges as a novel approach to this problem, built on the principle of a “single source of truth”, which holds that every part of the infrastructure should be derived from a single reference point, in this case a Git repository. This research set out to explore GitOps for configuration management in the context of a cloud provider using Kubernetes and Prometheus, with a specific focus on detecting and remediating configuration drift in containerized applications. We conclude that GitOps offers notable advantages in deploying configuration changes and remediating configuration drift, with nearly a 50% reduction in the time required to deploy changes and remediate drift when dealing with misconfigurations and application dependency updates.
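
    To make the drift concept concrete, the sketch below compares a Deployment manifest held in Git (the intended state) against the live object reported by the cluster, which is the kind of check a GitOps reconciler automates. It is a minimal illustration, not the paper's implementation: the manifest path, namespace, and the use of the official kubernetes Python client and PyYAML are assumptions.

```python
"""Minimal sketch of GitOps-style drift detection (illustrative only).

Assumes the official `kubernetes` Python client and PyYAML are installed,
and that `desired/deployment.yaml` is a hypothetical manifest checked out
from the Git "single source of truth".
"""
import yaml
from kubernetes import client, config


def detect_drift(manifest_path: str, namespace: str) -> list[str]:
    # Desired state: the Deployment manifest as stored in Git.
    with open(manifest_path) as f:
        desired = yaml.safe_load(f)

    name = desired["metadata"]["name"]
    want_replicas = desired["spec"]["replicas"]
    want_image = desired["spec"]["template"]["spec"]["containers"][0]["image"]

    # Live state: what the cluster is actually running.
    config.load_kube_config()
    live = client.AppsV1Api().read_namespaced_deployment(name, namespace)
    got_replicas = live.spec.replicas
    got_image = live.spec.template.spec.containers[0].image

    drift = []
    if got_replicas != want_replicas:
        drift.append(f"replicas: want {want_replicas}, got {got_replicas}")
    if got_image != want_image:
        drift.append(f"image: want {want_image}, got {got_image}")
    return drift


if __name__ == "__main__":
    for item in detect_drift("desired/deployment.yaml", "default"):
        print("drift detected:", item)
```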

    Comparative Analysis of GitOps Tools and Frameworks

    This paper presents an in-depth assessment of four notable GitOps tools: Argo CD, Flux, Jenkins X, and Weaveworks. GitOps is a methodology for the continuous delivery of cloud-native applications, facilitating the seamless encapsulation of infrastructure as code. The study bases its assessments on key effectiveness indices, including performance, scalability, integration, usability, and security, and contains benchmark tests that demonstrate the applicability of each tool in various multi-cloud, hybrid-cloud, and other realistic settings. Furthermore, the paper examines the security aspect of these tools and their relevance as components of DevSecOps. It also presents case studies showing how organisations have used these tools, highlighting both the benefits and drawbacks of their application. The result is a decision-making matrix for organisations that wish to adopt the GitOps mode of operation within their DevOps workflows, in both small and large organisational contexts. The paper closes by examining the prospects of GitOps and explaining its necessity in the context of emerging developments in cloud-native development, with special emphasis on scalability and security issues.
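
    The decision-matrix idea can be illustrated with a small weighted-scoring sketch. The criteria mirror the paper's effectiveness indices, but the weights and per-tool scores below are placeholder values chosen for illustration, not results reported by the study.

```python
# Illustrative weighted decision matrix for GitOps tool selection.
# Criteria follow the paper's indices; all numbers are placeholders,
# not measurements from the study.
WEIGHTS = {"performance": 0.25, "scalability": 0.25,
           "integration": 0.20, "usability": 0.15, "security": 0.15}

# Scores on a 1-5 scale -- fill these in from your own evaluation.
SCORES = {
    "Argo CD":    {"performance": 4, "scalability": 4, "integration": 4, "usability": 4, "security": 4},
    "Flux":       {"performance": 4, "scalability": 4, "integration": 3, "usability": 3, "security": 4},
    "Jenkins X":  {"performance": 3, "scalability": 3, "integration": 4, "usability": 3, "security": 3},
    "Weaveworks": {"performance": 3, "scalability": 3, "integration": 3, "usability": 4, "security": 3},
}


def rank(scores: dict, weights: dict) -> list[tuple[str, float]]:
    # Weighted sum per tool, highest total first.
    totals = {tool: sum(weights[c] * s[c] for c in weights) for tool, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)


for tool, total in rank(SCORES, WEIGHTS):
    print(f"{tool:12s} {total:.2f}")
```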

    IMPLEMENTATION OF GITOPS IN CONTAINERIZED INFRASTRUCTURE

    IT infrastructure is one of the core components of a business's scalability and reliability, so choosing an efficient way of managing it is a core decision for any business. IT infrastructure has evolved over the years; with the rise of virtualization technology, containerization has become more relevant than ever, one example being containerized infrastructure, an IT infrastructure that uses containerization as its backbone. With this push in technology, a new way of managing containerized infrastructure efficiently is needed. There are already multiple studies on implementing GitOps, but none of them explains the connection between GitOps and containerized infrastructure; this paper discusses that connection by implementing GitOps in a containerized infrastructure. The implementation involved a fairly steep learning curve and preparation time, but in the end all changes and deployments of the application were carried out automatically, allowing the focus to remain on developing the application rather than reflecting the changes later on. Moreover, GitOps itself is not limited to containerized infrastructure, although since GitOps is designed with virtualization in mind, its efficiency would, in theory, be reduced if it were implemented in other kinds of IT infrastructure.

    Pipeline to Production: Modern CI/CD Strategies with Docker, Kubernetes, and Cloud-Native Tooling

    This article explores modern Continuous Integration and Continuous Delivery (CI/CD) practices from a technical and architectural standpoint, focusing on the role of Docker, Kubernetes, and cloud-native tools in streamlining software delivery pipelines. It outlines best practices for containerizing applications, automating build processes with tools such as Jenkins and GitHub Actions, and deploying microservices using Helm charts and Kubernetes manifests. Core challenges, including environment parity, secrets management, and rollback strategies, are critically analyzed. The paper also investigates emerging solutions like GitOps workflows, Infrastructure-as-Code (IaC), and service mesh integrations that enhance scalability, observability, and resilience. Through real-world deployment scenarios across major cloud providers including AWS, Azure, and Google Cloud Platform (GCP), the article offers DevOps engineers, SREs, and cloud architects a practical guide to building robust, secure, and automated production pipelines.
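
    As a concrete illustration of the build-and-deploy flow the article describes, the sketch below chains a container image build, a registry push, and a Helm release upgrade, the kind of sequence a CI job would run. It is a simplified stand-in, not a recommended production pipeline; the registry path, chart directory, release name, and namespace are hypothetical.

```python
"""Sketch of a containerized build-and-deploy step, assuming the docker and
helm CLIs are on PATH. Registry, chart path, and release name are hypothetical."""
import subprocess


def run(*cmd: str) -> None:
    # Fail fast so a broken build never reaches the deploy stage.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def build_and_deploy(image: str, tag: str, chart: str, release: str, namespace: str) -> None:
    ref = f"{image}:{tag}"
    run("docker", "build", "-t", ref, ".")          # build the image from the local Dockerfile
    run("docker", "push", ref)                      # publish it to the registry
    run("helm", "upgrade", "--install", release, chart,
        "--namespace", namespace, "--create-namespace",
        "--set", f"image.tag={tag}")                # roll the new tag out via the chart


if __name__ == "__main__":
    build_and_deploy("registry.example.com/myapp", "1.2.3",
                     "./charts/myapp", "myapp", "staging")
```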

    Kubernetes Deployment Options for On-Prem Clusters

    Over the last decade, the Kubernetes container orchestration platform has become essential to many scientific workflows. Despite its popularity, deploying a production-ready Kubernetes cluster on-premises can be challenging for system administrators. Many of the proprietary integrations that application developers take for granted in commercial cloud environments must be replaced with alternatives when deployed locally. This article compares three popular deployment strategies for sites deploying Kubernetes on-premises: Kubeadm with Kubespray, OpenShift / OKD, and Rancher via K3S/RKE2.

    A Survey on Infrastructure-as-Code Solutions for Cloud Development

    Cloud software is increasingly written according to the DevOps paradigm, where the use of virtualization and Infrastructure-as-Code is prevalent. This paper surveys the state of the art of IaC cloud development and proposes a combination of cloud-native software to build an on-premises PaaS for a Security Lab.

    Enhancing Kubernetes-Based Microservices Deployment Efficiency Through DevOps and GitOps

    Deploying microservices to Kubernetes effectively and resiliently is an ongoing challenge, and it becomes more difficult as application architectures grow increasingly complex. This research explored a GitOps-based DevOps model that integrates Argo CD and GitLab CI/CD as a means to create more effective, resilient, and scalable deployments. Twelve microservices were deployed in a controlled experiment and compared against previous deployment practice, which relied solely on manual deployments. The results show an overall deployment-time improvement of 40%. For deployments that were executed incorrectly, Argo CD ensured service availability through its self-healing capabilities. During each run we also evaluated system performance under sustained high load; under high demand the desired autoscaling behavior was observed, resulting in higher service responsiveness. In comparison to previous studies, this research included statistical analysis while also examining real-world orchestration and networking efficiency when adopting Kubernetes. Altogether, this research gives organizations practical advice on how they may optimize their deployment pipelines for efficient, scalable, and resilient microservices.
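
    One way to observe the self-healing behavior described above is to poll Argo CD for an application's sync and health status after an incorrect change is introduced and watch it converge back to a healthy state. The sketch below assumes Argo CD's REST endpoint /api/v1/applications/<name> and a bearer token; the server URL, token, and application name are placeholders, not details from the study.

```python
"""Poll an Argo CD application's sync/health status (illustrative sketch).

Assumes the Argo CD REST API at /api/v1/applications/<name> with a bearer
token; the server URL, token, and app name below are placeholders.
"""
import time
import requests

ARGOCD = "https://argocd.example.com"
TOKEN = "REPLACE_ME"
APP = "myapp"


def app_status() -> tuple[str, str]:
    # Fetch the application object and pull out its sync and health state.
    resp = requests.get(
        f"{ARGOCD}/api/v1/applications/{APP}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    status = resp.json()["status"]
    return status["sync"]["status"], status["health"]["status"]


# Watch the app converge back to Synced/Healthy after a bad change is
# reverted or self-healed by the controller, and log how long it takes.
start = time.monotonic()
while True:
    sync, health = app_status()
    print(f"{time.monotonic() - start:6.1f}s  sync={sync}  health={health}")
    if sync == "Synced" and health == "Healthy":
        break
    time.sleep(5)
```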

    Automating the Deployment of a Microservices Application Using GitOps framework

    Today, most enterprises host microservices on Kubernetes, yet they often struggle to maintain multiple service versions across different environments. DevOps engineers are seeking an end-to-end automated solution for both cluster provisioning and configuration. This thesis presents a framework based on GitOps principles that integrates DevSecOps practices for security and compliance, thereby simplifying workflows for developers and operations teams.

    Creating a Scalable Log Analytics Pipeline with GitOps

    The Norwegian University of Science and Technology (NTNU) SOC sees a growing need for observability and log analysis to effectively handle incidents and monitor the operation of the infrastructure it protects. This is especially important in container environments, where a container may already have terminated by the time an event is detected, making log analysis essential to understanding what happened. To improve its support for cloud-native logging, the NTNU SOC proposed building a proof-of-concept (POC) log analysis pipeline. The project focused on designing, implementing, and evaluating such a pipeline using open-source tools such as OpenSearch, Apache Kafka, and Vector. In addition, the project built on IaC tools such as Terraform and Ansible for automated provisioning of the infrastructure in SkyHiGh, NTNU's implementation of the OpenStack cloud. The project resulted in a working log analysis pipeline that demonstrates how to handle large amounts of data from various sources in a cloud-native environment. The pipeline is defined as code, which ensures traceability and reproducibility, and is implemented using GitOps methodology. Through testing, the pipeline's functionality and scalability were evaluated. We achieved a working reference architecture for a scalable log ingestion and processing infrastructure, which will serve as a starting point for further work. However, testing also revealed performance challenges with OpenSearch under high load, indicating a need for further optimization of the OpenSearch configuration to handle large amounts of data efficiently.
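
    To give a sense of the last hop in such a pipeline, the sketch below consumes JSON log events from a Kafka topic and bulk-indexes them into OpenSearch. It assumes the kafka-python and opensearch-py client libraries; the topic name, brokers, index name, and connection details are placeholders, and concerns such as TLS, authentication, and schema handling are omitted.

```python
"""Sketch of the Kafka -> OpenSearch hop of a log pipeline.

Assumes the kafka-python and opensearch-py libraries; brokers, topic,
and index name are placeholders.
"""
import json

from kafka import KafkaConsumer
from opensearchpy import OpenSearch, helpers

consumer = KafkaConsumer(
    "logs",                                   # placeholder topic name
    bootstrap_servers=["kafka:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

client = OpenSearch(hosts=[{"host": "opensearch", "port": 9200}])


def to_actions(events):
    # Map each log event to a bulk-index action for the placeholder index.
    for event in events:
        yield {"_index": "logs-poc", "_source": event}


batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:                     # flush in bulk to reduce indexing overhead
        helpers.bulk(client, to_actions(batch))
        batch.clear()
```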