28 research outputs found
Building a Kubernetes Operator to integrate with CERN's DNS API
In the rapidly evolving digital landscape of today, effective management of DNS (Domain Name System) configurations has become a critical task for organizations of all sizes. One of the key challenges faced by these organizations is efficiently handling delegated domains, a task that requires automation and streamlining. To address this challenge, the landb-operator project was conceived. This project not only introduces a Kubernetes Operator but also offers a versatile command-line interface (CLI) tool, providing a comprehensive solution to the complexities associated with managing delegated domains.
DNS delegation stands as a pivotal component of modern network infrastructure, allowing organizations to assign responsibility for specific subdomains to different DNS servers. This process is indispensable for maintaining a coherent and organized web presence. In practice, organizations frequently find themselves tasked with overseeing a multitude of delegated domains, each with its unique set of records and configurations. These domains are distributed across various services and applications, and managing them can quickly become a formidable challenge.
The landb-operator project steps in to simplify and streamline this intricate domain management process within Kubernetes environments. Kubernetes, known for its robust container orchestration capabilities, serves as the ideal platform to house this solution. The primary objective of this project is to provide a comprehensive toolset for managing delegated domains at CERN seamlessly. At the heart of this project is the concept of synchronization, ensuring that changes made within the Kubernetes cluster are accurately reflected in external DNS services. CERN's DNS SOAP (Simple Object Access Protocol) API is a notable example of an external DNS service that can be seamlessly integrated with the landb-operator. This integration is crucial for ensuring that DNS records and configurations remain consistent, regardless of where the changes originate
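The synchronization described above can be pictured as a reconciliation step: compare the delegated domains declared in the cluster with those present in the external DNS service, then create or delete entries until they match. The sketch below is illustrative only; the `reconcile` function and the example domain names are assumptions, not the landb-operator's actual code or CERN's actual API.

```python
# Illustrative reconciliation sketch: compute which delegated-domain
# entries must be created or deleted so that the external DNS service
# matches the desired state declared in the Kubernetes cluster.
# Function and domain names are hypothetical.

def reconcile(desired: set[str], actual: set[str]) -> tuple[set[str], set[str]]:
    """Return (to_create, to_delete) sets of delegated domains."""
    to_create = desired - actual   # declared in the cluster, missing upstream
    to_delete = actual - desired   # present upstream, no longer declared
    return to_create, to_delete

# Example: the cluster declares two domains; the DNS service holds one
# matching entry and one stale entry.
desired = {"app.example.cern.ch", "api.example.cern.ch"}
actual = {"api.example.cern.ch", "old.example.cern.ch"}
create, delete = reconcile(desired, actual)
```

Running this loop on every change (and periodically) is what keeps both sides consistent regardless of where a change originates.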
Solutions for non-web OAuth 2.0 authorisation at CERN
The need for Single Sign-On solutions in command line interfaces is not new to CERN. Different technologies have been introduced and internal solutions have been implemented to allow users to authenticate to remote servers or applications from their console interfaces. In the case of web services, the most common approach was to use cookie-based authentication, for which an internal tool was developed and made available for all the CERN user community. As the authorisation infrastructure evolved and started to fully support the OAuth 2.0 standard, as well as two-factor authentication (2FA), using the internal tool started to show its limitations. In this work, we present the past and present (OAuth-compliant) solutions, and compare them by looking at the advantages and disadvantages we have found. We also present a case study of a service, OpenShift, that implements this new authentication solution for their users
Inverted CERN School of Computing 2023
These days, the "cloud" is the default environment for deploying new applications.
Frequently cited benefits are lower cost, greater elasticity and less maintenance overhead.
However, for many people "using the cloud" means following obscure deployment steps that might seem like black magic.
This course aims to make newcomers familiar with cloud-native technology (building container images, deploying applications on Kubernetes etc.) as well as explain the fundamental concepts behind it (microservices, separation of concerns and least privilege, fault tolerance).
In particular, the following topics of application development will be
covered:
BUILDING; writing applications in a cloud-native way (e.g. to work in an immutable environment) and creating container images according to best-practices;
DEPLOYING; using infrastructure-as-code to describe the application deployment (e.g. Helm charts) and using advanced features such as rolling updates and auto-scaling;
MONITORING; after multiple containers have been deployed, it is important to keep track of their status and the interaction between the services
IT Lightning Talks: session #24
Cloud infrastructures tend to have lots of moving pieces: containers, load balancers, virtual machines, databases etc.
Steampipe is a tool that allows querying all these pieces through a single interface with SQL.
Are you tired of writing brittle Bash and jq scripts? Then this is the tool for you
DevConf.CZ 2024
CERN, the European Organization for Nuclear Research, is one of the world's largest centres for scientific research. Not only is it home to the world's largest particle accelerator (Large Hadron Collider, LHC), but it is also the birthplace of the Web in 1989. Since 2016, CERN has been using the OpenShift Kubernetes Distribution to host a private platform-as-a-service (PaaS). This service is optimized for hosting web applications and has grown to tens of thousands of individual websites. By now, we have established a reliable framework that deals with various use cases: thousands of websites per ingress controller (8K+ routes), long-lived connections (30K+ concurrent sessions) and high traffic applications (25TB+ per day). This session will discuss:
* CERN's web hosting infrastructure based on OpenShift Kubernetes clusters;
* usage of open source and in-house developed software for providing a seamless user experience;
* integrations for registering hostnames (local DNS, LanDB, external);
* provisioning of certificates (automatic with external-dns / ACME HTTP-01, manual provisioning);
* access control policies and "connecting" different components with OpenPolicyAgent;
* enforcing unique hostnames across multiple Kubernetes clusters;
* strategies for setting up Kubernetes Ingress Controllers for multi-tenant clusters;
* methods for scaling and sharding ingress controllers according to the application's requirements (specifically HAProxy ingress controllers)
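Sharding ingress controllers across thousands of hostnames relies on each hostname mapping deterministically to one shard, so routes stay stable as controllers scale. The sketch below illustrates the idea only; the hash choice, shard count and hostnames are assumptions, not CERN's actual sharding logic.

```python
# Illustrative sketch of deterministic hostname-to-shard assignment,
# so that each of N ingress controller shards serves a stable subset
# of routes. The hashing scheme and shard count are assumptions.
import hashlib

def shard_for(hostname: str, num_shards: int) -> int:
    """Map a hostname to a shard index deterministically."""
    digest = hashlib.sha256(hostname.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# The same hostname always lands on the same shard, which also makes it
# straightforward to check hostname uniqueness across clusters.
```

A stable mapping like this means adding websites never reshuffles existing routes unless the shard count itself changes.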
IT Lightning Talks: session #21
Cloud-native resources are great because they are so dynamic, but deploying a complex set of microservices and cloud integrations can also take its time.
This is why it's important to have unit tests for our cloud-native stack which can be run quickly on every developer machine.
This talk will present two such tools: container-structure-test and helm-unittest
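The idea behind tools like container-structure-test is that a built artifact can be unit-tested from its metadata alone, without deploying anything. The sketch below is a hand-rolled illustration of that idea in plain Python, not the actual tool; the config dict stands in for a real image manifest and its keys mirror OCI image config fields.

```python
# Illustrative sketch of a container-structure-style unit test: given an
# image's config metadata (a plain dict standing in for the real image
# manifest), assert best-practice properties without running the
# container. Field names mirror OCI image config keys.

def check_image_config(config: dict) -> list[str]:
    """Return a list of best-practice violations found in the config."""
    violations = []
    if config.get("User") in (None, "", "root", "0"):
        violations.append("image runs as root")
    if not config.get("Entrypoint") and not config.get("Cmd"):
        violations.append("no entrypoint or command set")
    return violations

# One well-formed config and one that breaks both checks.
good = {"User": "1001", "Entrypoint": ["/app/server"]}
bad = {"User": "root"}
```

Because checks like these need no cluster or registry, they run in seconds on every developer machine.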
IT Lightning Talks: session #22
This talk will present two tools which can be used in regular browsers (i.e. Firefox and Chrome) to make the web browsing experience faster and less annoying.
Vimium allows navigating the browser entirely with keyboard shortcuts. Atomic Chrome allows editing text-fields with a native editor (e.g. Emacs)
Dimensioning, Performance and Optimization of Cloud-native Applications
Cloud computing and software containers have seen major adoption over the last decade.
Due to this, several container orchestration platforms were developed, with Kubernetes gaining a majority of the market share.
Applications running on Kubernetes are often developed according to the microservice architecture.
This means that applications are split into loosely coupled services that are distributed across many servers. The distributed nature of this architecture poses significant challenges for the observability of application performance.
We investigate how such a cloud-native application can be monitored and dimensioned to ensure smooth operation. Specifically, we demonstrate this work based on the concrete example of an enterprise-grade application in the telecommunications context. Finally, we explore autoscaling for performance and cost optimization in Kubernetes, i.e., automatically adjusting the amount of allocated resources based on the application load. Our results show that the elasticity obtained through autoscaling improves performance and reduces costs compared to static dimensioning.
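The autoscaling behaviour studied here follows the same core idea as Kubernetes' Horizontal Pod Autoscaler: scale the replica count proportionally to the ratio of the observed metric to its target. Below is a minimal sketch of that documented decision rule, simplified by omitting the HPA's tolerance band and stabilization windows; the example numbers are illustrative.

```python
# Simplified sketch of the Kubernetes HPA scaling rule:
#   desired = ceil(currentReplicas * currentMetric / targetMetric)
# Tolerance bands and stabilization windows are deliberately omitted.
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Return the replica count the proportional rule would request."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(1, desired)  # never scale to zero in this sketch

# Example: 4 replicas at 80% average CPU with a 50% target scale to 7.
```

Static dimensioning would have to provision for the peak case permanently; this rule instead tracks the load, which is where the cost savings reported above come from.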
Moreover, we perform a survey of research proposals for novel Kubernetes autoscalers. The evaluation of these autoscalers shows that there is a significant gap between the available research and usage in the industry. We propose a modular autoscaling component for Kubernetes to bridge this gap