11 research outputs found

    2018 CentOS Dojo / RDO day at CERN

    No full text

    OpenStack User Group France

    No full text
    A review of the current status of container deployment on the CERN cloud using OpenStack Magnum

    Optimizing OpenStack Nova for scientific workloads

    No full text
    The CERN OpenStack cloud provides over 300,000 CPU cores to run data processing analyses for the Large Hadron Collider (LHC) experiments. Delivering these services with high performance and reliable service levels, while at the same time ensuring continuously high resource utilization, has been one of the major challenges for the CERN cloud engineering team. Several optimizations, such as NUMA-aware scheduling and huge pages, have been deployed to improve the performance of scientific workloads, but the CERN Cloud team continues to explore new possibilities such as preemptible instances and containers on bare metal. In this paper we will dive into the concept and implementation challenges of preemptible instances and containers on bare metal for scientific workloads. We will also explore how they can improve scientific workload throughput and infrastructure resource utilization. We will present the ongoing collaboration with the Square Kilometer Array (SKA) community to develop the necessary upstream enhancements to further improve OpenStack Nova to support large-scale scientific workloads.
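    The NUMA-aware scheduling and huge-page optimizations mentioned above are typically requested through Nova flavor extra specs. A minimal sketch, assuming operator/admin rights on the cloud (the flavor name m2.numa is hypothetical; the hw:* properties are standard Nova extra specs):

```shell
# Create an illustrative flavor and attach the extra specs that drive
# NUMA-aware scheduling and huge-page backing in Nova.
openstack flavor create m2.numa --vcpus 8 --ram 16384 --disk 40

# Expose two guest NUMA nodes so the scheduler places vCPUs and memory
# on matching host NUMA nodes.
openstack flavor set m2.numa --property hw:numa_nodes=2

# Back guest memory with huge pages on the hypervisor.
openstack flavor set m2.numa --property hw:mem_page_size=large
```

    Instances booted from such a flavor should only be scheduled to hosts with enough free huge pages and matching NUMA capacity.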

    Containers and Orchestration with Docker and Kubernetes

    No full text
    Abstract: Containers have taken an important role in existing IT configuration and deployment stacks. In the first part of this lecture we will go through the internals of what a container is made of, why containers work well as a basic building block for complex applications, and cover cgroups and the different Linux namespaces as the key features they rely on. The second part will focus on orchestrating complex distributed applications using Kubernetes, which in the last few years has built a large community around it. We will start with how to define and manage the deployment of a multi-tier application and go through how Kubernetes handles replication, restarts, log collection and other key features. We will finish by briefly covering more advanced features such as ingress, pod and cluster autoscaling, and monitoring.
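    The replication and restart behaviour described in the second part of the lecture can be sketched with a minimal Kubernetes Deployment (the name and image are illustrative, not taken from the lecture):

```shell
# Declare a Deployment with 3 replicas; the controller keeps 3 pods
# running and replaces any that terminate or fail.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: nginx:1.25
EOF

# Observe the controller converging to the desired replica count.
kubectl get pods -l app=web
```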

    Container Hands-On

    No full text
    This session will be done in a training, hands-on format, covering topics related to containers. Topics include:
    - Container basics (usage, underlying technologies)
    - Image management and optimizations
    - Container clusters and orchestration
    - Containerized application lifecycle with Helm
    - AutoDevOps
    It is recommended to bring your own laptop to follow the different exercises. Please make sure you have Docker (>17.09) installed on your laptop, that you can log in to lxplus-cloud.cern.ch, and that you have enough quota on your personal OpenStack project - minimum 2 instances, 4 cores.
    HANDS ON: http://clouddocs.web.cern.ch/clouddocs/aviator/docs/index.html
    LIVE NOTES: https://hackmd.web.cern.ch/s/B1YFyMl5M
    UPDATE: We're providing an alternative way to get hold of the necessary clients and libraries. You can simply launch a VM in your OpenStack Personal Project, as follows:
    # ssh lxplus7-cloud-testing
    # openstack server create --image 2ddd02c9-68df-4507-b329-9dec05635543 --flavor m2.medium --key-name <YOURKEYPAIR> <YOURUSERID>-handson
    The last part of the training will involve some examples running on Kubernetes. Please try to create a cluster in advance, issuing the following command:
    # ssh lxplus7-cloud-testing
    # openstack coe cluster create --cluster-template kubernetes --node-count 1 --keypair <YOURKEYPAIR> <YOURUSERID>-handson-kub
    If you need any help prior to the session, contact one of the organizers.
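    The "containerized application lifecycle with Helm" part of the session can be sketched as follows (the release name demo and chart path ./mychart are hypothetical, and the chart is assumed to expose an image.tag value):

```shell
# Install a release from a local chart, pulling in its dependencies.
helm install demo ./mychart

# Upgrade the release to a new application version.
helm upgrade demo ./mychart --set image.tag=v2

# Inspect the revision history and roll back immediately if needed.
helm history demo
helm rollback demo 1
```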

    Integrating containers in the CERN private cloud

    No full text
    Containers remain a hot topic in computing, with new use cases and tools appearing every day. Basic functionality such as spawning containers seems to have settled, but topics like volume support and networking are still evolving. Solutions like Docker Swarm, Kubernetes or Mesos provide similar functionality but target different use cases, exposing distinct interfaces and APIs. The CERN private cloud is made of thousands of nodes and users with many different use cases. A single solution for container deployment would not cover all of them, and supporting multiple solutions involves repeating the same integration process multiple times for authentication services, storage services and networking. In this paper we describe OpenStack Magnum as the solution to offer container management in the CERN cloud. We will cover its main functionality and some advanced use cases using Docker Swarm and Kubernetes, highlighting some relevant differences between the two. We will describe the most common use cases in HEP and how we integrated popular services like CVMFS and AFS in the most transparent way possible, along with some limitations found. Finally, we will look into ongoing work on advanced scheduling for both Swarm and Kubernetes, support for running batch-like workloads, and integration of container networking technologies with the CERN infrastructure.
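    Magnum drives the different orchestrators through the same OpenStack API; a minimal sketch (template and cluster names are illustrative):

```shell
# Create a Kubernetes cluster and a Docker Swarm cluster from their
# respective Magnum cluster templates, using the same workflow.
openstack coe cluster create --cluster-template kubernetes --node-count 2 mykube
openstack coe cluster create --cluster-template swarm --node-count 2 myswarm

# Fetch the client configuration for the Kubernetes cluster.
openstack coe cluster config mykube --dir ~/.kube
```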

    Evaluation and Implementation of Various Persistent Storage Options for CMSWEB Services in Kubernetes Infrastructure at CERN

    No full text
    This paper summarizes the various storage options that we implemented for the CMSWEB cluster in the Kubernetes infrastructure. All CMSWEB services require storage for logs, while some services also require storage for data. We also provide a feasibility analysis of the storage options and describe the pros and cons of each technique from the perspective of the CMSWEB cluster and its users. In the end, we propose recommendations according to the service needs. The first option is CephFS, which can be mounted multiple times across various clusters and VMs and works very well with k8s; we use it both for data and for logs. The second option is a Cinder volume: block storage with a filesystem running on top of it. It can only be attached to one instance at a time, so we use this option only for data. The third option is S3 storage: object storage offering a scalable service that can be used by applications compatible with the Amazon S3 protocol; it is used for logs. For S3, we explored two mechanisms. In the first scenario, fluentd runs as a sidecar container in the service pods and sends logs to an S3 bucket. In the second scenario, filebeat runs as a sidecar container in the service pod and ships the logs to fluentd, which runs as a daemonset on each node and finally sends them to S3. The fourth option is EOS; we configured EOS inside the pods of the CMSWEB services. The fifth option is to use dedicated VMs with a Ceph volume attached to them. For both EOS and the dedicated VMs, logs from the service pods are transferred using rsync. The last option is to send service logs to Elasticsearch, implemented using fluentd running as a daemonset on each node. In parallel to sending logs to S3, fluentd also sends them to the Elasticsearch infrastructure at CERN.
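    The key operational difference between the CephFS and Cinder options is the access mode a claim can request. A sketch, assuming storage classes named cephfs and cinder exist in the cluster (the claim names are hypothetical):

```shell
kubectl apply -f - <<'EOF'
# Shared filesystem: mountable read-write from many pods at once,
# suitable for both logs and data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata: {name: logs-cephfs}
spec:
  storageClassName: cephfs
  accessModes: [ReadWriteMany]
  resources: {requests: {storage: 10Gi}}
---
# Block volume: attachable to a single node at a time, so data only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata: {name: data-cinder}
spec:
  storageClassName: cinder
  accessModes: [ReadWriteOnce]
  resources: {requests: {storage: 10Gi}}
EOF
```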

    Implementation of CMSWEB Services Deployment Procedures using HELM

    No full text
    The Compact Muon Solenoid (CMS) experiment relies heavily on the CMSWEB cluster to host critical services for its operational needs. Recently, the CMSWEB cluster was migrated from a VM cluster to a Kubernetes (k8s) cluster. The new CMSWEB cluster in Kubernetes enhances sustainability and reduces operational cost. In this work, we added new features to the CMSWEB k8s cluster. The new features include the deployment of services using Helm chart templates and the incorporation of canary releases using NGINX ingress weighted routing, which routes traffic to multiple versions of a service simultaneously. Helm simplifies the deployment procedure, and no Kubernetes expertise is needed anymore for service deployment. Helm packages all dependencies, and services are easily deployed, updated and rolled back. Helm also enables us to deploy multiple versions of a service to run simultaneously. This feature is very useful for developers to test new versions of a service by assigning some weight to the new version and rolling back immediately in case of issues. Using Helm, we can also deploy different application configurations at runtime.
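    The canary mechanism described above maps to the NGINX ingress controller's canary annotations: a second ingress for the new service version receives a weighted share of the traffic. A sketch (the host and service names are hypothetical; the annotations are standard NGINX ingress controller annotations):

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: svc-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    # Route 10% of requests for this host to the new version.
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - host: svc.example.cern.ch
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-v2
            port: {number: 80}
EOF
```

    Setting the weight to 0, or deleting the canary ingress, sends all traffic back to the stable version.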
