
    A DevOps approach to integration of software components in an EU research project

    We present a description of the development and deployment infrastructure being created to support the integration effort of HARNESS, an EU FP7 project. HARNESS is a multi-partner research project intended to bring the power of heterogeneous resources to the cloud. It consists of a number of different services and technologies that interact with the OpenStack cloud computing platform at various levels. Many of these components are being developed independently by different teams at different locations across Europe, and keeping the work fully integrated is a challenge. We use a combination of Vagrant-based virtual machines, Docker containers, and Ansible playbooks to provide a consistent and up-to-date environment to each developer. The same playbooks used to configure local virtual machines are also used to manage a static testbed with heterogeneous compute and storage devices, and to automate ephemeral larger-scale deployments to Grid5000. Access to internal projects is managed by GitLab, and automated testing of services within Docker-based environments and of integrated deployments within virtual machines is provided by Buildbot.
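    The reuse pattern described above, the same plays applied to local Vagrant VMs and to the shared testbed by switching inventories, can be sketched as a minimal Ansible playbook. The inventory names, host group, and package are hypothetical, not taken from the HARNESS project:

    ```yaml
    # hypothetical playbook: provision.yml
    # against local VMs:   ansible-playbook -i vagrant_inventory provision.yml
    # against the testbed: ansible-playbook -i testbed_inventory provision.yml
    - hosts: harness_nodes
      become: true
      tasks:
        - name: Install the Docker engine
          apt:
            name: docker.io
            state: present
            update_cache: true
        - name: Ensure the Docker service is running
          service:
            name: docker
            state: started
            enabled: true
    ```

    Because the play is idempotent, running it against either inventory converges every host to the same environment, which is what keeps developer machines and the testbed from drifting apart.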

    A study on performance measures for auto-scaling CPU-intensive containerized applications

    Autoscaling of containers can leverage performance measures from the different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure for activating auto-scaling actions that aim to guarantee QoS constraints. First, the correlation between absolute and relative usage measures, and how resource allocation decisions can be influenced by them, is analyzed in different workload scenarios. Absolute and relative measures can assume quite different values: the former account for the actual utilization of resources in the host system, while the latter account for the share that each container has of the resources used. Then, the performance of a variant of Kubernetes' auto-scaling algorithm, which transparently uses absolute usage measures to scale containers in and out, is evaluated through a wide set of experiments. Finally, a detailed analysis of the state of the art is presented.
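    The scaling rule at stake can be made concrete with the standard Kubernetes horizontal-scaling formula, desired = ceil(current_replicas × current_usage / target_usage); this sketch is illustrative and the usage numbers are invented, not the paper's variant or data:

    ```python
    import math

    def desired_replicas(current_replicas: int, current_usage: float,
                         target_usage: float) -> int:
        """Kubernetes-style horizontal scaling rule:
        desired = ceil(current_replicas * current_usage / target_usage),
        floored at one replica."""
        return max(1, math.ceil(current_replicas * current_usage / target_usage))

    # The same cluster can trigger opposite actions depending on which
    # measure feeds the controller:
    # relative measure: each of 3 containers reports 90% of its own quota
    print(desired_replicas(3, 0.90, 0.60))  # -> 5: scale out
    # absolute measure: host utilization attributed to the service is 40%
    print(desired_replicas(3, 0.40, 0.60))  # -> 2: scale in
    ```

    The divergence between the two calls is exactly the absolute-vs-relative question the paper studies: the chosen measure, not just the threshold, determines the scaling decision.
    
    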

    On component-oriented access control in lightweight virtualized server environments

    2017 Fall. Includes bibliographical references. With the advancements in contemporary multi-core CPU architectures and the increase in main memory capacity, it is now possible for a server operating system (OS), such as Linux, to handle a large number of concurrent services on a single server instance. Individual components of such services may run in different isolated runtime environments, such as chrooted jails or related forms of OS-level containers, and may need restricted access to system resources as well as the ability to share data and coordinate with each other in a regulated and secure manner. In this dissertation we describe our work on an access control framework for policy formulation, management, and enforcement that governs access to OS resources and also permits controlled data sharing and coordination for service components running in disjoint containerized environments within a single Linux OS server instance. The framework consists of two models, and policy formulation is based on the concept of policy classes for ease of administration and enforcement. The policy classes are managed and enforced through a Lightweight Policy Machine for Linux (LPM) that acts as the centralized reference monitor and provides a uniform interface for regulating access to system resources and requesting data and control objects. We present the details of our framework and discuss a preliminary implementation and evaluation to demonstrate the feasibility of our approach.
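    The two central ideas, policy classes and a single reference monitor mediating every request, can be sketched in a few lines. This is an illustrative toy, not the LPM implementation; the class names, container IDs, and resource paths are hypothetical:

    ```python
    # Toy sketch of policy classes enforced by a central reference monitor.
    from dataclasses import dataclass, field

    @dataclass
    class PolicyClass:
        name: str
        # resource -> set of permitted operations
        rules: dict = field(default_factory=dict)

    class ReferenceMonitor:
        """Central chokepoint: every access request is checked against the
        policy class assigned to the requesting container."""
        def __init__(self):
            self.assignments = {}  # container id -> PolicyClass

        def assign(self, container: str, pc: PolicyClass):
            self.assignments[container] = pc

        def check(self, container: str, resource: str, op: str) -> bool:
            pc = self.assignments.get(container)
            return bool(pc) and op in pc.rules.get(resource, set())

    monitor = ReferenceMonitor()
    web = PolicyClass("web-frontend", {"/srv/shared": {"read"}})
    monitor.assign("web-1", web)
    print(monitor.check("web-1", "/srv/shared", "read"))   # True
    print(monitor.check("web-1", "/srv/shared", "write"))  # False
    ```

    Grouping containers into classes, rather than writing per-container rules, is what makes administration tractable: assigning a container to a class is the only per-instance decision.
    
    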

    Application for managing container-based software development environments

    Virtualizing the software development process can enhance efficiency through unified, remotely managed environments. Docker containers, a popular technology in software development, are widely used for application testing and deployment. This thesis examines the use of containers as cloud-based development environments. The study explores the history and implementation of container-based virtualization before presenting containers as a novel cloud-based software development environment. Virtual containers, like virtual machines, have been used extensively in software development for code testing, but not as development environments. Containers are also prevalent in the final stages of software production, specifically in the distribution and deployment of completed applications. In the practical part of the thesis, an application is implemented to improve the usability of a container-based development environment, addressing challenges in adopting new work environments. The work was conducted for a private company, with input from multiple experts. The management application improved the efficiency of the container-based development environment by enhancing user rights management, virtual container management, and the user interface. Additionally, the new management tools reduced training time for new employees by 50%, facilitating their integration into the organization. Container-based development environments with efficient management tools provide a secure, efficient, and unified platform for large-scale software development. Virtual containers also hold potential for future improvements in energy-saving strategies and in harmonizing and integrating organizational work methods.

    Resource management in a containerized cloud : status and challenges

    Cloud computing heavily relies on virtualization, as virtual resources, for example virtual machines, are typically leased to the consumer. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises how these strategies can be adapted for the management of a containerized cloud. Apart from this, the cloud is also no longer limited to centrally hosted data center infrastructure. New deployment models have gained maturity, such as fog and mobile edge computing, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art regarding resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model. Furthermore, we identify several challenges and possible opportunities for future research.

    Performance Analysis of Microservices Behavior in Cloud vs Containerized Domain based on CPU Utilization

    Enterprise application development is rapidly moving towards a microservices-based approach. Microservices development makes application deployment more reliable and responsive, depending on the architecture and the way services are deployed. Still, the performance of microservices differs across environments, based on the resources provided by the respective cloud and on backend services such as auto-scaling, load balancing, and the available monitoring parameters. Scaling and monitoring of microservice-based applications are quick compared to monolithic applications [1]. In this paper, we deploy microservice applications in cloud and containerized environments to analyze their CPU utilization over multiple network input requests. Monolithic applications are tightly coupled, while microservice applications are loosely coupled, which helps the API gateway interact easily with each service module. With reference to the monitoring parameters, CPU utilization is 23 percent in the cloud environment. Additionally, we deployed the equivalent microservice in a containerized environment with extended resources, reducing CPU utilization to 17 percent. Furthermore, we show the performance of the application under "Network IN" and "Network Out" requests.
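    The metric being compared across the two deployments, host CPU utilization over an interval, can be sampled directly on Linux from `/proc/stat`. This is a generic, Linux-only sketch of such a measurement, not the paper's instrumentation:

    ```python
    # Minimal Linux-only sketch: sample aggregate CPU utilization by
    # differencing two readings of the first ("cpu") line of /proc/stat.
    import time

    def read_cpu_times():
        """Return (idle, total) jiffies from the aggregate 'cpu' line."""
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]  # idle + iowait
        return idle, sum(fields)

    def cpu_utilization(interval: float = 1.0) -> float:
        """Busy share (in percent) of all CPUs over the given interval."""
        idle1, total1 = read_cpu_times()
        time.sleep(interval)
        idle2, total2 = read_cpu_times()
        delta_total = total2 - total1
        busy = delta_total - (idle2 - idle1)
        return 100.0 * busy / max(1, delta_total)

    if __name__ == "__main__":
        print(f"CPU utilization: {cpu_utilization():.1f}%")
    ```

    Sampling the same way on both the cloud VM and the container host is what makes figures like the 23 percent and 17 percent above comparable.
    
    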

    KuberneTSN: a Deterministic Overlay Network for Time-Sensitive Containerized Environments

    The emerging paradigm of resource disaggregation enables the deployment of cloud-like services across a pool of physical and virtualized resources, interconnected using a network fabric. This design embodies several benefits in terms of resource efficiency and cost-effectiveness, service elasticity and adaptability, etc. Application domains benefiting from such a trend include cyber-physical systems (CPS), the tactile internet, 5G networks and beyond, and mixed reality applications, all generally embodying heterogeneous Quality of Service (QoS) requirements. In this context, a key enabling factor to fully support those mixed-criticality scenarios will be network- and system-level support for time-sensitive communication. Although a lot of work has been conducted on devising efficient orchestration and CPU scheduling strategies, the networking aspects of performance-critical components remain largely unstudied. Bridging this gap, we propose KuberneTSN, an original solution built on the Kubernetes platform, providing support for time-sensitive traffic to unmodified application binaries. We define an architecture for an accelerated and deterministic overlay network, which includes kernel-bypassing networking features as well as a novel userspace packet scheduler compliant with the Time-Sensitive Networking (TSN) standard. The solution is implemented as tsn-cni, a Kubernetes network plugin that can coexist alongside popular alternatives. To assess the validity of the approach, we conduct an experimental analysis on a real distributed testbed, demonstrating that KuberneTSN enables applications to easily meet deterministic deadlines, provides the same guarantees as bare-metal deployments, and outperforms overlay networks built using the Flannel plugin.
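    The core TSN mechanism a time-aware scheduler builds on can be illustrated with a toy gate control list in the spirit of IEEE 802.1Qbv: a cyclic schedule decides which traffic classes may transmit in each time slot. This sketch is illustrative only and is not the tsn-cni implementation; the cycle layout and class numbers are invented:

    ```python
    # Toy time-aware gating in the spirit of IEEE 802.1Qbv: a cyclic gate
    # control list maps each instant to the set of traffic classes whose
    # transmission gate is open.
    from dataclasses import dataclass

    @dataclass
    class GateEntry:
        duration_us: int   # how long this entry is active
        open_classes: set  # traffic classes allowed to transmit

    def gates_open_at(schedule, t_us: int) -> set:
        """Return the traffic classes whose gate is open at time t_us,
        with the schedule repeating cyclically."""
        cycle = sum(e.duration_us for e in schedule)
        t = t_us % cycle
        for entry in schedule:
            if t < entry.duration_us:
                return entry.open_classes
            t -= entry.duration_us
        return set()

    # 500 us cycle: the first 100 us are reserved for time-critical class 7,
    # so its packets never contend with best-effort classes 0-2.
    schedule = [GateEntry(100, {7}), GateEntry(400, {0, 1, 2})]
    print(gates_open_at(schedule, 50))   # {7}
    print(gates_open_at(schedule, 250))  # {0, 1, 2}
    print(gates_open_at(schedule, 550))  # {7}  (second cycle)
    ```

    Reserving exclusive slots for the critical class is what bounds worst-case latency; a userspace scheduler like the one the paper proposes applies the same principle without kernel qdisc involvement.
    
    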

    A DEVSECOPS APPROACH FOR DEVELOPING AND DEPLOYING CONTAINERIZED CLOUD-BASED SOFTWARE ON SUBMARINES

    There are unique challenges in using secure cloud services in disconnected, resource-constrained environments and with controlled data. To address those challenges, this thesis introduces a tactical-edge platform-as-a-service (PaaS) solution with a declarative-delivery method for submarine Consolidated Afloat Network Enterprise Services (CANES) operating systems. The PaaS is adapted from the Department of Defense's Big Bang core elements for submarine-focused outcomes. Using the Team Submarine Project Blue initiative as a case study, this thesis consists of a feasibility study of running containerized applications on different submarine-compatible baselines and of applying a prototype declarative software-delivery method called ZARF. We demonstrated the feasibility of using ZARF for packaging and automated deployment of the Project Blue PaaS and its software to the submarine CANES infrastructure. This research culminated in successful integration tests on current and future submarine hardware and software baselines. The thesis documents the execution of the research, lessons learned, and recommendations for the Navy's path forward for the development of secure software and declarative deployment in air-gapped environments. Outstanding Thesis. Lieutenant, United States Navy. Approved for public release. Distribution is unlimited.
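    The declarative-delivery idea behind Zarf is that everything an air-gapped deployment needs, container images included, is listed in a single package definition that is built online and carried across the gap as one artifact. A minimal sketch of such a definition follows; the package name, image, and manifest file are hypothetical, not taken from the thesis:

    ```yaml
    # hypothetical zarf.yaml: built with `zarf package create`,
    # then deployed on the disconnected side with `zarf package deploy`
    kind: ZarfPackageConfig
    metadata:
      name: example-edge-app
      version: 0.1.0
    components:
      - name: app
        required: true
        images:
          - registry.example.com/app:1.0.0   # pulled at build time, shipped in the package
        manifests:
          - name: app-manifests
            files:
              - deployment.yaml
    ```

    Because the images are baked into the package at build time, the deploy step needs no registry access, which is the property that matters on a disconnected platform.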