384 research outputs found

    Performance assessment and analysis of development and operations based automation tools for source code management

    Development and operations (DevOps), supported by an accumulation of automation tools, efficiently achieves the goals of software development, testing, release, and delivery in terms of optimization, speed, and quality. A diverse set of alternative automation tools exists for the different phases of software development, and DevOps adopts several selection criteria to choose the best tool for each. This paper presents a performance evaluation and analysis of automation tools employed in the coding phase of the DevOps culture. We consider the most commonly used source code management tools: BitBucket, GitHub Actions, and GitLab. The current work assesses and analyzes their performance against DevOps evaluation criteria, which are themselves categorized into different dimensions. For the evaluation, a weight and an overall score are assigned to each criterion based on established literature and an industrial case study of TekMentors Pvt Ltd. Based on the performance outcomes, the tool with the highest overall score is identified as the best source code automation tool. This performance analysis will benefit young researchers and students seeking to understand the modus operandi of the DevOps culture, particularly its source code automation tools. As future research, other dimensions of selection criteria can also be considered for evaluation.
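
    The weighted-criteria scoring described above can be pictured with a short sketch. The criteria, weights, and ratings below are invented placeholders for illustration, not the values used in the paper:

    ```python
    # Minimal sketch of weighted-criteria tool scoring. All criteria, weights,
    # and ratings are illustrative placeholders, not the paper's values.

    WEIGHTS = {"integration": 0.30, "usability": 0.25, "security": 0.25, "cost": 0.20}

    RATINGS = {  # per-tool rating of each criterion, e.g. on a 1-5 scale
        "BitBucket":      {"integration": 4, "usability": 3, "security": 4, "cost": 3},
        "GitHub Actions": {"integration": 5, "usability": 4, "security": 4, "cost": 4},
        "GitLab":         {"integration": 4, "usability": 4, "security": 5, "cost": 3},
    }

    def overall_score(ratings: dict) -> float:
        """Weighted sum of a tool's ratings over all criteria."""
        return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

    scores = {tool: overall_score(r) for tool, r in RATINGS.items()}
    best = max(scores, key=scores.get)
    print(scores)
    print("highest overall score:", best)
    ```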

    CloudOps: Towards the Operationalization of the Cloud Continuum: Concepts, Challenges and a Reference Framework

    The current trend of developing highly distributed, context-aware, compute-intensive, and data-sensitive applications is changing the boundaries of cloud computing. Encouraged by the growing IoT paradigm and the availability of flexible edge devices, DevOps teams can draw on an ecosystem of resources, ranging from high-density compute and storage to very lightweight embedded computers running on batteries or solar power, known as the Cloud Continuum. In this dynamic context, manageability is key, as are controlled operations and resource monitoring for handling anomalies. Unfortunately, operating and managing such heterogeneous computing environments (including edge, cloud, and network services) is complex, and operators face challenges such as the continuous optimization and autonomous (re-)deployment of context-aware stateless and stateful applications, where they must ensure service continuity while anticipating potential failures in the underlying infrastructure. In this paper, we propose a novel CloudOps workflow that extends the traditional DevOps pipeline with techniques and methods enabling application operators to fully embrace the possibilities of the Cloud Continuum. Our approach supports DevOps teams in the operationalization of the Cloud Continuum. We also provide an extensive explanation of the scope, possibilities, and future of CloudOps. This research was funded by the European project PIACERE (Horizon 2020 Research and Innovation Programme, under grant agreement No. 101000162).
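
    As a hypothetical illustration of one CloudOps concern mentioned above, anticipating failures and re-deploying applications across heterogeneous continuum nodes, consider the sketch below; the node model, battery threshold, and placement rule are all invented and are not the paper's workflow:

    ```python
    # Hypothetical sketch: anticipate failures on heterogeneous continuum nodes
    # and re-deploy before service continuity is lost. Nodes, thresholds, and
    # the placement rule are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        kind: str       # "cloud" or "edge"
        battery: float  # fraction remaining; 1.0 for mains-powered nodes
        healthy: bool

    def failure_anticipated(node: Node, battery_floor: float = 0.2) -> bool:
        """Unhealthy node, or an edge device about to run out of power."""
        return (not node.healthy) or (node.kind == "edge" and node.battery < battery_floor)

    def pick_target(nodes: list) -> Node:
        """Naive placement: the best-powered node that is not itself at risk."""
        return max((n for n in nodes if not failure_anticipated(n)),
                   key=lambda n: n.battery)

    nodes = [Node("edge-1", "edge", 0.12, True), Node("cloud-1", "cloud", 1.0, True)]
    for node in nodes:
        if failure_anticipated(node):
            print(f"re-deploying from {node.name} to {pick_target(nodes).name}")
    ```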

    Critical Success Factors of Continuous Practices in a DevOps Context

    Context: Software companies strive for adaptive, near real-time software delivery and apply continuous practices in a DevOps context. While continuous practices may create new business opportunities, they also present new challenges. Objective: This study aims to aid the adoption of continuous practices and the resulting performance improvements by increasing our understanding of these practices in a DevOps context. Method: Through a systematic literature review we identified critical success factors for continuous practices and grouped them, which led to the construction of our initial framework. We began validating the critical success factors in this framework in a DevOps context by conducting a first pilot interview. Results: We developed an initial framework of critical success factors and conducted a pilot interview as a first validation step. Some factors were confirmed and clarified, i.e., enriched, on the basis of the retrieved information; in future work we will pursue further validation of the framework. Conclusions: We took a first step toward validating our framework and retrieved valuable information, which is promising for the next steps in its further development.

    Adopting DevOps practices: an enhanced unified theory of acceptance and use of technology framework

    The DevOps software development approach is widely used in the software engineering discipline. DevOps eliminates the barriers between development and operations departments. This paper aims to develop a conceptual model for adopting DevOps practices in software development organizations by extending the unified theory of acceptance and use of technology (UTAUT). The research also aims to determine the factors influencing the acceptance and adoption of DevOps practices in software organizations, identify gaps in the software development literature, and give a clear picture of current technology acceptance and adoption research in the software industry. A comprehensive literature review clarifies how users accept and adopt new technologies and what leads to the adoption of DevOps practices in the software industry, serving as the starting point for developing a conceptual framework for adopting DevOps in software organizations. The literature results informed the formulation of this conceptual framework. The resulting model is expected to improve understanding of how software organizations accept and adopt DevOps practices. The research hypotheses must still be tested to validate the model; future work will include surveys and expert interviews for model enhancement and validation. This research fulfills the need to study how software organizations accept and adopt DevOps practices by enhancing UTAUT.

    Experience report on the use of technology to manage capstone course projects


    RLOps:Development Life-cycle of Reinforcement Learning Aided Open RAN

    Radio access network (RAN) technologies continue to witness massive growth, with Open RAN gaining the most recent momentum. In the O-RAN specifications, the RAN intelligent controller (RIC) serves as an automation host. This article introduces principles for machine learning (ML), in particular reinforcement learning (RL), relevant to the O-RAN stack. Furthermore, we review state-of-the-art research in wireless networks and cast it onto the RAN framework and the hierarchy of the O-RAN architecture. We provide a taxonomy of the challenges faced by ML/RL models throughout the development life-cycle, from system specification to production deployment (data acquisition, model design, testing and management, etc.). To address these challenges, we integrate a set of existing MLOps principles with the unique characteristics that arise when RL agents are considered. The paper discusses a systematic life-cycle pipeline for model development, testing, and validation, termed RLOps. We discuss all fundamental parts of RLOps: model specification, development and distillation, production environment serving, operations monitoring, safety/security, and the data engineering platform. Based on these principles, we propose best practices for RLOps to achieve an automated and reproducible model development process.
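
    The RLOps stages listed above can be pictured as a pipeline of steps that share a common context. The sketch below is a hypothetical illustration; the stage names follow the abstract, but the bodies are stubs rather than the paper's implementation:

    ```python
    # Hypothetical sketch of the RLOps life-cycle as a pipeline of stages that
    # enrich a shared context; stage names follow the abstract, bodies are stubs.

    def specify(ctx):  ctx["spec"] = "RL use case for the O-RAN RIC";  return ctx
    def develop(ctx):  ctx["model"] = "trained RL agent";              return ctx
    def distill(ctx):  ctx["model"] = "distilled, lightweight agent";  return ctx
    def serve(ctx):    ctx["endpoint"] = "production serving in RIC";  return ctx
    def monitor(ctx):  ctx["kpis"] = {"latency_ms": 4.2};              return ctx

    PIPELINE = [specify, develop, distill, serve, monitor]

    def run(pipeline):
        ctx = {}
        for stage in pipeline:
            ctx = stage(ctx)  # each stage reads and extends the shared context
        return ctx

    print(run(PIPELINE))
    ```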

    eddy4R 0.2.0: a DevOps model for community-extensible processing and analysis of eddy-covariance data based on R, Git, Docker, and HDF5

    Large differences in instrumentation, site setup, data format, and operating system stymie the adoption of a universal computational environment for processing and analyzing eddy-covariance (EC) data. This results in limited software applicability and extensibility, in addition to often substantial inconsistencies in flux estimates. Addressing these concerns, this paper presents the systematic development of portable, reproducible, and extensible EC software achieved by adopting a development and systems operation (DevOps) approach. This software development model is used to create the eddy4R family of EC code packages in the open-source R language for statistical computing. These packages are community developed, iterated via the Git distributed version control system, and wrapped into a portable and reproducible Docker filesystem that is independent of the underlying host operating system. The HDF5 hierarchical data format then provides a streamlined mechanism for highly compressed and fully self-documented data ingest and output. The usefulness of the DevOps approach was evaluated in three test applications. First, the resulting EC processing software was used to analyze standard flux tower data from the first EC instruments installed at a National Ecological Observatory Network (NEON) field site. Second, an aircraft test application demonstrates the modular extensibility of eddy4R for analyzing EC data from other platforms. Third, an intercomparison with commercial-grade software showed excellent agreement (R2 = 1.0 for CO2 flux). In conjunction with this study, a Docker image containing the first two eddy4R packages and an executable example workflow, as well as the first NEON EC data products, are released publicly. We conclude by describing the work remaining to arrive at the automated generation of science-grade EC fluxes and the benefits to the science community at large. This software development model is applicable beyond EC and, more generally, builds the capacity to deploy complex algorithms developed by scientists in an efficient and scalable manner. In addition, its modularity permits meeting project milestones while retaining extensibility over time.
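
    The "fully self-documented" HDF5 ingest-and-output idea can be sketched in a few lines: each dataset carries its units and provenance as attributes. The variable names, units, and file layout below are illustrative and do not reflect the actual eddy4R/NEON data-product schema:

    ```python
    # Sketch of self-documented HDF5 output: datasets carry units and provenance
    # as attributes. Names, units, and layout are illustrative only.

    import h5py
    import numpy as np

    with h5py.File("flux_example.h5", "w") as f:
        dset = f.create_dataset("tower_site/co2_flux", data=np.random.randn(48))
        dset.attrs["units"] = "umol m-2 s-1"                  # self-documentation
        dset.attrs["provenance"] = "illustrative example, not real eddy4R output"

    with h5py.File("flux_example.h5", "r") as f:
        dset = f["tower_site/co2_flux"]
        print(dset.shape, dict(dset.attrs))
    ```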

    Optimization and Prediction Techniques for Self-Healing and Self-Learning Applications in a Trustworthy Cloud Continuum

    The current IT market is increasingly dominated by the “cloud continuum”. In the “traditional” cloud, computing resources are typically homogeneous in order to facilitate economies of scale. In contrast, in edge computing, computational resources are widely diverse, commonly with scarce capacities, and must be managed very efficiently due to battery constraints or other limitations. A combination of resources and services at the edge (edge computing), in the core (cloud computing), and along the data path (fog computing) is needed, forming a trusted cloud continuum. This requires novel solutions for the creation, optimization, management, and automatic operation of such infrastructure through new approaches such as infrastructure as code (IaC). In this paper, we analyze how artificial intelligence (AI)-based techniques and tools can enhance the operation of complex applications and support the broad, multi-stage heterogeneity of the infrastructural layer in the “computing continuum” through IaC optimization, IaC self-learning, and IaC self-healing. To this end, the presented work proposes a set of tools, methods, and techniques that let application operators seamlessly select, combine, configure, and adapt computation resources along the data path and support the complete service lifecycle, covering: (1) optimized distributed application deployment over heterogeneous computing resources; (2) real-time monitoring of execution platforms, including continuous control and trust of the infrastructural services; (3) application deployment and adaptation while optimizing execution; and (4) application self-recovery to avoid compromising situations that may lead to an unexpected failure. This research was funded by the European project PIACERE (Horizon 2020 Research and Innovation Programme, under grant agreement No. 101000162).
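
    As a hedged sketch of the self-healing idea in point (4), the loop below uses a trivial moving-average anomaly check in place of the paper's AI-based techniques, and a placeholder redeploy action in place of real IaC re-provisioning:

    ```python
    # Sketch: a trivial moving-average anomaly check stands in for the paper's
    # AI techniques; redeploy() is a placeholder for real IaC re-provisioning.

    from collections import deque

    class SelfHealer:
        def __init__(self, window: int = 5, factor: float = 2.0):
            self.history = deque(maxlen=window)  # recent latency samples
            self.factor = factor                 # anomaly threshold multiplier

        def observe(self, latency_ms: float) -> None:
            if len(self.history) == self.history.maxlen:
                baseline = sum(self.history) / len(self.history)
                if latency_ms > self.factor * baseline:  # anomaly detected
                    self.redeploy()
            self.history.append(latency_ms)

        def redeploy(self) -> None:
            print("anomaly: re-applying IaC deployment (placeholder action)")

    healer = SelfHealer()
    for sample in [10, 11, 9, 10, 12, 10, 45]:  # final sample spikes
        healer.observe(sample)
    ```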