46 research outputs found

    Mining Software Repositories for Release Engineers - Empirical Studies on Integration and Infrastructures-as-Code

    Release engineering (Releng) is the process of delivering integrated work from developers as a complete product to end users. This process comprises the phases of Integration, Building and Testing, Deployment, and Release, before the product finally reaches the market.
    While traditional software engineering takes several months to deliver a complete software product to customers, modern Release engineering aims to bring value to customers more quickly, receive useful feedback faster, and reduce the time wasted on unsuccessful features during development. A wealth of tools and techniques has emerged to support Release engineering. They essentially aim to automate the phases of the Release engineering pipeline, reducing manual labor and making the procedure repeatable and fast. For example, Puppet is one of the most popular Infrastructure-as-Code (IaC) tools; it automates the process of setting up a new infrastructure (e.g., a virtual machine or a container in which an application can be compiled, tested, and deployed) according to textual specifications. Infrastructure-as-Code has evolved rapidly due to the growth of cloud computing. However, many problems remain. For example, while many Release engineering tools are gaining popularity, choosing the most suitable technique requires practitioners to empirically evaluate its performance in a realistic setting, with data mimicking their own setup. Worse, at a higher level, release engineers need to understand the progress of each release engineering phase in order to know whether the next release deadline can be met or where bottlenecks occur. Again, they have no clear methodology for doing this. To help practitioners analyze their Release engineering process better, we explore how mining software repositories (MSR) can be applied to two critical phases of Releng in large open-source projects. Software repositories such as version control systems, bug repositories, or code reviewing environments are used on a daily basis by developers, testers, and reviewers to record information about the development process, such as code changes, bug reports, or code reviews. By analyzing these data, one can recreate the process of how software is built and analyze how each phase of Releng is carried out in a project. Many repositories of open-source software projects are publicly available, which offers opportunities for empirical research on Release engineering. Therefore, we conduct a series of empirical studies mining the software repositories of popular open-source projects, to understand the progress of Release engineering and evaluate the performance of state-of-the-art tools. We mainly focus on two phases, Integration and Provisioning (Infrastructure-as-Code), because these two phases are the most critical in Release engineering and ample data is available for them. In our empirical study of the Integration process, we evaluate how well MSR techniques based on version control and review data explain the major factors impacting the probability that a patch is successfully integrated into an upcoming release, and the time this takes. We selected the Linux kernel, one of the most popular OSS projects, with a long history and a strict integration hierarchy, as our case study. We collected data from the reviewing and integration tools of the Linux kernel (mailing lists and Git, respectively) and extracted characteristics covering six dimensions. Then, we built models with acceptance/time as output and analyzed which characteristics impact the reviewing and integration processes. We found that reviewing and integration are impacted by different factors.
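    As a rough, hedged illustration of the kind of analysis described above (not the thesis's actual pipeline), the sketch below fits a model relating patch characteristics to acceptance. The feature names, data, and the choice of logistic regression are invented for the example.

        # Illustrative sketch: model patch acceptance from mined features.
        # Feature names and data are hypothetical, not the thesis's dataset.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Each row is one patch: [subsystems touched, lines changed,
        # prior patches by the author, reviewer comments received]
        X = np.array([
            [1,   40,  25,  2],
            [5,  900,   3, 14],
            [2,  120,  60,  4],
            [8, 2500,   1, 20],
            [1,   15, 110,  1],
            [3,  300,  10,  9],
        ])
        y = np.array([1, 0, 1, 0, 1, 0])  # 1 = patch accepted into a release

        model = LogisticRegression(max_iter=1000)
        print("cross-validated accuracy:", cross_val_score(model, X, y, cv=3).mean())

        model.fit(X, y)
        # Coefficient signs hint at which characteristics help or hurt acceptance.
        for name, coef in zip(["subsystems", "churn", "experience", "comments"],
                              model.coef_[0]):
            print(f"{name}: {coef:+.4f}")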
    Our findings suggest that developers manage to get their patches through the review phase faster by changing fewer subsystems at a time and by splitting a large patch into multiple smaller patches. Also, developers can get their patches accepted more easily and sooner by participating more in the community and gaining more experience with similar patches. In this study on the Linux kernel, we found that one major challenge of MSR is to link different repositories. For example, the Linux kernel and the Apache HTTPD project both use mailing lists to perform the reviewing process. Contributors submit and maintainers review patches entirely by email, where usually an email thread is used to collect all the different versions of a patch. However, often a new email thread, with a different subject, is used to submit a new patch revision, which means that no strong physical links exist between all the revisions of a patch. On top of that, the accepted patch also has no physical link to the resulting commit in the version control system, unless a commit identifier is posted in an email. Especially when a patch has been revised multiple times and has evolved considerably from its original version, tracking it back to its very first version is difficult. We proposed three approaches of different granularity and strictness, aiming to recover the physical links between emails in the same thread. In the study, we found that a line-based technique works best for linking emails across threads, while a combination of the line-based and checksum-based techniques achieves the best performance for linking the emails in a thread to the final, accepted commit (a minimal sketch of this linking idea follows below). Being able to reconstruct the full history of a patch allowed us to analyze the performance of the reviewing phase: we found that 25% of commits have a reviewing history longer than four weeks. To evaluate the ability of MSR to analyze the performance of Releng tools, we evaluated, in a commercial project, a hybrid integration approach that combines branching and toggling techniques. Branching allows developers to work on different features in parallel, in isolation, while toggling enables developers on the same branch to work on different tasks by hiding features under development behind "if" conditions in the source code (a toy toggle example also follows below). Instead of revising their whole integration process to drop branching and move to toggling, hybrid toggling is a compromise that tries to minimize the risks of branching while enjoying the benefits of toggling. We compared performance before and after adopting hybrid toggling, and found that this hybrid structure can reduce integration effort and improve productivity. Hence, hybrid toggling seems a worthwhile practice. In the Provisioning phase, we focus on evaluating the usage of and effort required by popular Infrastructure-as-Code (IaC) tools, which allow environment requirements to be specified in a textual format. We empirically studied IaC tools in OpenStack and MediaWiki, two projects with huge code bases that adopted the two currently most popular IaC languages: Puppet and Chef. First, we studied the maintenance effort related to the regular development and testing process of OpenStack, then compared this to the IaC-related effort in both case studies. We found that IaC code makes up a large proportion of both projects and changes frequently, with large churn size. Meanwhile, IaC code changes are tightly coupled with source code changes, which implies that changes to source or test code require accompanying changes to the IaC code, which might lead to increased complexity and maintenance effort. Furthermore, the most common reason for such coupling is the integration of new modules or services.
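    The line-based and checksum-based linking mentioned above can be pictured with a small sketch. This is a hedged illustration of the general idea, assuming unified diffs as input; the function names and details are invented, not the thesis's actual implementation.

        # Sketch of the linking idea: relate two patch revisions by the overlap
        # of their changed lines, and fingerprint a patch by a checksum of its
        # changed lines. Invented details, not the thesis's actual algorithms.
        import hashlib

        def changed_lines(diff_text):
            # Keep only added/removed lines of a unified diff.
            return {line for line in diff_text.splitlines()
                    if line.startswith(("+", "-"))
                    and not line.startswith(("+++", "---"))}

        def line_similarity(diff_a, diff_b):
            # Jaccard overlap; high overlap suggests revisions of one patch.
            a, b = changed_lines(diff_a), changed_lines(diff_b)
            return len(a & b) / len(a | b) if a | b else 0.0

        def hunk_checksum(diff_text):
            # Checksum over changed lines only, robust to context reshuffling.
            payload = "\n".join(sorted(changed_lines(diff_text)))
            return hashlib.sha1(payload.encode()).hexdigest()

        v1 = "--- a/f.c\n+++ b/f.c\n-int x = 0;\n+int x = 1;"
        v2 = "--- a/f.c\n+++ b/f.c\n-int x = 0;\n+int x = 1;\n+int y = 2;"
        print(round(line_similarity(v1, v2), 2))       # 0.67: likely the same patch
        print(hunk_checksum(v1) == hunk_checksum(v2))  # False: v2 evolved further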
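    Likewise, the feature toggles discussed in the hybrid integration study can be shown with a toy example; the flag store and names below are hypothetical, not the studied project's code.

        # Toy feature toggle: unfinished work ships dark behind a flag, so two
        # tasks can coexist on one branch. Names are hypothetical.
        TOGGLES = {"new_checkout_flow": False}  # flipped once the feature is ready

        def is_enabled(name):
            return TOGGLES.get(name, False)

        def legacy_checkout(cart):
            return sum(cart)

        def new_checkout(cart):
            return round(sum(cart) * 0.95, 2)  # e.g., discount logic under development

        def checkout(cart):
            # Both code paths live on the same branch; the toggle decides which runs.
            if is_enabled("new_checkout_flow"):
                return new_checkout(cart)
            return legacy_checkout(cart)

        print(checkout([10.0, 5.0]))  # 15.0 while the toggle is off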
    However, we also observed that IaC code has only light coupling with IaC test cases and provisioning test data, which are new kinds of artifacts in the IaC domain. Hence, IaC may take more effort than engineers expect, and further empirical studies should be considered. Modern Release engineering has developed rapidly, and many new techniques and tools have emerged to support it from different perspectives. However, the lack of techniques for understanding the current progress of Release engineering and the performance of these tools makes it difficult for practitioners to sustain a high-quality Releng approach in practice. In this thesis, we conducted a series of empirical studies mining the software repositories of large open-source projects, which show that, despite some challenges, MSR technology can help release engineers better understand their progress and evaluate the cost of Release engineering tools and activities. We are glad to see that our work has inspired other researchers to further analyze the integration process, as well as the quality of IaC code.

    MISeval: a metric library for medical image segmentation evaluation

    Correct performance assessment is crucial for evaluating modern artificial intelligence algorithms in medicine, such as deep learning-based medical image segmentation models. However, there is no universal metric library in Python for standardized and reproducible evaluation. Thus, we propose our open-source, publicly available Python package MISeval: a metric library for Medical Image Segmentation Evaluation. The implemented metrics can be used intuitively and easily integrated into any performance assessment pipeline. The package utilizes modern CI/CD strategies to ensure functionality and stability. MISeval is available from PyPI (miseval) and GitHub: https://github.com/frankkramer-lab/miseval
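    To make concrete the kind of metric such a library standardizes, here is a minimal, self-contained Dice similarity coefficient (DSC) for binary masks, a standard segmentation metric. This sketch is independent of MISeval's actual API.

        # Self-contained Dice similarity coefficient (DSC) for binary masks.
        # Independent sketch, not MISeval's actual API.
        import numpy as np

        def dice_coefficient(truth, prediction):
            # DSC = 2 * |A intersect B| / (|A| + |B|)
            truth, prediction = truth.astype(bool), prediction.astype(bool)
            intersection = np.logical_and(truth, prediction).sum()
            denominator = truth.sum() + prediction.sum()
            return 2.0 * intersection / denominator if denominator else 1.0

        truth = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
        pred  = np.array([[0, 1, 0], [0, 1, 1], [0, 0, 0]])
        print(dice_coefficient(truth, pred))  # 2*2 / (3+3) = 0.666...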

    Hot Patching Hot Fixes: Reflection and Perspectives

    With our reliance on software continuously increasing, it is of utmost importance that it be reliable. However, completely preventing bugs in live systems is unfortunately an impossible task due to time constraints, incomplete testing, and developers not having knowledge of the full stack. As a result, mitigating risks for systems in production through hot patching and hot fixing has become an integral part of software development. In this paper, we first give an overview of the terminology used in the literature for research on this topic. Subsequently, we build upon these findings and present our vision for an automated framework for predicting and mitigating critical software issues at runtime. Our framework combines hot patching and hot fixing research from multiple fields, in particular: software defect and vulnerability prediction, automated test generation and repair, as well as runtime patching. We hope that our vision inspires research collaboration between the different communities.
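    As a toy illustration of what patching a live system can mean in a dynamic language (not the authors' envisioned framework), a buggy function can be rebound at runtime without restarting the process:

        # Toy hot patch: rebind a buggy function at runtime, no restart.
        # Purely illustrative; not the paper's framework.
        import types

        def price_with_tax(price):
            return price * 1.2  # bug: hard-coded tax rate

        def price_with_tax_fixed(price, rate=0.2):
            return price * (1 + rate)

        service = types.SimpleNamespace(compute=price_with_tax)
        print(service.compute(100))             # 120.0: buggy behavior is live

        service.compute = price_with_tax_fixed  # the hot patch: swap in place
        print(service.compute(100, rate=0.1))   # 110.0: fixed without downtime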

    A study of the need to introduce DevOps technologies into the training of future Computer Science teachers

    The article examines the problem of implementing DevOps technologies in the training of future Computer Science teachers. This problem has arisen due to the development and expansion of digital technologies, as well as increased stakeholder requirements for future Computer Science teachers. The current state of DevOps technologies and their impact on the informatisation and digitalisation of society were studied using scientific methods of analysis and systematisation of scientific publications. The professional community of IT specialists actively implements and popularizes DevOps technologies, yet the analysis of publications showed that there are almost no educational programs available for the study of DevOps. Educational programs in the specialty "Secondary Education (Informatics)" were noted separately; the content of these programs does not usually involve the study of DevOps elements. However, modern directions for improving the content of the school Computer Science course involve improving its practical orientation, and DevOps technologies can help in this regard. The research identified substantive components of DevOps technologies that can be implemented in the training of informatics teachers, namely: infrastructure as code, configuration management, containers, container management, infrastructure security, deployment pipelines, microservice architecture, post-production, and domain-specific DevOps features. Importantly, the learning of DevOps technologies by future Computer Science teachers should be based on the needs of stakeholders. Informatics teachers do not need to master all technical and technological aspects of implementing and using DevOps technologies, but the necessary level of professional competencies must be formed for successful employment. The results of the conducted ascertainment experiment confirmed the necessity of studying DevOps technologies for future Computer Science teachers. Stakeholders chose the DevOps technologies most relevant for a modern Computer Science teacher: infrastructure as code, containers, and container management.

    Evolution of Integration, Build, Test, and Release Engineering Into DevOps and to DevSecOps

    Software engineering operations in large organizations primarily comprise integrating code from multiple branches, building, testing the build, and releasing it. Agile and related methodologies accelerated software development activities. Realizing the importance of the development and operations teams working closely together, the set of practices that automated the engineering processes of software development evolved into DevOps, signifying the close collaboration of both development and operations teams. With the advent of cloud computing and the opening up of firewalls, the security aspects of software started moving into the applications, leading to DevSecOps. This chapter traces the journey of software engineering operations over the last two to three decades, highlighting the tools and techniques used in the process.

    Guidelines for implementing the DevOps CALMS model in software development MSMEs (mipymes) in the southern Colombian context

    The term DevOps first appeared in 2009; it names a paradigm in which the development and deployment processes are integrated. Currently, enterprises such as Etsy, Facebook, Amazon, and Netflix are leaders in the implementation of the DevOps paradigm. In this work, we present guidelines for implementing the CALMS model of DevOps in software development MSMEs (mipymes) in southern Colombia. The guidelines cover technical and organizational aspects, integrating a novel development environment that brings DevOps to the Colombian software development context. The proposed model was tested in Colombian mipymes and the results are promising: the mipymes moved from one commit per week to a daily commit, and 16 of 20 total deployments were successfully put into production. Finally, we discuss how implementing the guidelines in a tested, integrated environment can make organizations more productive, and how software companies can remain competitive in a globalized market.

    An effort allocation method to optimal code sanitization for quality-aware energy efficiency improvement

    Software energy efficiency has been shown to remarkably affect the energy consumption of IT platforms. Besides the "performance" of code in efficiently accomplishing a task, its "correctness" matters too. Software containing defects is likely to fail, and the computational cost to complete an operation becomes much higher if the user encounters a failure. Both the performance-related energy efficiency of software and its defectiveness are impacted by the quality of the code. Exploiting the relation between code quality and energy/defectiveness attributes is the main idea behind this position paper. Starting from the authors' previous experience in this field, we define a method to first predict which applications of a software system are most likely to impact energy consumption and carry higher residual defectiveness, and then to exploit the prediction to optimally schedule the effort for code sanitization, thus supporting the decision-makers of quality assurance teams with quantitative figures.
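    A hedged sketch of what such an allocation could look like: score each application by predicted energy impact and residual defectiveness, then spread a fixed sanitization budget proportionally. The scores, weights, and application names are invented for illustration; the paper's actual method may differ.

        # Hedged sketch of effort allocation for code sanitization: rank
        # applications by predicted energy impact and residual defectiveness,
        # then split a fixed effort budget proportionally. Invented numbers.
        apps = {
            # name: (predicted energy impact, predicted residual defectiveness)
            "billing":   (0.9, 0.7),
            "reporting": (0.4, 0.2),
            "gateway":   (0.7, 0.9),
            "admin_ui":  (0.2, 0.1),
        }

        BUDGET_HOURS = 200
        W_ENERGY, W_DEFECT = 0.5, 0.5  # assumed equal weighting of the two risks

        scores = {name: W_ENERGY * e + W_DEFECT * d for name, (e, d) in apps.items()}
        total = sum(scores.values())

        allocation = {name: BUDGET_HOURS * s / total for name, s in scores.items()}
        for name, hours in sorted(allocation.items(), key=lambda kv: -kv[1]):
            print(f"{name:10s} -> {hours:5.1f} h of sanitization effort")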

    SecDocker: Hardening the Continuous Integration Workflow

    Current Continuous Integration (CI) processes face significant intrinsic cybersecurity challenges. The idea is not only to solve and test formal or regulatory security requirements of the source code, but also to apply the same principles to the CI pipeline itself. This paper presents an overview of current security issues in the CI workflow. It designs, develops, and deploys a new tool for the secure deployment of a container-based CI pipeline flow without slowing down release cycles. The tool, called SecDocker for its Docker-based approach, is publicly available on GitHub. It implements a transparent application firewall based on a configuration mechanism, avoiding issues in the CI workflow associated with intended or unintended container configurations. Integrated with other DevOps engineering tools, it provides feedback for only those scenarios that match specific patterns, addressing future container security issues. https://link.springer.com/article/10.1007/s42979-021-00939-
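    To illustrate the general idea of gating container configurations against known-bad patterns, here is a generic sketch; it is not SecDocker's actual implementation or configuration format.

        # Generic sketch of vetting container configurations against known-bad
        # patterns before a CI job runs. Not SecDocker's actual format or code.
        FORBIDDEN = [
            ("privileged", True),      # full host access
            ("network_mode", "host"),  # bypasses network isolation
        ]
        FORBIDDEN_MOUNTS = ["/", "/etc", "/var/run/docker.sock"]

        def vet_container(config):
            # Return a list of policy violations for a proposed container config.
            violations = [
                f"{key}={value!r} is not allowed"
                for key, value in FORBIDDEN
                if config.get(key) == value
            ]
            violations += [
                f"mounting {src!r} is not allowed"
                for src in config.get("volumes", [])
                if src in FORBIDDEN_MOUNTS
            ]
            return violations

        job = {"image": "builder:latest", "privileged": True,
               "volumes": ["/var/run/docker.sock"]}
        for problem in vet_container(job):
            print("blocked:", problem)  # the CI step is rejected before it runs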

    Team management strategies for DevOps

    In an increasingly digital market, where time to market is shorter and quality and reliability more relevant, it is imperative that software development teams organize themselves to react faster to the market with greater reliability. DevOps intends to eliminate silos (Development and Operations) and streamline software production, reducing waste and difficulties in its construction, increasing productivity, and developing better products with a focus on client satisfaction. Nevertheless, joining teams around the same goal creates key management challenges, namely the management of conflicts and of information sharing between teams. The way these challenges are managed can interfere with the successful implementation of the DevOps philosophy. Through a Case Study, the research goal is to identify the team management strategies that best help reduce the appearance of conflicts and enhance information sharing in the context of a DevOps implementation, increasing the effectiveness of those teams. As a result, this research presents some strategies to facilitate DevOps team management and reinforces the importance of managing conflicts, processes, tasks, and information well.