20 research outputs found

    Identificación de factores clave de éxito para evitar las pruebas automatizadas no determinísticas en servicios REST

    A flaky test is a test that can either pass or fail on the same version of a given piece of software. In continuous software development environments, flaky tests are a problem: it is difficult to build an effective and reliable testing pipeline when the suite contains flaky tests. Moreover, according to many practitioners, despite the persistence of flaky tests in software development, they have not drawn much attention from the research community. In this paper, we describe how a company faced this issue and implemented solutions to eliminate flaky tests for REST web services. The paper concludes by proposing a set of key success factors for preventing flaky tests in this type of testing.
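    The defining property above — the same test, on the same code, yielding different outcomes — suggests a simple rerun-based detector. The sketch below is illustrative only, not the paper's actual approach: the simulated endpoint, the latency jitter and the threshold are all hypothetical stand-ins for a real REST call.

    ```python
    import random

    def unstable_endpoint_check(rng):
        """Simulated REST test: passes only when the (mock) service
        responds in time. The timing jitter makes it nondeterministic."""
        simulated_latency_ms = rng.uniform(50, 150)  # hypothetical jitter
        return simulated_latency_ms < 120            # hypothetical timeout

    def classify_flaky(test_fn, runs=50, seed=0):
        """Rerun the same test against the same code version.
        Mixed outcomes across identical reruns indicate a flaky test."""
        rng = random.Random(seed)
        outcomes = {test_fn(rng) for _ in range(runs)}
        return len(outcomes) > 1  # both pass and fail observed
    ```

    A deterministic test yields a single outcome set over any number of reruns, so `classify_flaky` flags only genuinely nondeterministic tests.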

    Studying the impact of CI on pull request delivery time in open source projects - a conceptual replication

    Nowadays, continuous integration (CI) is indispensable in the software development process. A central promise of adopting CI is that new features or bug fixes can be delivered more quickly. A recent repository mining study by Bernardo, da Costa & Kulesza (2018) found that only about half of the investigated open source projects actually deliver pull requests (PRs) faster after adopting CI, with small effect sizes. However, there are some concerns regarding the methodology used by Bernardo et al., which may limit the trustworthiness of this finding. In particular, they do not explicitly control for normal changes in pull request delivery time over a project’s lifetime (independently of CI introduction). Hence, in our work, we conduct a conceptual replication of this study. In a first step, we replicate their study results using the same subjects and methodology. In a second step, we address the same core research question using an adapted methodology. We use a different statistical method (regression discontinuity design, RDD) that is more robust to the confounding factor of projects potentially getting faster at delivering PRs over time naturally, and we introduce a control group of comparable projects that never applied CI. Finally, we also evaluate the generalizability of the original findings on a set of new open source projects sampled using the same methodology. We find that the results of the study by Bernardo et al. largely hold in our replication. Using RDD, we do not find robust evidence of projects getting faster at delivering PRs without CI, and we similarly do not see a speed-up in our control group that never introduced CI. Further, results obtained from a newly mined set of projects are comparable to the original findings. In conclusion, we consider the replication successful.
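    The core idea of regression discontinuity design as used above — separating a genuine effect of CI adoption from a natural trend — can be sketched minimally: fit a trend before and after the cutoff and measure the jump at the cutoff itself. This is an illustrative toy, not the replication's actual model; function and data names are invented.

    ```python
    from statistics import linear_regression

    def rdd_effect(times, values, cutoff):
        """Minimal regression discontinuity sketch: fit separate linear
        trends before and after the cutoff (e.g. the CI adoption date)
        and measure the jump in the fitted outcome at the cutoff. Unlike
        a plain before/after mean comparison, a smooth trend that
        continues across the cutoff contributes no jump."""
        pre = [(t, v) for t, v in zip(times, values) if t < cutoff]
        post = [(t, v) for t, v in zip(times, values) if t >= cutoff]
        s1, i1 = linear_regression([t for t, _ in pre], [v for _, v in pre])
        s2, i2 = linear_regression([t for t, _ in post], [v for _, v in post])
        return (s2 * cutoff + i2) - (s1 * cutoff + i1)  # discontinuity estimate
    ```

    For delivery times that fall by a constant amount at the cutoff while following the same underlying trend, the estimate recovers exactly that drop.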

    Atualização do Processo de Entrega de Portais

    The integration and deployment of an application is a routine activity in many organizations, creating value for clients and helping developers validate their work. This process, once completely manual, is now automated: it is possible to build, test and deploy an application without any human interaction. Although some organizations are still weighing the pros and cons, the truth is that Continuous Integration, Deployment and Delivery are part of many organizations, helping them quickly discover errors or deliver a fix or a new version without disrupting the work of their users. Grasshopper SI, a 17-year-old organization based in Maia, had not implemented these processes in any of its projects. To evaluate the impact of these practices on the organization, different approaches were studied and the best one was applied to a single project, while also searching for a way to reduce the downtime between updates. This dissertation describes the whole process, from research to evaluation, and assesses the benefits and disadvantages of applying these techniques.
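    One common way to reduce the downtime between updates, as sought in the dissertation above, is to serve through a symlink and switch it atomically. The sketch below is a generic illustration of that pattern, not the dissertation's actual solution; all paths and names are hypothetical.

    ```python
    import os

    def atomic_switch(releases_dir, new_release, current_link):
        """Near-zero-downtime update sketch: the server serves files
        through a 'current' symlink; repointing it at the new release
        directory is a single atomic rename on POSIX, so no request
        ever sees a half-deployed tree."""
        tmp_link = current_link + ".tmp"
        target = os.path.join(releases_dir, new_release)
        os.symlink(target, tmp_link)        # build the new pointer aside
        os.replace(tmp_link, current_link)  # atomically swap it into place
    ```

    Rolling back is the same operation pointed at the previous release directory, which is one reason the pattern is popular for portal-style deployments.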

    The Impact of Code Ownership of DevOps Artefacts on the Outcome of DevOps CI Builds

    This study focuses on factors that may influence the outcomes of CI builds triggered by commits that modify and/or add DevOps artefacts to a project, i.e., DevOps-related CI builds. In particular, code ownership of DevOps artefacts is one such factor. Prior work suggests two main strategies: (1) all project developers contribute to DevOps artefacts, or (2) a dedicated group of developers authors the DevOps artefacts. To analyze which strategy works best for OSS projects, we conduct an empirical analysis of a dataset of 892,193 CircleCI builds spanning 1,689 open-source software projects. First, we investigate the impact of code ownership of DevOps artefacts on the outcome of a CI build at the build level. Second, we study the impact of the skewness of DevOps contributions on the success rate of CI builds at the project level. Our findings reveal that, in general, larger code ownership and higher skewness of DevOps contributions are related to more successful build outcomes and higher rates of successful build outcomes, respectively. However, we also find that projects with low skewness values can have high build success rates if the number of developers in the project is relatively small. Thus, our results suggest that while larger software organizations are better off having dedicated DevOps developers, smaller organizations benefit from having all developers involved in DevOps.
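    The project-level metric above — skewness of DevOps contributions — can be computed from per-developer commit counts with the standard moment-based formula. The sketch and the sample counts below are illustrative assumptions, not the study's actual data or exact metric definition.

    ```python
    def skewness(xs):
        """Moment-based sample skewness of per-developer DevOps commit
        counts: large positive values mean a few developers author most
        DevOps artefacts (the dedicated-team pattern); values near zero
        mean contributions are spread evenly across the team."""
        n = len(xs)
        mean = sum(xs) / n
        m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment
        m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
        return m3 / m2 ** 1.5

    dedicated = [40, 35, 2, 1, 1, 1]  # hypothetical commit counts
    shared = [8, 7, 9, 8, 7, 9]
    ```

    Here `skewness(dedicated)` is strongly positive while `skewness(shared)` is near zero, matching the two ownership strategies contrasted in the abstract.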

    Continuous Integration and Automated Code Review in Open Source Projects

    Due to the increasing popularity of open source projects, a new software methodology has been adopted and is still evolving. This bachelor's thesis deals with this agile software methodology, more precisely with continuous integration and its improvement in a real practical deployment. Furthermore, the thesis addresses the automation of the code review process, especially static code analysis. The thesis aims to describe and explain how continuous integration and automated code review affect and enhance modern open source projects. Based on this research, a modern type of code analysis with further enhancements was proposed and integrated.

    Jatkuvien menetelmien parantaminen järjestelmäpiiri kehityksessä

    This work is about continuous practices in embedded System-on-Chip development. Continuous practices include continuous integration, continuous delivery, and continuous deployment. These practices mean committing small code changes to the repository's main branch often; the changes are then automatically tested and integrated with the rest of the system. In the case of continuous deployment, all changes are automatically deployed to production without any human interaction. Continuous practices are meant to make development faster and more effective, give feedback sooner, and improve quality by reducing bugs. These goals matter in today's continuously changing industry, where customer satisfaction is as important as ever and there is demand to deliver new products and updates rapidly and with high quality. In Nokia Networks' System-on-Chip department there was a need to increase the level of automation by improving the continuous practices. This work studies continuous practices based on the literature and identifies ways to improve the continuous-practice processes used in System-on-Chip development. The implementation was done at the System-on-Chip department's software unit, where the current state was analysed. The main improvement points found in the analysis related to automated functions, investment and working habits. Based on the analysis, an implementation plan was formed. The implementation included adding more functions to the continuous integration server, improving feedback and making the results more visible. These were done by creating more Jenkins jobs and integrating Robot Framework into the testing. Not all improvements were possible within the time scope or without the support of the whole department; those problem points were therefore analysed, and detailed plans were formed to solve them in the near future.

    Version control of an old database system

    This thesis describes a database-centric legacy system. The system was not originally under version control at all, and while version control was introduced later, it was backup-oriented in nature. The business logic of the system was largely coded in database procedures, functions and triggers. Deployments were done by copying previously deployed databases, and over time the system's version history was blurred or entirely forgotten. The problems this causes are identified and their harmfulness is described. The problems are significant and cause harm in different ways depending on the point of view. Programmers find their work more difficult and less appealing when they waste time figuring out the state of different environments and end up doing extra work because of forgotten features, and the state of the software code in general deteriorates. Project management finds specification and planning more difficult for the same reasons, and business-wise all the wasted time ends up costing money. After identifying the problems, solutions are sought in the software engineering literature, with particular attention to cases where similar problems have been solved. Notably, much less has been written about version controlling databases than about version control in general. The biggest challenge in version controlling a database is extracting the business logic from the database into version-controllable scripts; the key part of the various solutions is using tools to make this step easier. However, no methods were found that would make database version control as easy as version control in general. Next, a set of actions is introduced to improve the state of the described system. The actions aim to help avoid overlapping work, make the version history clearer and improve the level of the software code in general. The workload of the actions is estimated using the planning poker method, and the most cost-effective ones are selected for implementation.
The implementation and effectiveness of these actions are then evaluated. Some of the actions were implemented differently than planned, some were not implemented at all, and some unplanned actions were implemented. Overlapping development work could be prevented and the version history became clearer, but maintaining the process created more workload overhead. Changes to this process, and to the organization in general, are then suggested. Finally, it is pointed out that not too much time should be spent fixing a legacy system, and the root cause of the problems can be avoided altogether by allocating more resources to development.
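    The extraction step named above as the biggest challenge — getting business logic out of the database and into version-controllable scripts — can be sketched with a catalogue query. The sketch below uses sqlite3 (whose tables, views and triggers stand in for the thesis's procedures and triggers) purely as an illustration; the thesis's actual database and tooling are not specified here.

    ```python
    import os
    import sqlite3

    def export_schema(db_path, out_dir):
        """Pull every object definition out of the database catalogue
        and write one .sql file per object, ready to commit to version
        control, so schema history stops living only inside copies of
        deployed databases."""
        con = sqlite3.connect(db_path)
        rows = con.execute(
            "SELECT type, name, sql FROM sqlite_master WHERE sql IS NOT NULL"
        ).fetchall()
        con.close()
        os.makedirs(out_dir, exist_ok=True)
        for obj_type, name, ddl in rows:
            path = os.path.join(out_dir, f"{obj_type}_{name}.sql")
            with open(path, "w") as f:
                f.write(ddl + ";\n")
        return sorted(os.listdir(out_dir))
    ```

    Re-running the export after each change and diffing the resulting files is what makes the blurred version history described above recoverable going forward.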