5 research outputs found

    Predicting Test Case Verdicts Using Textual Analysis of Committed Code Churns

    Background: Continuous Integration (CI) is an agile software development practice that involves producing several clean builds of the software per day. Creating these builds requires running a large number of automated test executions, which incurs high hardware cost and reduces development velocity. Goal: The goal of our research is to develop a method that reduces the number of test cases executed in each CI cycle. Method: We adopt a design research approach with an infrastructure provider company to develop a method that exploits Machine Learning (ML) to predict test case verdicts for committed source code. We train five different ML models on two data sets and evaluate their performance using two simple retrieval measures: precision and recall. Results: While training the ML models on the first data set of test executions yielded low performance, training on the curated data set improved performance with respect to precision and recall. Conclusion: Our results indicate that the method is applicable when the ML model is trained on churns of small size.
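    A minimal sketch of the kind of pipeline this abstract describes, assuming the churn text is vectorized and fed to an off-the-shelf classifier scored with precision and recall; the toy commits, labels, and choice of TF-IDF plus logistic regression are illustrative assumptions, not the paper's actual setup.

    # Hypothetical sketch: predict test verdicts from the textual content of
    # committed code churns, then report precision and recall.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import precision_score, recall_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Toy corpus: each sample is the diff text of one commit, labeled 1 if an
    # associated test case failed after the commit and 0 if it passed.
    churns = [
        "fix null check in parser",
        "refactor logging, no functional change",
        "change timeout handling in network layer",
        "update copyright headers",
    ]
    verdicts = [1, 0, 1, 0]

    X_train, X_test, y_train, y_test = train_test_split(
        churns, verdicts, test_size=0.5, random_state=0, stratify=verdicts
    )

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    print("precision:", precision_score(y_test, pred, zero_division=0))
    print("recall:   ", recall_score(y_test, pred, zero_division=0))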

    Supporting Continuous Integration by Code-Churn Based Test Selection

    Continuous integration promises advantages in large-scale software development by enabling software development organizations to deliver new functions faster. However, implementing continuous integration in large software development organizations is challenging for organizational, social, and technical reasons. One of the technical challenges is the ability to rapidly prioritize the test cases that can be executed quickly and that trigger the most failures as early as possible. In our research we propose and evaluate a method for selecting a suitable set of functional regression tests at the system level. The method is based on analysis of correlations between test-case failures and source code changes, and it is evaluated by combining semi-structured interviews and workshops with practitioners at Ericsson and Axis Communications in Sweden. The results show that, using measures of precision and recall, the test cases can be prioritized. The prioritization leads to finding an optimal test suite to execute before the integration.
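    A minimal sketch of the correlation idea behind the method, under the assumption that it can be approximated by counting how often each test has failed together with a change to each source file; the history, file names, and test names are made up for illustration.

    # Hypothetical sketch: rank regression tests for a new commit by how often
    # they have failed alongside changes to the same source files in the past.
    from collections import defaultdict

    # Historical record: (changed files, failed tests) for each past cycle.
    history = [
        ({"net/socket.c", "net/tls.c"}, {"test_handshake", "test_timeout"}),
        ({"ui/panel.c"},                {"test_render"}),
        ({"net/socket.c"},              {"test_timeout"}),
    ]

    # Co-failure counts: one point per (file, test) co-occurrence.
    score = defaultdict(int)
    for files, failed in history:
        for f in files:
            for t in failed:
                score[(f, t)] += 1

    def prioritize(changed_files):
        """Rank tests by their accumulated co-failure score for this change."""
        totals = defaultdict(int)
        for (f, t), n in score.items():
            if f in changed_files:
                totals[t] += n
        return sorted(totals, key=totals.get, reverse=True)

    print(prioritize({"net/socket.c"}))  # ['test_timeout', 'test_handshake']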

    Testien valinta ja priorisointi jatkuvassa integraatiossa (Test Case Selection and Prioritization in Continuous Integration)

    For continuous integration (CI) to be effective, building and testing the software must happen as quickly as possible. When a test suite grows large over the lifecycle of the software, testing can become slow and inefficient. Parallelizing test executions speeds up testing, but in addition, test case selection and prioritization can be used. In this case study, we use incremental machine learning techniques to predict failing and passing tests in the test suite of existing software from the space industry and execute only the test cases that are predicted to fail. We apply such test case selection techniques to 35 source-code-modifying commits of the software and compare their performance to traditional coverage-based selection techniques and other heuristics. Second, we apply different incremental machine learning techniques to test case prioritization and compare their performance to traditional coverage-based prioritization techniques. To separate passing and failing tests with machine learning, we combine features that have been used successfully in previous studies, such as code coverage, test history, test durations, and text similarity. The results suggest that certain test case selection and prioritization techniques can enhance testing remarkably, providing significantly better results than random selection and prioritization. Additionally, the incremental machine learning techniques require a learning period of approximately 20 source-code-modifying commits to produce results equal to or better than the comparison techniques in test case selection. Test case prioritization techniques with incremental machine learning perform significantly better than the traditional coverage-based techniques, and they can outperform the traditional techniques in Average Percentage of Faults Detected (APFD) immediately after initial training. We show that machine learning does not need a large amount of training to outperform traditional approaches in test case selection and prioritization. Incremental machine learning therefore suits test case selection and prioritization well when initial training data does not exist.
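    The APFD metric the abstract refers to is standard in test prioritization research: for a prioritized order of n tests and m known faults, APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where TF_i is the position of the first test that detects fault i. A small sketch follows, with an illustrative ordering and fault mapping that are not from the study.

    # Sketch of the APFD (Average Percentage of Faults Detected) computation.
    def apfd(order, faults):
        n, m = len(order), len(faults)
        position = {test: i + 1 for i, test in enumerate(order)}  # 1-based
        # TF_i: position of the first test in the ordering detecting fault i.
        first_hits = [
            min(position[t] for t in detecting if t in position)
            for detecting in faults.values()
        ]
        return 1 - sum(first_hits) / (n * m) + 1 / (2 * n)

    order = ["t3", "t1", "t2", "t4"]              # prioritized test order
    faults = {"f1": {"t1"}, "f2": {"t3", "t4"}}   # fault -> detecting tests
    print(apfd(order, faults))                    # 1 - 3/8 + 1/8 = 0.75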

    Modelo de mejora para pruebas continuas (An Improvement Model for Continuous Testing)

    Continuous Delivery is a practice where high-quality software is built in a way that it can be released into production at any time. However, a systematic literature review and a survey performed as part of this research report that both the literature and the industry still face problems related to testing when using practices such as Continuous Delivery or Continuous Deployment. We therefore propose the Continuous Testing Improvement Model as a solution to the testing problems in continuous software development environments. It brings together proposals and approaches from different authors, presented as good practices grouped by type of test and divided into four levels. These levels indicate an improvement hierarchy and an evolutionary path for the implementation of Continuous Testing. In addition, an application called EvalCTIM was developed to support the appraisal of a testing process using the proposed model. Finally, to validate the model, an action-research methodology was employed through an interpretive theoretical evaluation followed by case studies conducted in real software development projects. The results demonstrate that the model can be used as a solution for implementing Continuous Testing gradually at companies using continuous software development practices.