45 research outputs found

    Hatékony rendszer-szintű hatásanalízis módszerek és alkalmazásuk a szoftverfejlesztés folyamatában = Efficient whole-system impact analysis methods with applications in software development

    During software change impact analysis, we assess the consequences of changes made to a software system, which has important applications in, for instance, change propagation, cost estimation, software quality, and testing. We developed impact analysis methods that can be used effectively and efficiently even for large, real-life applications with heterogeneous architectures. Previously available methods could provide results only in limited environments and for systems of limited size. Apart from the enhancements developed for existing static and dynamic slicing and dependence analysis algorithms, we achieved results in related areas such as the investigation of dependences based on metrics, conceptual coupling, quality models, and the prediction of defects and productivity. These areas mostly support the application of the methods in practice. We also contributed in the fields of special technologies, for instance, dependences in database systems and the analysis of low-level languages. Regarding the applications of impact analysis, we developed novel methods for test optimization, test coverage measurement and prioritization, and change propagation. The developed methods provided the basis for further projects, including extensions of certain software products
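
Impact analysis of this kind is commonly framed as reachability over a dependence graph: a change to one entity potentially affects everything that transitively depends on it. A minimal sketch (the module names and edges below are invented for illustration, not taken from the thesis):

```python
from collections import defaultdict, deque

# Hypothetical dependence map: DEPENDS_ON[x] lists the entities x depends on.
DEPENDS_ON = {
    "ui.render": ["core.format"],
    "core.format": ["core.parse"],
    "report.export": ["core.format"],
    "core.parse": [],
}

def impact_set(changed, depends_on):
    """Return every entity transitively impacted by changing `changed`."""
    # Invert the map: dependents[x] = entities that depend directly on x.
    dependents = defaultdict(set)
    for node, deps in depends_on.items():
        for d in deps:
            dependents[d].add(node)
    # Breadth-first traversal along reverse dependence edges.
    impacted, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in dependents[node]:
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return impacted

print(sorted(impact_set("core.parse", DEPENDS_ON)))
# ['core.format', 'report.export', 'ui.render']
```

Real whole-system analyses must also build this graph across heterogeneous technologies (e.g. database schemas and low-level code), which is where most of the engineering effort lies.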

    A review paper: optimal test cases for regression testing using artificial intelligent techniques

    The goal of the testing process is to find errors and defects in the software being developed so that they can be fixed before the software is delivered to the customer. Regression testing is an essential testing technique during the maintenance phase of a program, as it is performed to ensure the integrity of the program after modifications have been made. As the software evolves, the test suite becomes too large to be executed in full within the given test budget and time. Therefore, the cost of regression testing should be reduced; to this end we review several methods, such as the retest-all technique, regression test selection (RTS), and test case prioritization (TCP). The efficiency of these techniques is evaluated using metrics such as the average percentage of faults detected (APFD), average percentage of block coverage (APBC), and average percentage of decision coverage (APDC). In this paper, we review the artificial intelligence techniques used in test case selection and prioritization, together with the metrics used to evaluate their efficiency, and identify which perform best
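
The APFD metric mentioned above rewards orderings that expose faults early. As a minimal illustration (the test names and fault sets below are invented), using the standard formula APFD = 1 - (ΣTF_i)/(n·m) + 1/(2n), where TF_i is the position of the first test revealing fault i:

```python
def apfd(test_order, faults_detected):
    """Average Percentage of Faults Detected for a given test ordering.

    test_order: ordered list of test names.
    faults_detected: dict mapping test name -> set of fault ids it reveals.
    """
    n = len(test_order)
    all_faults = set().union(*faults_detected.values())
    m = len(all_faults)
    # TF_i: 1-based position of the first test that detects fault i.
    first_pos = {}
    for pos, test in enumerate(test_order, start=1):
        for fault in faults_detected.get(test, set()):
            first_pos.setdefault(fault, pos)
    tf_sum = sum(first_pos[f] for f in all_faults)
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

faults = {"t1": {"f1", "f2"}, "t2": set(), "t3": {"f3"}}
print(apfd(["t1", "t3", "t2"], faults))  # ~0.722: faults found early
print(apfd(["t2", "t3", "t1"], faults))  # ~0.278: faults found late
```

The same fault matrix scores very differently depending on ordering, which is why APFD is the standard yardstick for prioritization techniques.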

    Efficient Regression Testing Based on Test History: An Industrial Evaluation

    Due to a shift in development practices at Axis Communications towards continuous integration, faster regression testing feedback is needed. The current automated regression test suite takes approximately seven hours to run, which prevents developers from integrating code changes several times a day as preferred. Therefore, we want to implement a highly selective yet accurate regression testing strategy. Traditional code-coverage-based techniques are not applicable due to the size and complexity of the software under test. Instead, we decided to select tests based on regression test history. We developed a tool, the Difference Engine, which parses and analyzes results from previous test runs and outputs regression test recommendations. The Difference Engine correlates code and test cases at the package level and recommends test cases that are strongly correlated with recently changed packages. We evaluated the technique with respect to correctness, precision, recall, and efficiency. Our results are promising: on average, the tool identifies 80% of the relevant tests while recommending only 4% of the test cases in the full regression test suite
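
The Difference Engine's exact correlation scheme is not given in the abstract; the sketch below shows one plausible history-based selection rule under that general idea (the package names, history, and threshold are invented): recommend tests whose past failures frequently co-occurred with changes to the affected packages.

```python
from collections import defaultdict

# Hypothetical test history: each run records changed packages and failed tests.
HISTORY = [
    {"changed": {"net"}, "failed": {"test_http", "test_dns"}},
    {"changed": {"net", "ui"}, "failed": {"test_http"}},
    {"changed": {"ui"}, "failed": {"test_render"}},
    {"changed": {"net"}, "failed": {"test_http"}},
]

def recommend(changed_packages, history, threshold=0.5):
    """Recommend tests strongly correlated with the changed packages:
    P(test failed | package changed) >= threshold."""
    fail_count = defaultdict(int)    # (package, test) -> co-occurrence count
    change_count = defaultdict(int)  # package -> number of runs it changed in
    for run in history:
        for pkg in run["changed"]:
            change_count[pkg] += 1
            for test in run["failed"]:
                fail_count[(pkg, test)] += 1
    recommended = set()
    for pkg in changed_packages:
        if not change_count[pkg]:
            continue  # no history for this package
        for (p, test), c in fail_count.items():
            if p == pkg and c / change_count[pkg] >= threshold:
                recommended.add(test)
    return recommended

print(recommend({"net"}, HISTORY))  # {'test_http'}
```

With this rule, `test_dns` is skipped because it failed in only one of three runs where `net` changed; tuning the threshold trades recall against suite size, mirroring the 80%/4% trade-off reported above.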

    Testien valinta ja priorisointi jatkuvassa integraatiossa = Test case selection and prioritization in continuous integration

    For continuous integration (CI), it is beneficial that building and testing the software happens as quickly as possible. When a test suite grows large during the lifecycle of the software, testing becomes slow and inefficient. Parallelizing test executions speeds up testing, but test case selection and prioritization can be used in addition. In this case study, we use incremental machine learning techniques to predict failing and passing tests in the test suite of existing software from the space industry, and we execute only the test cases predicted to fail. We apply such test case selection techniques to 35 source-code-modifying commits of the software and compare their performance to traditional coverage-based selection techniques and other heuristics. Secondly, we apply different incremental machine learning techniques to test case prioritization and compare their performance to traditional coverage-based prioritization techniques. To separate passing and failing tests with machine learning, we combine features that have been used successfully in previous studies, such as code coverage, test history, test durations, and text similarity. The results suggest that certain test case selection and prioritization techniques can enhance testing remarkably, providing significantly better results than random selection and prioritization. Additionally, the incremental machine learning techniques require a learning period of approximately 20 source-code-modifying commits to produce equal or better results than the comparison techniques in test case selection. Test case prioritization with incremental machine learning performs significantly better than the traditional coverage-based techniques and can surpass them in average percentage of faults detected (APFD) immediately after initial training. We show that machine learning does not need a large amount of training to outperform traditional approaches in test case selection and prioritization. Therefore, incremental machine learning suits test case selection and prioritization well when initial training data does not exist
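
The incremental setup described above, where the model is updated after each commit's test run, can be sketched with a tiny online learner. The perceptron, feature set, and pass/fail rule below are invented stand-ins for the coverage, history, duration, and text-similarity features the thesis combines:

```python
import random

class OnlinePerceptron:
    """Minimal incremental classifier: learns to predict test failures
    one commit at a time, in the spirit of a partial-fit workflow."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0  # 1 = predicted to fail

    def update(self, x, y):
        # Standard perceptron rule: adjust weights only on mispredictions.
        err = y - self.predict(x)
        if err:
            self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

# Hypothetical per-test features: [coverage overlap with the change,
# recent failure rate, commit/test text similarity].
model = OnlinePerceptron(n_features=3)
rng = random.Random(0)
for commit in range(30):  # a learning period of a few dozen commits
    tests = [[rng.random() for _ in range(3)] for _ in range(5)]
    outcomes = [1 if t[1] > 0.7 else 0 for t in tests]  # pretend ground truth
    selected = [i for i, t in enumerate(tests) if model.predict(t) == 1]
    for t, y in zip(tests, outcomes):  # incremental update after the run
        model.update(t, y)
```

Because the model starts with zero weights and learns from each run, it needs no pre-existing training set, which matches the study's observation that incremental techniques become competitive after roughly 20 commits.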

    Supporting Development Decisions with Software Analytics

    Software practitioners make technical and business decisions based on the understanding they have of their software systems. This understanding is grounded in their own experiences, but can be augmented by studying various kinds of development artifacts, including source code, bug reports, version control meta-data, test cases, usage logs, etc. Unfortunately, the information contained in these artifacts is typically not organized in a way that is immediately useful to developers' everyday decision-making needs. To handle the large volumes of data, many practitioners and researchers have turned to analytics, that is, the use of analysis, data, and systematic reasoning for making decisions. The thesis of this dissertation is that by applying software analytics to various development tasks and activities, we can give software practitioners better insights into their processes, systems, products, and users, helping them make more informed, data-driven decisions. While quantitative analytics can help project managers understand the big picture of a system, plan for its future, and monitor trends, qualitative analytics can enable developers to perform their daily tasks and activities more quickly by helping them better manage high volumes of information. To support this thesis, we provide three examples of applying software analytics. First, we show how analysis of real-world usage data can be used to assess users' dynamic behaviour and the adoption trends of a software system, revealing valuable information on how software systems are used in practice. Second, we have created a lifecycle model that synthesizes knowledge from software development artifacts, such as reported issues, source code, discussions, and community contributions. Lifecycle models capture the dynamic nature of how various development artifacts change over time in an annotated graphical form that can be easily understood and communicated.
We demonstrate how lifecycle models can be generated and present industrial case studies in which we apply these models to assess the code review process of three different projects. Third, we present a developer-centric approach to issue tracking that aims to reduce information overload and improve developers' situational awareness. Our approach is motivated by a grounded theory study of developer interviews, which suggests that customized views of a project's repositories, tailored to developer-specific tasks, can help developers better track their progress and understand the surrounding technical context of their working environments. We have created a model of the kinds of information elements that developers feel are essential to completing their daily tasks, and from this model we have developed a prototype tool organized around developer-specific customized dashboards. The results of these three studies show that software analytics can inform evidence-based decisions related to user adoption of a software project and code review processes, and can improve developers' awareness of their daily tasks and activities

    Token-Level Fuzzing

    Fuzzing has become a commonly used approach to identifying bugs in complex, real-world programs. However, interpreters are notoriously difficult to fuzz effectively, as they expect highly structured inputs, which are rarely produced by most fuzzing mutations. For this class of programs, grammar-based fuzzing has been shown to be effective. Tools based on this approach can find bugs in the code that is executed after parsing the interpreter inputs, by following language-specific rules when generating and mutating test cases. Unfortunately, grammar-based fuzzing is often unable to discover subtle bugs associated with the parsing and handling of the language syntax. Additionally, if the grammar provided to the fuzzer is incomplete, or does not match the implementation completely, the fuzzer will fail to exercise important parts of the available functionality. In this paper, we propose a new fuzzing technique, called Token-Level Fuzzing. Instead of applying mutations either at the byte level or at the grammar level, Token-Level Fuzzing applies mutations at the token level. Evolutionary fuzzers can leverage this technique to both generate inputs that are parsed successfully and generate inputs that do not conform strictly to the grammar. As a result, the proposed approach can find bugs that neither byte-level fuzzing nor grammar-based fuzzing can find. We evaluated Token-Level Fuzzing by modifying AFL and fuzzing four popular JavaScript engines, finding 29 previously unknown bugs, several of which could not be found with state-of-the-art byte-level and grammar-based fuzzers
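
To illustrate the core idea (the toy tokenizer and token pool below are simplified stand-ins, not the paper's AFL modification): mutations replace whole tokens rather than individual bytes, so outputs stay lexically plausible without necessarily conforming to the grammar.

```python
import random
import re

# Toy lexer for a JavaScript-like language: identifiers/keywords,
# numbers, and single punctuation characters.
TOKEN_RE = re.compile(r"[A-Za-z_$][\w$]*|\d+|[^\s\w]")
KEYWORDS = ["var", "let", "function", "return", "if", "typeof", "new"]

def tokenize(src):
    return TOKEN_RE.findall(src)

def mutate_tokens(tokens, rng, n_mutations=1):
    """Token-level mutation: swap whole tokens, so the result is still a
    valid token stream even when it violates the language grammar."""
    out = list(tokens)
    for _ in range(n_mutations):
        i = rng.randrange(len(out))
        out[i] = rng.choice(KEYWORDS + ["0", "(", ")", "x"])
    return " ".join(out)

rng = random.Random(1)
seed = "var x = f(1);"
print(tokenize(seed))  # ['var', 'x', '=', 'f', '(', '1', ')', ';']
for _ in range(3):
    print(mutate_tokens(tokenize(seed), rng))
```

Byte-level mutation of the same seed would mostly produce lexer errors, while a strict grammar-based generator would never emit, say, `var typeof = f(1);`; token-level mutation occupies the middle ground the paper targets.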

    Rotten Green Tests: A First Analysis

    Unit tests are a tenet of agile programming methodologies and are widely used to improve code quality and prevent code regression. A passing (green) test is usually taken as a robust sign that the code under test is valid. However, we have noticed that some green tests contain assertions that are never executed; these tests pass not because they assert properties that are true, but because they assert nothing at all. We call such tests Rotten Green Tests. Rotten Green Tests represent a worst case: they report that the code under test is valid, but in fact do nothing to test that validity beyond checking that the code does not crash. We describe an approach to identify rotten green tests by combining simple static and dynamic analyses. Our approach takes into account test helper methods, inherited helpers, and trait compositions, and has been implemented in a tool called DrTest. We have applied DrTest to several test suites in Pharo 7.0 and identified many rotten tests, including some that have been "sleeping" in Pharo for at least 5 years
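
DrTest itself targets Pharo and combines static and dynamic analysis. As a minimal Python sketch of the dynamic half (the helper and test names are illustrative): record which assertions actually execute during a green run, and flag declared assertions that never ran.

```python
executed = []

def check(label, cond):
    executed.append(label)  # record that this assertion actually ran
    assert cond, label

def run_and_find_rotten(test, declared):
    """Run a (passing) test and report declared assertions that never ran."""
    executed.clear()
    test()  # the test is green: it raises nothing
    return set(declared) - set(executed)

def test_rotten():
    items = []
    for item in items:                    # the loop body never executes,
        check("item_positive", item > 0)  # so this assertion never runs

print(run_and_find_rotten(test_rotten, {"item_positive"}))  # {'item_positive'}
```

The test above passes and looks green, yet asserts nothing; a static pass that inventories assertions (including those reached through helpers) paired with this execution record is what exposes it as rotten.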

    The Social Market Economy as a Formula for Peace, Prosperity, and Sustainability

    The social market economy was developed in Germany during the interwar period amidst political and economic turmoil. With clear demarcation lines differentiating it from socialism and laissez-faire capitalism, the social market economy became a formula for peace and prosperity in post-WWII Germany. Since then, the success of the social market economy has inspired many other countries to adopt its principles. Drawing on evidence from economic history and the history of economic thought, this thesis first reviews the evolution of the fundamental principles that form the foundation of social-market economic thought. Blending the microeconomic utility-maximization framework with traditional growth theory, I provide theoretical support that aggregate social welfare is maximized in a stylized social market economy. Despite the presence of extensive qualitative research, no attempts have yet been made to measure social market economic performance empirically or to quantify the effects of social market economic principles on peace and prosperity. Thus, I explore potential indicators to develop a social market economic performance index. I provide empirical evidence supporting the notion that the application of social market economic principles carries a social peace dividend, creates more equal opportunity, promotes ecological sustainability, and generates higher per capita incomes. I use the empirical results to build an interactive web application that allows for the simulation, assessment, and visualization of the economic-performance effects of applying social market economic principles to the economies of 165 countries. Lastly, the interactive web application also allows for modification of the social market economic principles and reports the estimated impact on peace and prosperity in these countries