
    Predicting Software Fault Proneness Using Machine Learning

    Context: Continuous Integration (CI) is a DevOps technique that is widely used in practice, and studies show that its adoption rates will increase even further. At the same time, it is argued that maintaining product quality requires extensive and time-consuming testing and code reviews. In this context, if not done properly, shorter sprint cycles and agile practices entail a higher risk for the quality of the product. It has been reported in the literature [68] that a lack of proper test strategies, poor test quality, and team dependencies are some of the major challenges encountered in continuous integration and deployment. Objective: The objective of this thesis is to bridge the process discontinuity that exists between development teams and testing teams, due to continuous deployments and shorter sprint cycles, by providing a list of potentially buggy or high-risk files that testers can use to prioritize code inspection and testing, thus reducing the time between development and release. Approach: Our approach is based on a five-step process. The first step is to select a set of systems, a set of code metrics, a set of repository metrics, and a set of machine learning techniques to consider for training and evaluation purposes. The second step is to devise appropriate client programs to extract and record information obtained from GitHub repositories and source code analyzers. The third step is to use this information to train models using the selected machine learning techniques; this step allowed us to identify the best-performing techniques among those initially selected. The fourth step is to combine the models with a voting classifier (with equal weights) and provide answers to five research questions pertaining to the prediction capability and generality of the obtained fault proneness prediction framework.
The fifth step is to select the best-performing predictors and apply them to two systems written in a completely different language (C++) in order to evaluate their performance in a new environment. Obtained Results: The obtained results indicate that a) the best models were the ones applied to the same system they were trained on; b) the models trained using repository metrics outperformed the ones trained using code metrics; c) the models trained using code metrics proved inadequate for predicting fault-prone modules; d) the use of machine learning as a tool for building fault-proneness prediction models is promising, but there is still work to be done, as the models show weak to moderate prediction capability. Conclusion: This thesis provides insights into how machine learning can be used to predict whether a source code file contains one or more faults that may contribute to a major system failure. The proposed approach utilizes information extracted both from the system's source code, such as code metrics, and from a series of DevOps tools, such as bug repositories, version control systems, and test automation frameworks. The study involved five Java and five Python systems and indicated that machine learning techniques have potential towards building models for alerting developers about failure-prone code.
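The equal-weight voting step described above can be sketched as follows. This is a minimal illustration with synthetic data, not the thesis's actual pipeline; the choice of base learners and feature set is an assumption.

```python
# Hypothetical sketch of an equal-weight voting classifier for fault proneness.
# Features stand in for per-file code/repository metrics; labels for buggy (1)
# vs. clean (0) files. All data here is synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Equal weights: each base learner's vote counts the same.
voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="soft",  # average the predicted probabilities
)
voter.fit(X_train, y_train)

# Rank files by predicted fault probability so testers can prioritize them.
risk = voter.predict_proba(X_test)[:, 1]
ranked = sorted(range(len(risk)), key=lambda i: -risk[i])
print(ranked[:5])  # indices of the five highest-risk "files"
```

The ranked list is what a tester would consume: files at the top of the ranking get inspected and tested first.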

    Empirically-Grounded Construction of Bug Prediction and Detection Tools

    There is an increasing demand for high-quality software, as software bugs have an economic impact not only on software projects but also on national economies in general. Software quality is achieved via the main quality assurance activities of testing and code reviewing. However, these activities are expensive, so they need to be carried out efficiently. Auxiliary software quality tools such as bug detection and bug prediction tools help developers focus their testing and reviewing activities on the parts of software that are more likely to contain bugs. However, these tools are far from being adopted as mainstream development tools. Previous research points to their inability to adapt to the peculiarities of projects and their high rate of false positives as the main obstacles to their adoption. We propose empirically-grounded analysis to improve the adaptability and efficiency of bug detection and prediction tools. For a bug detector to be efficient, it needs to detect bugs that are conspicuous, frequent, and specific to a software project. We empirically show that null-related bugs fulfill these criteria and are worth building detectors for. We analyze the null dereferencing problem and find that its root cause lies in methods that return null. We propose an empirical solution to this problem that depends on the wisdom of the crowd. For each API method, we extract a nullability measure that expresses how often the return value of this method is checked against null in the ecosystem of the API. We use nullability to annotate API methods with nullness annotations and warn developers about missing and excessive null checks. For a bug predictor to be efficient, it needs to be optimized both as a machine learning model and as a software quality tool. We empirically show how feature selection and hyperparameter optimization improve prediction accuracy.
Then we optimize bug prediction to locate the maximum number of bugs in the minimum amount of code by finding the most cost-effective combination of bug prediction configurations, i.e., dependent variables, machine learning model, and response variable. We show that using both source code and change metrics as dependent variables, applying feature selection on them, and then using an optimized Random Forest to predict the number of bugs results in the most cost-effective bug predictor. Throughout this thesis, we show how empirically-grounded analysis helps us achieve efficient bug prediction and detection tools and adapt them to the characteristics of each software project.
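The nullability measure described above can be sketched as a simple per-method ratio: the fraction of call sites in the ecosystem that check the returned value against null. The call-site data and method names below are illustrative placeholders, not mined data.

```python
# Minimal sketch of the "nullability" idea: for each API method, the fraction
# of its call sites whose return value is null-checked. Toy data only.
from collections import defaultdict

# (method, was_return_value_null_checked) pairs, as mined from client code.
call_sites = [
    ("Map.get", True), ("Map.get", True), ("Map.get", False),
    ("List.size", False), ("List.size", False),
    ("File.listFiles", True),
]

checked = defaultdict(int)
total = defaultdict(int)
for method, null_checked in call_sites:
    total[method] += 1
    checked[method] += null_checked

nullability = {m: checked[m] / total[m] for m in total}

# A high nullability suggests annotating the method as nullable and warning
# on missing null checks; a low one suggests flagging excessive checks.
print(nullability)
```

In practice the thresholds separating "annotate as nullable" from "flag excessive checks" would be calibrated empirically; the point here is only the crowd-derived ratio itself.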

    Architectures for dependable modern microprocessors

    Technology scaling, extreme chip integration and the compelling requirement to diminish the time-to-market window have rendered microprocessors more prone to design bugs and hardware faults.
Microprocessor validation is grouped into the following categories, based on where they intervene in a microprocessor's lifecycle: (a) Silicon debug: the first hardware prototypes are exhaustively validated, (b) Manufacturing testing: the final quality control during mass production, and (c) In-field verification: runtime error detection techniques that guarantee correct operation. The contributions of this thesis are the following: (1) Silicon debug: we propose the employment of deconfigurable microprocessor architectures along with a technique to generate self-checking random test programs, to avoid the simulation step and triage redundant debug sessions; (2) Manufacturing testing: we propose a self-test optimization strategy for multithreaded, multicore microprocessors to speed up test program execution time and enhance the fault coverage of hard errors; and (3) In-field verification: we measure the effect of permanent faults on performance components, and then propose a set of low-cost mechanisms for detection, diagnosis, and performance recovery in the front-end speculative structures. This thesis introduces various novel methodologies to address the validation challenges posed throughout the life-cycle of a chip.

    Integrative assessment of Badland erosion dynamics in the Oltrepo area

    The present work is the result of three years of investigations on soil erosion forms and features in the Oltrepo Pavese, Northern Apennines, Italy. The aim of the work is to review, from a modern and scientific point of view, the badlands that crop out in the study area, as well as to improve methodologies for studying sediment dynamics in badland areas. Badlands are the result of a complex interaction between sub-surface and surface runoff soil erosion processes and are a hotspot for biodiversity and geodiversity. In addition, badlands have always been a fundamental environment for soil erosion investigations. This work is based on the following four principal steps: i) the geological and structural characterisation of the study area, ii) the description of badland forms and features, iii) a probabilistic approach to determine soil erosion susceptible areas in the Oltrepo Pavese, and iv) the assessment of suspended sediment dynamics at catchment scale. This study highlights a complex geological and structural sector of the Northern Apennines characterised by soft sedimentary bedrock materials that are prone to erosion by running water. Initially, a litho-structural map was assembled, and the geological formations of the study area were grouped according to their lithology. The map represents a homogeneous base of information to classify the badlands of the study area from a lithological point of view and will become a raster base for spatial multilayer analysis. Subsequently, a geological, geomorphological and morphometrical classification of the badlands that crop out in the study area was performed through field survey and detailed terrain analysis based on Digital Terrain Models (DTM). The Oltrepo Pavese badlands were classified into types A and B according to their morphology and vegetation conditions. The badlands show high heterogeneity and can be closely related to melange bedrock, claystone and interstratified rocks.
Furthermore, the badlands show the typical characteristics of Apennine badlands, even if certain morphological differences were noted. This study also highlights the importance of rainfall characteristics and land use changes, which play an important role in the development and stabilisation of badland forms and features. The land use change induced by planting operations (afforestation) and the reduction of agricultural activities in the area, as well as the reduction of the precipitation amount, lead to a shrinking of the badlands. Through a detailed terrain analysis and the application of the Maximum Entropy model (MaxEnt), three susceptibility maps were generated for the badland and rill-interrill erosion forms. The predictor analysis highlighted that the most important predictors (i.e. lithology, land use, elevation) can significantly explain the diversity between calanchi types A and B. However, less significant predictors, e.g. Vertical Distance to Channel Network, Valley Depth and Catchment Area, are fundamental to understanding the development of the two morphotypes. Finally, the study reveals for the first time the dynamics between precipitation, discharge and suspended sediment transport in a small watershed basin situated in the Northern Apennines. A laser diffraction instrument was installed at the outlet of a small watershed basin deeply affected by water erosion, and the sediment diameter and concentration were evaluated with respect to rainfall. The initial moisture condition, hydrophobicity, vegetation cover, and physical conditions of the basin play a fundamental role in the assessment of sediment dynamics. Finally, the study highlights the importance of correct land management to reduce badland erosion in the Apennine region.
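As a rough sketch of the susceptibility-mapping step: MaxEnt modeling of presence-only data is closely related to regressing presence cells against background cells. The snippet below illustrates the idea with logistic regression; the predictor set (lithology, land use, elevation) follows the text, but the data is synthetic and the modeling shortcut is an assumption, not the study's actual MaxEnt workflow.

```python
# Illustrative sketch of probabilistic susceptibility mapping: presence
# (badland) vs. background cells regressed on terrain predictors.
# All data below is synthetic; real inputs would be raster layers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
# Toy per-cell predictors: coded lithology, land use, elevation.
X = rng.normal(size=(n, 3))
# Toy presence (1 = badland cell) / background (0) labels.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
susceptibility = model.predict_proba(X)[:, 1]  # per-cell susceptibility score
print(susceptibility[:3])
```

Mapping the per-cell scores back onto the raster grid yields the susceptibility map; the actual study produced three such maps for badland and rill-interrill erosion forms.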

    Modeling User-Affected Software Properties for Open Source Software Supply Chains

    Background: The Open Source Software development community relies heavily on users of the software and on contributors outside of the core developers to produce top-quality software and provide long-term support. However, how a software project and its contributors are related through dependencies, and how the users of a software project affect many of its properties, are not well understood. Aim: My research covers a number of aspects related to answering the overarching question of modeling the software properties affected by users and the supply chain structure of software ecosystems, viz. 1) understanding how software usage affects its perceived quality; 2) estimating the effects of indirect usage (e.g. dependent packages) on software popularity; 3) investigating the patch submission and issue creation patterns of external contributors; 4) examining how the patch acceptance probability is related to the contributors' characteristics; and 5) a related topic, the identification of bots that commit code, aimed at improving the accuracy of these and other similar studies. Methodology: Most of the research questions are addressed by studying the NPM ecosystem, with data from various sources like the World of Code, GHTorrent, and the GitHub API. Different supervised and unsupervised machine learning models, including regression, Random Forest, Bayesian networks, and clustering, were used to answer the appropriate questions. Results: 1) Software usage affects its perceived quality even after accounting for code complexity measures. 2) The number of dependents and dependencies of a software package were observed to predict the change in its popularity with good accuracy. 3) Users interact (contribute issues or patches) primarily with their direct dependencies, and rarely with transitive dependencies.
4) A user's earlier interaction with the repository to which they are contributing a patch, and their familiarity with related topics, were important predictors impacting the chance of a pull request getting accepted. 5) We developed BIMAN, a systematic methodology for identifying bots. Conclusion: Different aspects of how users and their characteristics affect different software properties were analyzed, which should lead to a better understanding of the complex interaction between software developers and users/contributors.
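Result (2) above, predicting a package's popularity change from its dependency counts, can be illustrated with a small synthetic sketch; the model choice (Random Forest regression) and the data are assumptions for illustration, not the study's actual setup.

```python
# Illustrative sketch: predict change in package popularity from the number
# of dependents and dependencies. Data is synthetic; variable names are
# placeholders, not fields from World of Code or GHTorrent.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
n_dependents = rng.poisson(5, n)     # packages that depend on this one
n_dependencies = rng.poisson(10, n)  # packages this one depends on
# Synthetic "ground truth": dependents drive popularity growth.
delta_popularity = 2.0 * n_dependents - 0.3 * n_dependencies + rng.normal(0, 1, n)

X = np.column_stack([n_dependents, n_dependencies])
model = RandomForestRegressor(random_state=0).fit(X, delta_popularity)
print(round(model.score(X, delta_popularity), 2))  # in-sample R^2
```

On real ecosystem data one would of course evaluate on held-out packages rather than in-sample; the sketch only shows the shape of the prediction task.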

    Investigating the Impact of Personal, Temporal and Participation Factors on Code Review Quality

    Code review is an essential element of any mature software development project; it aims at evaluating code contributions submitted by developers.
In principle, code review should improve the quality of code changes (patches) before they are committed to the project's master repository. In practice, the execution of this process can still allow bugs to get in unnoticed. In this thesis, we present an empirical study investigating code review in a large open source project. We explore the relationship between reviewers' code inspections and the personal, temporal and participation factors that might affect the quality of such inspections. We first report a quantitative study in which we applied the SZZ algorithm to detect bug-inducing changes that were then linked to the code review information extracted from the issue tracking system. We found that the reasons why reviewers miss bugs are related both to their personal characteristics and to the technical properties of the patches under review. We then report a qualitative study that solicits opinions from Mozilla developers on their perception of the attributes associated with a well-done code review. The results of our survey suggest that developers find both technical factors (patch size, number of chunks, and module) and personal factors (reviewer's experience and review queue) to be strong contributors to review quality.
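The SZZ step described above can be sketched in a few lines: blame the lines a bug-fixing change deleted back to the commits that last touched them; those commits are the bug-inducing candidates. The history below is a toy stand-in for real `git blame` output.

```python
# Simplified sketch of the core SZZ heuristic. In practice one would run
# `git blame` on the fix's parent revision; here a dict plays that role.

# line number -> commit that last modified it (as git blame would report).
blame = {10: "c1", 11: "c3", 12: "c1", 30: "c2"}

def szz_candidates(deleted_lines, blame):
    """Commits that introduced the lines a bug fix deleted."""
    return {blame[line] for line in deleted_lines if line in blame}

# A fix that deleted lines 10 and 11 implicates commits c1 and c3.
print(sorted(szz_candidates([10, 11], blame)))  # → ['c1', 'c3']
```

Real SZZ implementations add filters (e.g. ignoring whitespace-only and comment-only changes, and commits later than the bug report date) that this sketch omits.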

    Reducing Object-Oriented Testing Cost Through the Analysis of Antipatterns

    Our modern society is highly computer dependent, so the availability and reliability of programs are crucial. Although expensive, software testing remains the primary means to ensure software availability and reliability. Unfortunately, the main features of the object-oriented (OO) paradigm, one of the most popular paradigms, complicate testing activities. This thesis is a contribution to the global effort to reduce OO software testing cost and to increase its reliability. Our first contribution is an empirical study to gather evidence on the impact of antipatterns on OO unit testing. Antipatterns are recurring poor design or implementation choices. Past and recent studies showed that antipatterns negatively impact many software quality attributes, such as maintainability and understandability. Other studies also report that classes participating in antipatterns are more change- and defect-prone than other classes. However, our study is the first regarding the impact of antipatterns on the cost of OO unit testing. The results show that antipatterns indeed have a negative effect on OO unit testing cost: antipattern (AP) classes are in general more expensive to test than other classes. They also reveal that testing AP classes in priority may be cost-effective and may allow detecting most of the defects early. Our second contribution is a new approach to the problem of class integration test order (CITO), with the goals of minimizing the cost related to the order and increasing early defect detection.
The CITO problem is one of the major problems when integrating classes in OO programs. Indeed, the order in which classes are tested during integration determines not only the cost (stubbing cost) but also the order in which defects are detected. Most approaches proposed to solve the CITO problem focus on minimizing the cost of stubs. In addition to this goal, our approach aims to increase early defect detection capability, which is one of the most important objectives in testing, as it increases the cost-effectiveness of testing. An empirical study shows the superiority of our approach over existing approaches in providing balanced orders: orders that minimize stubbing cost while maximizing early defect detection. In our third contribution, we analyze and improve the usability of Madum testing, one of the unit testing strategies proposed to overcome the limitations of traditional testing when testing OO programs. Unlike other OO unit testing strategies, Madum testing requires only the source code to identify test cases. Madum testing is thus a good candidate for automation, which is one of the best ways to reduce testing cost and increase reliability. Automating Madum testing can help to test AP classes thoroughly while reducing the testing cost. However, Madum testing does not define coverage criteria, which are a prerequisite for using the strategy and for automatically generating test data. Moreover, one of the key factors in the cost of using Madum testing is the number of transformers (methods that modify a given attribute). To reduce testing cost and make Madum testing easier to use, we propose refactoring actions to reduce the number of transformers and formal coverage criteria to guide the generation of Madum test data. We also formulate the problem of generating test data for Madum testing as a search-based problem.
Thus, based on the evidence we gathered on the impact of antipatterns on OO testing, we reduce the cost of OO unit and integration testing.
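The balanced-order idea can be sketched with a tiny greedy heuristic: among the classes whose dependencies are already integrated (so they need no stubs), integrate the most fault-prone first. The dependency graph, risk scores, and heuristic below are illustrative assumptions (and assume an acyclic graph), not the thesis's actual algorithm.

```python
# Toy sketch of a class integration test order balancing stub cost and
# early defect detection. Assumes an acyclic dependency graph.

# deps[c] = classes that c depends on (must be integrated or stubbed first).
deps = {"A": set(), "B": {"A"}, "C": {"A", "B"}, "D": {"B"}}
# Hypothetical fault-proneness scores, e.g. from antipattern participation.
fault_risk = {"A": 0.9, "B": 0.2, "C": 0.8, "D": 0.1}

def integration_order(deps, fault_risk):
    order, placed = [], set()
    while len(order) < len(deps):
        # Classes whose dependencies are all integrated need no stubs.
        ready = [c for c in deps if c not in placed and deps[c] <= placed]
        # Among stub-free candidates, integrate the most fault-prone first.
        nxt = max(ready, key=fault_risk.get)
        order.append(nxt)
        placed.add(nxt)
    return order

print(integration_order(deps, fault_risk))  # → ['A', 'B', 'C', 'D']
```

Real CITO solutions must also break dependency cycles (which is where stubs become unavoidable), typically via search-based multi-objective optimization rather than a one-pass greedy choice.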

    Proceedings of VVSS2007 - verification and validation of software systems, 23rd March 2007, Eindhoven, The Netherlands
