
    A Second Replicated Quantitative Analysis of Fault Distributions in Complex Software Systems

    Background. Software engineering is in search of general principles that apply across contexts, for example to help guide software quality assurance. Fenton and Ohlsson presented such observations on fault distributions, which have been replicated once. Objectives. We intend to replicate their study a second time in a new environment. Method. We conducted a close replication, collecting defect data from five consecutive releases of a large software system in the telecommunications domain, and performed the same analysis as in the original study. Results. The replication confirms that faults are unevenly distributed over modules and that the fault-proneness distribution persists over test phases. Size measures are not useful as predictors of fault proneness, while fault densities are of the same order of magnitude across releases and contexts. Conclusions. This replication confirms that the uneven distribution of defects motivates an uneven distribution of quality assurance efforts, although predictors for such a distribution of efforts are not sufficiently precise.
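
    The kind of analysis described above can be illustrated with a short sketch. The module names, sizes, and fault counts below are hypothetical placeholders, not data from the study; the sketch only shows how per-module fault densities and an Alberg-style cumulative fault distribution (the uneven "few modules hold most faults" pattern) can be computed.

```python
# Minimal sketch (not the original study's tooling): an Alberg-style analysis of
# how unevenly faults are distributed over modules, plus per-module fault density.
# Module names, sizes (LOC), and fault counts are hypothetical placeholders.

modules = {
    # name: (lines_of_code, fault_count)
    "mod_a": (12_000, 45),
    "mod_b": (8_500, 3),
    "mod_c": (20_000, 60),
    "mod_d": (5_000, 1),
}

# Fault density per module (faults per KLOC).
densities = {m: faults / (loc / 1000) for m, (loc, faults) in modules.items()}

# Sort modules by fault count and report the cumulative share of faults covered
# by the most fault-prone modules (the "a few modules hold most of the faults"
# style of observation the replication examines).
total_faults = sum(f for _, f in modules.values())
cumulative = 0
for rank, (name, (loc, faults)) in enumerate(
    sorted(modules.items(), key=lambda kv: kv[1][1], reverse=True), start=1
):
    cumulative += faults
    share_modules = rank / len(modules)
    share_faults = cumulative / total_faults
    print(f"top {share_modules:.0%} of modules -> {share_faults:.0%} of faults "
          f"({name}: {densities[name]:.1f} faults/KLOC)")
```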

    Evaluating defect prediction approaches: a benchmark and an extensive comparison

    Reliably predicting software defects is one of the holy grails of software engineering. Researchers have devised and implemented a plethora of defect/bug prediction approaches varying in terms of accuracy, complexity and the input data they require. However, the absence of an established benchmark makes it hard, if not impossible, to compare approaches. We present a benchmark for defect prediction, in the form of a publicly available dataset consisting of several software systems, and provide an extensive comparison of well-known bug prediction approaches, together with novel approaches we devised. We evaluate the performance of the approaches using different performance indicators: classification of entities as defect-prone or not, and ranking of the entities, with and without taking into account the effort to review an entity. We performed three sets of experiments aimed at (1) comparing the approaches across different systems, (2) testing whether the differences in performance are statistically significant, and (3) investigating the stability of approaches across different learners. Our results indicate that, while some approaches perform better than others in a statistically significant manner, external validity in defect prediction is still an open problem, as generalizing results to different contexts/learners proved to be a partially unsuccessful endeavor.
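
    As a rough illustration of the two kinds of performance indicators mentioned above (classification of entities as defect-prone or not, and ranking by predicted defect-proneness), the following sketch evaluates a stand-in predictor on synthetic data; the features, labels, and the random forest learner are assumptions, not the paper's benchmark.

```python
# Illustrative sketch only, under assumed feature/label arrays; the paper's own
# benchmark dataset and metrics are not reproduced here.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                          # hypothetical per-entity metrics
y = (X[:, 0] + rng.normal(size=500) > 1).astype(int)    # hypothetical defect labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Classification view: how well are entities separated into defect-prone or not.
scores = model.predict_proba(X_te)[:, 1]
print("classification AUC:", roc_auc_score(y_te, scores))

# Ranking view: how many defective entities fall in the top 20% of entities
# ranked by predicted defect-proneness.
k = int(0.2 * len(scores))
top_k = np.argsort(scores)[::-1][:k]
print("defects found in top 20% of ranking:", y_te[top_k].sum(), "of", y_te.sum())
```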

    Predicting software faults in large space systems using machine learning techniques

    Recently, the use of machine learning (ML) algorithms has proven to be of great practical value in solving a variety of engineering problems, including the prediction of failure, fault, and defect-proneness as space system software becomes more complex. One of the most active areas of recent research in ML has been the use of ensemble classifiers. We show how ML techniques (or classifiers) can be used to predict software faults in space systems, including many aerospace systems, and we further combine individual classifiers into ensembles that vote for the most popular class to improve system software fault-proneness prediction. Benchmarking results on four NASA public datasets show that the Naive Bayes classifier is the more robust software fault predictor, while most ensembles with a decision tree classifier as one of their components achieve higher accuracy rates.
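
    A minimal sketch of the ensemble idea, assuming synthetic stand-in features and labels rather than the NASA datasets: individual classifiers, including Naive Bayes and a decision tree, vote for the most popular class.

```python
# Sketch of majority-vote ensembling for fault-proneness prediction.
# Features and labels are hypothetical stand-ins, not the NASA datasets.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 8))                                     # hypothetical code metrics
y = (X[:, 0] - X[:, 1] + rng.normal(size=400) > 0).astype(int)    # fault-prone or not

ensemble = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="hard",  # each classifier votes; the most popular class wins
)

print("ensemble accuracy:   ", cross_val_score(ensemble, X, y, cv=5).mean())
print("naive Bayes alone:   ", cross_val_score(GaussianNB(), X, y, cv=5).mean())
```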

    Cost modelling and concurrent engineering for testable design

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. As integrated circuits and printed circuit boards increase in complexity, testing becomes a major cost factor of the design and production of the complex devices. Testability has to be considered during the design of complex electronic systems, and automatic test systems have to be used in order to facilitate the test. This fact is now widely accepted in industry. Both design for testability and the usage of automatic test systems aim at reducing the cost of production testing or, sometimes, at making it possible at all. Many design for testability methods and test systems are available which can be configured into a production test strategy, in order to achieve high quality of the final product. The designer has to select from the various options to create a test strategy that maximises quality and minimises the total cost of the electronic system. This thesis presents a methodology for test strategy generation which is based on consideration of the economics during the life cycle of the electronic system. This methodology is a concurrent engineering approach which takes into account all effects of a test strategy on the electronic system during its life cycle by evaluating its related cost. This objective methodology is used in an original test strategy planning advisory system, which allows for test strategy planning for VLSI circuits as well as for digital electronic systems. The cost models which are used for evaluating the economics of test strategies are described in detail, and the test strategy planning system is presented. A methodology for making decisions based on estimated costing data is presented. Results of using the cost models and the test strategy planning system to evaluate the economics of test strategies for selected industrial designs are presented.
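
    A toy cost comparison in the spirit of the life-cycle methodology described above can look like the sketch below; the cost categories and all figures are hypothetical and not taken from the thesis.

```python
# Toy life-cycle cost comparison of test strategies. Each strategy is scored by
# summing design-for-testability cost, production test cost, and the expected
# cost of defects escaping to the field. All numbers are hypothetical.

def life_cycle_cost(dft_cost, test_cost_per_unit, units,
                    defect_rate, fault_coverage, field_repair_cost):
    """Total cost of one test strategy over the product life cycle."""
    production_test = test_cost_per_unit * units
    escapes = units * defect_rate * (1.0 - fault_coverage)
    field_cost = escapes * field_repair_cost
    return dft_cost + production_test + field_cost

strategies = {
    # name: (DfT cost, test cost/unit, units, defect rate, fault coverage, field repair cost)
    "no DfT, functional test": (0, 2.0, 100_000, 0.02, 0.80, 300),
    "boundary scan + ATE":     (150_000, 3.5, 100_000, 0.02, 0.98, 300),
}

for name, args in strategies.items():
    print(f"{name}: total cost = {life_cycle_cost(*args):,.0f}")
```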

    Time for Addressing Software Security Issues: Prediction Models and Impacting Factors

    Finding and fixing software vulnerabilities have become a major struggle for most software development companies. While generally without alternative, such fixing efforts are a major cost factor, which is why companies have a vital interest in focusing their secure software development activities such that they obtain an optimal return on this investment. In this paper, we quantitatively investigate the major factors that impact the time it takes to fix a given security issue, based on data collected automatically within SAP’s secure development process, and we show how the issue fix time could be used to monitor the fixing process. We use three machine learning methods and evaluate their predictive power in predicting the time to fix issues. Interestingly, the models indicate that vulnerability type has a less dominant impact on issue fix time than previously believed. The time it takes to fix an issue instead seems much more related to the component in which the potential vulnerability resides, the project related to the issue, the development groups that address the issue, and the closeness of the software release date. This indicates that the software structure, the fixing processes, and the development groups are the dominant factors that impact the time spent to address security issues. SAP can use the models to implement a continuous improvement of its secure software development process and to measure the impact of individual improvements. The development teams at SAP develop different types of software, adopt different internal development processes, use different programming languages and platforms, and are located in different cities and countries. Other organizations may use the results, with precaution, and become learning organizations.
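
    The following sketch is an assumed, simplified stand-in for this kind of model (SAP’s internal data and models are not reproduced): it predicts issue fix time from the categories of factors the abstract highlights, i.e. component, project, development group, and closeness to the release date.

```python
# Illustrative regression sketch for issue fix time. All column names and rows
# are hypothetical; the learner and preprocessing are assumptions, not SAP's.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline

issues = pd.DataFrame({
    "component":       ["auth", "ui", "auth", "db", "ui", "db"],
    "project":         ["p1", "p1", "p2", "p2", "p3", "p3"],
    "dev_group":       ["g1", "g2", "g1", "g3", "g2", "g3"],
    "days_to_release": [10, 40, 5, 60, 20, 15],
    "fix_days":        [12, 30, 8, 45, 25, 14],   # target: time to fix the issue
})

features = ["component", "project", "dev_group", "days_to_release"]
pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"),
      ["component", "project", "dev_group"])],
    remainder="passthrough",  # keep days_to_release as a numeric feature
)
model = Pipeline([("pre", pre), ("reg", GradientBoostingRegressor(random_state=0))])
model.fit(issues[features], issues["fix_days"])

print(model.predict(issues[features][:2]))  # predicted fix times for two issues
```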

    Ion Irradiation-induced Microstructural Change in SiC

    The high-temperature radiation resistance of nuclear materials has become a key issue in developing future nuclear reactors. Because of its mechanical stability under high-energy neutron irradiation and high temperature, silicon carbide (SiC) has great potential as a structural material in advanced nuclear energy systems. A newly developed nano-engineered (NE) 3C SiC with a nano-layered stacking fault (SF) structure has recently been considered a prospective choice due to enhanced point defect annihilation between the layered structures, leading to outstanding radiation durability. The objective of this project was to advance the understanding of gas bubble formation mechanisms in SiC under irradiation. In this work, microstructural evolution induced by helium implantation and ion irradiation was investigated in single crystal and NE SiC. Elastic recoil detection analysis confirmed that the as-implanted helium depth profile did not change under irradiation to 30 dpa at 700 °C. Helium bubbles were found in NE SiC after heavy ion irradiation at a lower temperature than in previous literature results. These results expand the current understanding of the helium migration mechanism of NE SiC in high-temperature irradiation environments. No obvious bubble growth was observed after ion irradiation at 700 °C, suggesting a long helium bubble incubation process under continued irradiation at this temperature and dose. As determined by electron energy loss spectroscopy measurements, only 1% of the implanted helium atoms are trapped in bubbles. Helium redistribution and release were observed in the TEM samples under in-situ irradiation at 800 °C. In-situ TEM analysis revealed that the nano-layered SF structure is radiation tolerant below a dose of about 15 dpa at 800 °C, but continued irradiation to 20 dpa under these in-situ conditions leads to loss of the stacking fault structure, which may be an artifact of irradiating thin TEM foils. The irradiation stability of the SF structure under bulk irradiation remains unknown. This stacking fault structure is critical since it suppresses the formation of dislocation loops normally observed under these irradiation conditions. Systematic studies towards understanding the role of defect migration under irradiation in the evolution of helium bubbles in NE SiC were performed.

    Proceedings of the Twenty-Third Annual Software Engineering Workshop

    The Twenty-Third Annual Software Engineering Workshop (SEW) provided 20 presentations designed to further the goals of the Software Engineering Laboratory (SEL) of NASA-GSFC. The presentations were selected for their creativity. The sessions, held on 2-3 December 1998, centered on the SEL, Experimentation, Inspections, Fault Prediction, Verification and Validation, and Embedded Systems and Safety-Critical Systems.