
    Using the ISO/IEC 9126 product quality model to classify defects: a controlled experiment

    Background: Existing software defect classification schemes support multiple tasks, such as root cause analysis and process improvement guidance. However, existing schemes do not assist in assigning defects to a broad range of high-level software goals, such as software quality characteristics like functionality, maintainability, and usability. Aim: We investigate whether a classification based on the ISO/IEC 9126 software product quality model is reliable and useful for linking defects to the quality aspects they impact. Method: Six subjects, divided into two groups according to their expertise, classified 78 defects from an industrial web application using the ISO/IEC 9126 main quality characteristics and sub-characteristics, together with a set of proposed extended guidelines. Results: The ISO/IEC 9126 model is reasonably reliable when used to classify defects, even with incomplete defect reports. Reliability and variability are better for the six high-level main characteristics of the model than for the 22 sub-characteristics. Conclusions: The ISO/IEC 9126 software quality model provides a solid foundation for defect classification. Based on the follow-up qualitative analysis, we also recommend using more complete defect reports and tailoring the quality model to the context of use.
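    For readers less familiar with the scheme, the sketch below (Python, assumed purely for illustration) shows how a defect record could be tagged with the six ISO/IEC 9126 main characteristics; the record fields, the example defect, and the chosen sub-characteristics are hypothetical and are not the classification instrument used in the study.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Iso9126Characteristic(Enum):
    """The six main quality characteristics of ISO/IEC 9126-1."""
    FUNCTIONALITY = "functionality"
    RELIABILITY = "reliability"
    USABILITY = "usability"
    EFFICIENCY = "efficiency"
    MAINTAINABILITY = "maintainability"
    PORTABILITY = "portability"

@dataclass
class ClassifiedDefect:
    # Hypothetical defect record; field names are illustrative only.
    defect_id: str
    summary: str
    characteristic: Iso9126Characteristic
    sub_characteristic: Optional[str] = None  # e.g. "accuracy", "suitability"

# Two raters classifying the same (invented) defect report.
rater_a = ClassifiedDefect("D-17", "Wrong VAT rate applied to invoices",
                           Iso9126Characteristic.FUNCTIONALITY, "accuracy")
rater_b = ClassifiedDefect("D-17", "Wrong VAT rate applied to invoices",
                           Iso9126Characteristic.FUNCTIONALITY, "suitability")

# Agreement at the main-characteristic level but not at the sub-characteristic
# level, mirroring the finding that reliability is higher for the six main
# characteristics than for the 22 sub-characteristics.
print(rater_a.characteristic == rater_b.characteristic)          # True
print(rater_a.sub_characteristic == rater_b.sub_characteristic)  # False
```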

    Reliability growth of open source software using defect analysis

    We examine two active and popular open source products to observe whether open source software has a different defect arrival rate than software developed in-house. The evaluation used two common reliability growth models, concave and S-shaped, and the analysis shows that open source software has a different profile of defect arrival. Further investigation indicated that low-level design instability is a possible explanation for the different defect growth profile. © 2008 IEEE
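    The abstract does not name the specific models, but the concave and S-shaped families are commonly represented by the Goel-Okumoto and delayed S-shaped NHPP mean value functions. The sketch below fits both to a cumulative defect count series; the weekly counts are invented for illustration and are not data from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def concave(t, a, b):
    """Goel-Okumoto (concave) mean value function: expected cumulative defects by time t."""
    return a * (1.0 - np.exp(-b * t))

def s_shaped(t, a, b):
    """Delayed S-shaped (Yamada) mean value function: slow start, rapid growth, saturation."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

# Illustrative weekly cumulative defect counts (not data from the paper).
weeks = np.arange(1, 13, dtype=float)
cumulative_defects = np.array([3, 8, 15, 25, 38, 52, 63, 71, 77, 81, 84, 86], dtype=float)

# Fit each model and compare residual error to judge which shape matches the data.
for name, model in [("concave", concave), ("S-shaped", s_shaped)]:
    params, _ = curve_fit(model, weeks, cumulative_defects, p0=[100.0, 0.1], maxfev=10000)
    sse = float(np.sum((cumulative_defects - model(weeks, *params)) ** 2))
    print(f"{name}: a={params[0]:.1f}, b={params[1]:.3f}, SSE={sse:.1f}")
```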

    A comparison of the reliability growth of open source and in-house software

    Whereas commercial developers have established processes to assure software quality, open source software depends largely on community usage and defect reporting to achieve some level of quality. Thus, the quality of open source software may vary. We examined defects reported in two active and popular open source software projects and an in-house project. The results of this analysis indicate that the reliability growth of each is quite distinct and that the defect profile of open source software appears to be a consequence of the open source software development method itself. © 2008 IEEE

    Defect Categorization: Making Use of a Decade of Widely Varying Historical Data

    This paper describes our experience aggregating a number of historical datasets containing inspection defect data categorized under different schemes. Our goal was to make use of the historical data by creating models to guide future development projects. We describe our approach to reconciling the different categorization choices used in the historical datasets, and the challenges we faced. We also present a set of recommendations for others involved in classifying defects.
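    A minimal sketch of the reconciliation step described above: mapping labels from differently categorized historical datasets onto one shared scheme and flagging labels that need expert review. All dataset names, legacy labels, and target categories below are hypothetical, not the authors' data.

```python
# Hypothetical per-dataset mappings from legacy defect categories to one
# shared target scheme; unmapped labels are flagged for manual review.
TARGET_SCHEME = {"interface", "logic", "data", "documentation", "other"}

LEGACY_TO_TARGET = {
    "project_1999": {"I/F": "interface", "LOG": "logic", "DOC": "documentation"},
    "project_2004": {"api": "interface", "algorithm": "logic", "db": "data"},
}

def reconcile(dataset: str, label: str) -> str:
    """Translate a legacy category label into the shared scheme."""
    target = LEGACY_TO_TARGET.get(dataset, {}).get(label)
    return target if target in TARGET_SCHEME else "unresolved"

print(reconcile("project_1999", "I/F"))  # interface
print(reconcile("project_2004", "db"))   # data
print(reconcile("project_2004", "GUI"))  # unresolved -> needs expert review
```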

    MiSFIT: Mining Software Fault Information and Types

    As software becomes more important to society, the number, age, and complexity of software systems grow. Software organizations require continuous process improvement to maintain the reliability, security, and quality of these systems. Organizations can use data from manual fault classification to meet their process improvement needs, but many lack the expertise or resources to apply such classification correctly. This dissertation addresses the need for automated software fault classification. Validation results show that automated fault classification, as implemented in the MiSFIT tool, can group faults of a similar nature, and the resulting clusters show good agreement for common software faults with no manual effort. To evaluate the method and tool, I develop and apply an extended change taxonomy to classify the source code changes that repaired software faults from an open source project. MiSFIT clusters the faults based on these changes. I manually inspect a random sample of faults from each cluster to validate the results. The automatically classified faults are then used to analyze the evolution of a software application over seven major releases. The contributions of this dissertation are an extended change taxonomy for software fault analysis, a method to cluster faults by the syntax of the repair, empirical evidence that fault distribution varies according to the purpose of the module, and the identification of project-specific trends from the analysis of the changes.
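    The abstract does not describe MiSFIT's internals, so the sketch below only illustrates the general idea of clustering faults by the syntax of their repair: a bag-of-tokens representation of fix diffs followed by k-means. The diff snippets, tokenization, and choice of k are invented for illustration and are not MiSFIT's actual implementation.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented fix-diff snippets standing in for the source-code changes that
# repaired faults; in practice these would be mined from version control.
fix_diffs = [
    "- if (i < n)  + if (i <= n)",                    # off-by-one repairs
    "- for (j = 1; j < len)  + for (j = 0; j < len)",
    "+ if (ptr == NULL) return;",                     # missing-check repairs
    "+ if (buf == NULL) { free(tmp); return; }",
    "- lock(a); lock(b);  + lock(b); lock(a);",       # ordering repair
]

# Token-level TF-IDF over the diff text, then k-means to group similar repairs.
vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z_]+|[<>=!]+")
features = vectorizer.fit_transform(fix_diffs)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
for diff, label in zip(fix_diffs, kmeans.fit_predict(features)):
    print(label, diff)
```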

    Review of Quantitative Software Reliability Methods

    The current U.S. Nuclear Regulatory Commission (NRC) licensing process for digital systems rests on deterministic engineering criteria. In its 1995 probabilistic risk assessment (PRA) policy statement, the Commission encouraged the use of PRA technology in all regulatory matters to the extent supported by the state-of-the-art in PRA methods and data. Although many activities have been completed in the area of risk-informed regulation, the risk-informed analysis process for digital systems has not yet been satisfactorily developed. Since digital instrumentation and control (I&C) systems are expected to play an increasingly important role in nuclear power plant (NPP) safety, the NRC established a digital system research plan that defines a coherent set of research programs to support its regulatory needs. One of the research programs included in the NRC's digital system research plan addresses risk assessment methods and data for digital systems. Digital I&C systems have some unique characteristics, such as using software, and may have different failure causes and/or modes than analog I&C systems; hence, their incorporation into NPP PRAs entails special challenges. The objective of the NRC's digital system risk research is to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems into NPP PRAs, and (2) using information on the risks of digital systems to support the NRC's risk-informed licensing and oversight activities. For several years, Brookhaven National Laboratory (BNL) has worked on NRC projects to investigate methods and tools for the probabilistic modeling of digital systems, as documented mainly in NUREG/CR-6962 and NUREG/CR-6997. However, the scope of this research principally focused on hardware failures, with limited reviews of software failure experience and software reliability methods. NRC also sponsored research at the Ohio State University investigating the modeling of digital systems using dynamic PRA methods. These efforts, documented in NUREG/CR-6901, NUREG/CR-6942, and NUREG/CR-6985, included a functional representation of the system's software but did not explicitly address failure modes caused by software defects or by inadequate design requirements. An important identified research need is to establish a commonly accepted basis for incorporating the behavior of software into digital I&C system reliability models for use in PRAs. To address this need, BNL is exploring the inclusion of software failures into the reliability models of digital I&C systems, such that their contribution to the risk of the associated NPP can be assessed

    Envelhecimento de software utilizando ensaios de vida acelerados quantitativos

    Doctoral thesis, Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Produção. This work presents a systematic approach to accelerating the lifetime of systems that fail due to software aging effects. Reliability studies of such systems must observe the times to failure caused by software aging, which normally requires long experiments. This requirement introduces several practical constraints, especially when the experiment duration implies prohibitive time and cost. The work therefore proposes a method to accelerate the lifetime of systems that fail through software aging, reducing the experimentation time needed to observe their failures and thus the time and cost of research in this area. The theoretical foundation draws on computing dependability, reliability engineering, design of experiments, accelerated life testing, and the phenomenology of software aging. The acceleration technique adopted is the quantitative accelerated degradation test, which is widely used in several industrial areas but had not previously been applied to software products. Specifying the means to apply this technique in experimental software engineering, particularly to the software aging problem, is the main contribution of this research. The applicability of the proposed method was evaluated in a case study involving the accelerated aging of a real web server. Among the main experimental results is the identification of the treatments that contributed most to the aging of the web server software. Based on these treatments it was possible to define the workload pattern that most influenced the aging of the analyzed server, with the type and size of the requested pages being the two most significant factors. Another important result is that variation in the request rate did not influence the aging of the investigated web server. Regarding the reduction of the experimentation period, the proposed method required the shortest duration reported for similar experiments, 3.18 times less than the shortest time previously found in the literature. In terms of the MTBF estimates obtained with and without aging acceleration, the proposed method reduced the experimentation time by a factor of approximately 687.
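    To make the accelerated degradation idea concrete, a common form of the analysis fits a degradation trend observed under a stressful workload (for example, the growing memory usage of an aging web server), extrapolates when the trend crosses a failure threshold, and relates that stressed time to failure to normal use via an acceleration factor. The sketch below uses invented measurements, an assumed threshold, and an assumed acceleration factor; it is not the thesis's experiment.

```python
import numpy as np

# Invented degradation measurements: resident memory (MB) of an aging web
# server sampled hourly under an accelerated (stressful) workload.
hours = np.arange(0, 24, dtype=float)
memory_mb = 220.0 + 9.5 * hours + np.random.default_rng(0).normal(0.0, 3.0, hours.size)

FAILURE_THRESHOLD_MB = 2048.0  # assumed memory budget at which the server fails

# Fit a linear degradation path and extrapolate the pseudo time to failure.
slope, intercept = np.polyfit(hours, memory_mb, 1)
ttf_accelerated = (FAILURE_THRESHOLD_MB - intercept) / slope

# An acceleration factor relates stressed time to use-level time; its value
# would come from comparing degradation rates across stress levels.
ACCELERATION_FACTOR = 50.0  # assumed, for illustration only
ttf_use_level = ttf_accelerated * ACCELERATION_FACTOR

print(f"Pseudo time to failure under stress: {ttf_accelerated:.1f} h")
print(f"Extrapolated time to failure in normal use: {ttf_use_level:.1f} h")
```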

    Aus Fehlern in der Softwareentwicklung lernen. Wie durch Fehleranalysen die Prozesse der Anforderungsanalyse und der Qualitätssicherung verbessert werden können

    Software defects have existed for as long as people have developed software. Defects can lead to considerable economic losses and, in the worst case, to loss of life. Many defects can be traced back to shortcomings in the requirements analysis process, and the later a requirements defect is discovered and fixed, the more costly its correction becomes. This thesis describes how one can learn from defects in software development. It presents a defect analysis method that can be used to improve, in particular, the processes of requirements analysis and quality assurance. The goal of these improvements is to avoid requirements defects and possible consequent defects in design and implementation, or at least to find them earlier. The thesis first derives a model that explains why requirements defects arise. For certain types of requirements defects, concrete causes in the requirements analysis process are identified on the basis of empirical findings. This explanatory model is part of a defect analysis method that aims to draw conclusions about possible process causes from the evaluation of defects. The method is an extension of Orthogonal Defect Classification (ODC). ODC is presented in detail and critically assessed on the basis of empirical findings. The extended defect analysis method was successfully applied in a one-year case study at the IT service provider of a large German insurance company, where real defects from two software development projects of a business-critical application were retrospectively classified and analyzed to identify potential for improvement. The defect analysis method developed in this thesis makes a direct contribution to solving the practical problem outlined above: it is an instrument for identifying shortcomings in the requirements analysis process that systematically cause requirements defects and consequent defects.
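    Since the thesis builds on Orthogonal Defect Classification, a minimal ODC-style record may help readers place it. The defect type values below are the classic ODC defect types; the record fields, trigger, qualifier values, and the example defect are a simplified illustration, not the thesis's extended scheme.

```python
from dataclasses import dataclass
from enum import Enum

class OdcDefectType(Enum):
    """Classic ODC defect types (Chillarege et al.)."""
    FUNCTION = "function"
    ASSIGNMENT = "assignment"
    INTERFACE = "interface"
    CHECKING = "checking"
    TIMING_SERIALIZATION = "timing/serialization"
    BUILD_PACKAGE_MERGE = "build/package/merge"
    DOCUMENTATION = "documentation"
    ALGORITHM = "algorithm"

@dataclass
class OdcRecord:
    # Illustrative subset of ODC attributes; field names are assumptions.
    defect_id: str
    defect_type: OdcDefectType
    trigger: str    # activity that exposed the defect, e.g. "design review"
    qualifier: str  # "missing", "incorrect", or "extraneous"

record = OdcRecord("REQ-042", OdcDefectType.FUNCTION,
                   trigger="design review", qualifier="missing")
print(record)
```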