
    Two-dimensional reliability design of multi-release software considering the fault reduction factor under imperfect debugging

    Introduction: The present research was conducted at the University of Delhi, India, in 2017. Methods: We develop a software reliability growth model to assess the reliability of software products released in multiple versions under limited availability of resources and time. The Fault Reduction Factor (FRF) is considered constant in an imperfect debugging environment, while the rate of fault removal is given by the Delayed S-Shaped model. Results: The proposed model has been validated on a real-life four-release dataset by carrying out a goodness-of-fit analysis. Laplace trend analysis was also conducted to judge the trend exhibited by the data with respect to change in the system's reliability. Conclusions: A number of comparison criteria have been calculated to evaluate the performance of the proposed model relative to a time-based-only multi-release Software Reliability Growth Model (SRGM). Originality: In general, the number of faults removed is not the same as the number of failures experienced in a given time interval, so the inclusion of the FRF makes the model more realistic. A paradigm shift has also been observed in software development from single-release to multi-release platforms. Limitations: The proposed model can be used by software developers to make decisions regarding the release time of different versions, by either minimizing the development cost or maximizing the reliability, and to determine warranty policies.
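    For reference, the sketch below (Python, assuming NumPy) encodes the standard Delayed S-Shaped mean value function, m(t) = a(1 - (1 + bt)e^(-bt)), together with a hypothetical variant in which a constant FRF scales the removal rate. The FRF treatment and all parameter values are illustrative assumptions, not the paper's exact two-dimensional multi-release formulation.

        import numpy as np

        def delayed_s_shaped_mvf(t, a, b):
            # Standard Delayed S-Shaped mean value function: m(t) = a * (1 - (1 + b*t) * exp(-b*t)).
            return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

        def mvf_with_constant_frf(t, a, b, frf):
            # Illustrative assumption only: a constant Fault Reduction Factor scaling the removal rate.
            # The paper's two-dimensional multi-release formulation is not reproduced here.
            return a * (1.0 - (1.0 + b * frf * t) * np.exp(-b * frf * t))

        # Example with made-up values: a = 120 expected faults, b = 0.3 per week, FRF = 0.8.
        weeks = np.linspace(0.0, 20.0, 5)
        print(mvf_with_constant_frf(weeks, a=120.0, b=0.3, frf=0.8))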

    Software Reliability Growth Model with Partial Differential Equation for Various Debugging Processes

    Most Software Reliability Growth Models (SRGMs) based on the Nonhomogeneous Poisson Process (NHPP) assume either perfect or imperfect debugging. However, environmental factors introduce great uncertainty for SRGMs during the development and testing phases. We propose a novel NHPP model based on a partial differential equation (PDE) to quantify the uncertainties associated with the perfect or imperfect debugging process. We represent the environmental uncertainties collectively as a noise of arbitrary correlation. Under the new stochastic framework, one can compute the full statistical information of the debugging process, for example its probability density function (PDF). Through a number of comparisons with historical data and existing methods, such as the classic NHPP model, the proposed model exhibits a closer fit to observations. Beyond the conventional focus on the mean value of fault detection, the newly derived full statistical information can further help software developers make decisions on system maintenance and risk assessment.
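    As background for the comparison with the classic NHPP model, the sketch below (Python with NumPy) evaluates the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) and, purely as an assumed stand-in for "environmental uncertainty as noise", samples the detection rate b from a distribution to obtain a spread of debugging trajectories rather than a single mean curve. This illustrates the idea only; it is not the paper's PDE formulation.

        import numpy as np

        def goel_okumoto_mvf(t, a, b):
            # Classic NHPP (Goel-Okumoto) mean value function: m(t) = a * (1 - exp(-b*t)).
            return a * (1.0 - np.exp(-b * t))

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 20.0, 41)

        # Assumed illustration: treat environmental uncertainty as random noise on the detection
        # rate b and inspect the spread of the resulting trajectories instead of a single mean curve.
        b_samples = np.clip(rng.normal(loc=0.25, scale=0.05, size=1000), 1e-3, None)
        trajectories = np.array([goel_okumoto_mvf(t, a=100.0, b=b) for b in b_samples])

        print("mean of m(20):", trajectories[:, -1].mean())
        print("5th-95th percentiles of m(20):", np.percentile(trajectories[:, -1], [5, 95]))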

    Software Reliability Growth Models from the Perspective of Learning Effects and Change-Point.

    Increased attention to the reliability of software systems has led to thorough analysis of the reliability growth process for prediction and assessment of software reliability in the testing or debugging phase. With many frameworks available in terms of the underlying probability distributions (Poisson process, Non-Homogeneous Poisson Process (NHPP), Weibull, etc.), many researchers have developed models within the NHPP analytical framework. The behavior of interest is usually S-shaped or exponential; S-shaped behavior relates more closely to human learning. The need to develop different models stems from the fact that the nature of the underlying environment, the learning acquired during testing, resource allocations, the application, and the failure data itself all vary. There is no universal model that fits everywhere and could be called an oracle. Learning effects that stem from the experience of the testing or debugging staff have been considered for the growth of reliability. Learning varies over time, which calls for further research into learning effects. Digital copy of thesis, University of Kashmir.
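    One commonly cited SRGM in this family that makes the learning effect explicit is Ohba's inflection S-shaped model; the sketch below (Python with NumPy) evaluates its mean value function m(t) = a(1 - e^(-bt)) / (1 + beta * e^(-bt)), where a larger beta delays the inflection and is usually read as slower early learning. The model choice and parameter values are illustrative assumptions, not the specific models developed in the thesis.

        import numpy as np

        def inflection_s_shaped_mvf(t, a, b, beta):
            # Ohba's inflection S-shaped SRGM: m(t) = a * (1 - exp(-b*t)) / (1 + beta * exp(-b*t)).
            # A larger beta delays the inflection point, often interpreted as slower early learning.
            return a * (1.0 - np.exp(-b * t)) / (1.0 + beta * np.exp(-b * t))

        # Illustrative values only: 200 expected faults, detection rate 0.2, beta = 3.
        t = np.linspace(0.0, 30.0, 7)
        print(inflection_s_shaped_mvf(t, a=200.0, b=0.2, beta=3.0))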

    Fault detection and correction modeling of software systems

    Ph.D. (Doctor of Philosophy)

    Change Request Prediction and Effort Estimation in an Evolving Software System

    Prediction of software defects has been the focus of many researchers in empirical software engineering and software maintenance because of its significance in providing quality estimates, from the project management perspective, for an evolving legacy system. Software Reliability Growth Models (SRGMs) have been used to predict future defects in a software release. Modern software engineering databases contain Change Requests (CRs), which include both defects and other maintenance requests. Our goal is to use defect prediction methods to help predict CRs in an evolving legacy system. Limited research has been done on defect prediction using curve-fitting methods for evolving software systems with one or more change-points. Curve-fitting approaches have been successfully used to select a fitted reliability model among candidate models for defect prediction. This work demonstrates the use of curve-fitting defect prediction methods to predict CRs. It focuses on providing a curve-fit solution that deals with evolutionary software changes yet still considers long-term prediction over the full release. We compare three curve-fit solutions in terms of their ability to predict CRs. Our data show that the Time Transformation (TT) approach provides more accurate CR predictions and fewer under-predicted Change Requests than the other curve-fitting methods. In addition to CR prediction, we investigated the possibility of estimating effort as well. We found that the Lines of Code (added, deleted, modified, and auto-generated) associated with CRs do not necessarily predict the actual effort spent on CR resolution.
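    The abstract does not spell out the candidate models, but the general curve-fitting style of CR prediction it describes can be sketched as below (Python with NumPy and SciPy): fit a cumulative growth curve to the observed part of a release and extrapolate. The exponential (Goel-Okumoto-type) curve and the monthly CR counts are made-up assumptions for illustration, not the thesis's Time Transformation approach or its data.

        import numpy as np
        from scipy.optimize import curve_fit

        def exponential_growth(t, a, b):
            # Goel-Okumoto-type cumulative growth curve: a * (1 - exp(-b*t)).
            return a * (1.0 - np.exp(-b * t))

        # Hypothetical cumulative Change Request counts per month (made-up data).
        months = np.arange(1, 13, dtype=float)
        cum_crs = np.array([14, 26, 36, 45, 52, 58, 63, 67, 70, 73, 75, 77], dtype=float)

        # Fit the curve to the observed months, then extrapolate half a year ahead.
        popt, _ = curve_fit(exponential_growth, months, cum_crs, p0=[100.0, 0.1])
        future = np.arange(13, 19, dtype=float)
        print("fitted (a, b):", popt)
        print("predicted cumulative CRs:", exponential_growth(future, *popt).round(1))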

    Mathematics in Software Reliability and Quality Assurance

    This monograph concerns the mathematical aspects of software reliability and quality assurance and consists of 11 technical papers in this emerging area. Included are the latest research results related to formal methods and design, automatic software testing, software verification and validation, coalgebra theory, automata theory, hybrid systems, and software reliability modeling and assessment.

    Review of Quantitative Software Reliability Methods


    Software reliability modeling and release time determination

    Ph.D. (Doctor of Philosophy)

    Geometric Approaches to Statistical Defect Prediction and Learning

    Software quality is directly correlated with the number of defects in software systems. As the complexity of software increases, manual inspection of software becomes prohibitively expensive. Thus, defect prediction is of paramount importance to project managers for allocating limited resources effectively, as well as for providing advantages such as accurate estimation of project costs and schedules. This thesis addresses the issues of defect prediction and learning in a geometric framework using statistical quality control and genetic algorithms. A software defect prediction model using the geometric concept of operating characteristic curves is proposed. The main idea behind this predictor is to use geometric insight to help construct an efficient prediction method that reliably predicts the cumulative number of defects during the software development process. The performance of the proposed approach is validated on real data from actual software projects, and the experimental results demonstrate a much improved performance of the proposed statistical method in predicting defects. In the same vein, two defect learning predictors based on evolutionary algorithms are also proposed. These predictors use genetic programming as a feature construction method. The first predictor constructs new features based primarily on the geometrical characteristics of the original data; then an independent classifier is applied and the performance of the feature selection method is measured. The second predictor uses a built-in classifier which is automatically tuned for the constructed features. Experimental results on a NASA static-metric dataset demonstrate the feasibility of the proposed genetic-programming-based approaches.
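    The genetic-programming search itself is too involved for a short example, but the "construct features, then apply an independent classifier" pipeline described above can be roughly illustrated as below (Python with NumPy and scikit-learn). The metric names, the hand-written constructed feature, and the synthetic data are assumptions; in the thesis a genetic-programming search would discover such feature expressions automatically, and the experiments use a NASA static-metric dataset.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)

        # Synthetic stand-in for a static-metric table (e.g., module size and cyclomatic
        # complexity) with a binary defect label; the real experiments use NASA metric data.
        loc = rng.integers(20, 2000, size=300).astype(float)
        complexity = rng.integers(1, 60, size=300).astype(float)
        defective = (complexity / np.sqrt(loc) + rng.normal(0.0, 0.5, size=300) > 1.2).astype(int)

        # Hand-written stand-in for a GP-constructed feature: a nonlinear combination of the
        # original metrics. A genetic-programming search would evolve such expressions instead.
        constructed = complexity / np.sqrt(loc)

        X = np.column_stack([loc, complexity, constructed])
        clf = LogisticRegression(max_iter=1000).fit(X, defective)
        print("training accuracy with the constructed feature:", round(clf.score(X, defective), 3))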