Two-dimensional reliability design of multi-release software considering the fault reduction factor under imperfect debugging
Introduction: The present research was conducted at the University of Delhi, India, in 2017.
Methods: We develop a software reliability growth model to assess the reliability of software products released in multiple versions under limited availability of resources and time. The fault reduction factor (FRF) is assumed to be constant in an imperfect debugging environment, while the fault removal rate is given by the Delayed S-Shaped model.
Results: The proposed model has been validated on a real-life four-release dataset by carrying out a goodness-of-fit analysis. Laplace trend analysis was also conducted to judge the trend exhibited by the data with respect to changes in the system's reliability.
Conclusions: A number of comparison criteria have been calculated to evaluate the performance of the proposed model relative to a purely time-based multi-release Software Reliability Growth Model (SRGM).
Originality: In general, the number of faults removed is not the same as the number of failures experienced in given time intervals, so the inclusion of the FRF makes the model more realistic. A paradigm shift has been observed in software development from single-release to multi-release platforms.
Limitations: The proposed model can be used by software developers to make decisions regarding release times for different versions, by either minimizing the development cost or maximizing the reliability, and to determine warranty policies.
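For intuition, the Delayed S-Shaped mean value function that the Methods section refers to can be sketched as follows. The parameter values are illustrative only, and applying a constant FRF as a simple multiplier is a simplification of the paper's formulation, which builds the FRF into the model's differential equation:

```python
import numpy as np

def delayed_s_shaped_mvf(t, a, b):
    """Delayed S-Shaped SRGM mean value function:
    expected number of faults detected by time t,
    with a total faults and detection-rate parameter b."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

def faults_removed_with_frf(t, a, b, frf):
    """Simplified constant-FRF view: only a fraction frf of
    detected failures translates into removed faults."""
    return frf * delayed_s_shaped_mvf(t, a, b)

# Illustrative parameters (not fitted to the paper's four-release dataset)
a, b, frf = 100.0, 0.1, 0.9
print(round(faults_removed_with_frf(30.0, a, b, frf), 1))
```

The S-shape reflects a learning phase: removal is slow at first, accelerates, then saturates as the residual fault content is exhausted.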
A Bayesian modification to the Jelinski-Moranda software reliability growth model
The Jelinski-Moranda (JM) model for software reliability is examined. It is suggested that a major reason for the poor results given by this model is the poor performance of maximum likelihood (ML) parameter estimation. A reparameterization and a Bayesian analysis, involving a slight modelling change, are proposed. The resulting Bayesian Jelinski-Moranda (BJM) model is shown to be mathematically quite tractable, and several metrics of interest to practitioners are obtained. The BJM and JM models are compared on several sets of real software failure data, and in all cases the BJM model gives superior reliability predictions. A change to an assumption underlying both models, intended to represent the debugging process more accurately, is also discussed.
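The ML estimation difficulties the abstract alludes to can be seen by coding the JM log-likelihood directly: the hazard before the i-th failure is phi * (N - i + 1), so the likelihood surface over the integer parameter N is often flat or maximized at the boundary. The failure times and the fixed per-fault rate below are toy values, and the grid search merely stands in for a proper ML solver:

```python
import math

def jm_log_likelihood(times, N, phi):
    """Jelinski-Moranda log-likelihood for inter-failure times t_1..t_n:
    the i-th inter-failure time is exponential with rate phi * (N - i + 1),
    where N is the initial fault count and phi the per-fault failure rate."""
    n = len(times)
    if N < n:
        return float("-inf")  # cannot observe more failures than faults
    ll = 0.0
    for i, t in enumerate(times, start=1):
        rate = phi * (N - i + 1)
        ll += math.log(rate) - rate * t
    return ll

# Toy data: lengthening inter-failure times suggest reliability growth
times = [2.0, 4.0, 8.0, 16.0]
# Crude grid search over N with phi held fixed, in place of full ML
best = max(((N, jm_log_likelihood(times, N, 0.01)) for N in range(4, 50)),
           key=lambda p: p[1])
```

The BJM reparameterization replaces point estimation of (N, phi) with a posterior distribution, which is what makes the model's predictions better behaved on small failure datasets.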
On the use of testability measures for dependability assessment
Program “testability” is, informally, the probability that a program will fail under test if it contains at least one fault. When a dependability assessment has to be derived from the observation of a series of failure-free test executions (a common need for software subject to “ultra-high reliability” requirements), measures of testability can, in theory, be used to draw inferences about program correctness. We rigorously investigate the concept of testability and its use in dependability assessment, criticizing, and improving on, previously published results. We give a general descriptive model of program execution and testing, on which the different measures of interest can be defined. We propose a more precise definition of program testability than those given by other authors, and discuss how to increase testing effectiveness without impairing program reliability in operation. We then study the mathematics of using testability to estimate, from test results, the probability of program correctness and the probability of failures. To derive the probability of program correctness, we use a Bayesian inference procedure and argue that this is more useful than deriving a classical “confidence level”. We also show that high testability is not an unconditionally desirable property for a program. In particular, for programs complex enough that they are unlikely to be completely fault free, increasing testability may produce a program that will be less trustworthy, even after successful testing.
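In its simplest form, the Bayesian inference procedure described here is the following update: start with a prior probability that the program is correct, and treat each failure-free test as evidence, since a faulty program of testability theta survives n independent tests with probability (1 - theta)^n. The prior and testability values below are illustrative, not taken from the paper:

```python
def posterior_correctness(prior_correct, testability, n_passed):
    """Posterior probability that the program is fault free after
    n_passed failure-free test executions (Bayes' rule). If the program
    is faulty, each test fails independently with prob. `testability`."""
    p = prior_correct
    faulty_survives = (1.0 - testability) ** n_passed
    return p / (p + (1.0 - p) * faulty_survives)

# Higher testability makes failure-free tests stronger evidence
low  = posterior_correctness(0.5, 0.001, 1000)  # weak evidence per test
high = posterior_correctness(0.5, 0.01, 1000)   # strong evidence per test
```

This also exposes the abstract's caveat: the update only quantifies confidence in *correctness*; for a program that is almost certainly not fault free, raising testability raises the in-service failure rate of whatever faults remain.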
A Review of Software Reliability Testing Techniques
In the era of intelligent systems, the safety and reliability of software have received increasing attention. Software reliability testing is a significant method for ensuring the reliability, safety and quality of software. Intelligent software technology has not only offered new opportunities but also posed challenges to software reliability technology. The focus of this paper is to explore software reliability testing technology under the impact of intelligent software technology. In this study, the basic theories of traditional software and intelligent software reliability testing were investigated via related previous works, and a general software reliability testing framework was established. Then, the technologies of software reliability testing were analyzed, including reliability modeling, test case generation, reliability evaluation, testing criteria and testing methods. Finally, the challenges and opportunities of software reliability testing technology are discussed at the end of the paper.
Availability and Reliability Analysis of Computer Software Systems Considering Maintenance and Security Issues
Ph.D. (Doctor of Philosophy) thesis
Software Reliability Growth Model with Partial Differential Equation for Various Debugging Processes
Most Software Reliability Growth Models (SRGMs) based on the Nonhomogeneous Poisson Process (NHPP) assume either perfect or imperfect debugging. However, environmental factors introduce great uncertainty into SRGMs during the development and testing phases. We propose a novel NHPP model based on a partial differential equation (PDE) to quantify the uncertainties associated with the perfect or imperfect debugging process. We represent the environmental uncertainties collectively as a noise of arbitrary correlation. Under the new stochastic framework, one can compute the full statistical information of the debugging process, for example its probability density function (PDF). Through a number of comparisons with historical data and existing methods, such as the classic NHPP model, the proposed model exhibits a closer fit to observations. Beyond the conventional focus on the mean value of fault detection, the newly derived full statistical information can further help software developers make decisions on system maintenance and risk assessment.
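The idea of recovering a full distribution rather than just a mean can be illustrated with a crude Monte Carlo stand-in for the paper's PDE formulation: take the classic Goel-Okumoto NHPP mean value function and randomize its detection rate to represent environmental noise. All parameter values here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def goel_okumoto(t, a, b):
    """Classic NHPP mean value function m(t) = a * (1 - exp(-b t)):
    expected cumulative faults detected by time t."""
    return a * (1.0 - np.exp(-b * t))

# Environmental uncertainty: sample the detection rate b instead of
# fixing it, then inspect the distribution of m(t), not only its mean.
a, t = 120.0, 25.0
b_samples = rng.normal(0.08, 0.02, size=10_000).clip(min=1e-4)
m_samples = goel_okumoto(t, a, b_samples)
p5, p95 = np.percentile(m_samples, [5, 95])
mean = m_samples.mean()
```

The (p5, p95) band is the kind of information the abstract argues is useful for maintenance and risk decisions; the paper's PDE approach additionally handles noise with arbitrary correlation in time, which this independent-sampling sketch does not.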
Hybrid Software Reliability Model for Big Fault Data and Selection of Best Optimizer Using an Estimation Accuracy Function
Software reliability analysis has come to the forefront of academia as software applications have grown in size and complexity. Traditional methods have focused on minimizing coding errors to guarantee analytic tractability, which causes their estimates to be overly optimistic. However, it is important to take into account non-software factors, such as human error and hardware failure, in addition to software faults, to obtain reliable estimates. In this research, we examine how the peculiarities of big data systems and their need for specialized hardware led to the creation of a hybrid model. We used statistical and soft computing approaches to determine values for the model's parameters, and we explored five criterion values in an effort to identify the most useful method of parameter evaluation for big data systems. For this purpose, we conducted a case study of software failure data from four real projects, using the precision of the estimation function to compare the results. Particle swarm optimization was shown to be the most effective optimization method for the hybrid model constructed from large-scale fault data.
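A minimal sketch of the kind of soft-computing parameter estimation the study compares: a bare-bones particle swarm optimizer fitting a Goel-Okumoto SRGM to cumulative fault counts by minimizing the sum of squared errors. The fault data below are synthetic, not the paper's four-project data, and the swarm hyperparameters are conventional defaults:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cumulative fault counts over 10 test intervals (illustrative)
t = np.arange(1, 11, dtype=float)
y = np.array([12, 22, 30, 37, 42, 46, 49, 51, 53, 54], dtype=float)

def sse(params):
    """Sum of squared errors of a Goel-Okumoto fit m(t) = a(1 - e^{-bt})."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return float(np.sum((y - a * (1.0 - np.exp(-b * t))) ** 2))

def pso(obj, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box bounds:
    inertia w, cognitive weight c1, social weight c2."""
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([obj(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()       # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([obj(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

(a_hat, b_hat), err = pso(sse, [(1.0, 200.0), (0.01, 2.0)])
```

Swapping `sse` for other fitness criteria (e.g. mean absolute error, or an estimation accuracy function as in the study) is how different parameter-evaluation methods would be compared under the same optimizer.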
Guidelines for Statistical Testing
This document provides an introduction to statistical testing. Statistical testing of software is here defined as testing in which the test cases are produced by a random process meant to generate different test cases with the same probabilities with which they would arise in actual use of the software. Statistical testing of software has these main advantages. For the purpose of reliability assessment and product acceptance, it directly supports estimates of reliability, and thus decisions on whether the software is ready for delivery or for use in a specific system; this feature is unique to statistical testing. For the purpose of improving the software, it tends to discover the defects that would cause the most frequent failures before those that would cause less frequent failures, thus focusing correction efforts in the most cost-effective way and delivering better software for a given debugging effort; statistical testing has been reported to achieve dramatic improvements. From the point of view of cost, it facilitates the automation of the test process, allowing more testing at acceptable cost than manual testing would. This document explains the basic theory underlying statistical testing and provides guidance for its application. The material is organised to facilitate use both as an introduction for software engineers who are new to this approach to testing, and as a reference source during application. Statistical testing is applicable to practically all kinds of software, so this document is not markedly specialised for space applications, though the examples are mostly space-related and the discussion of the software lifecycle is meant to apply to common practice among ESA suppliers.
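The reliability-estimation advantage can be made concrete: draw test inputs from an operational profile, and after n failure-free runs compute the classical upper confidence bound on the per-demand failure probability, obtained by solving (1 - p)^n = 1 - C for p at confidence C. The operational profile and its probabilities below are invented for illustration:

```python
import random

random.seed(42)

def failure_prob_upper_bound(n_tests, confidence=0.99):
    """Upper confidence bound on the per-demand failure probability
    after n_tests statistically-sampled, failure-free test executions:
    the largest p such that surviving n tests has probability >= 1 - C."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_tests)

# Hypothetical operational profile: input classes with their
# in-service occurrence probabilities
profile = {"nominal": 0.90, "boundary": 0.08, "invalid": 0.02}
ops, weights = zip(*profile.items())
test_cases = random.choices(ops, weights=weights, k=10)

# ~4603 failure-free tests support a 1e-3 bound at 99% confidence
bound = failure_prob_upper_bound(4603, confidence=0.99)
```

The bound applies only because the test cases are sampled with their operational probabilities; the same count of hand-picked tests would support no such inference, which is the "unique to statistical testing" point made above.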