16 research outputs found

    Testing effort dependent software reliability model for imperfect debugging process considering both detection and correction

    This paper studies the fault detection process (FDP) and the fault correction process (FCP), incorporating a testing effort function and imperfect debugging. To ensure high reliability, software must undergo a testing phase during which faults can be detected and corrected by debuggers. The allocation of testing resources during this phase, usually described by the testing effort function, considerably influences not only the fault detection rate but also the time needed to correct a detected fault. In addition, testing is usually far from perfect, so new faults may be introduced. In this paper, we first show how to incorporate the testing effort function and fault introduction into the FDP and then develop the FCP as a delayed FDP with a correction effort. Various specific paired FDP and FCP models are obtained under different assumptions about fault introduction and correction effort. An illustrative example is presented, and the optimal release policy under different criteria is also discussed.
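
    A minimal sketch of a paired FDP/FCP model of the kind this abstract describes, assuming a Weibull-type testing effort function, an exponential detection model, and a fixed correction lag; none of these forms or parameter values are taken from the paper.

```python
import numpy as np

# Hypothetical sketch of a paired FDP/FCP model with a testing effort function (TEF).
# The functional forms below (Weibull-type TEF, exponential detection, constant
# correction lag) are illustrative assumptions, not the forms used in the paper.

def tef_cumulative(t, alpha=100.0, beta=0.05, kappa=1.5):
    """Cumulative testing effort W(t), assumed Weibull-shaped."""
    return alpha * (1.0 - np.exp(-beta * t**kappa))

def fdp_mean(t, a=120.0, b=0.02):
    """Expected faults detected by time t: m_d(t) = a * (1 - exp(-b * W(t)))."""
    return a * (1.0 - np.exp(-b * tef_cumulative(t)))

def fcp_mean(t, delay=5.0):
    """Expected faults corrected by time t, modeled as the FDP delayed by a fixed correction effort."""
    return fdp_mean(np.maximum(t - delay, 0.0))

if __name__ == "__main__":
    for t in (10, 50, 100):
        print(t, round(fdp_mean(t), 2), round(fcp_mean(t), 2))
```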

    Confidence Interval Estimation of the Conditional Reliability Function for Time Domain Data

    The conditional reliability function gives the probability of successfully completing another operation following the successful completion of a previous operation. Predicting this function can help software developers determine optimal release times. In this paper, the maximum likelihood estimation (MLE) method is used to estimate the parameters of the Non-Homogeneous Poisson Process Log-Logistic (NHPP LL) model, and upper and lower bounds for the parameters and the conditional reliability function are obtained for time domain data. An application to real data uses the coefficient of multiple determination criterion and the observed interval length to evaluate the performance of the NHPP LL model and of the constructed confidence intervals, respectively. Our results encourage further assessment of confidence intervals for other reliability measures of NHPP models.
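
    A rough sketch of fitting an NHPP Log-Logistic model to time-domain failure data by MLE and evaluating the conditional reliability. The mean value function is the standard log-logistic form; the sample failure times and starting values are made up for illustration and are not data from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of NHPP Log-Logistic (LL) fitting for time-domain failure data.
# m(t) = a*(lam*t)^k / (1 + (lam*t)^k) is the standard NHPP LL mean value function;
# the sample data and initial guesses below are illustrative.

def m(t, a, lam, k):
    z = (lam * t) ** k
    return a * z / (1.0 + z)

def intensity(t, a, lam, k):
    z = (lam * t) ** k
    return a * k * (lam ** k) * t ** (k - 1) / (1.0 + z) ** 2

def neg_log_lik(params, times, T):
    a, lam, k = params
    if a <= 0 or lam <= 0 or k <= 0:
        return np.inf
    return -(np.sum(np.log(intensity(times, a, lam, k))) - m(T, a, lam, k))

def conditional_reliability(x, t, a, lam, k):
    """R(x | t): probability of no failure in (t, t + x]."""
    return np.exp(-(m(t + x, a, lam, k) - m(t, a, lam, k)))

if __name__ == "__main__":
    times = np.array([3.0, 8.0, 15.0, 30.0, 55.0, 90.0])  # illustrative failure times
    T = 100.0  # end of observation window
    res = minimize(neg_log_lik, x0=[10.0, 0.05, 1.0], args=(times, T), method="Nelder-Mead")
    a, lam, k = res.x
    print("MLE:", res.x, "R(10 | T) =", conditional_reliability(10.0, T, a, lam, k))
```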

    Effect of Introduction of Fault and Imperfect Debugging on Release Time

    One of the most important decisions in efficiently managing the testing phase of the software development life cycle is determining when to stop testing and release the software to the market, and most testing processes are imperfect ones. In this paper, we first discuss an optimal release time problem for an imperfect fault-debugging model due to Kapur et al., considering the effects of perfect and imperfect debugging separately on the total expected software cost. Next, we propose an SRGM incorporating the effects of imperfect fault debugging and error generation. The proposed model is validated on a data set cited in the literature, and a release time problem is formulated that minimizes the expected cost subject to a minimum reliability level to be achieved by the release time. A solution method for this class of problems is discussed, numerical illustrations are given for both types of release problems, and finally a sensitivity analysis is performed.
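
    A hedged sketch of the kind of cost-versus-reliability release-time trade-off described above, using a plain exponential SRGM and made-up cost coefficients rather than the paper's imperfect-debugging and error-generation model.

```python
import numpy as np

# Illustrative release-time search: minimize expected cost subject to a conditional
# reliability constraint. The exponential SRGM m(t) = a*(1 - exp(-b*t)) and all
# coefficients below are assumptions, not the paper's model or values.

A, B = 100.0, 0.05          # assumed SRGM parameters (fault content, detection rate)
C1, C2, C3 = 1.0, 5.0, 0.5  # cost per fault fixed in testing, per field fault, per unit test time
X, R_MIN = 10.0, 0.90       # mission time and required conditional reliability

def m(t):
    return A * (1.0 - np.exp(-B * t))

def expected_cost(T):
    return C1 * m(T) + C2 * (A - m(T)) + C3 * T

def conditional_reliability(x, T):
    return np.exp(-(m(T + x) - m(T)))

if __name__ == "__main__":
    grid = np.linspace(1.0, 400.0, 4000)
    feasible = [t for t in grid if conditional_reliability(X, t) >= R_MIN]
    best = min(feasible, key=expected_cost)
    print("optimal release time ~", round(best, 1), "cost ~", round(expected_cost(best), 1))
```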

    The economic impact of public beta testing: the power of word-of-mouth

    The advent of the Internet has brought many fundamental changes to the way business is conducted. Among others, a growing number of software firms rely on public beta testing to improve the quality of their products before release. While the benefits resulting from improved software reliability have been widely recognized, the influence of public beta testers on the diffusion of a new software product has not been documented. Through their word-of-mouth effect, public beta testers can speed up the diffusion of a software product after release, and hence increase the time-discounted revenue per adopter. In this research, we take into consideration both the reliability side and the diffusion side of the benefits, and develop methodologies to help firms decide the optimal number of public beta testers and the optimal duration of public beta testing. Numerical results show that the firm's profit can increase substantially by taking advantage of the word-of-mouth of public beta testers. This benefit is more significant if firms recruit beta testers from those who can benefit from a software product but cannot afford it.
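
    For illustration only, a toy Bass diffusion curve in which public beta testers act as seed adopters whose word-of-mouth accelerates post-release adoption. The paper's actual diffusion and profit model is not reproduced here; the Bass form and every parameter value are assumptions.

```python
import numpy as np

# Toy word-of-mouth illustration: beta testers are treated as adopters already present
# at release, shifting a Bass diffusion curve forward. All values are assumptions.

def bass_adopters(t, market=100_000, p=0.01, q=0.4, seed_fraction=0.0):
    """Cumulative adopters at time t when a fraction of the market adopted before release."""
    f0 = seed_fraction
    # Time offset implied by the seeded fraction, from F(t0) = f0 in the Bass model.
    t0 = -np.log((1 - f0) / (1 + (q / p) * f0)) / (p + q)
    e = np.exp(-(p + q) * (t + t0))
    return market * (1 - e) / (1 + (q / p) * e)

if __name__ == "__main__":
    for testers in (0, 2_000, 10_000):
        n = bass_adopters(12.0, seed_fraction=testers / 100_000)
        print(f"{testers:>6} beta testers -> ~{n:,.0f} adopters after 12 months")
```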

    Framework for evaluating warranty in software development

    This article presents a simplified framework for studying warranty costs in software development. Methods are proposed to obtain the parameters required by the reliability models cited in the literature from process metrics commonly found in the baselines of development organizations. The proposed framework is validated through Monte Carlo simulation in order to explore the magnitude of the results and their sensitivity to the parameters used. Preliminary conclusions are drawn and future lines of work are identified. Sociedad Argentina de Informática e Investigación Operativa (SADIO)

    Simplified framework to evaluate software development warranty

    This article presents a simplified framework to evaluate the warranty costs of a software development process. Methods are proposed to obtain the parameters required by the models from metrics commonly associated with a software development project and to extract and apply organizational baselines. The proposed framework is validated using simulation techniques based on the Monte Carlo method, allowing for assessment of the likely distribution of the results and of the sensitivity to the parameters used. Preliminary conclusions are drawn and future lines of work identified. Sociedad Argentina de Informática e Investigación Operativa (SADIO)
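
    A minimal Monte Carlo sketch of the kind of warranty cost assessment such a framework performs, assuming an exponential SRGM whose parameters are drawn from distributions that would, in practice, be fitted to organizational baseline metrics. The distributions, parameter values, and cost per field failure are all illustrative.

```python
import numpy as np

# Monte Carlo sketch of warranty cost: sample SRGM parameters, compute expected field
# failures during the warranty window, and price them. All values are assumptions.

rng = np.random.default_rng(42)

def simulate_warranty_cost(n_runs=10_000, test_time=200.0, warranty=90.0, cost_per_failure=2_000.0):
    a = rng.lognormal(mean=np.log(80), sigma=0.3, size=n_runs)    # total fault content
    b = rng.lognormal(mean=np.log(0.02), sigma=0.2, size=n_runs)  # detection rate
    # Expected failures in the warranty window [test_time, test_time + warranty]
    field_failures = a * (np.exp(-b * test_time) - np.exp(-b * (test_time + warranty)))
    return cost_per_failure * field_failures

if __name__ == "__main__":
    costs = simulate_warranty_cost()
    print("mean cost:", round(costs.mean()), "95th percentile:", round(np.percentile(costs, 95)))
```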

    Analysis of the different solutions available for mobile application development

    Since the launch of the first iPhone, the use of smartphones by the world population has grown over the years (Turner, 2020), and the trend is expected to continue. With the increasing use of smartphones as a means of Internet access, the competition for the user's attention is ever more evident. In this "war", waged by every organization whose business revolves around its presence on the Internet, mobile applications are one of the most effective ways to keep users interested and engaged with what companies produce. However, the mobile application market faces challenges: with its growth, a shortage of human resources has been noted, namely programmers capable of working on mobile application development. It is therefore natural for companies and their workers to look for solutions that allow more efficient development. One solution that has gained popularity is the use of technologies that allow the development of native cross-platform mobile applications from a single code base, which lets developers work more efficiently. However, many of these cross-platform technologies are relatively recent, so there is still no data comparing their development efficiency. Such data would be useful for programmers and companies, since it would allow them to choose the most efficient technology. In this study, a methodology is designed and implemented to collect data on the development efficiency of mobile applications, with the goal of determining the most efficient mobile application development technology. This objective was not accomplished due to lack of data.

    Graph Mining for Software Fault Localization: An Edge Ranking based Approach

    Fault localization is considered one of the most challenging activities in the software debugging process, and it is vital to guaranteeing software reliability. Hence, there has been great demand for automated methods that can pinpoint faults for software developers. Various fault localization techniques based on graph mining have been proposed in the literature. These techniques rely on detecting discriminative sub-graphs between failing and passing traces, but they may not be applicable when the fault does not appear in a discriminative pattern. On the other hand, many approaches focus on selecting potentially faulty program components (statements or predicates) and then ranking these components according to their degree of suspiciousness. One of the difficulties encountered by such approaches is understanding the context of fault occurrence. To address these issues, this paper introduces an approach that helps analyze the context of execution traces based on control flow graphs. The proposed approach ranks the edges between basic blocks of software programs using Dstar, which has proved to be more effective than many fault localization techniques, and it helps detect some types of faults that could not previously be detected by many other approaches. Using the Siemens benchmark, experiments show the effectiveness of the proposed technique compared to well-known approaches such as Dstar, Tarantula, SOBER, Cause Transition and Liblit05, with the percentage of localized faulty versions versus the percentage of code examined taken as the measure. For instance, when the percentage of examined code is 30%, the proposed technique localizes nearly 81% of the faulty versions, outperforming the other techniques.
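
    A short sketch of DStar-style suspiciousness scoring applied to control-flow-graph edges, using the standard D* formula. The edge coverage counts are hypothetical and the paper's instrumentation and ranking pipeline are not reproduced here.

```python
# DStar-style suspiciousness ranking over control-flow-graph edges.
# suspiciousness(e) = failed(e)**star / (passed(e) + (total_failed - failed(e)))
# A small eps is added to the denominator to avoid division by zero.

def dstar(failed_cov, passed_cov, total_failed, star=2, eps=1e-9):
    scores = {}
    for e in set(failed_cov) | set(passed_cov):
        nf = failed_cov.get(e, 0)   # failing traces that executed edge e
        np_ = passed_cov.get(e, 0)  # passing traces that executed edge e
        scores[e] = nf ** star / (np_ + (total_failed - nf) + eps)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # edge -> number of failing / passing traces covering it (hypothetical data)
    failed = {("B1", "B2"): 4, ("B2", "B4"): 4, ("B2", "B3"): 1}
    passed = {("B1", "B2"): 10, ("B2", "B3"): 9, ("B2", "B4"): 1}
    for edge, score in dstar(failed, passed, total_failed=4):
        print(edge, round(score, 2))
```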

    Analysis of an inflection s-shaped software reliability model considering log-logistic testing-effort and imperfect debugging

    Gokhale and Trivedi (1998) proposed the Log-logistic software reliability growth model, which can capture the increasing/decreasing nature of the failure occurrence rate per fault. In this paper, we first show that a Log-logistic testing-effort function (TEF) can be expressed as a software development/testing-effort expenditure curve. We then investigate how to incorporate the Log-logistic TEF into inflection S-shaped software reliability growth models based on the non-homogeneous Poisson process (NHPP). The models' parameters are estimated by least squares estimation (LSE) and maximum likelihood estimation (MLE), and the methods of data analysis and comparison criteria are presented. Experimental results from applications to actual data show a good fit, and a comparative analysis evaluating the effectiveness of the proposed model against other existing models is also performed. Results show that the proposed models give better predictions; therefore, the Log-logistic TEF is suitable for incorporation into inflection S-shaped NHPP growth models. In addition, the proposed models are discussed under an imperfect debugging environment.
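
    A brief sketch of an inflection S-shaped mean value function driven by a Log-logistic testing-effort function, using standard functional forms from the SRGM literature; the parameter values are illustrative, not fitted values from the paper.

```python
import numpy as np

# Inflection S-shaped NHPP mean value function with a Log-logistic TEF.
# Functional forms are the standard ones; all parameter values are assumptions.

def loglogistic_tef(t, N=1000.0, alpha=0.02, kappa=2.0):
    """Cumulative testing effort W(t) = N*(alpha*t)^kappa / (1 + (alpha*t)^kappa)."""
    z = (alpha * t) ** kappa
    return N * z / (1.0 + z)

def inflection_mvf(t, a=150.0, b=0.005, beta=2.0):
    """Expected faults detected: m(t) = a*(1 - exp(-b*W(t))) / (1 + beta*exp(-b*W(t)))."""
    e = np.exp(-b * loglogistic_tef(t))
    return a * (1.0 - e) / (1.0 + beta * e)

if __name__ == "__main__":
    for t in (10, 50, 100, 200):
        print(t, round(float(inflection_mvf(t)), 2))
```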
