A study of fault prediction and reliability assessment in the SEL environment
An empirical study on the estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory (SEL) environment is presented. Fault estimation using empirical relationships and fault prediction using a curve-fitting method are investigated. Relationships between debugging effort (fault detection and correction effort) in different test phases are provided in order to make an early estimate of future debugging effort. The study concludes with a fault analysis, the application of a reliability model, and the analysis of a normalized metric for reliability assessment and reliability monitoring during software development.
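The abstract mentions fault prediction by curve fitting but does not state the model used. The sketch below is a hypothetical illustration only: it assumes an exponential (Goel-Okumoto style) cumulative-fault curve and synthetic weekly fault counts; neither the model form nor the data come from the SEL study.

```python
# Hypothetical sketch: fitting a cumulative-fault curve to weekly fault counts.
# The exponential model N(t) = a * (1 - exp(-b*t)) is an assumption, not the
# model used in the SEL study.
import numpy as np
from scipy.optimize import curve_fit

def cumulative_faults(t, a, b):
    """Expected cumulative faults detected by time t (Goel-Okumoto style)."""
    return a * (1.0 - np.exp(-b * t))

# Synthetic observations: week index and cumulative faults found so far.
weeks = np.arange(1, 13)
observed = np.array([5, 11, 16, 20, 24, 27, 29, 31, 32, 33, 34, 34])

params, _ = curve_fit(cumulative_faults, weeks, observed, p0=(40.0, 0.1))
a_hat, b_hat = params
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"predicted faults found by week 20: {cumulative_faults(20, a_hat, b_hat):.1f}")
```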
Using a Log-normal Failure Rate Distribution for Worst Case Bound Reliability Prediction
Prior research has suggested that the failure rates of faults follow a log-normal distribution. We propose a specific model in which distributions close to a log-normal arise naturally from the program structure. The log-normal distribution presents a problem when used in reliability growth models because it is not mathematically tractable. However, we demonstrate that a worst-case bound can be estimated that is less pessimistic than our earlier worst-case bound theory.
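To make the log-normal assumption concrete, the following minimal sketch samples per-fault failure rates from a log-normal distribution and estimates system reliability by Monte Carlo. The parameters, the independence assumption, and the reliability calculation are illustrative placeholders, not the worst-case bound construction from the paper.

```python
# Hypothetical sketch: per-fault failure rates drawn from a log-normal
# distribution, combined into a simple system reliability estimate.
# All parameter values are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(42)

n_faults = 50            # assumed number of residual faults
mu, sigma = -9.0, 1.5    # assumed log-normal parameters of per-fault rates (per hour)
mission_time = 1000.0    # hours

# Sample one set of failure rates; faults are assumed to fail independently.
rates = rng.lognormal(mean=mu, sigma=sigma, size=n_faults)
system_rate = rates.sum()
reliability = np.exp(-system_rate * mission_time)
print(f"total failure rate: {system_rate:.2e}/h, R({mission_time:.0f} h) = {reliability:.4f}")
```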
Comparing global optimization and default settings of stream-based joins
One problem encountered in real-time data integration is the join of a continuous incoming data stream with a disk-based relation. In this paper we investigate a stream-based join algorithm, called mesh join (MESHJOIN), and focus on a critical component of the algorithm, the disk-buffer. In MESHJOIN the size of the disk-buffer varies with the total memory budget, and tuning is required to obtain the maximum service rate within the limited available memory. Until now there has been little data on how the position of the optimum depends on the memory size, and no performance comparison has been carried out between the optimum and reasonable default sizes for the disk-buffer. To avoid tuning, we propose a reasonable default value for the disk-buffer size with a small and acceptable performance loss. The experimental results validate our arguments.
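For readers unfamiliar with mesh-style joins, the sketch below is a simplified reconstruction of the idea: the disk relation is scanned cyclically in fixed-size "disk-buffer" pages, while admitted stream tuples wait in memory until they have been probed against one full pass over the relation. It is an illustrative toy, not the published MESHJOIN implementation, and all names are assumptions.

```python
# Simplified, illustrative mesh-style stream-relation join (not the published
# MESHJOIN code). Assumes a non-empty relation held as a list of tuples.
from collections import deque

def mesh_join(stream, relation, disk_buffer_size, stream_batch_size,
              key=lambda t: t[0]):
    """Yield matching (stream_tuple, relation_tuple) pairs."""
    pages = [relation[i:i + disk_buffer_size]
             for i in range(0, len(relation), disk_buffer_size)]
    n_pages = len(pages)
    queue = deque()          # entries: [stream_tuple, pages_still_to_see]
    stream_iter = iter(stream)
    page_idx = 0
    done = False
    while not done or queue:
        # Admit a batch of new stream tuples; each must see every page once.
        for _ in range(stream_batch_size):
            try:
                queue.append([next(stream_iter), n_pages])
            except StopIteration:
                done = True
                break
        # Load the next disk-buffer page (cyclic scan of the relation).
        page = pages[page_idx]
        page_idx = (page_idx + 1) % n_pages
        index = {}
        for r in page:
            index.setdefault(key(r), []).append(r)
        # Probe every queued stream tuple against this page.
        for entry in queue:
            for r in index.get(key(entry[0]), []):
                yield entry[0], r
            entry[1] -= 1
        # Expire tuples that have now seen the whole relation.
        while queue and queue[0][1] == 0:
            queue.popleft()

stream = [(1, "s1"), (2, "s2"), (3, "s3")]
relation = [(1, "r1"), (2, "r2"), (4, "r4")]
print(list(mesh_join(stream, relation, disk_buffer_size=2, stream_batch_size=2)))
```

In this toy version the disk-buffer size only changes how many pages a relation scan takes; the paper's question of how the optimal size shifts with the total memory budget is not modelled here.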
Does the 'Goldilocks Conjecture' Apply to Software Reuse?
Adopters of corporate software reuse programs face important decisions with respect to the size of components added to the reuse repository. Large components offer substantial savings when reused but limited opportunity for reuse; small components afford greater opportunity for reuse, but with less payoff. This suggests the possibility of an "optimal" component size at which the reuse benefit is at a maximum. In the software engineering discipline, this relationship, termed the Goldilocks Principle, has been empirically observed in software development, software testing, and software maintenance. This paper examines whether the relationship also applies to software reuse. To understand the effects of component size and repository size on the benefits of a reuse program, the paper extends an empirically grounded reuse model to assess the effect of component size on reuse savings. The study finds that a variant of the Goldilocks Principle applies with respect to both component and repository size, suggesting that uncontrolled growth of a reuse repository and an inappropriate choice of component size may reduce the benefits obtained from reuse.
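The trade-off described above (per-reuse savings rising with component size while reuse opportunities fall) can be illustrated with a deliberately simple toy cost model. The model below is a made-up assumption for illustration only; it is not the empirically grounded reuse model that the paper extends, and all parameter values are arbitrary.

```python
# Toy illustration of a "Goldilocks" effect: net reuse benefit peaks at an
# intermediate component size. The cost model and parameters are assumptions.
import numpy as np

def reuse_benefit(size_loc, demand_loc=500_000, dev_cost_per_loc=1.0,
                  adapt_cost_per_loc=0.2, overhead_per_reuse=200.0):
    """Net saving from reusing components of a given size (toy model)."""
    # Larger components match fewer needs (assumed exponential decay).
    match_probability = np.exp(-size_loc / 2_000.0)
    expected_reuses = (demand_loc / size_loc) * match_probability
    # Each reuse saves (development - adaptation) cost per line but pays a
    # fixed search/integration overhead, which penalises very small components.
    saving_per_reuse = size_loc * (dev_cost_per_loc - adapt_cost_per_loc) - overhead_per_reuse
    return expected_reuses * saving_per_reuse

sizes = np.arange(100, 5_001, 100)
benefits = [reuse_benefit(s) for s in sizes]
best = sizes[int(np.argmax(benefits))]
print(f"under these assumptions, benefit peaks at roughly {best} LOC per component")
```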
Improving Open Source Software Maintenance
Maintenance is inevitable for almost any software. Software maintenance is required to fix bugs, to add new features, to improve performance, and/or to adapt to a changed environment. In this article, we examine change in cognitive complexity and its impacts on maintenance in the context of open source software (OSS). Relationships of the change in cognitive complexity with the change in the number of reported bugs, the time taken to fix the bugs, and contributions from new developers are examined and are all found to be statistically significant. In addition, several control variables, such as software size, age, development status, and programmer skills, are included in the analyses. The results have strong implications for OSS project administrators; they must continually measure software complexity and be actively involved in managing it in order to have successful and sustainable OSS products.
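As a rough indication of the kind of analysis described, the sketch below regresses the change in reported bugs on the change in cognitive complexity while controlling for project size and age. The column names, the synthetic data, and the choice of ordinary least squares are assumptions for illustration; they are not the study's dataset or its exact statistical method.

```python
# Illustrative regression with control variables (synthetic data, assumed
# column names); not the study's dataset or model specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "delta_complexity": rng.normal(0, 1, n),
    "size_kloc": rng.uniform(5, 500, n),
    "age_years": rng.uniform(0.5, 15, n),
})
# Synthetic response with a built-in positive complexity effect.
df["delta_bugs"] = (3.0 * df["delta_complexity"]
                    + 0.01 * df["size_kloc"]
                    + rng.normal(0, 2, n))

model = smf.ols("delta_bugs ~ delta_complexity + size_kloc + age_years", data=df).fit()
print(model.summary().tables[1])   # coefficient estimates and p-values
```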
An empirical study on predicting defect numbers
Defect prediction is an important activity for making software testing processes more targeted and efficient. Many methods have been proposed to predict the defect-proneness of software components using supervised classification techniques in within- and cross-project scenarios. However, very few prior studies address this issue from the perspective of predictive analytics, and how to choose among different prediction approaches in a given scenario remains unclear. In this paper, we empirically investigate the feasibility of defect number prediction with typical regression models in different scenarios. Experiments on six open-source software projects from the PROMISE repository show that the prediction model built with Decision Tree Regression appears to be the best estimator in both scenarios, and that for all the prediction models, the results yielded in the cross-project scenario can be comparable to (or sometimes better than) those in the within-project scenario when suitable training data are chosen. These findings provide useful insight into defect number prediction for new and inactive projects.
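The sketch below shows a cross-project defect-number prediction setup with a regression tree, in the spirit of the experiment described above. The synthetic arrays stand in for PROMISE-style per-module metric tables (e.g. size and coupling metrics with defect counts); they are placeholders, not the study's data, features, or exact experimental protocol.

```python
# Sketch of cross-project defect-number prediction with a regression tree.
# Synthetic data stands in for PROMISE-style metric tables (assumption).
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)

def synthetic_project(n_modules):
    X = rng.uniform(0, 1, size=(n_modules, 4))   # four static metrics per module
    y = np.rint(10 * X[:, 0] + 5 * X[:, 3] + rng.poisson(1, n_modules))  # defect counts
    return X, y

X_train, y_train = synthetic_project(300)   # "source" project (training data)
X_test, y_test = synthetic_project(150)     # "target" project (cross-project test)

model = DecisionTreeRegressor(max_depth=5, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(f"cross-project MAE: {mean_absolute_error(y_test, pred):.2f} defects per module")
```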