Supporting Defect Causal Analysis in Practice with Cross-Company Data on Causes of Requirements Engineering Problems
[Context] Defect Causal Analysis (DCA) represents an efficient practice to
improve software processes. While knowledge on cause-effect relations is
helpful to support DCA, collecting cause-effect data may require significant
effort and time. [Goal] We propose and evaluate a new DCA approach that uses
cross-company data to support the practical application of DCA. [Method] We
collected cross-company data on causes of requirements engineering problems
from 74 Brazilian organizations and built a Bayesian network. Our DCA approach
uses the diagnostic inference of the Bayesian network to support DCA sessions.
We evaluated our approach by applying a model for technology transfer to
industry and conducted three consecutive evaluations: (i) in academia, (ii)
with industry representatives of the Fraunhofer Project Center at UFBA, and
(iii) in an industrial case study at the Brazilian National Development Bank
(BNDES). [Results] We received positive feedback in all three evaluations and
the cross-company data was considered helpful for determining main causes.
[Conclusions] Our results strengthen our confidence that supporting DCA with
cross-company data is promising and should be further investigated.

Comment: 10 pages, 8 figures, accepted for the 39th International Conference
on Software Engineering (ICSE'17)
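Diagnostic inference in a Bayesian network, as the abstract describes, means reasoning from an observed effect back to its likely causes. A minimal sketch of that idea, using Bayes' rule over a single cause node with purely hypothetical causes and probabilities (not the paper's actual data or network):

```python
# Hedged sketch: diagnostic (effect-to-cause) inference via Bayes' rule.
# Cause names, priors, and likelihoods are illustrative assumptions,
# not taken from the paper's cross-company data.

priors = {  # P(cause): assumed prevalence of each RE problem cause
    "unclear_requirements": 0.40,
    "lack_of_domain_knowledge": 0.35,
    "communication_gaps": 0.25,
}
likelihoods = {  # P(problem observed | cause), also assumed
    "unclear_requirements": 0.8,
    "lack_of_domain_knowledge": 0.5,
    "communication_gaps": 0.6,
}

# Diagnostic inference: P(cause | problem) ∝ P(problem | cause) * P(cause)
joint = {c: likelihoods[c] * priors[c] for c in priors}
total = sum(joint.values())
posterior = {c: joint[c] / total for c in joint}

# Rank candidate causes for discussion in a DCA session
for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {p:.3f}")
```

In a real network the posterior would come from full probabilistic inference over many interconnected nodes; the ranking of candidate causes is what would feed a DCA session.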
Assessing the Risk due to Software Faults: Estimates of Failure Rate versus Evidence of Perfection.
In the debate over the assessment of software reliability (or safety), as applied to critical software, two extreme positions can be discerned: the "statistical" position, which requires that claims of reliability be supported by statistical inference from realistic testing or operation, and the "perfectionist" position, which requires convincing indications that the software is free from defects. These two positions naturally lead to requiring different kinds of supporting evidence, and actually to stating the dependability requirements in different ways, not allowing any direct comparison. There is often confusion about the relationship between statements about software failure rates and about software correctness, and about which evidence can support either kind of statement. This note clarifies the meaning of the two kinds of statement and how they relate to the probability of failure-free operation, and discusses their practical merits, especially for high required reliability or safety.
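The link between the two kinds of statement can be made concrete under a standard reliability assumption (not one the note itself commits to): if failures arrive as a Poisson process with constant rate, a failure-rate claim translates directly into a probability of failure-free operation, whereas a "perfectionist" claim of zero defects implies that probability is 1 for any duration. A minimal sketch:

```python
import math

# Hedged sketch: mapping a "statistical" failure-rate claim to a
# probability of failure-free operation, assuming failures follow a
# Poisson process with constant rate (a standard reliability model;
# the numbers below are illustrative, not from the note).

def p_failure_free(rate_per_hour: float, hours: float) -> float:
    """P(no failure in `hours` hours) = exp(-rate * hours)."""
    return math.exp(-rate_per_hour * hours)

# A claimed bound of 1e-4 failures/hour over a 1,000-hour mission:
p = p_failure_free(1e-4, 1000.0)
print(f"P(failure-free 1,000 h) = {p:.3f}")  # exp(-0.1) ≈ 0.905

# The "perfectionist" limit: a genuinely defect-free program has
# rate 0, so the survival probability is 1 for any mission length.
print(p_failure_free(0.0, 1e9))
```

This also shows why the two positions resist direct comparison: for very high required reliability, the statistical route demands failure-free exposure times that grow inversely with the claimed rate, while the perfectionist route sidesteps rates entirely.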