
    Supporting Defect Causal Analysis in Practice with Cross-Company Data on Causes of Requirements Engineering Problems

    [Context] Defect Causal Analysis (DCA) represents an efficient practice to improve software processes. While knowledge of cause-effect relations is helpful to support DCA, collecting cause-effect data may require significant effort and time. [Goal] We propose and evaluate a new DCA approach that uses cross-company data to support the practical application of DCA. [Method] We collected cross-company data on causes of requirements engineering problems from 74 Brazilian organizations and built a Bayesian network. Our DCA approach uses the diagnostic inference of the Bayesian network to support DCA sessions. We evaluated our approach by applying a model for technology transfer to industry and conducted three consecutive evaluations: (i) in academia, (ii) with industry representatives of the Fraunhofer Project Center at UFBA, and (iii) in an industrial case study at the Brazilian National Development Bank (BNDES). [Results] We received positive feedback in all three evaluations, and the cross-company data was considered helpful for determining main causes. [Conclusions] Our results strengthen our confidence that supporting DCA with cross-company data is promising and should be further investigated.
    Comment: 10 pages, 8 figures, accepted for the 39th International Conference on Software Engineering (ICSE'17)
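    As an illustration of the diagnostic inference described above, the following minimal sketch builds a toy Bayesian network with the pgmpy library and queries the posterior probability of candidate causes given an observed requirements engineering problem. The variables, structure, and probabilities are invented for illustration and are not the paper's actual cross-company model.

```python
# Minimal sketch of diagnostic inference in a Bayesian network, in the
# spirit of the DCA approach above. Variables and probabilities are
# illustrative assumptions, not the paper's model.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two candidate causes and one observable RE problem (all binary).
model = BayesianNetwork([
    ("LackOfDomainKnowledge", "IncompleteRequirements"),
    ("CommunicationProblems", "IncompleteRequirements"),
])

# Priors for the causes (hypothetical frequencies).
cpd_domain = TabularCPD("LackOfDomainKnowledge", 2, [[0.7], [0.3]])
cpd_comm = TabularCPD("CommunicationProblems", 2, [[0.6], [0.4]])

# P(problem | causes): more likely when either cause is present.
cpd_problem = TabularCPD(
    "IncompleteRequirements", 2,
    values=[[0.95, 0.5, 0.6, 0.2],   # problem absent
            [0.05, 0.5, 0.4, 0.8]],  # problem present
    evidence=["LackOfDomainKnowledge", "CommunicationProblems"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_domain, cpd_comm, cpd_problem)

# Diagnostic inference: given that the problem was observed in a DCA
# session, rank candidate causes by posterior probability.
infer = VariableElimination(model)
for cause in ("LackOfDomainKnowledge", "CommunicationProblems"):
    posterior = infer.query([cause], evidence={"IncompleteRequirements": 1})
    print(cause, posterior.values[1])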

    Software Metrics in Boa Large-Scale Software Mining Infrastructure: Challenges and Solutions

    In this paper, we describe our experience implementing some classic software engineering metrics using Boa - a large-scale software repository mining platform - and its dedicated language. We also take advantage of the Boa infrastructure to propose new software metrics and to characterize open source projects, providing reference values for these metrics based on a large number of open source projects. The presented metrics, both the well-known ones and those proposed in this paper, can be used to build large-scale software defect prediction models. Additionally, we present the obstacles we met while developing the metrics, and our analysis can be used to improve Boa in its future releases. The implemented metrics can also serve as a foundation for more complex explorations of open source projects and as a guide on how to implement software metrics with Boa, as the source code of the metrics is freely available to support reproducible research.
    Comment: Chapter 8 of the book "Software Engineering: Improving Practice through Research" (B. Hnatkowska and M. Śmiałek, eds.), pp. 131-146, 201
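    Boa metrics are written in Boa's own dedicated language; as a rough plain-Python analogue of the kind of classic size metric discussed above, the sketch below counts non-blank, non-comment lines of Java code per project. The file-walking logic and comment heuristics are our own simplifications, not the chapter's implementation.

```python
# Plain-Python analogue of a classic size metric one might compute with
# Boa: non-blank, non-comment lines of Java code per project. The
# heuristics here are deliberately simplistic illustrations.
from pathlib import Path

def sloc(java_file: Path) -> int:
    """Count non-blank lines that are not // or /* ... */ comments."""
    count = 0
    in_block_comment = False
    for line in java_file.read_text(errors="ignore").splitlines():
        stripped = line.strip()
        if in_block_comment:
            if "*/" in stripped:
                in_block_comment = False
            continue
        if not stripped or stripped.startswith("//"):
            continue
        if stripped.startswith("/*"):
            in_block_comment = "*/" not in stripped
            continue
        count += 1
    return count

def project_sloc(root: str) -> int:
    return sum(sloc(p) for p in Path(root).rglob("*.java"))

# Aggregating such values over many projects yields the kind of
# reference distributions (medians, quartiles) derived in the chapter
# from Boa's large-scale dataset.
print(project_sloc("path/to/checkout"))
```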

    Connecting Software Metrics across Versions to Predict Defects

    Accurate software defect prediction could help software practitioners allocate test resources to defect-prone modules effectively and efficiently. In recent decades, much effort has been devoted to building accurate defect prediction models, including developing quality defect predictors and modeling techniques. However, widely used defect predictors such as code metrics and process metrics do not capture well how software modules change as a project evolves, which we believe is important for defect prediction. To address this problem, we propose using the Historical Version Sequence of Metrics (HVSM) across consecutive software versions as defect predictors. Furthermore, we leverage a Recurrent Neural Network (RNN), a popular modeling technique, taking HVSM as input to build software defect prediction models. The experimental results show that, in most cases, the proposed HVSM-based RNN model has significantly better effort-aware ranking effectiveness than the commonly used baseline models.
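    To make the HVSM idea concrete, here is a minimal numpy sketch of a vanilla RNN that consumes a module's metric vectors from consecutive versions and emits a defect-proneness score. The dimensions, random (untrained) weights, and single-layer architecture are illustrative assumptions, not the paper's model.

```python
# Minimal sketch: feed a module's per-version metric vectors through a
# vanilla RNN and read a defect score from the final hidden state.
import numpy as np

rng = np.random.default_rng(0)
n_metrics, n_hidden = 8, 16           # metrics per version, hidden units

# Untrained parameters; in practice these are learned from labeled data.
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_metrics))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
b_h = np.zeros(n_hidden)
w_out = rng.normal(scale=0.1, size=n_hidden)

def defect_score(hvsm: np.ndarray) -> float:
    """hvsm: (n_versions, n_metrics) metric sequence for one module."""
    h = np.zeros(n_hidden)
    for x_t in hvsm:                   # one step per software version
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
    logit = w_out @ h
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> defect probability

# A module observed over five consecutive versions.
sequence = rng.normal(size=(5, n_metrics))
print(f"defect-proneness score: {defect_score(sequence):.3f}")
```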

    Too Trivial To Test? An Inverse View on Defect Prediction to Identify Methods with Low Fault Risk

    Background. Test resources are usually limited, so it is often not possible to completely test an application before a release. To cope with the problem of scarce resources, development teams can apply defect prediction to identify fault-prone code regions. However, defect prediction tends to have low precision in cross-project prediction scenarios. Aims. We take an inverse view on defect prediction and aim to identify methods that can be deferred during testing because they contain hardly any faults, their code being "trivial". We expect that characteristics of such methods might be project-independent, so that our approach could improve cross-project predictions. Method. We compute code metrics and apply association rule mining to create rules for identifying methods with low fault risk. We conduct an empirical study to assess our approach with six Java open-source projects containing precise fault data at the method level. Results. Our results show that inverse defect prediction can identify approximately 32-44% of a project's methods as having low fault risk; on average, these methods are about six times less likely to contain a fault than other methods. In cross-project predictions with larger, more diversified training sets, identified methods are even eleven times less likely to contain a fault. Conclusions. Inverse defect prediction supports the efficient allocation of test resources by identifying methods that can be treated with lower priority in testing activities, and it is well applicable in cross-project prediction scenarios.
    Comment: Submitted to PeerJ C
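    A minimal sketch of the inverse view, assuming a toy dataset: evaluate a candidate rule that flags tiny, branch-free methods as low-risk and measure its support and confidence, the same quantities association rule mining optimizes. The metric names, thresholds, and data are invented for illustration, not taken from the study.

```python
# Sketch of inverse defect prediction: a simple rule over discretized
# code metrics that selects methods with low fault risk. Toy data.
from dataclasses import dataclass

@dataclass
class Method:
    sloc: int          # source lines of code
    complexity: int    # cyclomatic complexity
    faulty: bool       # ground truth from the fault dataset

methods = [
    Method(3, 1, False), Method(5, 1, False), Method(120, 14, True),
    Method(8, 2, False), Method(45, 7, True), Method(4, 1, False),
]

# Candidate rule: "trivial" methods (tiny and branch-free) -> low risk.
def is_trivial(m: Method) -> bool:
    return m.sloc <= 10 and m.complexity <= 1

matched = [m for m in methods if is_trivial(m)]
support = len(matched) / len(methods)
confidence = sum(not m.faulty for m in matched) / len(matched)
print(f"rule covers {support:.0%} of methods, "
      f"{confidence:.0%} of which are fault-free")
```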

    Simulation of radiation-induced defects

    Mainly due to their outstanding performance, position-sensitive silicon detectors are widely used in the tracking systems of High Energy Physics experiments such as ALICE, ATLAS, CMS and LHCb at the LHC, the world's largest particle physics accelerator, at CERN, Geneva. The foreseen upgrade of the LHC to its high-luminosity phase (HL-LHC, scheduled for 2023) will enable the facility's maximal physics potential to be exploited. After 10 years of operation, the expected fluence will expose the HL-LHC tracking systems to a radiation environment beyond the capacity of the present system design. Thus, for the required upgrade of the all-silicon central trackers, extensive measurements and simulation studies of silicon sensors of different designs and materials with sufficient radiation tolerance have been initiated within the RD50 Collaboration. Complementing measurements, simulations play a vital role in, e.g., device structure optimization and in predicting the electric fields and trapping in the silicon sensors. The main objective of the device simulations in the RD50 Collaboration is to develop an approach to model and predict the performance of irradiated silicon detectors using professional software. The first successfully developed quantitative models for radiation damage, based on two effective midgap levels, are able to reproduce experimentally observed detector characteristics such as leakage current, full depletion voltage and charge collection efficiency (CCE). Recent implementations of additional traps at or close to the SiO$_2$/Si interface have expanded the scope of the experimentally agreeing simulations to such surface properties as the interstrip resistance and capacitance, and the position dependence of the CCE for strip sensors irradiated up to $\sim 1.5\times10^{15}\,\mathrm{n_{eq}\,cm^{-2}}$.
    Comment: 13 pages, 11 figures, 6 tables, 24th International Workshop on Vertex Detectors, 1-5 June 2015, Santa Fe, New Mexico, US
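    The RD50 device simulations rely on professional TCAD software, but one of the characteristics mentioned above, the full depletion voltage, follows a simple textbook relation, $V_{fd} = q|N_{eff}|d^2 / (2\varepsilon_0\varepsilon_{Si})$. The sketch below evaluates it for a planar sensor; the parameter values are illustrative assumptions, not results from the paper.

```python
# Textbook full-depletion-voltage relation for a planar silicon sensor:
# V_fd = q * |N_eff| * d^2 / (2 * eps_0 * eps_Si). Irradiation changes
# the effective doping |N_eff|; we scan a plausible range.
Q_E = 1.602e-19        # elementary charge [C]
EPS_0 = 8.854e-12      # vacuum permittivity [F/m]
EPS_SI = 11.9          # relative permittivity of silicon

def full_depletion_voltage(n_eff_cm3: float, thickness_um: float) -> float:
    """Full depletion voltage [V] for |N_eff| in cm^-3 and thickness in um."""
    n_eff = n_eff_cm3 * 1e6          # cm^-3 -> m^-3
    d = thickness_um * 1e-6          # um -> m
    return Q_E * n_eff * d**2 / (2 * EPS_0 * EPS_SI)

# Example: a 300 um thick sensor at three effective doping levels.
for n_eff in (1e11, 1e12, 1e13):
    print(f"|N_eff| = {n_eff:.0e} cm^-3 -> "
          f"V_fd = {full_depletion_voltage(n_eff, 300):.1f} V")
```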