Software Qualimetry at Schneider Electric: a field background
This paper presents the Source Code Quality Indicators (SCQI) project, led by the Strategy & Innovation corporate team, to deploy Software Qualimetry within a large-scale multinational organization such as Schneider Electric (SE). The SCQI method was designed from a list of relevant use cases and relies on the main concepts of the SQALE [1] evaluation method. To support this method, SE selected the SQuORE [2] platform for its capability to support large-scale deployment together with high versatility and adaptability to local needs. Feedback and lessons learned from initial deployments are now being used to speed up the institutionalization of the qualimetry process across the whole company.
Does This Code Change Affect Program Behavior? Identifying Nonbehavioral Changes with Bytecode
A. Maejima, Y. Higo, J. Matsumoto and S. Kusumoto, "Does This Code Change Affect Program Behavior? Identifying Nonbehavioral Changes with Bytecode," 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 13-17 July 2020, pp. 1103-1104, doi: 10.1109/COMPSAC48688.2020.0-119.
Should I Bug You? Identifying Domain Experts in Software Projects Using Code Complexity Metrics
In any sufficiently complex software system there are experts who have a deeper understanding of parts of the system than others do. However, it is not always clear who these experts are and with which particular parts of the system they can help. We propose a framework to elicit the expertise of developers and recommend experts by analyzing complexity measures over time. Furthermore, teams can detect those parts of the software for which no, or only a few, experts currently exist and take preventive action to keep collective code knowledge and ownership high. We employed the developed approach at a medium-sized company. The results were evaluated with a survey comparing the perceived and the computed expertise of developers. We show that aggregated code metrics can be used to identify experts for different software components. The identified experts were rated as acceptable candidates by developers in over 90% of all cases.
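The core idea of the abstract above, aggregating per-developer complexity contributions over time and ranking the result per component, can be sketched as follows. The data, the additive scoring, and the component granularity are illustrative assumptions, not the paper's exact method.

```python
from collections import defaultdict

# Hypothetical (developer, component, complexity_delta) tuples, e.g.
# mined from commit history; illustrative data only.
changes = [
    ("alice", "parser", 12), ("alice", "parser", 8),
    ("bob", "parser", 3), ("bob", "ui", 15),
    ("carol", "ui", 2),
]

def rank_experts(changes):
    """Return, per component, developers sorted by aggregated complexity
    contributed, highest first (a simple additive expertise proxy)."""
    scores = defaultdict(lambda: defaultdict(int))
    for dev, component, delta in changes:
        scores[component][dev] += delta
    return {
        comp: sorted(devs.items(), key=lambda kv: kv[1], reverse=True)
        for comp, devs in scores.items()
    }

experts = rank_experts(changes)
print(experts["parser"][0][0])  # top-ranked expert for "parser": alice
```

A real deployment would weight recent changes more heavily, as the abstract's "over time" suggests, but the ranking step stays the same.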
a research program
Most process research relies heavily on terms and concepts whose validity depends on a variety of assumptions being met. Because it is difficult to guarantee that they are met, such work continually runs the risk of being invalid. We propose a different and complementary approach to understanding process: perform all description bottom-up, based on hard data alone. We call the approach actual process and the data actual events. Actual events can be measured automatically. This paper describes what has already been done in this area and which core problems remain to be solved.
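The bottom-up stance described above can be illustrated with a minimal sketch: actual events as automatically captured, timestamped records, with description limited to ordering the raw data rather than imposing a process model. The event schema here is an assumption for illustration, not the paper's definition.

```python
from datetime import datetime

# Hypothetical automatically captured events: (tool, action, timestamp).
events = [
    ("git", "commit", datetime(2023, 5, 1, 9, 15)),
    ("editor", "save", datetime(2023, 5, 1, 9, 10)),
    ("ci", "build_passed", datetime(2023, 5, 1, 9, 20)),
]

# A bottom-up description is just an ordering of the hard data; no
# interpretive process concepts are layered on top.
timeline = sorted(events, key=lambda e: e[2])
print([action for _, action, _ in timeline])  # ['save', 'commit', 'build_passed']
```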
Revisiting Process versus Product Metrics: a Large Scale Analysis
Numerous methods can build predictive models from software data. However,
what methods and conclusions should we endorse as we move from analytics
in-the-small (dealing with a handful of projects) to analytics in-the-large
(dealing with hundreds of projects)?
To answer this question, we recheck prior small-scale results (about process
versus product metrics for defect prediction and the granularity of metrics)
using 722,471 commits from 700 Github projects. We find that some analytics
in-the-small conclusions still hold when scaling up to analytics in-the-large.
For example, like prior work, we see that process metrics are better predictors
for defects than product metrics (best process/product-based learners
respectively achieve recalls of 98%/44% and AUCs of 95%/54%, median
values).
That said, we warn that it is unwise to trust metric importance results from
analytics in-the-small studies since those change dramatically when moving to
analytics in-the-large. Also, when reasoning in-the-large about hundreds of
projects, it is better to use predictions from multiple models (since single
model predictions can become confused and exhibit a high variance).
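The closing advice, preferring predictions aggregated over multiple models to a single model's output, can be sketched as a majority vote. The three models and their per-commit outputs are hypothetical; this is not the paper's exact pipeline.

```python
def vote(predictions):
    """Majority vote over per-model binary predictions for one commit
    (1 = predicted defective). Smooths out single-model variance."""
    return int(sum(predictions) > len(predictions) / 2)

# Hypothetical outputs of three trained defect predictors per commit.
model_outputs = [
    (1, 1, 0),  # two of three flag the commit -> predict defective
    (0, 1, 0),  # only one flags it -> predict clean
    (1, 1, 1),
]

ensembled = [vote(p) for p in model_outputs]
print(ensembled)  # [1, 0, 1]
```

In practice the constituent models would be trained learners (e.g. on process metrics), but the aggregation step is the point: disagreement among models is resolved rather than inherited.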
Using Bug Reports as a Software Quality Measure
Bugzilla is an online software bug reporting system. It is widely used by both open-source software projects and commercial software companies, and it has become a major source for studying software evolution, software project management, and software quality control. In some research studies, the number of bug reports has been used as an indicator of software quality. This paper examines that practice: we investigate whether the number of bug reports for a specific version of a software product is correlated with its quality. Our study covers six branches of three open-source software systems. Our results do not support using the number of bug reports as a quality indicator for a specific version of an evolving software product. Instead, the study reveals that the number of bug reports is correlated, to some extent, with the time between product releases. Finally, the paper suggests using accumulated bug reports to represent the quality of a software branch.