Evaluating Process Quality Based on Change Request Data – An Empirical Study of the Eclipse Project
Abstract. Change request management systems routinely collect data that is valuable for monitoring process quality. However, this data is currently used in a very limited way. This paper presents an empirical study of process quality across the product portfolio of the Eclipse project, based on a systematic approach for evaluating process quality characteristics from change request data. The results offer insights into the development process of Eclipse. Moreover, the study allows us to assess the applicability and limitations of the proposed approach for evaluating process quality.
Proactive Empirical Assessment of New Language Feature Adoption via Automated Refactoring: The Case of Java 8 Default Methods
Programming languages and platforms improve over time, sometimes resulting in
new language features that offer many benefits. However, despite these
benefits, developers may not always be willing to adopt them in their projects
for various reasons. In this paper, we describe an empirical study where we
assess the adoption of a particular new language feature. Studying how
developers use (or do not use) new language features is important in
programming language research and engineering because it gives designers
insight into the usability of the language for creating meaningful programs in that
language. This knowledge, in turn, can drive future innovations in the area.
Here, we explore Java 8 default methods, which allow interfaces to contain
(instance) method implementations.
Default methods can ease interface evolution, make certain ubiquitous design
patterns redundant, and improve both modularity and maintainability. A focus of
this work is to discover, through a scientific approach and a novel technique,
situations where developers found these constructs useful and where they did
not, and the reasons for each. Although several studies center around assessing
new language features, to the best of our knowledge, this kind of construct has
not been previously considered.
Despite their benefits, we found that developers did not adopt default
methods in all situations. Our study consisted of submitting pull requests
introducing the language feature to 19 real-world, open source Java projects
without altering original program semantics. This novel assessment technique is
proactive in that the adoption was driven by an automatic refactoring approach
rather than waiting for developers to discover and integrate the feature
themselves. In this way, we set forth best practices and patterns of using the
language feature effectively earlier rather than later and are able to possibly
guide (near) future language evolution. We expect this technique to be useful
in assessing other new language features, design patterns, and other
programming idioms.
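To illustrate the construct being studied (this example is ours, not from the paper, and the names are hypothetical), a default method lets an interface carry an instance method implementation, so the interface can evolve without breaking existing implementers:

```java
// Hypothetical example of a Java 8 default method: the Shape interface
// gains describe() without forcing existing implementers to change.
interface Shape {
    double area();

    // Default method: an instance method implementation inside an interface.
    // Implementing classes inherit it and may, but need not, override it.
    default String describe() {
        return "shape with area " + area();
    }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override public double area() { return side * side; }
    // No describe() override needed: the default implementation is inherited.
}
```

Here `new Square(3).describe()` returns `"shape with area 9.0"`, even though `Square` predates (conceptually) the addition of `describe()`; this is the interface-evolution benefit the abstract refers to.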
The effects of change decomposition on code review -- a controlled experiment
Background: Code review is a cognitively demanding and time-consuming
process. Previous qualitative studies hinted at how decomposing change sets
into multiple yet internally coherent ones would improve the reviewing process.
So far, literature provided no quantitative analysis of this hypothesis.
Aims: (1) Quantitatively measure the effects of change decomposition on the
outcome of code review (in terms of number of found defects, wrongly reported
issues, suggested improvements, time, and understanding); (2) Qualitatively
analyze how subjects approach the review and navigate the code, building
knowledge and addressing existing issues, in large vs. decomposed changes.
Method: Controlled experiment using the pull-based development model
involving 28 software developers, both professionals and graduate students.
Results: Change decomposition leads to fewer wrongly reported issues,
influences how subjects approach and conduct the review activity (by increasing
context-seeking), yet impacts neither understanding the change rationale nor
the number of found defects.
Conclusions: Change decomposition reduces the noise for subsequent data
analyses but also significantly supports the tasks of the developers in charge
of reviewing the changes. As such, commits belonging to different concepts
should be separated; we recommend adopting this as a best practice in software engineering.
SZZ Unleashed: An Open Implementation of the SZZ Algorithm -- Featuring Example Usage in a Study of Just-in-Time Bug Prediction for the Jenkins Project
Numerous empirical software engineering studies rely on detailed information
about bugs. While issue trackers often contain information about when bugs were
fixed, details about when they were introduced to the system are often absent.
As a remedy, researchers often rely on the SZZ algorithm as a heuristic
approach to identify bug-introducing software changes. Unfortunately, as
reported in a recent systematic literature review, few researchers have made
their SZZ implementations publicly available. Consequently, there is a risk
that research effort is wasted as new projects based on SZZ output need to
initially reimplement the approach. Furthermore, there is a risk that newly
developed (closed source) SZZ implementations have not been properly tested,
thus conducting research based on their output might introduce threats to
validity. We present SZZ Unleashed, an open implementation of the SZZ algorithm
for git repositories. This paper describes our implementation along with a
usage example for the Jenkins project, and concludes with an illustrative study
on just-in-time bug prediction. We hope to continue evolving SZZ Unleashed on
GitHub, and warmly invite the community to contribute.
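SZZ's core heuristic can be summarized as: take the lines a bug-fixing commit deleted or modified, and blame each of them to the last earlier commit that touched that line; those commits become the bug-introducing candidates. The sketch below is our own minimal illustration of that step (not code from SZZ Unleashed), assuming blame data has already been extracted into a map:

```java
import java.util.*;

// Illustrative sketch of SZZ's core step (not code from SZZ Unleashed):
// given the lines a bug-fixing commit deleted or modified, and blame data
// for the pre-fix revision, collect the commits that last touched those
// lines. Those commits are the candidate bug-introducing changes.
class SzzSketch {
    /**
     * @param changedLines line numbers (in the pre-fix revision) that the
     *                     bug-fixing commit deleted or modified
     * @param blame        maps each line number to the id of the commit
     *                     that last changed that line before the fix
     * @return ids of candidate bug-introducing commits
     */
    static Set<String> bugIntroducingCandidates(List<Integer> changedLines,
                                                Map<Integer, String> blame) {
        Set<String> candidates = new TreeSet<>();
        for (int line : changedLines) {
            String commit = blame.get(line);
            if (commit != null) {   // lines with no blame info are skipped
                candidates.add(commit);
            }
        }
        return candidates;
    }
}
```

In a full implementation, the blame data would come from running `git blame` on the parent of the fix commit, and the candidate set would be filtered further, for example by excluding commits made after the bug was reported.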
What is the Connection Between Issues, Bugs, and Enhancements? (Lessons Learned from 800+ Software Projects)
Agile teams juggle multiple tasks, so professionals are often assigned to
multiple projects, especially in service organizations that monitor and
maintain a large suite of software for a large user base. If we could predict
changes in project conditions, then managers could better adjust the staff
allocated to those projects. This paper builds such a predictor using data
from 832 open source and proprietary applications. Using a time series analysis
of the last 4 months of issues, we can forecast how many bug reports and
enhancement requests will be generated next month. The forecasts made in this
way only require a frequency count of these issue reports (and do not require a
historical record of bugs found in the project). That is, this kind of
predictive model is very easy to deploy within a project. We hence strongly
recommend this method for forecasting future issues, enhancements, and bugs in
a project.
Comment: Accepted to the 2018 International Conference on Software Engineering,
software engineering in practice track. 10 pages, 10 figures.
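The key practical point is that the forecast needs only monthly frequency counts of issue reports. As a stand-in for the paper's actual time-series model (which we do not reproduce here), the sketch below forecasts next month's count with a simple moving average over the last few months:

```java
// Illustrative stand-in for the paper's time-series model: forecast next
// month's issue count as the mean of the last `window` monthly counts.
// Only a frequency count per month is needed, as the abstract notes.
class IssueForecast {
    static double movingAverageForecast(int[] monthlyCounts, int window) {
        if (window <= 0 || monthlyCounts.length < window) {
            throw new IllegalArgumentException(
                "need at least " + window + " months of counts");
        }
        double sum = 0;
        for (int i = monthlyCounts.length - window; i < monthlyCounts.length; i++) {
            sum += monthlyCounts[i];
        }
        return sum / window;
    }
}
```

For example, with monthly issue counts `{12, 8, 10, 14, 16}` and a 4-month window, the forecast for next month is `(8 + 10 + 14 + 16) / 4 = 12.0`. A deployed model could swap this for any time-series method without changing the data requirement.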
Supporting Modern Code Review
Modern code review is a lightweight and asynchronous process of auditing code changes that is done by a reviewer other than the author of the changes. Code review is widely used in both open source and industrial projects because of its diverse benefits, which include defect identification, code improvement, and knowledge transfer. This thesis presents three research results on code review. First, we conduct a large-scale developer survey. The objective of the survey is to understand how developers conduct code review and what difficulties they face in the process. We also reproduce the survey questions from previous studies to broaden the base of empirical knowledge in the code review research community. Second, we investigate in depth the coding conventions applied during code review. These coding conventions guide developers to write source code in a consistent format. We determine how many coding convention violations are introduced, removed, and addressed, based on comments left by reviewers. The results show that developers put a great deal of effort into checking for convention violations, although various convention checking tools are available. Third, we propose a technique that automatically recommends related code review requests. When a new patch is submitted for code review, our technique recommends previous code review requests that contain a patch similar to the new one. Developers can locate meaningful information and development context from our recommendations. With two empirical studies and an automation technique for recommending related code reviews, this thesis broadens the empirical knowledge base for code review practitioners and provides a useful approach that supports developers in streamlining their review efforts.
Locating bugs without looking back
Bug localisation is a core program comprehension task in software maintenance: given the observation of a bug, e.g. via a bug report, where is it located in the source code? Information retrieval (IR) approaches see the bug report as the query, and the source code files as the documents to be retrieved, ranked by relevance. Such approaches have the advantage of not requiring expensive static or dynamic analysis of the code. However, current state-of-the-art IR approaches rely on project history, in particular previously fixed bugs or previous versions of the source code. We present a novel approach that directly scores each current file against the given report, thus not requiring past code and reports. The scoring method is based on heuristics identified through manual inspection of a small sample of bug reports. We compare our approach to eight others, using their own five metrics on their own six open source projects. Out of 30 performance indicators, we improve on 27 and equal 2. Over the projects analysed, on average we find one or more affected files in the top 10 ranked files for 76% of the bug reports. These results show the applicability of our approach to software projects without history.
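The query/document framing described above can be made concrete with a small sketch. The paper's actual score uses manually derived heuristics that we do not reproduce; the code below is only a generic token-overlap baseline illustrating how each current file is scored directly against the report, with no project history:

```java
import java.util.*;

// Generic IR-style sketch (not the paper's heuristics): score each source
// file by how many distinct bug-report terms it contains, then rank the
// files by descending score. No past bugs or versions are consulted.
class BugLocalizer {
    static Set<String> tokens(String text) {
        return new HashSet<>(Arrays.asList(text.toLowerCase().split("\\W+")));
    }

    /** File names ranked by the number of distinct report terms they share. */
    static List<String> rank(String bugReport, Map<String, String> fileContents) {
        Set<String> query = tokens(bugReport);
        Map<String, Long> score = new HashMap<>();
        for (Map.Entry<String, String> e : fileContents.entrySet()) {
            long overlap = tokens(e.getValue()).stream()
                                 .filter(query::contains).count();
            score.put(e.getKey(), overlap);
        }
        List<String> names = new ArrayList<>(fileContents.keySet());
        names.sort(Comparator.comparingLong((String n) -> score.get(n)).reversed());
        return names;
    }
}
```

Given a report mentioning "parser crashes with token error", a file containing "parse token stream error recovery" shares more terms than a UI file and ranks first. The paper's contribution is a much better scoring function in this same slot, not the ranking framework itself.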