On the feasibility of automated prediction of bug and non-bug issues
Context
Issue tracking systems are used to track and describe tasks in the development process, e.g., requested feature improvements or reported bugs. However, past research has shown that the reported issue types often do not match the description of the issue.
Objective
We want to understand the overall maturity of the state of the art of issue type prediction, with the goal of predicting whether issues are bugs, and to evaluate whether we can improve existing models by incorporating manually specified knowledge about issues.
Method
We train separate models for the title and the description of an issue to account for the structural differences between these fields, e.g., their length. Moreover, we manually detect issues whose description contains a null pointer exception, as this is a strong indicator that an issue is a bug.
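As a minimal sketch of the kind of manually specified knowledge described above, the snippet below flags issues whose description mentions a null pointer exception. The field names and the has_npe helper are hypothetical and purely illustrative; the paper's actual feature extraction is not reproduced here.

```python
import re

# Hypothetical issue record with separate title and description fields,
# mirroring the idea of handling the two fields with separate models.
issue = {
    "title": "App crashes when opening settings",
    "description": "java.lang.NullPointerException at SettingsActivity.onCreate",
}

NPE_PATTERN = re.compile(r"NullPointerException", re.IGNORECASE)

def has_npe(description: str) -> bool:
    """Flag issues whose description mentions a null pointer exception."""
    return bool(NPE_PATTERN.search(description))

# The resulting flag could serve as an additional feature next to the text models.
print(has_npe(issue["description"]))  # True
```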
Results
Our approach performs best overall, but not significantly different from an approach from the literature based on the fastText classifier from Facebook AI Research. The small improvements in prediction performance are due to the structural information about the issues that we used. We found that using information about the content of issues in the form of null pointer exceptions is not useful. We demonstrate the usefulness of issue type prediction through the example of labelling bugfixing commits.
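As a rough illustration of the fastText baseline mentioned above, the sketch below trains a supervised classifier on issue titles. The file name and label scheme are assumptions; the paper's exact training setup and hyperparameters are not reproduced here.

```python
import fasttext

# Assumed training file format: one issue per line, e.g.
#   __label__bug NullPointerException when saving a file
#   __label__nonbug Add dark mode to the settings page
model = fasttext.train_supervised(input="issue_titles.train")

# Predict whether a new issue title describes a bug.
labels, probabilities = model.predict("Crash on startup after update")
print(labels, probabilities)
```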
Conclusions
Issue type prediction can be a useful tool if the use case either allows for a certain amount of missed bug reports or accepts that too many issues are predicted as bugs.