Is it Possible to Disregard Obsolete Requirements? A Family of Experiments in Software Effort Estimation
Context: Expert judgement is a common method for software effort estimation in practice today. Estimators are often shown extra, obsolete requirements together with the real ones to be implemented. Only one previous study has investigated whether such practices bias the estimates. Objective: We conducted six experiments with both students and practitioners to study, and quantify, the effects of obsolete requirements on software effort estimation. Method: By conducting a family of six experiments with both students and practitioners as research subjects (N = 461), and by using a Bayesian Data Analysis approach, we investigated different aspects of this effect. We also argue for, and show an example of, how a Bayesian approach lets us be more confident in our results and enables further studies with small sample sizes. Results: We found
that the presence of obsolete requirements triggered an overestimation in
effort across all experiments. The effect, however, was smaller in a field
setting compared to using students as subjects. Still, the overestimation triggered by the obsolete requirements was systematically around twice the percentage of obsolete requirements included, although with a large 95% credible interval. Conclusions: The results have implications for both research and practice: the systematic error we found should be accounted for both in studies on software estimation and, perhaps more importantly, in estimation practice, to avoid overestimation due to this error. We partly attribute this error to the cognitive bias of anchoring-and-adjustment, i.e. the obsolete requirements anchored the estimate to a much larger system. However, further studies are needed to predict this effect accurately.
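As an illustration only, the following minimal Python sketch shows how a Bayesian credible interval for a multiplier-style effect ("overestimation is roughly k times the percentage of obsolete requirements") could be computed with a simple grid approximation. The data, the linear model, and the noise level sigma are hypothetical assumptions made for this sketch; they are not the authors' actual data or model.

import numpy as np

# Hypothetical illustration data (NOT from the paper): percentage of
# obsolete requirements shown to each estimator, and the resulting
# percentage of effort overestimation.
obsolete_pct = np.array([10, 10, 20, 20, 30, 30, 40, 40], dtype=float)
overrun_pct = np.array([22, 18, 45, 38, 55, 64, 75, 90], dtype=float)

# Assumed model: overrun_pct ~ Normal(k * obsolete_pct, sigma), with a
# flat prior on the multiplier k and an assumed-known noise level sigma.
k_grid = np.linspace(0.0, 4.0, 2001)
sigma = 10.0

# Log-likelihood of the data for every candidate value of k.
resid = overrun_pct[None, :] - k_grid[:, None] * obsolete_pct[None, :]
log_lik = -0.5 * np.sum((resid / sigma) ** 2, axis=1)

# Normalise to a discrete posterior over the grid (flat prior cancels).
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

# Posterior mean and 95% credible interval for the multiplier k.
k_mean = float(np.sum(k_grid * post))
cdf = np.cumsum(post)
k_lo = k_grid[np.searchsorted(cdf, 0.025)]
k_hi = k_grid[np.searchsorted(cdf, 0.975)]
print(f"multiplier k ~ {k_mean:.2f}, 95% credible interval [{k_lo:.2f}, {k_hi:.2f}]")

With these made-up numbers the posterior mean of k comes out close to 2 with a narrow interval; with the small samples typical of estimation experiments the credible interval would be correspondingly wider, which is the kind of uncertainty the abstract refers to.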