Towards Meaningful Statements in IR Evaluation: Mapping Evaluation Measures to Interval Scales
Recently, it was shown that most popular IR measures are not interval-scaled,
implying that decades of experimental IR research have relied on potentially
improper methods, which may have produced questionable results. However, it was
unclear whether, and to what extent, these findings apply to actual
evaluations; this opened a debate in the community, with researchers taking
opposite positions on whether this should be considered an issue and to what
extent.
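The concern can be made concrete with a small illustration of our own (not taken from the paper, and using hypothetical scores): on a scale that is only ordinal, comparing systems by their mean scores is not safe, because a strictly monotone, and therefore permissible, transformation of the scores can reverse the comparison.

```python
# Illustration (ours, with hypothetical data): on an ordinal scale,
# a strictly monotone transformation can flip a comparison of means.
import statistics

a = [0, 0, 4]   # hypothetical per-topic scores for system A
b = [1, 1, 1]   # hypothetical per-topic scores for system B

print(statistics.mean(a) > statistics.mean(b))   # True: A beats B on raw means

f = lambda x: x ** 0.5   # strictly monotone, hence order-preserving
print(statistics.mean([f(x) for x in a])
      > statistics.mean([f(x) for x in b]))      # False: the verdict flips
```

Since both score assignments express the same ordering of runs, a conclusion that holds for one but not the other is, in the terminology below, not meaningful.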
In this paper, we first give an introduction to representational measurement
theory, explaining why certain operations and significance tests are
permissible only with scales of a certain level. To this end, we introduce the
notion of meaningfulness, which specifies the conditions under which the truth
(or falsity) of a statement is invariant under permissible transformations of a
scale. Furthermore, we show how the recall base and the length of the run may
make comparison and aggregation across topics problematic.
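The recall-base issue can be sketched in a few lines (an illustration of ours, not the paper's example): the set of values Recall can assume on a topic depends on that topic's recall base, so raw Recall values from different topics live on differently spaced grids.

```python
# Illustration (ours): the values Recall can take depend on the recall
# base, i.e. the number of relevant documents for a topic.
from fractions import Fraction

def achievable_recall_values(recall_base):
    # Every reachable Recall value: k relevant retrieved out of recall_base.
    return [Fraction(k, recall_base) for k in range(recall_base + 1)]

print(achievable_recall_values(2))  # 0, 1/2, 1
print(achievable_recall_values(3))  # 0, 1/3, 2/3, 1
```

Averaging a Recall of 1/2 from the first topic with 1/3 from the second implicitly treats steps of different sizes as if they were comparable.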
Then we propose a straightforward and powerful approach for turning an
evaluation measure into an interval scale, and describe an experimental
evaluation of the differences between using the original measures and their
interval-scaled counterparts.
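As a minimal sketch of the general idea (our reading, not necessarily the authors' exact construction): one way to obtain an interval scale is to enumerate all values a measure can assume in a given setting and replace each observed value by the equally spaced position of its rank in that ordered set.

```python
# Hedged sketch (ours): map an observed measure value to the equally
# spaced position of its rank among all achievable values, so that
# consecutive achievable values become exactly equidistant.
def interval_scale(observed, achievable):
    ordered = sorted(set(achievable))
    rank = ordered.index(observed)
    return rank / (len(ordered) - 1)   # rescaled to [0, 1]

vals = [0.0, 0.2, 0.5, 1.0]        # hypothetical achievable measure values
print(interval_scale(0.5, vals))   # rank 2 of 3: uneven gaps become uniform
```

Under such a mapping, equal differences between transformed values correspond to equal numbers of achievable-value steps, which is what interval-level operations like averaging require.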
For all the measures considered - namely Precision, Recall, Average Precision,
(Normalized) Discounted Cumulative Gain, Rank-Biased Precision and Reciprocal
Rank - we observe substantial effects, both on the order of average values and
on the outcome of significance tests. For the latter, previously significant
differences turn out to be insignificant, while insignificant ones become
significant. The effect varies remarkably between the tests considered, but
overall, on average, we observed a 25% change in the decision about which
systems are significantly different and which are not.