
    Testing the stability of “wisdom of crowds” judgments of search results over time and their similarity with the search engine rankings

    PURPOSE: One of the under-explored aspects of user information-seeking behaviour is the influence of time on relevance evaluation. Previous studies have shown that individual users may change their assessment of search results over time. It is also known that aggregated judgments of multiple individual users can lead to correct and reliable decisions; this phenomenon is known as the “wisdom of crowds”. The aim of this study is to examine whether aggregated judgments are more stable, and thus more reliable, over time than individual user judgments.
    DESIGN/METHODS: Two simple measures are proposed to calculate the aggregated judgments of search results and to compare their reliability and stability with those of individual user judgments. In addition, the aggregated “wisdom of crowds” judgments were used to compare human assessments of search results with the search engine’s rankings. A large-scale user study was conducted with 87 participants who evaluated two different queries and four diverse result sets twice, with an interval of two months. Two types of judgments were considered: 1) relevance on a 4-point scale, and 2) ranking on a 10-point scale without ties.
    FINDINGS: Aggregated judgments were found to be much more stable than individual user judgments, yet quite different from the search engine rankings.
    PRACTICAL IMPLICATIONS: The proposed “wisdom of crowds” based approach provides a reliable reference point for the evaluation of search engines. It is also relevant for exploring the need for personalization and for adapting a search engine’s ranking over time to changes in users’ preferences.
    ORIGINALITY/VALUE: This is the first study to apply the notion of the “wisdom of crowds” to the under-explored phenomenon of change over time in user evaluation of relevance.
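    The abstract does not spell out the two proposed measures, so the following Python sketch is only an illustration of how aggregated “wisdom of crowds” judgments and their stability across two sessions might be computed. The mean-score aggregation, the Spearman rank correlation as a stability measure, the function names, the scipy dependency, and the sample data are all assumptions, not the paper’s actual method.

    # Illustrative sketch only: mean relevance per result is assumed as the
    # aggregation measure, and Spearman correlation as the stability measure.
    from statistics import mean
    from scipy.stats import spearmanr  # assumed dependency for rank correlation

    def aggregate_judgments(judgments):
        """Average each result's scores across participants.

        judgments: list of per-participant dicts mapping result_id -> score
                   (e.g. relevance on a 4-point scale or a rank position).
        Returns a dict mapping result_id -> mean (crowd) score.
        """
        result_ids = judgments[0].keys()
        return {rid: mean(p[rid] for p in judgments) for rid in result_ids}

    def stability(scores_t1, scores_t2):
        """Spearman correlation between two sessions' scores for the same results;
        higher values indicate judgments that are more stable over time."""
        ids = sorted(scores_t1)
        rho, _ = spearmanr([scores_t1[i] for i in ids],
                           [scores_t2[i] for i in ids])
        return rho

    # Hypothetical data: three participants judging three results, two months apart.
    session1 = [{"r1": 4, "r2": 2, "r3": 1},
                {"r1": 3, "r2": 3, "r3": 1},
                {"r1": 4, "r2": 1, "r3": 2}]
    session2 = [{"r1": 2, "r2": 4, "r3": 1},
                {"r1": 4, "r2": 2, "r3": 2},
                {"r1": 3, "r2": 3, "r3": 1}]

    crowd_t1 = aggregate_judgments(session1)
    crowd_t2 = aggregate_judgments(session2)
    print("crowd stability:", stability(crowd_t1, crowd_t2))
    # Average per-participant stability, for comparison against the crowd value.
    print("individual stability:",
          mean(stability(a, b) for a, b in zip(session1, session2)))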