12,908 research outputs found

    Incorporating Clicks, Attention and Satisfaction into a Search Engine Result Page Evaluation Model

    Modern search engine result pages often provide immediate value to users and organize information in such a way that it is easy to navigate. The core ranking function contributes to this, and so do result snippets, smart organization of result blocks, and extensive use of one-box answers or side panels. While these features are useful to the user and help search engines stand out, they present two big challenges for evaluation. First, the presence of such elements on a search engine result page (SERP) may lead to the absence of clicks, which, however, does not indicate dissatisfaction, so-called "good abandonments." Second, the non-linear layout and visual differences between SERP items may lead to non-trivial patterns of user attention, which are not captured by existing evaluation metrics. In this paper we propose a model of user behavior on a SERP that jointly captures click behavior, user attention and satisfaction, the CAS model, and demonstrate that it gives more accurate predictions of user actions and self-reported satisfaction than existing models based on clicks alone. We use the CAS model to build a novel evaluation metric that can be applied to non-linear SERP layouts and that can account for the utility that users obtain directly on a SERP. We demonstrate that this metric shows better agreement with user-reported satisfaction than conventional evaluation metrics.
    Comment: CIKM 2016, Proceedings of the 25th ACM International Conference on Information and Knowledge Management, 2016
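    The abstract's core idea is that an item can earn credit either through a click or directly on the SERP (e.g., an answer box), weighted by how likely the user is to attend to it. The sketch below illustrates that idea with a minimal attention-weighted expected-utility metric; the field names and the linear combination are illustrative assumptions, not the authors' exact CAS formulation.

```python
# Hypothetical sketch of an attention-weighted SERP utility metric: each item
# contributes utility both when clicked and when merely examined on the SERP,
# weighted by the probability that the user attends to it.
from dataclasses import dataclass


@dataclass
class SerpItem:
    attention_prob: float   # P(user examines this item), e.g. from an attention model
    click_prob: float       # P(click | examined)
    click_utility: float    # relevance gained after a click (0..1)
    direct_utility: float   # utility obtained on the SERP itself, e.g. a one-box answer


def expected_serp_utility(items: list[SerpItem]) -> float:
    """Expected utility of a (possibly non-linear) SERP layout."""
    total = 0.0
    for item in items:
        on_serp = item.attention_prob * item.direct_utility
        post_click = item.attention_prob * item.click_prob * item.click_utility
        total += on_serp + post_click
    return total


# Example: an answer box that satisfies without a click still earns credit,
# which click-only metrics would miss ("good abandonment").
serp = [
    SerpItem(attention_prob=0.9, click_prob=0.1, click_utility=0.2, direct_utility=0.8),
    SerpItem(attention_prob=0.6, click_prob=0.5, click_utility=0.7, direct_utility=0.0),
]
print(expected_serp_utility(serp))
```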

    Crowdsourcing Paper Screening in Systematic Literature Reviews

    Literature reviews allow scientists to stand on the shoulders of giants, showing promising directions, summarizing progress, and pointing out existing challenges in research. At the same time, conducting a systematic literature review is a laborious and consequently expensive process. In the last decade, there have been a few studies on crowdsourcing in literature reviews. This paper explores the feasibility of crowdsourcing for facilitating the literature review process in terms of results, time and effort, and identifies which crowdsourcing strategies provide the best results for the available budget. In particular, we focus on the screening phase of the literature review process, and we contribute and assess methods for identifying the size of tests, the number of labels required per paper, and the classification functions, as well as methods for splitting the crowdsourcing process into phases to improve results. Finally, we present our findings based on experiments run on Crowdflower.
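    A central decision in the screening phase described above is how to turn several crowd labels per paper into an include/exclude decision (or an escalation to an expert). The sketch below shows one minimal aggregation rule under assumed parameters; the threshold, label vocabulary, and escalation policy are illustrative, not the paper's tuned strategy.

```python
# Minimal sketch of per-paper screening from multiple crowd labels.
from collections import Counter


def screen_paper(labels: list[str], include_threshold: float = 0.6) -> str:
    """Return 'include', 'exclude', or 'expert' (escalate) for one paper."""
    counts = Counter(labels)
    include_ratio = counts["include"] / len(labels)
    if include_ratio >= include_threshold:
        return "include"
    if include_ratio <= 1 - include_threshold:
        return "exclude"
    return "expert"  # ambiguous: send to an expert reviewer in a later phase


# Example with three labels per paper.
print(screen_paper(["include", "include", "exclude"]))  # -> include (2/3 >= 0.6)
print(screen_paper(["include", "exclude", "exclude"]))  # -> exclude
```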

    Living Innovation Laboratory Model Design and Implementation

    The Living Innovation Laboratory (LIL) is an open and reusable way for multidisciplinary researchers to remotely control resources and co-develop user-centered projects. In the past few years, several papers on LIL have been published that discuss and define its model and architecture. Three characteristics of LIL are widely acknowledged: it is user-centered, based on co-creation, and context-aware, which distinguishes it from test platforms and other innovation approaches. Its existing model consists of five phases: initialization, preparation, formation, development, and evaluation. Goal Net is a goal-oriented methodology for formalizing a process. In this thesis, Goal Net is adopted to derive a detailed and systematic methodology for LIL. The LIL Goal Net Model breaks the five phases of LIL into more detailed steps. Big data, crowdsourcing, crowdfunding, and crowd testing are introduced at suitable steps to realize UUI, MCC, and PCA throughout the innovation process in LIL 2.0. It can serve as a guideline for any company or organization developing a project as an LIL 2.0 project. To prove the feasibility of the LIL Goal Net Model, it was applied to two real cases: one project is a Kinect game and the other is an Internet product. Both were successfully transformed into LIL 2.0 projects using the LIL Goal Net based methodology. The two projects were evaluated with phenomenography, a qualitative research method for studying human experiences and their relations, with the aim of finding better ways to improve those experiences. The phenomenographic study produced positive evaluation results showing that the new generation of LIL has advantages in terms of effectiveness and efficiency.
    Comment: This is a book draft

    Components and Functions of Crowdsourcing Systems – A Systematic Literature Review

    Many organizations are now starting to introduce crowdsourcing as a new business model for outsourcing tasks, which are traditionally performed by a small group of people, to an undefined, large workforce. While the utilization of crowdsourcing offers many advantages, the development of the required system carries some risks, which can be reduced by establishing a sound theoretical foundation. Thus, this article strives to gain a better understanding of what crowdsourcing systems are and which design aspects are typically considered in their development. To this end, the author conducted a systematic literature review in the domain of crowdsourcing systems. As a result, 17 definitions of crowdsourcing systems were found and categorized into four perspectives: organizational, technical, functional, and human-centric. In the second part of the results, the author derives and presents the components and functions that are implemented in a crowdsourcing system.

    Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems

    Crowdsourcing systems, in which numerous tasks are electronically distributed to numerous "information piece-workers", have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all such systems must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in an appropriate manner, e.g., majority voting. In this paper, we consider a general model of such crowdsourcing tasks and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm, inspired by belief propagation and low-rank matrix approximation, significantly outperforms majority voting and, in fact, is optimal, as shown by comparison to an oracle that knows the reliability of every worker. Further, we compare our approach with a more general class of algorithms which can dynamically assign tasks. By adaptively deciding which questions to ask the next arriving worker, one might hope to reduce uncertainty more efficiently. We show that, perhaps surprisingly, the minimum price necessary to achieve a target reliability scales in the same manner under both adaptive and non-adaptive scenarios. Hence, our non-adaptive approach is order-optimal under both scenarios. This result strongly relies on the fact that workers are fleeting and cannot be exploited. Architecturally, therefore, our results suggest that building a reliable worker-reputation system is essential to fully harnessing the potential of adaptive designs.
    Comment: 38 pages, 4 figures
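    To make the contrast with majority voting concrete, the sketch below implements a much simplified iterative re-weighting scheme in the spirit of belief-propagation-style aggregation: it alternately estimates task answers from weighted votes and worker weights from agreement with those estimates. This is an illustrative toy for binary (+1/-1) tasks under simulated workers, not the paper's exact algorithm or guarantees.

```python
# Toy comparison: plain majority voting vs. iterative worker re-weighting.
import numpy as np

rng = np.random.default_rng(0)

n_tasks, n_workers = 200, 30
truth = rng.choice([-1, 1], size=n_tasks)
reliability = rng.uniform(0.55, 0.95, size=n_workers)  # P(correct) per worker

# Each worker answers every task; A[i, j] is worker j's answer to task i.
correct = rng.random((n_tasks, n_workers)) < reliability
A = np.where(correct, truth[:, None], -truth[:, None])

# Majority-voting baseline.
majority = np.sign(A.sum(axis=1))

# Iterative re-weighting: start with uniform worker weights.
w = np.ones(n_workers)
for _ in range(10):
    est = np.sign(A @ w)                      # weighted vote per task
    agree = (A == est[:, None]).mean(axis=0)  # agreement rate per worker
    w = 2 * agree - 1                         # map agreement to a vote weight

weighted = np.sign(A @ w)

print("majority accuracy:", (majority == truth).mean())
print("weighted accuracy:", (weighted == truth).mean())
```

    In this setup, workers who tend to agree with the (estimated) consensus receive larger weights, so answers from unreliable workers are gradually discounted, which is the intuition behind outperforming unweighted majority voting.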