
    HITSnDIFFs: From Truth Discovery to Ability Discovery by Recovering Matrices with the Consecutive Ones Property

    We analyze a general problem in a crowdsourced setting where one user asks a question (also called an item) and other users return answers (also called labels) for this question. Unlike existing crowdsourcing work, which focuses on finding the most appropriate label for the question (the "truth"), our problem is to determine a ranking of the users based on their ability to answer questions. We call this problem "ability discovery" to emphasize the connection to, and duality with, the better-studied problem of "truth discovery". To model items and their labels in a principled way, we draw upon Item Response Theory (IRT), the widely accepted theory behind standardized tests such as the SAT and GRE. We start from an idealized setting where the relative performance of users is consistent across items and better users choose better-fitting labels for each item. We posit that a principled algorithmic solution to our more general problem should solve this ideal setting correctly, and we observe that the response matrices in this setting obey the Consecutive Ones Property (C1P). While C1P is well understood algorithmically, with various discrete algorithms available, we devise a novel variant of the HITS algorithm, which we call "HITSnDIFFs" (or HND), and prove that it recovers the ideal C1P permutation when one exists. Unlike fast combinatorial algorithms for finding the consecutive ones permutation (if it exists), HND also returns an ordering when no such permutation exists. It thus provides a principled heuristic for our problem that is guaranteed to return the correct answer in the ideal setting. Our experiments show that HND produces user rankings with robustly high accuracy compared to state-of-the-art truth discovery methods. We also show that our variant of HITS scales better in the number of users than ABH, the only prior spectral C1P reconstruction algorithm.
    Comment: 22 pages, 14 figures; long version of the ICDE 2024 conference paper.
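
    To make the spectral intuition concrete, the following is a minimal sketch of a classic HITS-style alternating power iteration on a user-answer response matrix. It is not the paper's actual HND update rule (HND modifies HITS so that the C1P ordering is recovered); the matrix R and all names here are illustrative assumptions.

        # HITS-style alternating power iteration over a 0/1 response matrix.
        # Illustrates the general spectral-ranking family HND belongs to,
        # not the HND algorithm itself.
        import numpy as np

        def hits_user_scores(R, iters=100, tol=1e-9):
            """R: (n_users x n_labels) 0/1 matrix; R[u, l] = 1 if user u chose label l."""
            u = np.ones(R.shape[0])            # user "ability" scores
            for _ in range(iters):
                v = R.T @ u                    # labels chosen by able users score high
                v /= np.linalg.norm(v)
                u_new = R @ v                  # users choosing good labels score high
                u_new /= np.linalg.norm(u_new)
                if np.linalg.norm(u_new - u) < tol:
                    return u_new
                u = u_new
            return u

        # usage: rank users by descending score
        # order = np.argsort(-hits_user_scores(R))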

    A Cyber Physical System Crowdsourcing Inference Method Based on Tempering: An Advancement in Artificial Intelligence Algorithms

    Activity selection is critical for smart environments and Cyber-Physical Systems (CPSs) that provide timely and intelligent services, especially as the number of connected devices grows at unprecedented speed. Because labels involving high-level knowledge must be collected from many agents in a CPS, crowdsourcing inference algorithms are designed to help acquire accurate labels. However, algorithms in the existing literature have limitations: they may incur extra budget, fail to scale appropriately, require knowledge of the prior distribution, be difficult to implement, or converge to poor local optima. In this paper, we provide a crowdsourcing inference method with variational tempering that recovers the ground truth while accounting for both the reliability of workers and the difficulty of the tasks, and that is guaranteed to converge to a local optimum. Numerical experiments on real-world data indicate that our variational tempering inference algorithm outperforms existing state-of-the-art algorithms. This paper therefore provides a new, efficient algorithm for CPSs and machine learning, making a new contribution to the literature.
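
    As one way to picture the approach, here is a toy tempered EM loop for a one-coin worker model on binary tasks: a temperature schedule keeps the posterior smooth early on and sharpens it as iterations proceed. This is a deliberately simplified sketch under assumed names (L, tempered_em); the paper's method is variational tempering over a richer model that also captures task difficulty.

        # Toy tempered EM for a one-coin worker model (binary labels).
        # NOT the paper's algorithm; a sketch of the tempering idea only.
        import numpy as np

        def tempered_em(L, iters=50):
            """L: (n_workers x n_tasks) matrix of labels in {0, 1}."""
            q = L.mean(axis=0)                       # init: majority-vote posterior P(truth=1)
            for t in range(iters):
                T = max(1.0, 5.0 * (1 - t / iters))  # temperature anneals from 5 down to 1
                # M-step: reliability = expected agreement with current posterior
                p = (L * q + (1 - L) * (1 - q)).mean(axis=1)
                p = np.clip(p, 1e-3, 1 - 1e-3)
                # E-step: tempered weighted log-odds vote per task
                w = np.log(p / (1 - p)) / T          # high T damps confident workers
                q = 1.0 / (1.0 + np.exp(-(w @ (2 * L - 1))))
            return q, p

        # usage: q, reliability = tempered_em(L); truth = (q > 0.5).astype(int)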

    Increasing trust in new data sources: crowdsourcing image classification for ecology

    Crowdsourcing methods facilitate the production of scientific information by non-experts. This form of citizen science (CS) is becoming a key source of complementary data in many fields, informing data-driven decisions and the study of challenging problems. However, concerns about the validity of these data often constrain their utility. In this paper, we focus on the use of citizen science data to address complex challenges in environmental conservation. We consider this issue from three perspectives. First, we present a literature scan of papers that have employed Bayesian models with citizen science in ecology. Second, we compare several popular majority-vote algorithms and introduce a Bayesian item response model that estimates and accounts for participants' abilities after adjusting for the difficulty of the images they have classified. The model also enables participants to be clustered into groups based on ability. Third, we apply the model in a case study involving the classification of corals from underwater images of the Great Barrier Reef, Australia. We show that the model achieved superior results in general and that, for difficult tasks, a weighted consensus method using only the groups of expert and experienced participants produced better performance measures. Moreover, we found that participants learn as they complete more classifications, which substantially increases their abilities over time. Overall, the paper demonstrates the feasibility of CS for answering complex and challenging ecological questions when these data are appropriately analysed. This serves as motivation for future work to increase the efficacy and trustworthiness of this emerging source of data.
    Comment: 25 pages, 10 figures.
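
    The core of the item response idea can be stated compactly: the probability that a participant classifies an image correctly grows with the participant's ability and shrinks with the image's difficulty. Below is a minimal Rasch-style fit by gradient ascent; the Bayesian model in the paper is richer (it also clusters participants by ability), so the names and fitting details here are assumptions.

        # Minimal Rasch-style item response fit: P(correct) = sigmoid(ability - difficulty).
        # A sketch of the modelling idea, not the paper's full Bayesian model.
        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def fit_irt(Y, iters=500, lr=0.5):
            """Y: (n_participants x n_images), 1 = correct classification, 0 = incorrect."""
            ability = np.zeros(Y.shape[0])
            difficulty = np.zeros(Y.shape[1])
            for _ in range(iters):
                P = sigmoid(ability[:, None] - difficulty[None, :])
                resid = Y - P                        # gradient of the Bernoulli log-likelihood
                ability += lr * resid.mean(axis=1)
                difficulty -= lr * resid.mean(axis=0)
                difficulty -= difficulty.mean()      # pin the scale (identifiability)
            return ability, difficulty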

    On Actively Teaching the Crowd to Classify

    Is it possible to teach workers while crowdsourcing classification tasks? Amongst the challenges: (a) workers have different (unknown) skills, competence, and learning rates to which the teaching must be adapted; (b) feedback on the workers' progress is limited; and (c) we may not have informative features for our data (otherwise crowdsourcing might be unnecessary). We propose a natural Bayesian model of the workers, modeling each as a learning entity with an initial skill, competence, and learning dynamics. We then show how a teaching system can exploit this model to teach the workers interactively. Our model uses feedback to adapt the teaching process to each worker, based on priors over hypotheses elicited from the crowd. Our experiments, carried out on both simulated workers and real image annotation tasks on Amazon Mechanical Turk, show the effectiveness of crowd-teaching systems.
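
    One way to picture the kind of worker model the abstract describes: track each worker's skill as a Beta posterior over their probability of answering correctly, update it from graded answers, and show teaching examples while the estimate is low. The prior, the teaching credit, and the teach/test rule below are illustrative assumptions, not the paper's actual policy.

        # Toy Beta-Bernoulli worker model with a threshold teach/test policy.
        class WorkerModel:
            def __init__(self, a=1.0, b=1.0, teach_credit=0.2):
                self.a, self.b = a, b             # Beta prior over success probability
                self.teach_credit = teach_credit  # assumed skill gain per teaching example

            def skill(self):
                return self.a / (self.a + self.b)  # posterior mean

            def observe(self, correct):
                """Bayesian update after grading one answer."""
                if correct:
                    self.a += 1.0
                else:
                    self.b += 1.0

            def teach(self):
                """Showing a labeled example nudges the skill estimate upward."""
                self.a += self.teach_credit

            def next_action(self, threshold=0.7):
                return "teach" if self.skill() < threshold else "test"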