23,046 research outputs found

    Challenging the Majority Rule in Matters of Truth

    The majority rule has attracted much attention in the recent debate about the aggregation of judgments, but its role in finding the truth is limited. A majority of expert judgments is not necessarily authoritative, even if all experts are equally competent, if they make their judgments independently of each other, and if all the judgments are based on the same source of (good) evidence. In this paper I demonstrate this limitation by presenting a simple counterexample and a related general result. I pave the way for this argument by introducing a Bayesian model of evidence and expert judgment that gives a precise account of the basic problem.
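
    To see why shared evidence caps what a majority can achieve, consider the following minimal simulation. It is an illustration of the general phenomenon, not the paper's Bayesian model, and all parameter values are made up: with independent private signals, the Condorcet jury theorem drives majority accuracy toward 1, but when every expert reads the same piece of evidence, accuracy plateaus at the reliability of that single source.

```python
import random

def majority_accuracy(n_experts, shared_evidence, trials=20_000,
                      p_evidence=0.7, p_read=0.9):
    """Fraction of trials in which the majority verdict matches the truth."""
    correct = 0
    for _ in range(trials):
        truth = random.random() < 0.5
        if shared_evidence:
            # One common piece of evidence, correct with prob p_evidence;
            # each expert reads it correctly with prob p_read.
            evidence = truth if random.random() < p_evidence else not truth
            votes = [evidence if random.random() < p_read else not evidence
                     for _ in range(n_experts)]
        else:
            # Independent private signals, each correct with prob p_evidence.
            votes = [truth if random.random() < p_evidence else not truth
                     for _ in range(n_experts)]
        correct += (sum(votes) > n_experts / 2) == truth
    return correct / trials

for n in (1, 5, 25, 101):
    print(n,
          round(majority_accuracy(n, shared_evidence=False), 3),  # climbs toward 1.0
          round(majority_accuracy(n, shared_evidence=True), 3))   # plateaus near 0.7
```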

    Boosting intelligence analysts’ judgment accuracy: what works, what fails?

    A routine part of intelligence analysis is judging the probability of alternative hypotheses given available evidence. Intelligence organizations advise analysts to use intelligence-tradecraft methods such as Analysis of Competing Hypotheses (ACH) to improve judgment, but such methods have not been rigorously tested. We compared the evidence evaluation and judgment accuracy of a group of intelligence analysts who were recently trained in ACH and then used it on a probability judgment task with those of another group of analysts from the same cohort who were neither trained in ACH nor asked to use any specific method. Although the ACH group assessed information usefulness better than the control group, the control group was slightly more accurate (and more coherent) than the ACH group. Both groups, however, exhibited suboptimal judgment and were susceptible to unpacking effects. Although ACH failed to improve accuracy, we found that recalibration and aggregation methods substantially improved it. Specifically, mean absolute error (MAE) in analysts' probability judgments decreased by 61% after first coherentizing their judgments (a process that ensures judgments respect the unitarity axiom) and then aggregating them. The findings cast doubt on the efficacy of ACH and show the promise of statistical methods for boosting judgment quality in intelligence and other organizations that routinely produce expert judgments.
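
    The coherentize-then-aggregate pipeline can be sketched in a few lines. This is a simplified illustration with made-up numbers, not the authors' exact procedure: here coherentization is plain normalization so that probabilities over mutually exclusive, exhaustive hypotheses sum to one (one way to enforce the unitarity axiom; a least-squares projection onto the probability simplex is another common choice), and aggregation is an unweighted mean across analysts.

```python
import numpy as np

# Hypothetical data: 4 analysts judge P(h) for 3 mutually exclusive,
# exhaustive hypotheses; the truth (unknown in practice) is one-hot.
judgments = np.array([
    [0.7, 0.4, 0.2],
    [0.5, 0.3, 0.3],
    [0.8, 0.3, 0.1],
    [0.6, 0.5, 0.2],
])
truth = np.array([1.0, 0.0, 0.0])

def coherentize(p):
    # Rescale so the judged probabilities respect the unitarity axiom.
    return p / p.sum()

mae_raw = np.abs(judgments - truth).mean()
coherent = np.apply_along_axis(coherentize, 1, judgments)
aggregated = coherent.mean(axis=0)  # unweighted linear pooling
mae_boosted = np.abs(aggregated - truth).mean()
print(f"MAE raw: {mae_raw:.3f}, coherentized+aggregated: {mae_boosted:.3f}")
```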

    The problem of evaluating automated large-scale evidence aggregators

    In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how such evidence should be aggregated in the interest of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams' recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the appropriate balance between explicitly coded algorithms and the implicit reasoning involved, for instance, in the packaging of input evidence? In short: what is the optimal degree of 'automation'? On the positive side, we propose the ability to perform an adequate robustness analysis as the focal criterion, primarily because it directs efforts to what is most important, namely the structure of the algorithm and the appropriate extent of automation. Moreover, where there are resource constraints on the aggregation process, one must also consider what balance between the volume of evidence and accuracy in the treatment of individual evidence best facilitates inference. There is no prerogative to aggregate the total evidence available if doing so would in fact reduce overall accuracy.
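
    One concrete way to read 'robustness analysis' here, offered as our own illustration rather than anything from the paper: perturb the aggregator's inputs, for instance by leaving out one piece of evidence at a time, and check whether the aggregate recommendation flips. The aggregator below is a toy weighted mean with made-up values.

```python
# Toy robustness check for an evidence aggregator (illustrative only).
# Each evidence item carries an effect estimate and a weight; the
# aggregator is a weighted mean and the recommendation is its sign.
evidence = [(0.8, 1.0), (0.5, 2.0), (-0.3, 1.5), (0.6, 0.5)]

def aggregate(items):
    total_weight = sum(w for _, w in items)
    return sum(e * w for e, w in items) / total_weight

baseline = aggregate(evidence) > 0
flips = sum(
    (aggregate(evidence[:i] + evidence[i + 1:]) > 0) != baseline
    for i in range(len(evidence))
)
print(f"leave-one-out flips: {flips} of {len(evidence)}")  # 0 flips = robust
```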

    Learning to Rank Academic Experts in the DBLP Dataset

    Expert finding is an information retrieval task concerned with the search for the most knowledgeable people with respect to a specific topic, based on documents that describe people's activities. The task involves taking a user query as input and returning a list of people sorted by their level of expertise with respect to the user query. Despite recent interest in the area, current state-of-the-art techniques lack principled approaches for optimally combining different sources of evidence. This article proposes two frameworks for combining multiple estimators of expertise. These estimators are derived from textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the experts. More specifically, this article explores the use of supervised learning-to-rank methods, as well as rank aggregation approaches, for combining all of the estimators of expertise. Several supervised learning algorithms, representative of the pointwise, pairwise and listwise approaches, were tested, and various state-of-the-art data fusion techniques were also explored for the rank aggregation framework. Experiments performed on a dataset of academic publications from the Computer Science domain attest to the adequacy of the proposed approaches.
    Comment: Expert Systems, 2013. arXiv admin note: text overlap with arXiv:1302.041
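
    The rank-aggregation side can be sketched with CombSUM, one of the classic data fusion techniques (whether it is among those the article actually tested is not stated here); the estimator names, candidate names, and scores below are made up: min-max normalize each estimator's scores, sum them per candidate, and rank.

```python
# Hypothetical expertise scores for three candidates from three
# estimators (text, citation graph, profile); all values are made up.
scores = {
    "text":     {"ann": 0.9, "bob": 0.4, "eve": 0.6},
    "citation": {"ann": 0.2, "bob": 0.8, "eve": 0.5},
    "profile":  {"ann": 0.7, "bob": 0.6, "eve": 0.3},
}

def minmax(d):
    # Map each estimator's scores onto [0, 1] so they are comparable.
    lo, hi = min(d.values()), max(d.values())
    return {k: (v - lo) / (hi - lo) if hi > lo else 0.0 for k, v in d.items()}

def comb_sum(estimators):
    # CombSUM: sum the normalized scores each candidate receives.
    fused = {}
    for est in estimators.values():
        for person, s in minmax(est).items():
            fused[person] = fused.get(person, 0.0) + s
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

print(comb_sum(scores))  # candidates ranked by fused expertise score
```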

    Finding Academic Experts on a MultiSensor Approach using Shannon's Entropy

    Expert finding is an information retrieval task concerned with the search for the most knowledgeable people on a given topic, based on documents describing people's activities. The task involves taking a user query as input and returning a list of people sorted by their level of expertise regarding the user query. This paper introduces a novel approach for combining multiple estimators of expertise, based on a multisensor data fusion framework together with the Dempster-Shafer theory of evidence and Shannon's entropy. More specifically, we defined three sensors which detect heterogeneous information derived from the textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the academic experts. Given the evidence collected, the sensors may nominate different candidates as experts and consequently disagree on a final ranking decision. To deal with these conflicts, we applied the Dempster-Shafer theory of evidence combined with Shannon's entropy formula to fuse this information and arrive at a more accurate and reliable final ranking list. Experiments on two datasets of academic publications from the Computer Science domain attest to the adequacy of the proposed approach over traditional state-of-the-art approaches. We also ran experiments against representative supervised state-of-the-art algorithms. The results revealed that the proposed method achieved a performance similar to these supervised techniques, confirming the capabilities of the proposed framework.
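
    The core fusion step can be sketched as follows. This is a generic Dempster-Shafer combination with an entropy-based discount, not the paper's exact pipeline: the candidate names, mass values, and the particular discounting scheme (reliability = 1 - H/H_max, applied as Shafer discounting) are all our assumptions.

```python
import math

FRAME = frozenset({"ann", "bob", "eve"})  # toy frame of discernment

def entropy_reliability(mass):
    # Turn the Shannon entropy of the singleton masses into a discount
    # rate: a less decisive (higher-entropy) sensor gets lower reliability.
    singles = [m for s, m in mass.items() if len(s) == 1]
    total = sum(singles)
    h = -sum((m / total) * math.log2(m / total) for m in singles if m > 0)
    return 1.0 - h / math.log2(len(FRAME))

def discount(mass, alpha):
    # Shafer discounting: move (1 - alpha) of all mass to the full frame.
    out = {s: alpha * m for s, m in mass.items()}
    out[FRAME] = out.get(FRAME, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    # Dempster's rule: multiply masses of intersecting focal elements
    # and renormalize by the total non-conflicting mass.
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {s: m / (1.0 - conflict) for s, m in fused.items()}

# Two sensors' mass functions over singleton hypotheses (made-up values).
text_sensor  = {frozenset({"ann"}): 0.6, frozenset({"bob"}): 0.3, frozenset({"eve"}): 0.1}
graph_sensor = {frozenset({"ann"}): 0.2, frozenset({"bob"}): 0.5, frozenset({"eve"}): 0.3}

fused = dempster(discount(text_sensor, entropy_reliability(text_sensor)),
                 discount(graph_sensor, entropy_reliability(graph_sensor)))
print(sorted(fused.items(), key=lambda kv: kv[1], reverse=True))
```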

    Solving multiple-criteria R&D project selection problems with a data-driven evidential reasoning rule

    In this paper, a likelihood-based evidence acquisition approach is proposed to acquire evidence from experts' assessments as recorded in historical datasets. A data-driven evidential reasoning rule based model is then introduced into the R&D project selection process, combining multiple pieces of evidence with different weights and reliabilities. As a result, total belief degrees and an overall performance score can be generated for ranking and selecting projects. Finally, a case study on R&D project selection for the National Science Foundation of China is conducted to show the effectiveness of the proposed model. The data-driven evidential reasoning rule based model for project evaluation and selection (1) utilizes experimental data to represent experts' assessments as belief distributions over the set of final funding outcomes, and through these historical statistics helps experts and applicants understand the funding probability associated with a given assessment grade, (2) infers the mapping relationships between the evaluation grades and the final funding outcomes from historical data, and (3) provides a way to make fair decisions by taking experts' reliabilities into account. In the data-driven evidential reasoning rule based model, experts play different roles in accordance with their reliabilities, which are determined by their previous review track records, and the selection process is made more interpretable and fairer. The newly proposed model reduces the time-consuming panel review work for both managers and experts, and significantly improves the efficiency and quality of the project selection process. Although the model is demonstrated for project selection in the NSFC, it can be generalized to other funding agencies or industries.
    Comment: 20 pages, forthcoming in International Journal of Project Management (2019)
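
    The full evidential reasoning rule handles weights and reliabilities separately and is more involved; the toy sketch below is a simplified stand-in, not the ER rule itself, with made-up grades, belief distributions, weights, and reliabilities. It pools each expert's belief distribution into total belief degrees, with influence proportional to weight times reliability.

```python
import numpy as np

GRADES = ["fund", "borderline", "reject"]

# Hypothetical review panel: each expert gives a belief distribution
# over the funding grades; weight reflects the expert's role and
# reliability reflects the expert's past review track record.
experts = [
    # (belief distribution, weight, reliability)
    (np.array([0.7, 0.2, 0.1]), 1.0, 0.9),
    (np.array([0.5, 0.4, 0.1]), 0.8, 0.6),
    (np.array([0.2, 0.3, 0.5]), 1.0, 0.4),
]

# Simplified pooling: each expert's influence is weight * reliability
# (a stand-in for the ER rule's separate treatment of the two).
influence = np.array([w * r for _, w, r in experts])
influence /= influence.sum()
beliefs = np.stack([b for b, _, _ in experts])
total_belief = influence @ beliefs  # total belief degree per grade

for grade, b in zip(GRADES, total_belief):
    print(f"{grade}: {b:.3f}")
```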

    Examining Recent Expert Elicitation Judgment Guidelines: Value Assumptions and the Prospects for Rationality

    This paper was presented at the VALDOR Symposium, Stockholm, June 1999. The author examines the value assumptions in U.S. Department of Energy and Nuclear Regulatory Commission guidance on the use of expert judgment in high-level nuclear waste disposal site selection.