
    Decision Making for Inconsistent Expert Judgments Using Negative Probabilities

    In this paper we provide a simple random-variable example of inconsistent information, and analyze it using three different approaches: Bayesian, quantum-like, and negative probabilities. We then show that, at least for this particular example, both the Bayesian and the quantum-like approaches have less normative power than the negative-probabilities one. Comment: 14 pages, revised version to appear in the Proceedings of the QI2013 (Quantum Interactions) conference.
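
    The abstract does not spell out why negative probabilities arise at all, so the sketch below is a loose, self-contained illustration (my own toy construction, not the paper's example): three binary variables are given pairwise constraints that no proper joint distribution can satisfy, yet an exact signed quasi-distribution exists once negative entries are allowed.

```python
# A minimal sketch, assuming a toy setup: three binary variables X, Y, Z whose
# pairwise disagreement probabilities are mutually inconsistent. Any proper joint
# distribution satisfies P(X!=Y) + P(Y!=Z) + P(X!=Z) <= 2, so asking for 0.9 each
# (total 2.7) forces negative probability mass into any exact solution.
import itertools
import numpy as np

outcomes = list(itertools.product([0, 1], repeat=3))  # joint outcomes of (X, Y, Z)
target = 0.9                                          # requested P(X!=Y) = P(Y!=Z) = P(X!=Z)

# Linear constraints A @ p = b on the joint quasi-distribution p (length 8).
rows, b = [[1.0] * 8], [1.0]                          # normalization: entries sum to 1
for i, j in [(0, 1), (1, 2), (0, 2)]:                 # one constraint per pair of variables
    rows.append([1.0 if o[i] != o[j] else 0.0 for o in outcomes])
    b.append(target)

# The system is consistent over the reals, so lstsq returns an exact (minimum-norm)
# solution; because no non-negative solution exists, it must contain negative entries.
p, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
for o, prob in zip(outcomes, p):
    print(o, round(float(prob), 3))
print("most negative entry:", round(float(p.min()), 3))
```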

    Arguments from Expert Opinion and Persistent Bias

    Accounts of arguments from expert opinion take it for granted that expert judgments are reliable, and so that an argument which proceeds from premises about what an expert judges to the conclusion that the expert is probably right is a strong argument. In my (2013), I considered a potential justification for this assumption, namely, that expert judgments are more likely to be true than novice judgments, and discussed empirical evidence suggesting that expert judgments are not more reliable than novice judgments, or even than chance. In this paper, I consider another potential justification for this assumption, namely, that expert judgments are not influenced by the kinds of cognitive biases that influence novice judgments, and discuss empirical evidence suggesting that experts are vulnerable to much the same cognitive biases as novices. If this is correct, then the basic assumption at the core of accounts of arguments from expert opinion remains unjustified.

    Modeling Expert Judgments of Insider Threat Using Ontology Structure: Effects of Individual Indicator Threat Value and Class Membership

    We describe research on a comprehensive ontology of sociotechnical and organizational factors for insider threat (SOFIT) and report results of an expert knowledge elicitation study. The study examined how alternative insider threat assessment models may reflect associations among constructs beyond the relationships defined in the hierarchical class structure. Results clearly indicate that individual indicators contribute differentially to expert judgments of insider threat risk. Further, models based on the ontology class structure predict expert judgments more accurately. There is some (although weak) empirical evidence that other associations among constructs, such as the roles that indicators play in an insider threat exploit, may also contribute to expert judgments of insider threat risk. These findings contribute to ongoing research aimed at developing more effective insider threat decision support tools.
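
    To make the idea of differential indicator contributions concrete, here is a purely hypothetical scoring sketch: each observed indicator carries its own threat value and a weight inherited from its ontology class, so indicators are not counted equally. The class names, weights, and values are invented for illustration and are not taken from SOFIT.

```python
# Hypothetical ontology-weighted threat scoring (illustrative names and numbers only).
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    ontology_class: str
    threat_value: float        # individual contribution, e.g. elicited from experts

CLASS_WEIGHTS = {              # weight of each (hypothetical) ontology class
    "technical_action": 1.0,
    "behavioral": 0.7,
    "organizational": 0.4,
}

def threat_score(observed: list[Indicator]) -> float:
    """Weighted sum of indicator threat values, scaled by ontology class weight."""
    return sum(CLASS_WEIGHTS[i.ontology_class] * i.threat_value for i in observed)

case = [
    Indicator("unauthorized USB use", "technical_action", 0.8),
    Indicator("expressed disgruntlement", "behavioral", 0.6),
    Indicator("poor security training", "organizational", 0.3),
]
print(round(threat_score(case), 2))
```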

    Applicability of the technology acceptance model for widget-based personal learning environments

    This contribution presents results from two exploratory studies on technology acceptance and use of widget-based personal learning environments. Methodologically, the investigation applies the unified theory of acceptance and use of technology (UTAUT). With the help of this instrument, the studies assess expert judgments about intentions to use, and actual use of, the emerging technology of flexibly arranged combinations of use-case-sized mini learning tools. The contribution aims to explore the applicability of the UTAUT model and questionnaire to widget-based personal learning environments and reports back on the experiences gained in the two studies.
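
    As a rough illustration of the kind of analysis such acceptance studies run, the sketch below regresses behavioural intention on UTAUT's core predictors. The data are synthetic stand-ins for questionnaire scores, and the moderators UTAUT also specifies (age, gender, experience, voluntariness) are omitted.

```python
# Toy UTAUT-style regression on synthetic Likert-scale scores (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 120                                             # hypothetical respondents
performance_expectancy = rng.uniform(1, 5, n)       # 5-point scale scores
effort_expectancy      = rng.uniform(1, 5, n)
social_influence       = rng.uniform(1, 5, n)
intention = (0.5 * performance_expectancy + 0.3 * effort_expectancy
             + 0.2 * social_influence + rng.normal(0, 0.4, n))

# Ordinary least squares: intention ~ intercept + PE + EE + SI.
X = np.column_stack([np.ones(n), performance_expectancy,
                     effort_expectancy, social_influence])
coef, *_ = np.linalg.lstsq(X, intention, rcond=None)
print("intercept, PE, EE, SI coefficients:", np.round(coef, 2))
```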

    Can experts judge elections? Testing the validity of expert judgments for measuring election integrity

    Expert surveys have been used to measure a wide variety of phenomena in political science, ranging from party positions, to corruption, to the quality of democracy and elections. However, expert judgments raise important validity concerns, both about the object being measured and about the experts themselves. It is argued in this article that the context of evaluation is also important to consider when assessing the validity of expert surveys. This is even more important for expert surveys with a comprehensive, worldwide scope, such as democracy or corruption indices. This article tests the validity of expert judgments about election integrity, a topic of increasing concern to both the international community and academics. Evaluating expert judgments of election integrity contributes to the literature on the validity of expert surveys as instruments of measurement because: (1) the object under study is particularly complex to define and multifaceted; and (2) election integrity is measured in widely varying institutional contexts, ranging from electoral autocracies to liberal democracies. Three potential sources of bias are analysed (the object, the experts and the context), using a unique new dataset on election integrity entitled the ‘Perceptions of Electoral Integrity’ dataset. The data include over 800 experts in 66 parliamentary and presidential elections worldwide. It is found that the validity of expert judgments about election integrity increases if experts are asked to provide factual information (rather than evaluative judgments), and if they are asked to evaluate election-day (rather than pre-election) integrity. It is also found that ideologically polarised elections and elections of lower integrity increase expert disagreement about election integrity. The article concludes with suggestions on how researchers using expert survey data on election integrity can check the validity of their data and adjust their analyses accordingly, and outlines some remaining challenges for future data collection using expert surveys.
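
    One of the reported findings, that polarised and low-integrity elections increase expert disagreement, implies a simple per-election dispersion measure. The sketch below computes such a measure from made-up ratings; the election names and scores are illustrative and not drawn from the Perceptions of Electoral Integrity dataset.

```python
# Per-election expert disagreement as the standard deviation of expert ratings
# (illustrative data only, not the PEI dataset).
from statistics import mean, stdev

# Expert ratings of election-day integrity on a 1-5 scale, keyed by election.
ratings = {
    "Country A 2012 (presidential)": [4, 5, 4, 4, 5],
    "Country B 2013 (parliamentary)": [2, 5, 1, 4, 3],   # polarised: experts disagree
}

for election, scores in ratings.items():
    print(f"{election}: mean={mean(scores):.1f}, disagreement (sd)={stdev(scores):.2f}")
```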

    Evaluation of multifactor risks under conceptual uncertainty

    A toolkit is proposed for evaluating multifactor risks in the operation of an innovation system for technology foresight. A modified BOCR technique for the analytic hierarchy process (AHP) is developed that allows: integrating the evaluation of unforeseen-situation risk and force majeure risk into the overall AHP decision-making structure, alongside the evaluation of the benefits, costs and opportunities of each decision alternative; processing expert judgments given as fuzzy preference relations; and taking into account a time parameter, so that decision factors and alternatives may be corrected or fundamentally changed over some time interval. A system of indices is also developed for evaluating the risk that expert information is subjective (information risk) for point, interval and fuzzy expert judgments and for probability distributions of expert judgments.
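
    Since the method builds on AHP, a minimal sketch of the underlying step may help: deriving priority weights from a Saaty-style pairwise comparison matrix with the geometric-mean method and making a rough consistency check. The matrix values are illustrative, and the paper's BOCR and fuzzy extensions are not reproduced.

```python
# AHP priority weights from a reciprocal pairwise comparison matrix
# (geometric-mean method), plus a rough consistency index.
import numpy as np

# Saaty-style pairwise comparisons of three decision alternatives.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

geo_means = np.prod(A, axis=1) ** (1.0 / A.shape[0])
weights = geo_means / geo_means.sum()                 # priority vector
print("priorities:", np.round(weights, 3))

# Consistency check: approximate lambda_max, then CI = (lambda_max - n) / (n - 1).
lambda_max = float(np.mean((A @ weights) / weights))
ci = (lambda_max - A.shape[0]) / (A.shape[0] - 1)
print("lambda_max:", round(lambda_max, 3), "CI:", round(ci, 3))
```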

    Challenging the Majority Rule in Matters of Truth

    The majority rule has attracted much attention in the recent debate about the aggregation of judgments. But its role in finding the truth is limited. A majority of expert judgments is not necessarily authoritative, even if all experts are equally competent, make their judgments independently of each other, and base those judgments on the same source of (good) evidence. In this paper I demonstrate this limitation by presenting a simple counterexample and a related general result. I pave the way for this argument by introducing a Bayesian model of evidence and expert judgment in order to give a precise account of the basic problem.
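
    The abstract's point can be made vivid with a small Bayesian calculation, sketched below on illustrative numbers: when expert judgments are conditionally independent given the truth, a unanimous panel is strong evidence, but when all experts defer to the same shared evidence, unanimity carries no more weight than that evidence alone. This is my own toy contrast, not the paper's counterexample.

```python
# Posterior probability of H after five experts all say "H", under two assumptions
# about how the judgments were produced (illustrative numbers).
from math import comb

prior = 0.5          # P(H)
accuracy = 0.6       # each expert's chance of judging correctly
n, k = 5, 5          # five experts, all say "H is true"

# Case 1: judgments conditionally independent given the truth (Condorcet-style).
like_h     = comb(n, k) * accuracy**k * (1 - accuracy)**(n - k)
like_not_h = comb(n, k) * (1 - accuracy)**k * accuracy**(n - k)
post_indep = prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

# Case 2: all experts deterministically follow one shared piece of evidence E,
# with P(E favours H | H) = 0.6 and P(E favours H | not H) = 0.4.
# Unanimity then only tells us that E favoured H, however many experts there are.
post_shared = prior * 0.6 / (prior * 0.6 + (1 - prior) * 0.4)

print("posterior, independent judgments:", round(post_indep, 3))
print("posterior, shared evidence:      ", round(post_shared, 3))
```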

    Project portfolio resource risk assessment considering project interdependency by the fuzzy Bayesian network

    Resource risk, caused by the sharing of, or competition for, specific resources among projects under resource constraints, is a major issue in project portfolio management and makes it difficult to apply risk analysis methods effectively. This paper presents a fuzzy Bayesian network methodology to assess project portfolio resource risk, determine the critical resource risk factors, and propose risk-reduction strategies. In this method, the project portfolio resource risk factors are first identified by taking project interdependency into consideration; a Bayesian network model is then developed to analyze the risk level of the identified factors, in which expert judgments and fuzzy set theory are integrated to determine the probabilities of all risk factors and so cope with incomplete risk data and information. To reduce the subjectivity of expert judgments, expert weights are determined by combining each expert's background with the reliability of their judgments. A numerical analysis demonstrates the application of the proposed methodology. The results show that project portfolio resource risks can be analyzed effectively and efficiently. Furthermore, “poor communication and cooperation among projects,” “capital difficulty,” and “lack of sharing technology among projects” are found to be the leading factors of project portfolio resource risk. Risk-reduction strategic decisions can be made based on the results of the risk assessment, providing project managers with a useful method for managing project risks.
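
    One step the abstract describes, turning weighted expert judgments expressed with fuzzy set theory into node probabilities, can be sketched as below. The linguistic scale, expert weights, and centroid defuzzification are illustrative assumptions rather than the paper's exact procedure.

```python
# Combine experts' linguistic probability judgments (triangular fuzzy numbers)
# with expert weights, then defuzzify to a crisp probability for a BN node.
import numpy as np

# Triangular fuzzy numbers (low, mode, high) for linguistic probability terms.
FUZZY_SCALE = {
    "low":    (0.0, 0.1, 0.3),
    "medium": (0.3, 0.5, 0.7),
    "high":   (0.7, 0.9, 1.0),
}

def aggregate(judgments: list[str], expert_weights: list[float]) -> float:
    """Weighted average of the experts' fuzzy numbers, then centroid defuzzification."""
    w = np.array(expert_weights) / np.sum(expert_weights)
    tfns = np.array([FUZZY_SCALE[j] for j in judgments])      # shape (experts, 3)
    low, mode, high = w @ tfns                                 # weighted fuzzy number
    return (low + mode + high) / 3.0                           # centroid of a triangle

# Three experts judge P("capital difficulty"); weights reflect background/reliability.
p = aggregate(["high", "medium", "high"], expert_weights=[0.5, 0.2, 0.3])
print(round(p, 3))
```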

    Ranking of library and information science researchers: Comparison of data sources for correlating citation data, and expert judgments

    This paper studies the correlations between peer review and citation indicators when evaluating research quality in library and information science (LIS). Forty-two LIS experts provided judgments on a 5-point scale of the quality of research published by 101 scholars; the median rankings resulting from these judgments were then correlated with h-, g- and H-index values computed using three different sources of citation data: Web of Science (WoS), Scopus, and Google Scholar (GS). The two variants of the basic h-index correlated more strongly with peer judgment than did the h-index itself; citation data from Scopus correlated more strongly with the expert judgments than did data from GS, which in turn correlated more strongly than data from WoS; correlations from a carefully cleaned version of the GS data differed little from those obtained using swiftly gathered GS data; the indices from the citation databases resulted in broadly similar rankings of the LIS academics; GS disadvantaged researchers in bibliometrics compared to the other two citation databases, while WoS disadvantaged researchers in the more technical aspects of information retrieval; and experts from the UK and other European countries rated UK academics more highly than did experts from the USA.
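
    For readers unfamiliar with the indicators being correlated, the sketch below computes h- and g-indices from citation counts and correlates the resulting values with peer judgments via Spearman's rho. The citation records and expert scores are invented; the study itself used WoS, Scopus and Google Scholar data for 101 scholars.

```python
# h-index, g-index, and Spearman correlation against expert quality judgments
# (made-up data, illustrative only).
from scipy.stats import spearmanr

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the top g papers together have at least g**2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

scholars = {                                    # hypothetical citation records
    "A": [25, 18, 12, 7, 3, 1],
    "B": [10, 2, 1, 0, 0, 0],
    "C": [9, 9, 8, 8, 7, 6],
}
expert_scores = {"A": 4.5, "B": 3.0, "C": 3.5}  # hypothetical peer judgments (1-5 scale)

h = [h_index(c) for c in scholars.values()]
g = [g_index(c) for c in scholars.values()]
peer = [expert_scores[name] for name in scholars]
rho_h, _ = spearmanr(h, peer)
rho_g, _ = spearmanr(g, peer)
print("h-indices:", h, "g-indices:", g)
print("Spearman rho, h vs peer:", round(rho_h, 2))
print("Spearman rho, g vs peer:", round(rho_g, 2))
```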