
    The role of causal reasoning in understanding Simpson's paradox, Lord's paradox, and the suppression effect: covariate selection in the analysis of observational studies

    Tu et al. present an analysis of the equivalence of three paradoxes, namely Simpson's paradox, Lord's paradox, and the suppression effect. They conclude that all three simply reiterate the occurrence of a change in the association of any two variables when a third variable is statistically controlled for. This is not surprising, because reversal or change in magnitude is common in conditional analysis. At the heart of the phenomenon of change in magnitude, with or without reversal of the effect estimate, is the question of which estimate to use: the unadjusted (combined-table) or the adjusted (sub-table) one. Hence, Simpson's paradox and related phenomena are a problem of covariate selection and adjustment (when to adjust or not) in the causal analysis of non-experimental data. It cannot be overemphasized that although these paradoxes reveal the perils of using statistical criteria to guide causal analysis, they offer neither explanations of the phenomena they depict nor pointers on how to avoid them. The explanations and solutions lie in causal reasoning, which relies on background knowledge rather than statistical criteria.
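The reversal discussed in this abstract can be reproduced in a few lines. Below is a minimal Python sketch using the subgroup counts commonly quoted from the kidney-stone study, a standard illustration of Simpson's paradox (the grouping into "mild"/"severe" and the dictionary layout are ours, for illustration only):

```python
# Simpson's paradox: a treatment can look better within every subgroup
# yet worse in the pooled (combined-table) comparison, because subgroup
# membership is unevenly distributed across treatments.

def rate(success, total):
    return success / total

# (successes, trials) per severity subgroup, counts as commonly quoted
# from the kidney-stone example
treated = {"mild": (81, 87), "severe": (192, 263)}
control = {"mild": (234, 270), "severe": (55, 80)}

# Within each subgroup, the treated arm has the higher success rate...
for group in ("mild", "severe"):
    assert rate(*treated[group]) > rate(*control[group])

# ...yet pooled over subgroups, the ordering reverses.
t_succ = sum(s for s, _ in treated.values())
t_tot = sum(n for _, n in treated.values())
c_succ = sum(s for s, _ in control.values())
c_tot = sum(n for _, n in control.values())
assert rate(t_succ, t_tot) < rate(c_succ, c_tot)
```

Which of the two estimates to trust, the pooled or the per-subgroup one, is exactly the covariate-selection question the abstract says statistics alone cannot answer.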

    Exploring Causal Influences

    Recent data mining techniques exploit patterns of statistical independence in multivariate data to make conjectures about cause/effect relationships. These relationships can be used to construct causal graphs, which are sometimes represented by weighted node-link diagrams, with nodes representing variables and combinations of weighted links and/or nodes showing the strength of causal relationships. We present an interactive visualization for causal graphs (ICGs), inspired in part by the Influence Explorer. The key principles of this visualization are as follows: variables are represented by vertical bars attached to nodes in a graph, and direct manipulation is achieved by sliding a variable value up and down, which reveals causality by producing instantaneous changes in causally and/or probabilistically linked variables. This direct manipulation technique gives users the impression that they are causally influencing the variables linked to the one they are manipulating. In this context, we demonstrate the subtle distinction between seeing and setting variable values, and in an extended example, show how this visualization can help a user understand the relationships in a large variable set and, with some intuition about the domain and a few basic concepts, quickly detect bugs in causal models constructed from these data mining techniques.
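The "set a variable, watch its causal descendants move" interaction can be sketched as propagation over a weighted causal graph. This is a minimal illustration, not the paper's ICG implementation; the variable names, edge weights, and the assumption of linear effects on an acyclic graph are all invented here:

```python
# Minimal sketch of direct-manipulation propagation on a causal graph.
# Assumptions (ours, for illustration): linear effects, acyclic graph.

effects = {                          # parent -> {child: edge weight}
    "exercise": {"fitness": 0.8},
    "fitness": {"resting_hr": -0.5},
}

def propagate(values, var, new_value):
    """Set `var` to `new_value` and push the change downstream,
    scaling each child's shift by the connecting edge weight."""
    delta = new_value - values[var]
    values[var] = new_value
    for child, weight in effects.get(var, {}).items():
        propagate(values, child, values[child] + weight * delta)
    return values

state = {"exercise": 0.0, "fitness": 0.0, "resting_hr": 0.0}
propagate(state, "exercise", 1.0)
# fitness rises by 0.8; resting_hr falls by 0.4
```

Sliding the "exercise" bar up by one unit instantly moves its descendants, which is the impression of causal influence the abstract describes; "seeing" a value (conditioning) would require different, probabilistic updating.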

    The Survey, Taxonomy, and Future Directions of Trustworthy AI: A Meta Decision of Strategic Decisions

    When making strategic decisions, we are often confronted with overwhelming information to process. The situation can be further complicated when some pieces of evidence contradict each other or appear paradoxical. The challenge then becomes how to determine which information is useful and which should be discarded. This process is known as meta-decision. Likewise, when it comes to using Artificial Intelligence (AI) systems for strategic decision-making, placing trust in the AI itself becomes a meta-decision, given that many AI systems are viewed as opaque "black boxes" that process large amounts of data. Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI). We propose a new approach to address this issue by introducing a novel taxonomy, or framework, of TAI, which encompasses three crucial domains (articulate, authentic, and basic) corresponding to different levels of trust. To underpin these domains, we define ten dimensions for measuring trust: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reproducibility, reliability, and sustainability. We aim to use this taxonomy to conduct a comprehensive survey and to explore different TAI approaches from a strategic decision-making perspective.

    The illusion of data validity: Why numbers about people are likely wrong

    This reflection article addresses a difficulty faced by scholars and practitioners working with numbers about people: those who study people want numerical data about them, but time and time again this numerical data is wrong. Addressing the potential causes of this wrongness, we present examples of analyzing people numbers, i.e., numbers derived from digital data by or about people, and discuss the comforting illusion of data validity. We first lay a foundation by highlighting potential inaccuracies in collecting people data, such as selection bias. Then, we discuss inaccuracies in analyzing people data, such as the flaw of averages, followed by a discussion of errors made when trying to make sense of people data through techniques such as posterior labeling. Finally, we discuss a root cause of people data often being wrong: the conceptual conundrum of thinking the numbers are counts when they are actually measures. Practical solutions to address this illusion of data validity are proposed. The implications for theories derived from people data are also highlighted, namely that these people theories are generally wrong, as they are often derived from people numbers that are wrong. © 2022 Wuhan University. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
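One of the analysis inaccuracies this abstract names, the flaw of averages, is easy to demonstrate with the standard library. The session-duration numbers below are invented for illustration:

```python
import statistics

# Hypothetical session durations in seconds: most users leave quickly,
# a few stay very long (invented numbers, chosen to be heavily skewed).
durations = [5, 6, 7, 8, 9, 10, 12, 15, 600, 900]

mean = statistics.mean(durations)      # pulled far upward by two outliers
median = statistics.median(durations)  # close to the typical user

# The "average user" suggested by the mean matches almost nobody:
# eight of ten sessions are an order of magnitude shorter than the mean.
assert mean > 150 and median < 10
```

Here the mean (157.2 s) describes almost no actual user, while the median (9.5 s) is far closer to typical behavior; summarizing skewed people data by its mean is one way people numbers end up "wrong".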

    BIG DATA: A BIG DEAL FOR CAPITAL MARKET COMPANIES IN THEIR TRANSFORMATION PROCESS?

    The following article discusses the importance of using big data in the operation of capital market companies, in terms of both benefits and potential risks. Given the increasingly dynamic business environment, capital market companies have to transform their operations in order to accommodate rising demands. Fast business decision-making is of particular importance in this process. Structured use of data plays a major role in decision-making, especially as the amount of digital data in the modern world grows at an unprecedented rate. The author focuses on the statistical and econometric techniques required for the analysis of big data. The article also highlights some use cases, the growing interest of capital market companies in introducing big data analytical technologies, and the relevant challenges and benefits. In addition, using the so-called "Simpson's Reversal Paradox," the author explains that using big data and digging deep into details can be counterproductive, leading to loss of the global picture and to wrong decision-making.

    Book Reviews

    Suffering and Evil in the Plays of Christopher Marlowe (Douglas Cole); Jacobean Tragedy: The Quest for Moral Order (Irving Ribner) (Reviewed by Waldo F. McNeir, University of Oregon); John Donne's Lyrics: The Eloquence of Action (Arnold Stein) (Reviewed by Joseph H. Summers, Washington University, St. Louis); John Dryden's Imagery (Arthur W. Hoffman) (Reviewed by Arthur Sherbo, Michigan State University); Aspects of American Poetry (ed. Richard M. Ludwig) (Reviewed by R. K. Meiners, Arizona State University); Conrad Aiken (Frederick J. Hoffman); Conrad Aiken: A Life of His Art (Jay Martin) (Reviewed by Douglas Robillard, Georgia Institute of Technology)

    Survey of Trustworthy AI: A Meta Decision of AI

    When making strategic decisions, we are often confronted with overwhelming information to process. The situation can be further complicated when some pieces of evidence contradict each other or appear paradoxical. The challenge then becomes how to determine which information is useful and which should be discarded. This process is known as meta-decision. Likewise, when it comes to using Artificial Intelligence (AI) systems for strategic decision-making, placing trust in the AI itself becomes a meta-decision, given that many AI systems are viewed as opaque "black boxes" that process large amounts of data. Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI). We propose a new approach to address this issue by introducing a novel taxonomy, or framework, of TAI, which encompasses three crucial domains (articulate, authentic, and basic) corresponding to different levels of trust. To underpin these domains, we define ten dimensions for measuring trust: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reproducibility, reliability, and sustainability. We aim to use this taxonomy to conduct a comprehensive survey and to explore different TAI approaches from a strategic decision-making perspective.