
    Evidence Amalgamation in the Sciences: An Introduction

    Amalgamating evidence from heterogeneous sources and across levels of inquiry is becoming increasingly important in many pure and applied sciences. This special issue provides a forum for researchers from diverse scientific and philosophical perspectives to discuss evidence amalgamation, its methodologies, its history, its pitfalls and its potential. We situate the contributions therein within six themes from the broad literature on this subject: the variety-of-evidence thesis, the philosophy of meta-analysis, the role of robustness/sensitivity analysis for evidence amalgamation, its bearing on questions of extrapolation and external validity of experiments, its connection with theory development, and its interface with causal inference, especially regarding causal theories of cancer.

    Varieties of Error and Varieties of Evidence in Scientific Inference, Forthcoming in The British Journal for the Philosophy of Science

    According to the Variety of Evidence Thesis, items of evidence from independent lines of investigation are more confirmatory, ceteris paribus, than, e.g., replications of analogous studies. This thesis is known to fail (Bovens and Hartmann 2003; Claveau 2013). However, the results obtained by the former concern only instruments whose evidence is either fully random or perfectly reliable, while in Claveau (2013) unreliability is modelled as deterministic bias. In both cases, the unreliable instrument delivers totally irrelevant information. We present a model which formalises both reliability and unreliability differently. Our instruments are either reliable but affected by random error, or biased but not deterministically so. Bovens and Hartmann's results are counter-intuitive in that, in their model, a long series of consistent reports from the same instrument does not raise suspicion of “too-good-to-be-true” evidence. This happens precisely because they contemplate neither the role of systematic bias nor the unavoidable random error of reliable instruments. In our model the Variety of Evidence Thesis fails as well, but the area of failure is considerably smaller than in Bovens and Hartmann (2003) and Claveau (2013), and the thesis holds for (the majority of) realistic cases, that is, where biased instruments are very biased. The essential mechanism which triggers VET failure is the ratio of false to true positives for the two kinds of instruments. Our emphasis is on modelling beliefs about sources of knowledge and their role in hypothesis confirmation, in interaction with dimensions of evidence such as variety and consistency.
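    The mechanism the abstract points to — consistent reports from one instrument versus single reports from independent instruments, with instruments either reliable-but-noisy or softly biased — can be sketched as a toy Bayesian calculation. All parameter values below are hypothetical, chosen only for illustration; this is not the authors' exact model:

    ```python
    # An instrument is reliable with probability RHO; a reliable instrument
    # makes random errors, a biased one over-reports positives (but not
    # deterministically). Reports from the same instrument share its type.
    RHO = 0.7
    P_POS = {  # P(positive report | hypothesis true/false, instrument type)
        ("reliable", True): 0.90, ("reliable", False): 0.10,  # random error
        ("biased",   True): 0.95, ("biased",   False): 0.80,  # soft bias
    }

    def likelihood_same(h):
        """Two positive reports from ONE instrument of unknown type."""
        return (RHO * P_POS[("reliable", h)] ** 2
                + (1 - RHO) * P_POS[("biased", h)] ** 2)

    def likelihood_varied(h):
        """One positive report from each of TWO independent instruments."""
        per_report = RHO * P_POS[("reliable", h)] + (1 - RHO) * P_POS[("biased", h)]
        return per_report ** 2

    def posterior(lik):
        # Flat prior on the hypothesis; posterior by Bayes' theorem.
        return lik(True) / (lik(True) + lik(False))

    posterior_same = posterior(likelihood_same)
    posterior_varied = posterior(likelihood_varied)
    ```

    With these illustrative numbers the varied body of evidence confirms more strongly than the repeated one; shifting the false-to-true positive ratio of the biased instrument moves the comparison around, which is the lever the abstract identifies.
    
    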

    Determining Maximal Entropy Functions for Objective Bayesian Inductive Logic

    According to the objective Bayesian approach to inductive logic, premisses inductively entail a conclusion just when every probability function with maximal entropy, from all those that satisfy the premisses, satisfies the conclusion. When premisses and conclusion are constraints on probabilities of sentences of a first-order predicate language, however, it is by no means obvious how to determine these maximal entropy functions. This paper makes progress on the problem in the following ways. Firstly, we introduce the concept of a limit in entropy and show that, if the set of probability functions satisfying the premisses contains a limit in entropy, then this limit point is unique and is the maximal entropy probability function. Next, we turn to the special case in which the premisses are categorical sentences of the logical language. We show that if the uniform probability function gives the premisses positive probability, then the maximal entropy function can be found by simply conditionalising this uniform prior on the premisses. We generalise our results to demonstrate agreement between the maximal entropy approach and Jeffrey conditionalisation in the case in which there is a single premiss that specifies the probability of a sentence of the language. We show that, after learning such a premiss, certain inferences are preserved, namely inferences to inductive tautologies. Finally, we consider potential pathologies of the approach: we explore the extent to which the maximal entropy approach is invariant under permutations of the constants of the language, and we discuss some cases in which there is no maximal entropy probability function.
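    The conditionalisation result for categorical premisses can be illustrated in a small propositional analogue (the atoms, the premiss, and the comparison distribution below are hypothetical examples, not from the paper): conditionalising the uniform prior on a categorical premiss yields the entropy-maximiser among all distributions giving the premiss probability 1.

    ```python
    import itertools
    import math

    # Worlds: truth assignments to two atoms a, b.
    worlds = list(itertools.product([True, False], repeat=2))
    uniform = {w: 1.0 / len(worlds) for w in worlds}

    def premiss(w):
        a, b = w
        return a or b  # categorical premiss: "a or b"

    # Conditionalise the uniform prior on the premiss.
    z = sum(p for w, p in uniform.items() if premiss(w))
    post = {w: (p / z if premiss(w) else 0.0) for w, p in uniform.items()}

    def entropy(dist):
        return -sum(p * math.log(p) for p in dist.values() if p > 0)

    # An arbitrary rival distribution that also gives the premiss probability 1:
    # it has strictly lower entropy than the conditionalised prior, which is
    # uniform over the three premiss-satisfying worlds (entropy = ln 3).
    other = {(True, True): 0.5, (True, False): 0.3,
             (False, True): 0.2, (False, False): 0.0}
    ```

    The conditionalised prior spreads probability evenly over the premiss-satisfying worlds, which is exactly the maximum-entropy choice under that constraint.
    
    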

    Pharmacovigilance as Personalized Medicine. In: Chiara Beneduce and Marta Bertolaso (eds.) Personalized Medicine: A Multidisciplinary Approach to Complexity, Springer Nature.

    Personalized medicine relies on two points: 1) causal knowledge about the possible effects of X in a given statistical population; 2) assignment of the given individual to a suitable reference class. Regarding point 1, standard approaches to causal inference are generally considered to involve a trade-off between how confidently one can establish causality in any given study (internal validity) and how well such knowledge extrapolates to specific target groups (external validity). Regarding point 2, it is uncertain which reference class leads to the most reliable inferences. Pharmacovigilance, instead, focuses on both elements of the individual prediction at the same time: the establishment of a possible causal link between a given drug and an observed adverse event, and the identification of the subgroups in which such links may arise. We develop an epistemic framework that exploits the joint contribution of different dimensions of evidence and allows one to deal with the reference class problem not only by relying on statistical data about covariances, but also by drawing on causal knowledge. That is, the probability that a given individual will face a given side effect will depend probabilistically on their characteristics and on the plausible causal models in which such features become relevant. The evaluation of the causal models is grounded in the available evidence and theory.

    Strictly Proper Scoring Rules

    Epistemic scoring rules are the en vogue tool for justifications of the probability norm and further norms of rational belief formation. They are different in kind and application from the statistical scoring rules from which they arose. In the first part of the paper, I argue that statistical scoring rules, properly understood, are in principle better suited to justify the probability norm than their epistemic brethren, and I give a justification of the probability norm applying statistical scoring rules. In the second part of the paper, I give a variety of justifications of norms for rational belief formation employing statistical scoring rules. Furthermore, general properties of statistical scoring rules are investigated. Epistemic scoring rules feature as a useful technical tool for constructing statistical scoring rules.
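    The defining property at issue — strict propriety — can be shown with the standard Brier score (a textbook example, not the paper's specific statistical scoring rules): a forecaster minimises her expected penalty uniquely by reporting the true chance.

    ```python
    def brier(q, outcome):
        # Penalty for forecasting probability q when the outcome is 0 or 1.
        return (q - outcome) ** 2

    def expected_brier(q, p):
        # Expected penalty when the true chance of the outcome is p:
        # p * (q - 1)^2 + (1 - p) * q^2, a strictly convex function of q.
        return p * brier(q, 1) + (1 - p) * brier(q, 0)

    true_chance = 0.3
    candidates = [q / 100 for q in range(101)]  # forecasts on a fine grid
    best = min(candidates, key=lambda q: expected_brier(q, true_chance))
    ```

    Strict convexity guarantees the minimiser is unique, so honest reporting strictly dominates every other forecast; this is what distinguishes strictly proper rules from merely proper ones.
    
    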

    The principle of spectrum exchangeability within inductive logic

    We investigate the consequences of the principle of Spectrum Exchangeability in inductive logic over the polyadic fragment of first-order logic. This principle roughly states that the probability of a possible world should depend only on how the inhabitants of that world behave with respect to indistinguishability. This principle is a natural generalization of exchangeability principles that have long been investigated over the monadic predicate fragment of first-order logic. It is grounded in our deep conviction that, in the state of total ignorance, all possible worlds that can be obtained from each other by basic symmetric transformations should have the same a priori probability.

    Epistemology of Causal Inference in Pharmacology: Towards a Framework for the Assessment of Harms

    Philosophical discussions on causal inference in medicine are stuck in dyadic camps, each defending one kind of evidence or method over another as the best support for causal hypotheses. Whereas advocates of Evidence-Based Medicine invoke randomised controlled trials and systematic reviews of RCTs as the gold standard, philosophers of science emphasise the importance of mechanisms and their distinctive informational contribution to causal inference and assessment. Some have suggested the adoption of a pluralistic approach to causal inference, and an inductive rather than hypothetico-deductive inferential paradigm. However, these proposals deliver no clear guidelines about how such a plurality of evidence sources should jointly justify hypotheses of causal associations. In this paper, we develop the pluralistic approach along Hill's (1965) famous criteria for discerning causal associations, employing Bovens and Hartmann's general Bayes-net reconstruction of scientific inference to model the assessment of harms in an evidence-amalgamation framework.

    Conflict of Interest and the Principle of Total Evidence

    Many clinical trials suffer from conflicts of interest, such as sponsorship by pharmaceutical companies. “Meta-research” evidence suggests that conflicts of interest raise the probability of biased estimates. And yet the very same trials are prima facie more reliable than trials not subject to conflicts of interest, in virtue of their better design. How should one deal with this seemingly conflicting information? In this paper, we propose a Bayesian model to elucidate the bearing of meta-research evidence on the hypothesis of interest.

    Language Invariance and Spectrum Exchangeability in Inductive Logic

    A sufficient condition is given for a probability function in Inductive Logic (with relations of all arities) satisfying spectrum exchangeability to additionally satisfy Language Invariance. This condition is shown to also be necessary in the case of homogeneous probability functions.