
    E-synthesis for carcinogenicity assessments: A case study of processed meat

    Rationale, Aims and Objectives: Recent controversies about dietary advice concerning meat demonstrate that aggregating the available evidence to assess a putative causal link between food and cancer is a challenging enterprise. Methods: We show how E-Synthesis, a tool developed for assessing putative causal links between drugs and adverse drug reactions, can be applied to food carcinogenicity assessments. The application is demonstrated on the putative causal relationship between processed meat consumption and cancer. Results: The output of the assessment is a Bayesian probability that processed meat consumption causes cancer. This probability is calculated from a Bayesian network model, which incorporates a representation of Bradford Hill's Guidelines as probabilistic indicators of causality. We show how to determine the probabilities of these indicators for food carcinogenicity assessments based on assessments of the International Agency for Research on Cancer. Conclusions: We find that E-Synthesis is a tool well suited to food carcinogenicity assessments: it enables a graphical representation of lines and weights of evidence, makes a large number of judgements explicit and transparent, outputs a probability of causality suitable for decision making, and is flexible enough to aggregate different kinds of evidence.
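
    As a rough illustration of the kind of updating involved, the sketch below combines a few Bradford Hill style indicators into a posterior probability of causation via likelihood ratios. The indicator names, likelihoods and prior are hypothetical placeholders, and the naive conditional-independence combination is a simplification rather than the actual E-Synthesis network.

    ```python
    # Illustrative sketch only: Bradford Hill guidelines treated as probabilistic
    # indicators of causality, combined under a naive conditional-independence
    # assumption.  All names and numbers below are hypothetical placeholders, not
    # the actual E-Synthesis network.

    def posterior_causation(prior, indicators):
        """Update P(causation) given observed indicators.

        indicators: list of (P(indicator | causation), P(indicator | no causation)).
        """
        odds = prior / (1.0 - prior)
        for p_given_c, p_given_not_c in indicators:
            odds *= p_given_c / p_given_not_c   # likelihood ratio of one line of evidence
        return odds / (1.0 + odds)

    # Hypothetical likelihoods for dose-response, consistency, strength of association
    evidence = [(0.8, 0.3), (0.7, 0.4), (0.6, 0.35)]
    print(posterior_causation(prior=0.1, indicators=evidence))  # ~0.47 with these made-up numbers
    ```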

    Making decisions with evidential probability and objective Bayesian calibration inductive logics

    Calibration inductive logics are based on accepting estimates of relative frequencies, which are used to generate imprecise probabilities. In turn, these imprecise probabilities are intended to guide beliefs and decisions — a process called “calibration”. Two prominent examples are Henry E. Kyburg's system of Evidential Probability and Jon Williamson's version of Objective Bayesianism. There are many unexplored questions about these logics. How well do they perform in the short run? Under what circumstances do they do better or worse? What is their performance relative to traditional Bayesianism? In this article, we develop an agent-based model of a classic binomial decision problem, including players based on variations of Evidential Probability and Objective Bayesianism. We compare the performances of these players, including against a benchmark player who uses standard Bayesian inductive logic. We find that the calibrated players can match the performance of the Bayesian player, but only with particular acceptance thresholds and decision rules. Among other points, our discussion raises some challenges for characterising “cautious” reasoning using imprecise probabilities. Thus, we demonstrate a new way of systematically comparing imprecise probability systems, and we conclude that calibration inductive logics are surprisingly promising for making decisions.
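
    The sketch below is a stripped-down version of such a comparison, assuming a simple confidence-interval construction for the accepted frequencies and a bet-only-if-the-whole-interval-clears-the-threshold rule for the calibrated player; neither Kyburg's nor Williamson's full machinery is reproduced here.

    ```python
    import math
    import random

    # Minimal agent-based sketch of a binomial decision problem: agents observe coin
    # flips and then bet on the next toss only if their estimated chance of success
    # clears a threshold.  The interval construction and decision rules below are
    # illustrative simplifications, not the full Evidential Probability or Objective
    # Bayesian machinery.

    def bayes_estimate(heads, n, a=1, b=1):
        return (heads + a) / (n + a + b)            # posterior mean under a Beta(a, b) prior

    def accepted_interval(heads, n, z=1.96):
        p = heads / n
        half = z * math.sqrt(p * (1 - p) / n)       # accepted frequency interval (normal approx.)
        return max(0.0, p - half), min(1.0, p + half)

    def run(trials=2000, true_p=0.6, n_obs=30, threshold=0.5):
        payoffs = {"bayesian": 0, "calibrated": 0}
        for _ in range(trials):
            heads = sum(random.random() < true_p for _ in range(n_obs))
            outcome = random.random() < true_p       # the bet being decided
            # Bayesian player bets iff the posterior mean exceeds the threshold.
            if bayes_estimate(heads, n_obs) > threshold:
                payoffs["bayesian"] += 1 if outcome else -1
            # "Cautious" calibrated player bets only if the whole interval clears it.
            lo, _hi = accepted_interval(heads, n_obs)
            if lo > threshold:
                payoffs["calibrated"] += 1 if outcome else -1
        return payoffs

    print(run())
    ```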

    E-Synthesis: A Bayesian Framework for Causal Assessment in Pharmacosurveillance

    Background: Evidence suggesting adverse drug reactions often emerges unsystematically and unpredictably in the form of anecdotal reports, case series and survey data. Safety trials and observational studies also provide crucial information regarding the (un-)safety of drugs. Hence, integrating multiple types of pharmacovigilance evidence is key to minimising the risks of harm. Methods: In previous work, we began the development of a Bayesian framework for aggregating multiple types of evidence to assess the probability of a putative causal link between drugs and side effects. This framework arose out of a philosophical analysis of the Bradford Hill Guidelines. In this article, we expand the Bayesian framework and add “evidential modulators,” which bear on the assessment of the reliability of incoming study results. The overall framework for evidence synthesis, “E-Synthesis”, is then applied to a case study. Results: Theoretically and computationally, E-Synthesis exploits the coherence of partly or fully independent evidence converging towards the hypothesis of interest (or conflicting with it) in order to update its posterior probability. Compared with other frameworks for evidence synthesis, our Bayesian model has the unique features of grounding its inferential machinery in a consolidated theory of hypothesis confirmation (Bayesian epistemology) and of allowing data from heterogeneous sources (cell data, clinical trials, epidemiological studies) and methods (e.g., frequentist hypothesis testing, Bayesian adaptive trials, etc.) to be quantitatively integrated into the same inferential framework. Conclusions: E-Synthesis is highly flexible concerning the allowed input, while at the same time relying on a consistent computational system that is philosophically and statistically grounded. Furthermore, by introducing evidential modulators, and thereby separating the different dimensions of evidence (strength, relevance, reliability), E-Synthesis allows them to be explicitly tracked when updating causal hypotheses.
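
    The following sketch illustrates one way an evidential modulator could work: a reliability weight shrinks a study's likelihood ratio towards 1 before the causal hypothesis is updated. The shrinkage rule and all numbers are assumptions made for illustration, not the E-Synthesis model itself.

    ```python
    # Sketch of an "evidential modulator": a reliability weight that shrinks a study's
    # likelihood ratio towards 1 (no information) before the causal hypothesis is
    # updated.  The shrinkage rule and every number below are assumptions chosen for
    # illustration; they are not the E-Synthesis model itself.

    def modulated_update(prior, likelihood_ratio, reliability):
        """reliability in [0, 1]: 0 = ignore the study, 1 = take it at face value."""
        effective_lr = likelihood_ratio ** reliability   # geometric interpolation towards LR = 1
        odds = prior / (1.0 - prior) * effective_lr
        return odds / (1.0 + odds)

    p_causal = 0.2
    # (likelihood ratio, reliability) for a case series, an observational study, a safety trial
    studies = [(2.0, 0.4), (3.5, 0.7), (1.5, 0.9)]
    for lr, rel in studies:
        p_causal = modulated_update(p_causal, lr, rel)
        print(round(p_causal, 3))
    ```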

    Fast Methods for Drug Approval: Research Perspectives for Pandemic Preparedness

    Public health emergencies such as the outbreak of novel infectious diseases represent a major challenge for drug regulatory bodies, practitioners, and scientific communities. In such critical situations, drug regulators and public health practitioners base their decisions on evidence generated and synthesised by scientists. The urgency and novelty of the situation create high levels of uncertainty concerning the safety and effectiveness of drugs. One key tool for mitigating such emergencies is pandemic preparedness. There seems to be, however, a lack of scholarly work on methodology for the assessment of new or existing drugs during a pandemic. Issues related to risk attitudes, evidence production and evidence synthesis for drug approval require closer attention. This manuscript therefore engages in a conceptual analysis of the relevant issues of drug assessment during a pandemic. To this end, our analysis draws on recent discussions in the philosophy of science and the philosophy of medicine. Important unanswered foundational questions are identified and possible ways to answer them are considered. Similar problems often have similar solutions, hence studying similar situations can provide important clues. We consider assessments of orphan drugs and drug assessments during endemics as similar to drug assessment during a pandemic. Furthermore, other scientific fields which cannot carry out controlled experiments may guide the methodology for drawing defeasible causal inferences from imperfect data. Future contributions on methodologies for addressing the issues raised here will have great potential to improve pandemic preparedness.

    Benchmark dose modeling for epidemiological dose-response assessment using prospective cohort studies

    Benchmark dose (BMD) methodology has been employed as a default dose-response modeling approach to determine the toxicity values of chemicals in support of regulatory chemical risk assessment. In particular, a relatively standardized BMD analysis framework has been established for modeling toxicological data with respect to the formats of input data, dose-response models, definitions of the benchmark response, and the treatment of model uncertainty. However, the BMD approach has not been as well developed for epidemiological data, mainly because of the diverse designs of epidemiological studies and the various formats in which data are reported in the literature. Although most epidemiological BMD analyses were developed to solve a particular question, the methods proposed in two recent studies can handle cohort and case-control studies using summary data while accounting for adjustment for confounders. The purpose of the present study is therefore to investigate and compare the "effective count"-based BMD modeling approach and the adjusted relative risk (RR)-based BMD analysis approach, in order to identify a BMD modeling framework that can be generalized for analyzing published data from prospective cohort studies. The two methods were applied to the same set of studies investigating the association between inorganic arsenic exposure and bladder and lung cancer. The results suggest that the estimated BMDs and BMDLs are relatively consistent; however, in view of established common practice in BMD analysis, modeling adjusted RR values as continuous data for BMD estimation is the more generalizable approach and better harmonized with the BMD approach used for toxicological data.
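
    A minimal sketch of the RR-based route, under the assumption of a log-linear dose-response model and hypothetical data, is given below; a real BMD analysis would additionally weight observations by their precision and report a BMDL derived from the estimate's uncertainty.

    ```python
    import math

    # Illustrative sketch: treat adjusted relative risks (RR) as continuous dose-response
    # data, fit a log-linear model ln(RR) = beta * dose by least squares through the
    # origin, and solve for the benchmark dose (BMD) at which RR reaches 1 + BMR.
    # The doses, RRs and BMR below are hypothetical placeholders.

    doses = [0.0, 10.0, 50.0, 150.0]     # exposure levels, e.g. ug/L (hypothetical)
    rrs = [1.0, 1.1, 1.4, 2.3]           # adjusted relative risks (hypothetical)

    # Least-squares slope for the no-intercept model ln(RR) = beta * dose
    beta = sum(d * math.log(r) for d, r in zip(doses, rrs)) / sum(d * d for d in doses)

    bmr = 0.10                           # 10% extra relative risk as the benchmark response
    bmd = math.log(1 + bmr) / beta       # dose at which RR = 1 + BMR
    print(f"beta = {beta:.5f} per unit dose, BMD = {bmd:.1f}")
    ```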

    Incentives for Research Effort: An Evolutionary Model of Publication Markets with Double-Blind and Open Review

    Contemporary debates about scientific institutions and practice feature many proposed reforms. Most of these require increased effort from scientists. But how do scientists' incentives for effort interact? How can scientific institutions encourage scientists to invest effort in research? We explore these questions using a game-theoretic model of publication markets. We employ a base game between authors and reviewers, before assessing some of its tendencies by means of analysis and simulations. We compare how the effort expenditures of these groups interact in our model under a variety of settings, such as double-blind and open review systems. We make a number of findings, including that open review can increase the effort of authors in a range of circumstances and that these effects can manifest within a policy-relevant period of time. However, we find that open review's impact on authors' effort is sensitive to the strength of several other influences.
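
    The toy simulation below gives the flavour of such a model for the author side only, assuming a replicator-style update and invented payoff parameters; it is not the game analysed in the paper.

    ```python
    # Toy replicator-dynamics sketch of the author side of a publication market: authors
    # play "low" or "high" effort, and acceptance depends on effort only to the extent
    # that reviewers scrutinise submissions.  Open review is assumed to involve more
    # scrutiny than double-blind review.  All payoff functions and parameters are
    # illustrative assumptions, not the game analysed in the paper.

    HIGH_COST = 0.15   # extra cost of producing a high-effort paper (hypothetical)

    def acceptance_prob(effort, scrutiny):
        # Without scrutiny every paper is accepted with probability 0.8; under full
        # scrutiny acceptance tracks quality (0.9 for high effort, 0.3 for low effort).
        quality = 0.9 if effort == "high" else 0.3
        return (1.0 - scrutiny) * 0.8 + scrutiny * quality

    def evolve(scrutiny, generations=300):
        share_high = 0.5                                   # initial fraction of high-effort authors
        for _ in range(generations):
            pay_high = acceptance_prob("high", scrutiny) - HIGH_COST
            pay_low = acceptance_prob("low", scrutiny)
            mean_pay = share_high * pay_high + (1.0 - share_high) * pay_low
            # Discrete replicator update: a strategy grows in proportion to its relative payoff.
            share_high = min(1.0, max(0.0, share_high * pay_high / mean_pay))
        return share_high

    print("double-blind (scrutiny 0.2):", round(evolve(0.2), 2))
    print("open review  (scrutiny 0.7):", round(evolve(0.7), 2))
    ```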

    Pharmacovigilance as Personalized Medicine. In: Chiara Beneduce and Marta Bertolaso (eds.), Personalized Medicine: A Multidisciplinary Approach to Complexity. Springer Nature.

    Personalized medicine relies on two points: 1) causal knowledge about the possible effects of X in a given statistical population; and 2) assignment of the given individual to a suitable reference class. Regarding point 1, standard approaches to causal inference are generally considered to be characterized by a trade-off between how confidently one can establish causality in any given study (internal validity) and how well such knowledge extrapolates to specific target groups (external validity). Regarding point 2, it is uncertain which reference class leads to the most reliable inferences. Pharmacovigilance, by contrast, focuses on both elements of the individual prediction at the same time: the establishment of the possible causal link between a given drug and an observed adverse event, and the identification of possible subgroups in which such links may arise. We develop an epistemic framework that exploits the joint contribution of different dimensions of evidence and allows one to deal with the reference class problem not only by relying on statistical data about covariances, but also by drawing on causal knowledge. That is, the probability that a given individual will experience a given side effect depends probabilistically on their characteristics and on the plausible causal models in which those features become relevant. The evaluation of the causal models is grounded in the available evidence and theory.
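
    A minimal sketch of this idea, with hypothetical causal models, subgroups and numbers, is given below: the individual's risk is a plausibility-weighted average over the candidate models in which their features become relevant.

    ```python
    # Minimal sketch of the idea that an individual's side-effect risk depends both on
    # which subgroup (reference class) they fall into and on how plausible each causal
    # model making that subgroup relevant is.  The models, features and numbers are
    # hypothetical placeholders, not the framework developed in the chapter.

    patient = {"age": 72, "renal_impairment": True}

    # Each candidate causal model says which feature matters, gives the side-effect risk
    # conditional on that feature, and carries a plausibility weight reflecting how well
    # the available evidence and theory support it.
    models = [
        {"feature": "age >= 65",        "risk_if_true": 0.08, "risk_if_false": 0.02,
         "applies": patient["age"] >= 65,         "plausibility": 0.5},
        {"feature": "renal impairment", "risk_if_true": 0.15, "risk_if_false": 0.03,
         "applies": patient["renal_impairment"],  "plausibility": 0.3},
        {"feature": "none (baseline)",  "risk_if_true": 0.04, "risk_if_false": 0.04,
         "applies": True,                         "plausibility": 0.2},
    ]

    # Model-averaged probability: weight each model's prediction by its plausibility.
    risk = sum(m["plausibility"] * (m["risk_if_true"] if m["applies"] else m["risk_if_false"])
               for m in models)
    print(f"Estimated individual risk of the side effect: {risk:.3f}")
    ```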