
    Science, Values, and the Priority of Evidence

    It is now commonly held that values play a role in scientific judgment, but many arguments for that conclusion are limited. First, many arguments do not show that values are, strictly speaking, indispensable. The role of values could in principle be filled by a random or arbitrary decision. Second, many arguments concern scientific theories and concepts that have obvious practical consequences, thus suggesting, or at least leaving open the possibility, that abstruse sciences without such a connection could be value-free. Third, many arguments concern the role values play in inferring from evidence, thus taking evidence as given. This paper argues that these limitations do not hold in general. There are values involved in every scientific judgment. They cannot even conceivably be replaced by a coin toss, they arise as much for exotic as for practical sciences, and they are at issue as much for observation as for explicit inference.

    Using Systematic Thinking to Choose and Evaluate Evidence


    Metacognitive Development and Conceptual Change in Children

    There has been little investigation to date of the way metacognition is involved in conceptual change. It has been recognised that analytic metacognition is important to the way older children acquire more sophisticated scientific and mathematical concepts at school. But there has been barely any examination of the role of metacognition in earlier stages of concept acquisition, at the ages that have been the major focus of the developmental psychology of concepts. The growing evidence that even young children have a capacity for procedural metacognition raises the question of whether and how these abilities are involved in conceptual development. More specifically, are there developmental changes in metacognitive abilities that have a wholesale effect on the way children acquire new concepts and replace existing concepts? We show that there is already evidence of at least one plausible example of such a link and argue that these connections deserve to be investigated systematically.

    A Bayesian approach to stochastic cost-effectiveness analysis

    The aim of this paper is to discuss the use of Bayesian methods in cost-effectiveness analysis (CEA) and the common ground between Bayesian and traditional frequentist approaches. A further aim is to explore the use of the net benefit statistic and its advantages over the incremental cost-effectiveness ratio (ICER) statistic. In particular, the use of cost-effectiveness acceptability curves is examined as a device for presenting the implications of uncertainty in a CEA to decision makers. Although it is argued that the interpretation of such curves as the probability that an intervention is cost-effective given the data requires a Bayesian approach, this should generate no misgivings for the frequentist. Furthermore, cost-effectiveness acceptability curves estimated using the net benefit statistic are exactly equivalent to those estimated from an appropriate analysis of ICERs on the cost-effectiveness plane. The principles examined in this paper are illustrated by application to the cost-effectiveness of blood pressure control in the UK Prospective Diabetes Study (UKPDS 40). Due to a lack of good-quality prior information on the cost and effectiveness of blood pressure control in diabetes, a Bayesian analysis assuming an uninformative prior is argued to be most appropriate. This generates exactly the same cost-effectiveness results as a standard frequentist analysis.
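    The net benefit statistic and acceptability curve described in this abstract can be illustrated with a short simulation. The sketch below is illustrative only: the cost and effect distributions, sample sizes, and willingness-to-pay range are hypothetical and are not values from UKPDS 40. It bootstraps cost-effect pairs from two arms, computes the incremental net benefit NB(λ) = λ·ΔE − ΔC at each willingness-to-pay threshold λ, and reports the proportion of replicates with positive net benefit, i.e. the acceptability curve.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-patient costs and effects (QALYs) for two arms.
    # These numbers are illustrative, not data from UKPDS 40.
    n = 500
    cost_control   = rng.normal(1_000, 300, n)
    cost_treatment = rng.normal(1_400, 350, n)
    eff_control    = rng.normal(0.70, 0.15, n)
    eff_treatment  = rng.normal(0.75, 0.15, n)

    def ceac(lambdas, n_boot=2_000):
        """Cost-effectiveness acceptability curve via non-parametric bootstrap.

        For each willingness-to-pay threshold lam, return the proportion of
        bootstrap replicates in which the incremental net benefit
        NB(lam) = lam * dE - dC is positive.
        """
        probs = []
        for lam in lambdas:
            positive = 0
            for _ in range(n_boot):
                i = rng.integers(0, n, n)   # resample treatment arm
                j = rng.integers(0, n, n)   # resample control arm
                d_cost = cost_treatment[i].mean() - cost_control[j].mean()
                d_eff  = eff_treatment[i].mean()  - eff_control[j].mean()
                if lam * d_eff - d_cost > 0:
                    positive += 1
            probs.append(positive / n_boot)
        return np.array(probs)

    thresholds = np.arange(0, 50_001, 5_000)   # willingness to pay per QALY
    for lam, p in zip(thresholds, ceac(thresholds)):
        print(f"lambda = {lam:>6}: P(cost-effective) = {p:.2f}")
    ```

    With an uninformative prior, the Bayesian posterior probability that the intervention is cost-effective coincides with this resampling-based frequentist quantity, which is the equivalence the abstract points to.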

    Lost in Translation: Piloting a Novel Framework to Assess the Challenges in Translating Scientific Uncertainty From Empirical Findings to WHO Policy Statements.

    BACKGROUND: Calls for evidence-informed public health policy, with implicit promises of greater program effectiveness, have intensified recently. The methods to produce such policies are not self-evident, requiring a conciliation of values and norms between policy-makers and evidence producers. In particular, the translation of uncertainty from empirical research findings, particularly issues of statistical variability and generalizability, is a persistent challenge because of the incremental nature of research and the iterative cycle of advancing knowledge and implementation. This paper aims to assess how the concept of uncertainty is considered and acknowledged in World Health Organization (WHO) policy recommendations and guidelines. METHODS: We selected four WHO policy statements published between 2008 and 2013, regarding maternal and child nutrient supplementation, infant feeding, heat action plans, and malaria control, to represent topics with a spectrum of available evidence bases. Each of these four statements was analyzed using a novel framework to assess the treatment of statistical variability and generalizability. RESULTS: WHO currently provides substantial guidance on addressing statistical variability through GRADE (Grading of Recommendations Assessment, Development, and Evaluation) ratings for precision and consistency in their guideline documents. Accordingly, our analysis showed that policy-informing questions were addressed by systematic reviews and representations of statistical variability (eg, with numeric confidence intervals). In contrast, the presentation of contextual or "background" evidence regarding etiology or disease burden showed little consideration for this variability. Moreover, generalizability or "indirectness" was uniformly neglected, with little explicit consideration of study settings or subgroups. CONCLUSION: We found that statistical variability and generalizability were treated non-uniformly, and that factors that may contribute to uncertainty about recommendations were neglected, including the state of the evidence informing background questions (the prevalence, mechanisms, or burden and distribution of health problems), generalizability, alternative interventions, and additional outcomes not captured by systematic review. These other factors often form a basis for providing policy recommendations, particularly in the absence of a strong evidence base for intervention effects. Consequently, they should also be subject to stringent and systematic evaluation criteria. We suggest that more effort is needed to systematically acknowledge (1) when evidence is missing, conflicting, or equivocal, (2) what normative considerations were also employed, and (3) how additional evidence may be accrued.

    Flow-based reputation with uncertainty: Evidence-Based Subjective Logic

    The concept of reputation is widely used as a measure of trustworthiness based on ratings from members in a community. The adoption of reputation systems, however, relies on their ability to capture the actual trustworthiness of a target. Several reputation models for aggregating trust information have been proposed in the literature. The choice of model has an impact on the reliability of the aggregated trust information as well as on the procedure used to compute reputations. Two prominent models are flow-based reputation (e.g., EigenTrust, PageRank) and Subjective Logic based reputation. Flow-based models provide an automated method to aggregate trust information, but they are not able to express the level of uncertainty in the information. In contrast, Subjective Logic extends probabilistic models with an explicit notion of uncertainty, but the calculation of reputation depends on the structure of the trust network and often requires information to be discarded. These are severe drawbacks. In this work, we observe that the "opinion discounting" operation in Subjective Logic has a number of basic problems. We resolve these problems by providing a new discounting operator that describes the flow of evidence from one party to another. The adoption of our discounting rule results in a consistent Subjective Logic algebra that is entirely based on the handling of evidence. We show that the new algebra enables the construction of an automated reputation assessment procedure for arbitrary trust networks, where the calculation no longer depends on the structure of the network, and does not need to throw away any information. Thus, we obtain the best of both worlds: flow-based reputation and consistent handling of uncertainties.
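    A minimal sketch of the evidence-centric view described in this abstract. Mapping positive and negative evidence counts to a (belief, disbelief, uncertainty) opinion is standard Subjective Logic; the `discount` function below, which scales the evidence reported by a recommender by the belief placed in that recommender before forming an opinion, is a simplified illustration of "evidence flowing from one party to another", not necessarily the exact operator defined in the paper.

    ```python
    from dataclasses import dataclass

    W = 2.0  # non-informative prior weight commonly used in Subjective Logic

    @dataclass
    class Evidence:
        r: float  # amount of positive (supporting) evidence
        s: float  # amount of negative (conflicting) evidence

        def opinion(self):
            """Map evidence counts to a (belief, disbelief, uncertainty) opinion."""
            total = self.r + self.s + W
            return (self.r / total, self.s / total, W / total)

    def discount(trust_in_recommender: Evidence, reported: Evidence) -> Evidence:
        """Evidence-based discounting (illustrative simplification).

        Rather than combining opinions directly, scale the evidence reported by
        a recommender by the belief we hold in that recommender, so a poorly
        trusted source contributes little evidence and leaves high uncertainty.
        """
        belief, _, _ = trust_in_recommender.opinion()
        return Evidence(belief * reported.r, belief * reported.s)

    # A trusts B based on 8 positive and 1 negative past interactions;
    # B reports 20 positive and 5 negative observations about C.
    a_trusts_b = Evidence(8, 1)
    b_observes_c = Evidence(20, 5)

    b, d, u = discount(a_trusts_b, b_observes_c).opinion()
    print(f"A's derived opinion about C: b={b:.2f}, d={d:.2f}, u={u:.2f}")
    ```

    Because discounting here returns an evidence pair rather than an opinion, the same operation can be chained and aggregated over arbitrary trust networks, which is what makes a flow-style automated computation possible in this setting.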