
    When Should Potentially False Research Findings Be Considered Acceptable?

    Ioannidis estimated that most published research findings are false [1], but he did not indicate when, if at all, potentially false research results may be considered acceptable to society. We combined our two previously published models [2,3] to calculate the probability above which research findings may become acceptable. The new model indicates that the probability above which research results should be accepted depends on the expected payback from the research (the benefits) and the inadvertent consequences (the harms). This probability may change dramatically depending on our willingness to tolerate error in accepting false research findings. Our acceptance of research findings changes as a function of what we call “acceptable regret,” i.e., our tolerance of making a wrong decision in accepting the research hypothesis. We illustrate our findings by providing a new framework for early stopping rules in clinical research (i.e., when should we accept early findings from a clinical trial indicating benefit as true?). Obtaining absolute “truth” in research is impossible, so society has to decide when less-than-perfect results may become acceptable.
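    The benefit–harm trade-off described in this abstract can be illustrated with a minimal sketch. The threshold form used here, p* = harm / (benefit + harm), is the standard decision-theoretic acceptance threshold and is an assumption for illustration; it is not the authors' published model from [2,3].

```python
def acceptance_threshold(benefit: float, harm: float) -> float:
    """Probability above which accepting a research finding is, on
    balance, expected to do more good than harm.

    Assumed illustrative form: p* = harm / (benefit + harm). A higher
    expected payback (benefit) lowers the threshold; larger inadvertent
    consequences (harm) raise it.
    """
    if benefit <= 0 or harm < 0:
        raise ValueError("benefit must be positive and harm non-negative")
    return harm / (benefit + harm)

# Example: a finding whose benefit, if true, is four times the harm
# incurred if it turns out to be false.
p_star = acceptance_threshold(benefit=4.0, harm=1.0)
print(p_star)  # 0.2: accept the finding once P(finding is true) exceeds 0.2
```

    Under this toy model, "acceptable regret" would enter as a further adjustment to p*, lowering the bar when a decision maker is willing to tolerate some chance of having accepted a false finding.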

    A regret theory approach to decision curve analysis: A novel method for eliciting decision makers' preferences and decision-making

    Background: Decision curve analysis (DCA) has been proposed as an alternative method for the evaluation of diagnostic tests, prediction models, and molecular markers. However, DCA is based on expected utility theory, which decision makers routinely violate. Decision-making is governed by intuition (system 1) and an analytical, deliberative process (system 2); rational decision-making should therefore reflect both formal principles of rationality and intuition about good decisions. We use the cognitive emotion of regret to serve as a link between systems 1 and 2 and to reformulate DCA.
    Methods: First, we analysed a classic decision tree describing three decision alternatives: treat, do not treat, and treat or do not treat based on a predictive model. We then computed the expected regret for each of these alternatives as the difference between the utility of the action taken and the utility of the action that, in retrospect, should have been taken. For any pair of strategies, we measured the difference in net expected regret. Finally, we employed the concept of acceptable regret to identify the circumstances under which a potentially wrong strategy is tolerable to a decision maker.
    Results: We developed a novel dual visual analog scale to describe the relationship between the regret associated with "omissions" (e.g., failure to treat) vs. "commissions" (e.g., treating unnecessarily) and the decision maker's preferences as expressed in terms of threshold probability. We then proved that the Net Expected Regret Difference, first presented in this paper, is equivalent to net benefit as described in the original DCA. Based on the concept of acceptable regret, we identified the circumstances under which a decision maker tolerates a potentially wrong decision and expressed them in terms of the probability of disease.
    Conclusions: We present a novel method for eliciting decision makers' preferences and an alternative derivation of DCA based on regret theory. Our approach may be intuitively more appealing to a decision maker, particularly in clinical situations when the best management option is the one associated with the least amount of regret (e.g., diagnosis and treatment of advanced cancer).
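    The expected-regret comparison described in the Methods section can be sketched as follows. The regret weights used here (1 for an omission, pt/(1 - pt) for a commission, where pt is the threshold probability) are illustrative assumptions consistent with the usual threshold definition, not the paper's exact derivation.

```python
def expected_regret(p: float, treat: bool, pt: float) -> float:
    """Expected regret of a fixed strategy, given disease probability p
    and threshold probability pt.

    Illustrative regret weights: failing to treat a diseased patient
    (omission) costs 1; treating a healthy patient (commission) costs
    pt / (1 - pt), following the threshold-probability definition.
    """
    commission = pt / (1.0 - pt)  # regret weight for unnecessary treatment
    if treat:
        return (1.0 - p) * commission  # regret only if the patient is healthy
    return p * 1.0                     # regret only if the patient is diseased

def net_expected_regret_difference(p: float, pt: float) -> float:
    """Regret of 'treat all' minus regret of 'treat none'.
    Negative values favour treating; zero at p == pt."""
    return expected_regret(p, True, pt) - expected_regret(p, False, pt)

# At p == pt the two strategies incur equal expected regret:
print(round(net_expected_regret_difference(0.25, 0.25), 10))  # 0.0
```

    With these weights, the strategy with the smaller expected regret flips exactly at the threshold probability, which is the sense in which a regret formulation can reproduce the ranking given by net benefit in the original DCA.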

    Quality of Pharmaceutical Advertisements in Medical Journals: A Systematic Review

    Journal advertising is one of the main sources of medicines information for doctors. Despite the availability of regulations and controls on drug promotion worldwide, the information on medicines provided in journal advertising has been criticized in several studies for being of poor quality. However, no attempt has been made to systematically summarise this body of research. We designed this systematic review to assess all studies that have examined the quality of pharmaceutical advertisements for prescription products in medical and pharmacy journals. Studies were identified by searching electronic databases, a web library, and a search engine, and by reviewing citations (1950 - February 2006). Only articles that were published in English and examined the quality of information included in pharmaceutical advertisements for prescription products in medical or pharmacy journals were included. For each eligible article, a researcher independently extracted data on the study methodology and outcomes. The data were then reviewed by a second researcher. Any disagreements were resolved by consensus. The data were analysed descriptively. The final analysis included 24 articles. The studies reviewed advertisements from 26 countries. The number of journals surveyed in each study ranged from four to 24. Several outcome measures were examined, including the references and claims provided in advertisements, the availability of product information, adherence to codes or guidelines, and the presentation of risk results. The majority of studies employed a convenience-sampling method. Brand name, generic name, and indications were usually provided. Journal articles were commonly cited to support pharmaceutical claims. Fewer than 67% of the claims were supported by a systematic review, a meta-analysis, or a randomised controlled trial. Every study that assessed misleading claims found at least one advertisement with a misleading claim. Two studies found that fewer than 28% of claims were unambiguous clinical claims. Most advertisements with quantitative information presented risk results as relative risk reduction. Because the studies covered only 26 countries, the generalizability of the results is limited. Evidence from this review indicates that the low quality of journal advertising is a global issue. As the information provided in journal advertising has the potential to change doctors' prescribing behaviour, ongoing efforts to increase education about drug promotion are crucial. The results of our review suggest the need for a global, proactive, and effective regulatory system to ensure that the information provided in medical journal advertising supports the quality use of medicines.

    Screening for prostate cancer: systematic review and meta-analysis of randomised controlled trials

    Objective: To examine the evidence on the benefits and harms of screening for prostate cancer.

    JAMA Published Fewer Industry-Funded Studies after Introducing a Requirement for Independent Statistical Analysis

    BACKGROUND: JAMA introduced a requirement for independent statistical analysis for industry-funded trials in July 2005. We wanted to see whether this policy affected the number of industry-funded trials published by JAMA. METHODS AND FINDINGS: We undertook a retrospective, before-and-after study of published papers. Two investigators independently extracted data from all issues of JAMA published between 1 July 2002 and 30 June 2008 (i.e., three years before and three years after the policy). They were not blinded to publication date. The randomized controlled trials (RCTs) were classified as industry funded (IF), joint industry/non-commercial funding (J), industry supported (IS) (when manufacturers provided materials only), non-commercial (N), or funding not stated (NS). Findings were compared and discrepancies resolved by discussion or further analysis of the reports. RCTs published in The Lancet and NEJM over the same period were used as a control group. Between July 2002 and June 2008, JAMA published 1,314 papers, of which 311 were RCTs. The number of industry studies (IF, J, or IS) fell significantly after the policy (p = 0.02), especially for categories J and IS. However, over the same period, the number of industry studies rose in both The Lancet and NEJM. CONCLUSIONS: After the requirement for independent statistical analysis for industry-funded studies, JAMA published significantly fewer RCTs and significantly fewer industry-funded RCTs. This pattern was not seen in the control journals, which suggests the JAMA policy affected the number of submissions, the acceptance rate, or both. Without analysing the submissions, we cannot check these hypotheses; but, assuming the number of published papers is related to the number submitted, our findings suggest that JAMA's policy may have resulted in a significant reduction in the number of industry-sponsored trials it received and published.

    Concordance between decision analysis and matching systematic review of randomized controlled trials in assessment of treatment comparisons: a systematic review

    BACKGROUND: Systematic review (SR) of randomized controlled trials (RCTs) is the gold standard for informing treatment choice. Decision analyses (DA) also play an important role in informing health care decisions. It is unknown how often the results of a DA and a matching SR of RCTs are in concordance. We assessed whether the results of DAs are in concordance with SRs of RCTs matched on patient population, intervention, control, and outcomes. METHODS: We searched PubMed up to 2008 for DAs comparing at least two interventions, followed by matching SRs of RCTs. Data on patient population, intervention, control, and outcomes were extracted from the DAs and matching SRs of RCTs. Data extraction from DAs was done by one reviewer and from SRs of RCTs by two independent reviewers. RESULTS: We identified 28 DAs representing 37 comparisons for which we found a matching SR of RCTs. The results of the DAs and SRs of RCTs were in concordance in 73% (27/37) of cases. The sensitivity analyses conducted in either the DA or the SR of RCTs did not affect concordance. Use of a single data source (4/37) versus multiple data sources (33/37) in the design of the DA model was statistically significantly associated with concordance between the DA and the SR of RCTs. CONCLUSIONS: Our findings illustrate the high concordance of current DA models with SRs of RCTs. It has been shown previously that there is 50% concordance between a DA and a matching single RCT. Our finding of 73% concordance between DAs and matching SRs of RCTs underlines the importance of the totality of evidence (i.e., SRs of RCTs) in the design of DA models and in medical decision-making generally.