
    Crowdsourcing hypothesis tests: making transparent how design choices shape research results

    To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N > 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
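
    As a rough illustration of the pooling step described in this abstract (not the authors' actual analysis code or data), the sketch below simulates effect-size estimates from several teams testing one hypothesis with different materials and combines them with a DerSimonian-Laird random-effects meta-analysis; the team count, sample sizes, and effect values are all hypothetical.

```python
import numpy as np

# Hypothetical illustration: each team designs its own materials to test the
# same hypothesis and reports a standardized mean difference d_i with
# sampling variance v_i. All numbers below are made up for this sketch.
rng = np.random.default_rng(2020)
n_teams = 15
true_effect = 0.10      # assumed underlying effect
tau2_true = 0.04        # assumed between-materials (design) variance
n_per_arm = rng.integers(400, 700, n_teams)   # assumed per-condition sample sizes

# Approximate sampling variance of Cohen's d for a two-group comparison.
v = 2.0 / n_per_arm + true_effect**2 / (4.0 * n_per_arm)
d = rng.normal(true_effect, np.sqrt(tau2_true + v))   # one estimate per team

# DerSimonian-Laird random-effects meta-analysis.
w = 1.0 / v                                   # fixed-effect weights
d_fe = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fe) ** 2)               # Cochran's heterogeneity statistic
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2_hat = max(0.0, (Q - (n_teams - 1)) / C)  # estimated between-team variance

w_re = 1.0 / (v + tau2_hat)                   # random-effects weights
d_pooled = np.sum(w_re * d) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled d = {d_pooled:.3f} (95% CI half-width {1.96 * se_pooled:.3f})")
print(f"between-materials variance tau^2 = {tau2_hat:.3f}")
```

    In the study's setting this kind of pooling would be done separately for each hypothesis, with tau^2 roughly playing the role of the spread in estimates driven by the team-designed materials rather than by sampling error.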

    Research design considerations for chronic pain prevention clinical trials: IMMPACT recommendations

    Although certain risk factors can identify individuals who are most likely to develop chronic pain, few interventions to prevent chronic pain have been identified. To facilitate the identification of preventive interventions, an IMMPACT meeting was convened to discuss research design considerations for clinical trials investigating the prevention of chronic pain. We present general design considerations for prevention trials in populations that are at relatively high risk for developing chronic pain. Specific design considerations included subject identification, timing and duration of treatment, outcomes, timing of assessment, and adjusting for risk factors in the analyses. We provide a detailed examination of 4 models of chronic pain prevention (ie, chronic postsurgical pain, postherpetic neuralgia, chronic low back pain, and painful chemotherapy-induced peripheral neuropathy). The issues discussed can, in many instances, be extrapolated to other chronic pain conditions. These examples were selected because they are representative models of primary and secondary prevention, reflect persistent pain resulting from multiple insults (ie, surgery, viral infection, injury, and toxic or noxious element exposure), and are chronically painful conditions that are treated with a range of interventions. Improvements in the design of chronic pain prevention trials could improve assay sensitivity and thus accelerate the identification of efficacious interventions. Such interventions would have the potential to reduce the prevalence of chronic pain in the population. Additionally, standardization of outcomes in prevention clinical trials will facilitate meta-analyses and systematic reviews and improve detection of preventive strategies emerging from clinical trials.
