161 research outputs found

    What do the JAMA editors say when they discuss manuscripts that they are considering for publication? Developing a schema for classifying the content of editorial discussion

    Background: In an effort to identify previously unrecognized aspects of editorial decision-making, we explored the words and phrases that one group of editors used during their meetings. Methods: We performed an observational study of discussions at manuscript meetings at JAMA, a major US general medical journal. One of us (KD) attended 12 editorial meetings in 2003 as a visitor and took notes recording phrases from the discussion surrounding 102 manuscripts. In addition, editors attending the meetings completed a form for each manuscript considered, listing the reasons they were inclined to proceed to the next step in publication and the reasons they were not (DR attended 4 of the 12 meetings). We entered the spoken and written phrases into NVivo 2.0 and then developed a schema for classifying the editors' phrases, using an iterative approach. Results: Our classification schema has three main themes: science, journalism, and writing. We considered 2,463 phrases, of which 87 related mainly to the manuscript topic and were not classified, leaving 2,376 classified phrases. Phrases related to science predominated (1,274 or 54%). The editors, most of whom were physicians, also placed major weight on goals important to JAMA's mission (journalism goals), such as importance to medicine, strategic emphasis for the journal, interest to the readership, and results (729 or 31% of phrases). About 16% (n = 373) of the phrases related to writing issues, such as clarity and responses to the referees' comments. Conclusion: Classification of editorial discourse provides insight into editorial decision making and concepts that need exploration in future studies.
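
    The percentages reported above follow from the classified total (2,463 phrases considered minus 87 unclassified = 2,376). A minimal sketch in Python, using only the counts given in the abstract, reproduces the breakdown:

    ```python
    # Theme counts for the classified editor phrases, taken from the abstract above.
    counts = {"science": 1274, "journalism": 729, "writing": 373}

    total_considered = 2463
    unclassified = 87
    classified = total_considered - unclassified  # 2376

    assert classified == sum(counts.values())

    for theme, n in counts.items():
        print(f"{theme:10s} {n:5d}  {100 * n / classified:.0f}%")
    # Prints roughly: science 54%, journalism 31%, writing 16%, matching the abstract.
    ```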

    Publication bias in clinical trials

    This is the protocol for a review and there is no abstract. The objectives are as follows: to summarise evidence of publication bias for trials of health care interventions. Output Type: Protocol.

    ClinicalTrials.gov registration can supplement information in abstracts for systematic reviews: a comparison study

    BACKGROUND: The inclusion of randomized controlled trials (RCTs) reported in conference abstracts in systematic reviews is controversial, partly because study design information and risk of bias are often not fully reported in the abstract. The Association for Research in Vision and Ophthalmology (ARVO) has required trial registration for abstracts submitted to its annual conference since 2007. Our goal was to assess the feasibility of obtaining study design information critical to systematic reviews, but not typically included in conference abstracts, from the trial registration record. METHODS: We reviewed all conference abstracts presented at the ARVO meetings from 2007 through 2009 and identified 496 RCTs; 154 had a single matching registration record in ClinicalTrials.gov. Two individuals independently extracted information from the abstract and the ClinicalTrials.gov record, including study design, sample size, inclusion criteria, masking, interventions, outcomes, funder, and investigator name and contact information. Discrepancies were resolved by consensus. We assessed how frequently each variable was reported in the abstract and in the trial register and assessed agreement of information reported in both sources. RESULTS: We found a substantial amount of study design information in the ClinicalTrials.gov record that was unavailable in the corresponding conference abstract, including eligibility criteria associated with gender (83%; 128/154); masking or blinding of study participants (53%; 82/154), of persons administering treatment (30%; 46/154), and of persons measuring the outcomes (40%; 61/154); and number of study centers (58%; 90/154). Only 34% (52/154) of abstracts explicitly described a primary outcome, but a primary outcome was included in the "Primary Outcome" field of the ClinicalTrials.gov record for 82% (126/154) of studies. One or more study interventions were reported in each abstract, but they agreed exactly with those reported in ClinicalTrials.gov only slightly more than half the time (88/154, 56%). We found no contact information for study investigators in the abstracts, and this information was available in fewer than one quarter of ClinicalTrials.gov records (17%; 26/154). CONCLUSION: RCT design information not reported in conference abstracts is often available in the corresponding ClinicalTrials.gov registration record. Sometimes conflicting information is reported in the two sources, and further contact with the trial investigators may still be required.
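
    For the register-versus-abstract comparison above, the reported percentages can be recomputed directly from the counts. A small sketch in Python, using only the figures given in the abstract (item labels are paraphrased):

    ```python
    # Counts out of 154 matched RCTs, taken from the abstract above. The first five items
    # were available in the ClinicalTrials.gov record but typically not in the conference
    # abstract; the last two contrast where a primary outcome was reported.
    n_trials = 154
    item_counts = {
        "eligibility criteria related to gender (registry)": 128,
        "masking of study participants (registry)": 82,
        "masking of persons administering treatment (registry)": 46,
        "masking of outcome assessors (registry)": 61,
        "number of study centers (registry)": 90,
        "primary outcome described in the conference abstract": 52,
        "primary outcome in the ClinicalTrials.gov record": 126,
    }

    for item, count in item_counts.items():
        print(f"{item:55s} {count:3d}/{n_trials} ({100 * count / n_trials:.0f}%)")
    ```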

    Compliance of clinical trial registries with the World Health Organization minimum data set: a survey

    BACKGROUND: Since September 2005, the International Committee of Medical Journal Editors has required that trials be registered in accordance with the World Health Organization (WHO) minimum data set in order to be considered for publication. The objective was to evaluate registries' and individual trial records' compliance with the 2006 version of the WHO minimum data set. METHODS: A retrospective evaluation of 21 online clinical trial registries (international, national, specialty, pharmaceutical industry, and local) from April 2005 to February 2007, and a cross-sectional evaluation of a stratified random sample of 610 trial records from the 21 registries. RESULTS: Among the 11 registries that provided guidelines for registration, median compliance with the WHO criteria was 14 of the 20 items (range 6 to 20). In the period April 2005 to February 2007, six registries increased their compliance by six data items, on average. None of the local registry websites published guidelines on the trial data items required for registration. Slightly more than half (330/610; 54.1%, 95% CI 50.1% to 58.1%) of trial records completed the contact details criteria, while 29.7% (181/610; 95% CI 26.1% to 33.5%) completed the key clinical and methodological data fields. CONCLUSION: While the launch of the WHO minimum data set seemed to positively influence registries, with better standardisation of approaches, individual registry entries are largely incomplete. Initiatives to ensure quality assurance of registries and trial data should be encouraged. Peer reviewers and editors should scrutinise clinical trial registration records to ensure consistency with the WHO's core content requirements when considering trial-related publications.
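
    The abstract above does not state which binomial confidence interval was used; a sketch assuming an exact (Clopper-Pearson) interval, via statsmodels, gives values close to the reported 50.1%-58.1% and 26.1%-33.5%:

    ```python
    # Proportion of the 610 sampled trial records meeting each criterion, with 95% CIs.
    # The CI method is an assumption (Clopper-Pearson); the abstract does not specify it.
    from statsmodels.stats.proportion import proportion_confint

    n_records = 610
    criteria = {"contact details": 330, "key clinical and methodological fields": 181}

    for label, count in criteria.items():
        low, high = proportion_confint(count, n_records, alpha=0.05, method="beta")
        print(f"{label}: {count}/{n_records} = {100 * count / n_records:.1f}% "
              f"(95% CI {100 * low:.1f}% to {100 * high:.1f}%)")
    ```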

    Comparative Effectiveness Research: Challenges for Medical Journals

    Editors from a number of medical journals lay out principles for journals considering publication of comparative effectiveness research (CER). To encourage dissemination of this editorial, the article is freely available in PLoS Medicine and will also be published in Medical Decision Making, Croatian Medical Journal, The Cochrane Library, Trials, The American Journal of Managed Care, and Journal of Clinical Epidemiology.

    Future Directions for Cardiovascular Disease Comparative Effectiveness Research Report of a Workshop Sponsored by the National Heart, Lung, and Blood Institute

    Comparative effectiveness research (CER) aims to provide decision makers with the evidence needed to evaluate the benefits and harms of alternative clinical management strategies. CER has become a national priority, with considerable new research funding allocated to it. Cardiovascular disease is a priority area for CER. This workshop report provides an overview of CER methods, with an emphasis on practical clinical trials and observational treatment comparisons. The report also details recommendations to the National Heart, Lung, and Blood Institute for a new framework for evidence development to foster cardiovascular CER, along with specific studies to address eight clinical issues identified by the Institute of Medicine as high priorities for cardiovascular CER.

    A randomized trial provided new evidence on the accuracy and efficiency of traditional vs. electronically annotated abstraction approaches in systematic reviews

    Objectives: Data Abstraction Assistant (DAA) is software for linking items abstracted into a data collection form for a systematic review to their locations in a study report. We conducted a randomized cross-over trial that compared DAA-facilitated single data abstraction plus verification ("DAA verification"), single data abstraction plus verification ("regular verification"), and independent dual data abstraction plus adjudication ("independent abstraction"). Study Design and Setting: This study was an online randomized cross-over trial with 26 pairs of data abstractors. Each pair abstracted data from six articles, two per approach. Outcomes were the proportion of errors and the time taken. Results: The overall proportion of errors was 17% for DAA verification, 16% for regular verification, and 15% for independent abstraction. DAA verification was associated with higher odds of errors when compared with regular verification (adjusted odds ratio [OR] = 1.08; 95% confidence interval [CI]: 0.99 to 1.17) or independent abstraction (adjusted OR = 1.12; 95% CI: 1.03 to 1.22). For each article, DAA verification took 20 minutes (95% CI: 1 to 40) longer than regular verification but 46 minutes (95% CI: 26 to 66) less than independent abstraction. Conclusion: Independent abstraction may be necessary only for complex data items. DAA provides an audit trail that is crucial for reproducible research.
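
    The abstract above reports adjusted odds ratios for errors but does not describe the model. The sketch below is illustrative only: it uses simulated data and an assumed logistic regression with abstractor pair as a covariate to show how such an adjusted OR might be estimated; every name and number is hypothetical.

    ```python
    # Illustrative only: simulated data, not the trial's dataset, and an assumed model
    # (logistic regression adjusting for abstractor pair); the trial's actual analysis
    # is not described in the abstract above.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    error_rate = {"daa": 0.17, "regular": 0.16, "independent": 0.15}  # from the abstract

    rows = []
    for pair in range(26):                       # 26 pairs of abstractors
        for approach, p in error_rate.items():
            for _ in range(60):                  # hypothetical number of abstracted items
                rows.append({"pair": pair, "approach": approach,
                             "error": rng.binomial(1, p)})
    df = pd.DataFrame(rows)

    fit = smf.logit("error ~ C(approach, Treatment('regular')) + C(pair)",
                    data=df).fit(disp=0)
    print(np.exp(fit.params.filter(like="approach")))   # ORs vs. regular verification
    ```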

    Network meta-analysis: highly attractive but more methodological research is needed

    Network meta-analysis, in the context of a systematic review, is a meta-analysis in which multiple treatments (that is, three or more) are compared using both direct comparisons of interventions within randomized controlled trials and indirect comparisons across trials based on a common comparator. To ensure the validity of findings from network meta-analyses, the systematic review must be designed rigorously and conducted carefully. Aspects of designing and conducting a systematic review for network meta-analysis include defining the review question, specifying eligibility criteria, searching for and selecting studies, assessing risk of bias and quality of evidence, conducting the network meta-analysis, and interpreting and reporting findings. This commentary summarizes the methodologic challenges and research opportunities for network meta-analysis relevant to each aspect of the systematic review process, based on discussions at a network meta-analysis methodology meeting we hosted in May 2010 at the Johns Hopkins Bloomberg School of Public Health. Because this commentary reflects the discussion at that meeting, it is not intended to provide an overview of the field.
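
    The combination of direct and indirect evidence that the commentary refers to can be made concrete with the standard adjusted indirect comparison (the Bucher method): if treatments A and B have each been compared with a common comparator C, the indirect A-versus-B estimate is the difference of the two direct estimates, with their variances added. A minimal sketch with hypothetical numbers on the log odds ratio scale:

    ```python
    # Adjusted indirect comparison (Bucher method) on the log odds ratio scale.
    # The two direct estimates and their standard errors are hypothetical.
    import math

    d_ac, se_ac = -0.40, 0.15   # direct estimate, A vs. C
    d_bc, se_bc = -0.10, 0.20   # direct estimate, B vs. C

    d_ab = d_ac - d_bc                          # indirect estimate, A vs. B
    se_ab = math.sqrt(se_ac**2 + se_bc**2)      # variances add for independent estimates

    lower, upper = d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab
    print(f"indirect OR, A vs. B: {math.exp(d_ab):.2f} "
          f"(95% CI {math.exp(lower):.2f} to {math.exp(upper):.2f})")
    ```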

    Development and validation of a computerized expert system for evaluation of automated visual fields from the Ischemic Optic Neuropathy Decompression Trial

    BACKGROUND: The objective of this report is to describe the methods used to develop and validate a computerized system to analyze Humphrey visual fields obtained from patients with non-arteritic anterior ischemic optic neuropathy (NAION) enrolled in the Ischemic Optic Neuropathy Decompression Trial (IONDT). The IONDT was a multicenter study that included randomized and non-randomized patients with newly diagnosed NAION in the study eye. At baseline, randomized eyes had visual acuity of 20/64 or worse; non-randomized eyes had visual acuity better than 20/64 or belonged to patients who refused randomization. Visual fields were measured before treatment using the Humphrey Field Analyzer with the 24-2 program, foveal threshold, and size III stimulus. METHODS: We used visual fields from 189 non-IONDT eyes with NAION to develop the computerized classification system. Six neuro-ophthalmologists (the "expert panel") developed definitions for visual field pattern defects using 19 visual fields representing a range of pattern defect types. The expert panel then used 120 visual fields, classified using these definitions, to refine the rules, generating revised definitions for 13 visual field pattern defects and 3 levels of severity. These definitions were incorporated into a rule-based computerized classification system run in Excel software. The computerized classification system was used to categorize visual field defects for an additional 95 NAION visual fields; the expert panel was asked to classify the new fields independently and subsequently to indicate whether they agreed with the computer classification. To account for test variability over time, we derived an adjustment factor from the pooled short-term fluctuation. We examined change in defects, with and without adjustment, in visual fields of study participants who demonstrated a decrease in visual acuity within 30 days of NAION onset (progressive NAION). RESULTS: Despite an agreed-upon set of rules, agreement among the expert panel was not good when their independent classifications were compared. A majority did concur with the computer classification for 91 of 95 visual fields. The remaining classification discrepancies could not be resolved without modifying existing definitions. Without the adjustment factor, the visual fields of 63.6% (14/22) of patients with progressive NAION and no central defect, and of all (7/7) patients with a paracentral defect, worsened within 30 days of NAION onset. After applying the adjustment factor, the visual fields of the same patients with no initial central defect and of 5/7 of the patients with a paracentral defect were seen to worsen. CONCLUSION: The IONDT developed a rule-based computerized system that consistently defines the pattern and severity of visual fields of NAION patients for use in a research setting.
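
    The adjustment for test-to-test variability described above (an adjustment factor derived from the pooled short-term fluctuation) can be illustrated with a short sketch. Everything below is hypothetical: the IONDT's actual pooling rule and change criterion are not given in the abstract.

    ```python
    # Hypothetical illustration: pool short-term fluctuation (SF, in dB) across eyes as a
    # root mean square, then require a follow-up decline larger than the pooled SF before
    # calling a visual field "worse". Not the IONDT's actual rule.
    import math

    per_eye_sf = [1.8, 2.4, 2.1, 3.0, 2.6]         # hypothetical per-eye SF estimates, dB
    pooled_sf = math.sqrt(sum(sf ** 2 for sf in per_eye_sf) / len(per_eye_sf))

    baseline_md, followup_md = -6.5, -9.2           # hypothetical mean deviations, dB
    change = followup_md - baseline_md
    worsened = change < -pooled_sf                  # decline exceeding the adjustment factor
    print(f"pooled SF = {pooled_sf:.2f} dB, change = {change:.1f} dB, worsened = {worsened}")
    ```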