
    Examining agreement between clinicians when assessing sick children.

    BACKGROUND: Case management guidelines use a limited set of clinical features to guide assessment and treatment of common childhood diseases in poor countries. Using video records of clinical signs, we assessed agreement among experts and whether Kenyan health workers could identify signs defined by expert consensus. METHODOLOGY: 104 videos representing 11 clinical sign categories were presented to experts using a web questionnaire. Proportionate agreement and agreement beyond chance were calculated using kappa and the AC1 statistic. Thirty-one videos were then presented to local health workers: 20 for which experts had demonstrated clear agreement and 11 for which experts could not demonstrate agreement. PRINCIPAL FINDINGS: Experts reached a very high level of chance-adjusted agreement for some videos, while for a few videos no agreement beyond chance was found. Where experts agreed, Kenyan hospital staff of all cadres recognised signs with high mean sensitivity and specificity (sensitivity: 0.897-0.975; specificity: 0.813-0.894); years of experience, gender and hospital had no influence on mean sensitivity or specificity. Local health workers did not agree on videos where experts had shown low or no agreement. Different agreement statistics for multiple observers, the AC1 and Fleiss' kappa, give differing results across the range of proportionate agreement. CONCLUSION: Videos provide a useful means of testing agreement amongst geographically diverse groups of health workers. Kenyan health workers agree with experts where clinical signs are clear-cut, supporting the potential value of assessment and management guidelines. However, clinical signs are not always clear-cut, and video recordings offer one means of helping to standardise their interpretation.
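
    The agreement measures named above can be reproduced on toy data. The sketch below, with invented rating vectors, shows percent agreement, Cohen's kappa, and Gwet's AC1 for two binary raters, plus sensitivity and specificity of a health worker's ratings against an expert reference; it is an illustration of the statistics, not the study's analysis code.

```python
# Illustrative sketch only (invented data): agreement and accuracy measures
# of the kind used in the study above.

def agreement_stats(rater_a, rater_b):
    """Percent agreement, Cohen's kappa and Gwet's AC1 for two binary raters."""
    n = len(rater_a)
    pa = sum(x == y for x, y in zip(rater_a, rater_b)) / n   # observed agreement
    p_a1 = sum(rater_a) / n                                   # rater A: proportion rating "sign present"
    p_b1 = sum(rater_b) / n                                   # rater B: proportion rating "sign present"
    pe_kappa = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)          # chance agreement for kappa
    pi = (p_a1 + p_b1) / 2
    pe_ac1 = 2 * pi * (1 - pi)                                # chance agreement for AC1
    return pa, (pa - pe_kappa) / (1 - pe_kappa), (pa - pe_ac1) / (1 - pe_ac1)

def sens_spec(observed, reference):
    """Sensitivity and specificity of observed ratings against a reference standard."""
    tp = sum(o == 1 and r == 1 for o, r in zip(observed, reference))
    tn = sum(o == 0 and r == 0 for o, r in zip(observed, reference))
    fp = sum(o == 1 and r == 0 for o, r in zip(observed, reference))
    fn = sum(o == 0 and r == 1 for o, r in zip(observed, reference))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ratings for 10 videos (1 = sign present, 0 = absent)
expert_consensus = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
health_worker    = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]

pa, kappa, ac1 = agreement_stats(expert_consensus, health_worker)
sens, spec = sens_spec(health_worker, expert_consensus)
print(f"agreement={pa:.2f} kappa={kappa:.2f} AC1={ac1:.2f} "
      f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

    Because kappa and AC1 use different chance-correction terms (the product of marginal proportions versus 2π(1-π)), they can diverge noticeably when the prevalence of a sign is skewed, which is the divergence across the range of proportionate agreement that the abstract notes.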

    Changes in Clinical Trials Methodology Over Time: A Systematic Review of Six Decades of Research in Psychopharmacology

    Background: There have been many changes in clinical trials methodology since the introduction of lithium and the beginning of the modern era of psychopharmacology in 1949. The nature and importance of these changes have not been fully addressed to date. As methodological flaws in trials can lead to false-negative or false-positive results, the objective of our study was to evaluate the impact of methodological changes in psychopharmacology clinical research over the past 60 years. Methodology/Principal Findings: We performed a systematic review from 1949 to 2009 of the MEDLINE and Web of Science electronic databases, and a hand search of high-impact journals, for studies of seven major drugs (chlorpromazine, clozapine, risperidone, lithium, fluoxetine and lamotrigine). All controlled studies published within 100 months after the first trial were included. Ninety-one studies met our inclusion criteria. We analyzed the major changes in abstract reporting, study design, participant assessment and enrollment, methodology, and statistical analysis. Our results showed that the methodology of psychiatric clinical trials changed substantially, with quality gains in abstract reporting, results reporting, and statistical methodology. Recent trials make greater use of informed consent, washout periods, intention-to-treat analysis, and parametric tests. Placebo use remains high and unchanged over time. Conclusions/Significance: The quality of psychopharmacological clinical trials has changed significantly in most of the aspects we analyzed. There was significant improvement in quality of reporting and internal validity. These changes have increased study efficiency; however, there is room for improvement in some aspects, such as rating scales, diagnostic criteria, and trial reporting. Therefore, despite the advancements observed, several areas of psychopharmacology clinical trials can still be improved.

    CONSORT 2010 statement: extension checklist for reporting within person randomised trials.

    Evidence shows that the quality of reporting of randomised controlled trials (RCTs) is not optimal. The lack of transparent reporting impedes readers from judging the reliability and validity of trial findings, prevents researchers from extracting information for systematic reviews, and results in research waste. The Consolidated Standards of Reporting Trials (CONSORT) statement was developed to improve the reporting of RCTs. Within person trials are used for conditions that can affect two or more body sites and are a useful and efficient tool because the comparisons between interventions are made within people. Such trials are most commonly conducted in ophthalmology, dentistry, and dermatology. The reporting of within person trials has, however, been variable and incomplete, hindering their use in clinical decision making and by future researchers. This document presents the CONSORT extension to within person trials. It aims to facilitate the reporting of these trials. It extends 16 items of the CONSORT 2010 checklist and introduces a modified flowchart and baseline table to enhance transparency. Examples of good reporting and evidence-based rationales for the CONSORT within person checklist items are provided.

    Looking for Landmarks: The Role of Expert Review and Bibliometric Analysis in Evaluating Scientific Publication Outputs

    To compare expert assessment with bibliometric indicators as tools to assess the quality and importance of scientific research papers. Shortly after their publication in 2005, the quality and importance of a cohort of nearly 700 Wellcome Trust (WT) associated research papers were assessed by expert reviewers; each paper was reviewed by two WT expert reviewers. After 3 years, we compared this initial assessment with other measures of paper impact. Shortly after publication, 62 (9%) of the 687 research papers were determined to describe at least a ‘major addition to knowledge’; 6 were thought to be ‘landmark’ papers. At an aggregate level, after 3 years, there was a strong positive association between expert assessment and impact as measured by number of citations and F1000 rating. However, there were some important exceptions, indicating that bibliometric measures may not be sufficient in isolation as measures of research quality and importance, especially for assessing single papers or small groups of research publications. Adding expert peer review of research paper quality and importance to more quantitative indicators, such as citation analysis, would be a valuable addition to the field of research assessment and evaluation.
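
    As a minimal illustration of comparing expert ratings with a bibliometric indicator, the sketch below computes a rank correlation between hypothetical reviewer scores and citation counts; the data are invented, and the method is only one simple way to quantify the kind of aggregate association described above.

```python
# Illustrative sketch only: rank correlation between hypothetical expert
# scores and citation counts; all values are invented.
from scipy.stats import spearmanr

expert_score = [4, 3, 3, 2, 4, 1, 2, 3, 1, 4]          # hypothetical reviewer ratings (1-4)
citations    = [55, 20, 31, 8, 64, 3, 12, 25, 5, 40]   # hypothetical 3-year citation counts

rho, p_value = spearmanr(expert_score, citations)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```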

    Updating Systematic Reviews: An International Survey

    BACKGROUND: Systematic reviews (SRs) should be kept up to date to maintain their importance in informing healthcare policy and practice. However, little guidance is available about when and how to update SRs, and the updating policies and practices of organizations that commission or produce SRs are unclear. METHODOLOGY/PRINCIPAL FINDINGS: The objective was to describe the updating practices and policies of agencies that sponsor or conduct SRs. An Internet-based survey was administered to a purposive non-random sample of 195 healthcare organizations within the international SR community. Survey results were analyzed using descriptive statistics. The completed response rate was 58% (n = 114) across 26 countries, with 70% (75/107) of participants identified as producers of SRs. Among responders, 79% (84/107) characterized the importance of updating as high or very high, and 57% (60/106) of organizations reported having a formal policy for updating; however, only 29% (35/106) made reference to a written policy document. Several groups (62/105; 59%) reported their updating practices as irregular, and over half (53/103) of organizational respondents estimated that more than 50% of their respective SRs were likely out of date. Authors of the original SR (42/106; 40%) were most often deemed responsible for ensuring SRs were current. Barriers to updating included resource constraints, reviewer motivation, lack of academic credit, and limited publishing formats. Most respondents (70/100; 70%) indicated that they supported centralization of updating efforts across institutions or agencies, and 84% (83/99) indicated that they favoured the development of a central registry of SRs, analogous to efforts within the clinical trials community. CONCLUSIONS/SIGNIFICANCE: Most organizations that sponsor and/or carry out SRs consider updating important. Despite this recognition, updating practices are not regular, and many organizations lack a formal written policy for updating SRs. This research marks the first baseline data available on updating from an organizational perspective.

    ClinicalTrials.gov registration can supplement information in abstracts for systematic reviews: a comparison study.

    BACKGROUND: The inclusion of randomized controlled trials (RCTs) reported in conference abstracts in systematic reviews is controversial, partly because study design information and risk of bias are often not fully reported in the abstract. The Association for Research in Vision and Ophthalmology (ARVO) has required trial registration for abstracts submitted to its annual conference since 2007. Our goal was to assess the feasibility of obtaining study design information critical to systematic reviews, but not typically included in conference abstracts, from the trial registration record. METHODS: We reviewed all conference abstracts presented at the ARVO meetings from 2007 through 2009 and identified 496 RCTs; 154 had a single matching registration record in ClinicalTrials.gov. Two individuals independently extracted information from the abstract and the ClinicalTrials.gov record, including study design, sample size, inclusion criteria, masking, interventions, outcomes, funder, and investigator name and contact information. Discrepancies were resolved by consensus. We assessed the frequency with which variables were reported in the abstract and the trial register, and assessed agreement of information reported in both sources. RESULTS: We found a substantial amount of study design information in the ClinicalTrials.gov record that was unavailable in the corresponding conference abstract, including eligibility criteria associated with gender (83%; 128/154); masking or blinding of study participants (53%; 82/154), persons administering treatment (30%; 46/154), and persons measuring the outcomes (40%; 61/154); and number of study centers (58%; 90/154). Only 34% (52/154) of abstracts explicitly described a primary outcome, whereas a primary outcome was included in the "Primary Outcome" field of the ClinicalTrials.gov record for 82% (126/154) of studies. One or more study interventions were reported in each abstract, but they agreed exactly with those reported in ClinicalTrials.gov only slightly more than half the time (56%; 88/154). We found no contact information for study investigators in the abstracts, and this information was available in fewer than one quarter of ClinicalTrials.gov records (17%; 26/154). CONCLUSION: RCT design information not reported in conference abstracts is often available in the corresponding ClinicalTrials.gov registration record. Sometimes the two sources report conflicting information, and further contact with the trial investigators may still be required.
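
    A minimal sketch, with invented records, of the kind of tabulation described: for each matched trial, note whether an item appears in the conference abstract, whether it appears in the ClinicalTrials.gov record, and whether the two sources agree.

```python
# Minimal sketch with invented records: tabulate whether an item (here,
# participant masking) is reported only in the registry, and whether the
# interventions listed in the two sources agree exactly.
records = [
    {"abstract_masking": False, "registry_masking": True,  "interventions_match": True},
    {"abstract_masking": True,  "registry_masking": True,  "interventions_match": False},
    {"abstract_masking": False, "registry_masking": False, "interventions_match": True},
    {"abstract_masking": False, "registry_masking": True,  "interventions_match": True},
]

n = len(records)
registry_only = sum(r["registry_masking"] and not r["abstract_masking"] for r in records)
exact_match = sum(r["interventions_match"] for r in records)

print(f"masking reported only in the registry: {registry_only}/{n} ({100 * registry_only / n:.0f}%)")
print(f"interventions agree exactly:           {exact_match}/{n} ({100 * exact_match / n:.0f}%)")
```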

    Time to Update and Quantitative Changes in the Results of Cochrane Pregnancy and Childbirth Reviews

    BACKGROUND: The recommended interval between updates for systematic reviews included in The Cochrane Library is 2 years. However, it is unclear whether this interval is always appropriate: excessive updating wastes time and resources, whereas insufficient updating allows out-of-date or incomplete evidence to guide clinical decision-making. We set out to determine, for Cochrane pregnancy and childbirth reviews, the frequency of updates, the factors associated with updating, and whether the updating frequency was appropriate. METHODOLOGY/PRINCIPAL FINDINGS: Cochrane pregnancy and childbirth reviews published in Issue 3, 2007 of the Cochrane Database of Systematic Reviews were retrieved, and data were collected from their original and updated versions. Quantitative changes were determined for one of the primary outcomes (mortality, or the outcome of greatest clinical significance). Potential factors associated with time to update were assessed using the Cox proportional hazards model. Among the 101 reviews in our final sample, the median time to the first update was 3.3 years (95% CI 2.7-3.8); only 32.7% had been updated within the recommended interval of 2 years. In 75.3% (76/101) of reviews, a median of 3 new trials with a median of 576 additional participants were included in the updated versions. There were quantitative changes in 71% (54/76) of the reviews that included new trials: the median change in effect size was 18.2%, and the median change in 95% CI width was 30.8%. Statistical significance changed in 18.5% (10/54) of these reviews, but conclusions were revised in only 3.7% (2/54). A shorter time to update was associated with having the same original review team in place at the time of updating. CONCLUSIONS/SIGNIFICANCE: Most reviews were updated less frequently than recommended by Cochrane policy, but few updates revised the conclusions. The prescribed time to update should be reconsidered to support improved decision-making while making efficient use of limited resources.
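
    The time-to-update analysis can be sketched with a Cox proportional hazards model. The example below uses the lifelines library and invented data, with an indicator for whether the same original review team performed the update; it is an assumption-laden illustration, not the review's actual dataset or analysis code.

```python
# Illustrative sketch only: Cox proportional hazards model of time to first
# update with a "same original review team" covariate. The data frame is
# invented and the lifelines package is assumed to be installed.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_to_update": [1.8, 2.0, 2.5, 4.0, 2.9, 5.1, 3.6, 2.2],  # time to update (or censoring)
    "updated":         [1,   1,   1,   0,   1,   0,   1,   1],    # 1 = updated, 0 = not yet updated
    "same_team":       [1,   0,   1,   0,   1,   0,   0,   1],    # same original review team at update
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_update", event_col="updated")
cph.print_summary()  # the hazard ratio for same_team summarises its association with time to update
```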

    The Quality of Registration of Clinical Trials

    BACKGROUND: Lack of transparency in the conduct of clinical trials, publication bias, and selective reporting bias are still important problems in medical research. Clinical trials registration should make it possible to take steps towards resolving some of these problems. However, previous evaluations of registered records of clinical trials have shown that registered information is often incomplete and non-meaningful. If these studies are accurate, this negates the possible benefits of clinical trials registration. METHODS AND FINDINGS: A 5% sample of records of clinical trials registered between 17 June 2008 and 17 June 2009 was taken from the International Clinical Trials Registry Platform (ICTRP) database and assessed for the presence of contact information, the presence of intervention specifics in drug trials, and the quality of primary and secondary outcome reporting. 731 records were included. More than half of the records were registered after recruitment of the first participant. The name of a contact person was available in 94.4% of records from non-industry funded trials and 53.7% of records from industry funded trials. Either an email address or a phone number was present in 76.5% of non-industry funded trial records and in 56.5% of industry funded trial records. Although a drug name or company serial number was almost always provided, other drug intervention specifics were often omitted from registration. Of 3643 reported outcomes, 34.9% were specific measures with a meaningful time frame. CONCLUSIONS: Clinical trials registration has the potential to contribute substantially to improving clinical trial transparency and reducing publication bias and selective reporting. These potential benefits are currently undermined by deficiencies in the provision of information in key areas of registered records.
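
    As a rough illustration (not the authors' method, which relied on manual assessment), the sketch below applies a crude heuristic for whether a registered outcome entry names a specific measure and states a time frame; the outcome strings and the regular expression are invented.

```python
# Crude illustrative heuristic (not the authors' method, which was a manual
# assessment): flag registered outcome entries that name more than a single
# word and state a time frame. The outcome strings and regex are invented.
import re

TIME_FRAME = re.compile(r"\b(at|after|over)\s+\d+\s*(day|week|month|year)s?\b", re.IGNORECASE)

outcomes = [
    "Change in HbA1c at 12 weeks",
    "Safety",
    "Mean reduction in systolic blood pressure after 6 months",
]

for text in outcomes:
    specific = len(text.split()) > 1 and TIME_FRAME.search(text) is not None
    print(f"{'specific' if specific else 'vague   '}  {text}")
```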