    Increasing value and reducing waste in biomedical research: who's listening?

    The biomedical research complex has been estimated to consume almost a quarter of a trillion US dollars every year. Unfortunately, evidence suggests that a high proportion of this sum is avoidably wasted. In 2014, The Lancet published a series of five reviews showing how dividends from the investment in research might be increased, from the relevance and priorities of the questions being asked to how the research is designed, conducted, and reported. Seventeen recommendations were addressed to five main stakeholders: funders, regulators, journals, academic institutions, and researchers. This Review provides some initial observations on the possible effects of the Series, which seems to have provoked several important discussions and is on the agendas of several key players. Some examples of individual initiatives show ways to reduce waste and increase value in biomedical research. This momentum is likely to spread across stakeholder groups if collaborative relationships evolve between key players; further important work is needed to increase research value. A forthcoming meeting in Edinburgh, UK, will provide an initial forum within which to foster the collaboration needed.

    STARD for Abstracts: Essential items for reporting diagnostic accuracy studies in journal or conference abstracts

    Many abstracts of diagnostic accuracy studies are currently insufficiently informative. We extended the STARD (Standards for Reporting Diagnostic Accuracy) statement by developing a list of essential items that authors should consider when reporting diagnostic accuracy studies in journal or conference abstracts. After a literature review of published guidance for reporting biomedical studies, we identified 39 items potentially relevant to report in an abstract. We then selected essential items through a two-round web-based survey among the 85 members of the STARD Group, followed by discussions within an executive committee. Seventy-three STARD Group members responded (86%), with a 100% completion rate. STARD for Abstracts is a list of 11 essential items, to be reported in every abstract of a diagnostic accuracy study. We provide examples of complete reporting and template text for writing informative abstracts.

    Electronic and animal noses for detecting SARS-CoV-2 infection (Protocol)

    This is a protocol for a Cochrane Review (diagnostic). The objectives are as follows: 1. To assess the diagnostic test accuracy of eNoses to screen for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in public places, such as airports. 2. To assess the diagnostic test accuracy of sniffer animals, and more specifically dogs, to screen for SARS-CoV-2 infection in public places, such as airports. 3. To assess the diagnostic test accuracy of eNoses for SARS-CoV-2 infection or COVID-19 in symptomatic people presenting in the community, or in secondary care. 4. To assess the diagnostic test accuracy of sniffer animals, and more specifically dogs, for SARS-CoV-2 infection or COVID-19 in symptomatic people presenting in the community, or in secondary care.

    Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies: The PRISMA-DTA Statement.

    Importance: Systematic reviews of diagnostic test accuracy synthesize data from primary diagnostic studies that have evaluated the accuracy of 1 or more index tests against a reference standard, provide estimates of test performance, allow comparisons of the accuracy of different tests, and facilitate the identification of sources of variability in test accuracy. Objective: To develop the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) diagnostic test accuracy guideline as a stand-alone extension of the PRISMA statement. Modifications to the PRISMA statement reflect the specific requirements for reporting of systematic reviews and meta-analyses of diagnostic test accuracy studies and the abstracts for these reviews. Design: Established standards from the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network were followed for the development of the guideline. The original PRISMA statement was used as a framework on which to modify and add items. A group of 24 multidisciplinary experts used a systematic review of articles on existing reporting guidelines and methods, a 3-round Delphi process, a consensus meeting, pilot testing, and iterative refinement to develop the PRISMA diagnostic test accuracy guideline. The final version of the PRISMA diagnostic test accuracy guideline checklist was approved by the group. Findings: The systematic review (produced 64 items) and the Delphi process (provided feedback on 7 proposed items; 1 item was later split into 2 items) identified 71 potentially relevant items for consideration. The Delphi process reduced these to 60 items that were discussed at the consensus meeting. Following the meeting, pilot testing and iterative feedback were used to generate the 27-item PRISMA diagnostic test accuracy checklist. To reflect specific or optimal contemporary systematic review methods for diagnostic test accuracy, 8 of the 27 original PRISMA items were left unchanged, 17 were modified, 2 were added, and 2 were omitted. Conclusions and Relevance: The 27-item PRISMA diagnostic test accuracy checklist provides specific guidance for reporting of systematic reviews. The PRISMA diagnostic test accuracy guideline can facilitate the transparent reporting of reviews, and may assist in the evaluation of validity and applicability, enhance replicability of reviews, and make the results from systematic reviews of diagnostic test accuracy studies more useful. The research was supported by grant 375751 from the Canadian Institutes of Health Research; funding from the Canadian Agency for Drugs and Technologies in Health; funding from the Standards for Reporting of Diagnostic Accuracy Studies Group; funding from the University of Ottawa Department of Radiology Research Stipend Program; and funding from the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care South West Peninsula.
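
    As a rough illustration of what "estimates of test performance" from 2x2 data against a reference standard look like, the sketch below (Python) computes per-study sensitivity and specificity with Wilson confidence intervals and a deliberately naive pooled estimate. The study counts are hypothetical, and the simple pooling is not the PRISMA-DTA or any recommended synthesis method; real diagnostic test accuracy meta-analyses typically use hierarchical (e.g., bivariate) models.

        # Illustrative only: hypothetical 2x2 counts, not data from any study cited above.
        from math import sqrt

        def accuracy(tp, fp, fn, tn):
            """Sensitivity and specificity from a 2x2 table against a reference standard."""
            return tp / (tp + fn), tn / (tn + fp)

        def wilson_ci(successes, n, z=1.96):
            """Wilson score 95% confidence interval for a proportion."""
            p = successes / n
            denom = 1 + z**2 / n
            centre = (p + z**2 / (2 * n)) / denom
            half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
            return centre - half, centre + half

        # (true positives, false positives, false negatives, true negatives) per primary study
        studies = [(45, 10, 5, 140), (90, 25, 12, 273), (30, 4, 8, 158)]

        for i, (tp, fp, fn, tn) in enumerate(studies, 1):
            se, sp = accuracy(tp, fp, fn, tn)
            se_lo, se_hi = wilson_ci(tp, tp + fn)
            sp_lo, sp_hi = wilson_ci(tn, tn + fp)
            print(f"Study {i}: sensitivity {se:.2f} ({se_lo:.2f}-{se_hi:.2f}), "
                  f"specificity {sp:.2f} ({sp_lo:.2f}-{sp_hi:.2f})")

        # Crude pooled estimate by summing counts across studies; shown only to make the
        # idea of synthesis concrete, not as a recommended meta-analytic approach.
        tp, fp, fn, tn = (sum(col) for col in zip(*studies))
        se, sp = accuracy(tp, fp, fn, tn)
        print(f"Naive pooled: sensitivity {se:.2f}, specificity {sp:.2f}")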

    Epidemiology and reporting characteristics of preclinical systematic reviews

    In an effort to better use published evidence obtained from animal experiments, systematic reviews of preclinical studies are becoming increasingly common, along with methods and tools to appraise them (e.g., the SYstematic Review Center for Laboratory animal Experimentation [SYRCLE] risk of bias tool). We performed a cross-sectional study of a sample of recent preclinical systematic reviews (2015-2018), examined a range of epidemiological characteristics, and used a 46-item checklist to assess reporting details. We identified 442 reviews published across 43 countries in 23 different disease domains that used 26 animal species. Reporting of key details to ensure transparency and reproducibility was inconsistent across reviews and within article sections. Items were most completely reported in the title, introduction, and results sections of the reviews, and least completely reported in the methods and discussion sections. Fewer than half of the reviews reported that a risk of bias assessment for internal and external validity was undertaken, and none reported methods for evaluating construct validity. Our results demonstrate that a considerable number of preclinical systematic reviews investigating diverse topics have been conducted; however, their quality of reporting is inconsistent. Our study provides the justification and evidence to inform the development of guidelines for conducting and reporting preclinical systematic reviews.