    Resource use during systematic review production varies widely: a scoping review

    Objective: We aimed to map resource use during systematic review (SR) production and the reasons why steps of SR production are resource intensive, to discover where the largest gains in efficiency might be possible. Study design and setting: We conducted a scoping review. An information specialist searched multiple databases (e.g., Ovid MEDLINE, Scopus) and implemented citation-based and grey literature searching. We employed dual, independent screening of records at the title/abstract and full-text levels, as well as dual, independent data extraction. Results: We included 34 studies. Thirty-two reported on resource use, mostly time; four described reasons why steps of the review process are resource intensive. Study selection, data extraction, and critical appraisal appear to be very resource intensive, while protocol development, literature searching, and study retrieval take less time. Project management and administration required a large proportion of SR production time. A lack of experience, domain knowledge, collaborative and SR-tailored software, and good communication and management can make SR steps resource intensive. Conclusion: Resource use during SR production varies widely. The areas with the largest resource use are administration and project management, study selection, data extraction, and critical appraisal of studies.

    Delphi survey on the most promising areas and methods to improve systematic reviews' production and updating

    BACKGROUND: Systematic reviews (SRs) are invaluable evidence syntheses, widely used in biomedicine and other scientific areas. Tremendous resources are spent on the production and updating of SRs, and there is a continuous need to automate the process and to use the workforce and resources to make it faster and more efficient. METHODS: Information gathered by previous EVBRES research was used to construct a questionnaire for round 1, which was partly quantitative and partly qualitative. Fifty-five experienced SR authors were invited to participate in a Delphi study (DS) designed to identify the most promising areas and methods to improve the efficient production and updating of SRs. Topic questions focused on which areas of SRs are most time-, effort-, and resource-intensive and should be prioritized in further research. Data were analysed using NVivo 12 plus, Microsoft Excel 2013, and SPSS. Thematic analysis findings on the topics on which agreement was not reached in round 1 were used to prepare the questionnaire for round 2. RESULTS: Sixty percent (33/55) of the invited participants completed round 1; 44% (24/55) completed round 2. Participants reported an average of 13.3 years of experience in conducting SRs (SD 6.8). More than two thirds of the respondents agreed/strongly agreed that the following topics should be prioritized: extracting data, literature searching, screening abstracts, obtaining and screening full texts, updating SRs, finding previous SRs, translating non-English studies, synthesizing data, project management, writing the protocol, constructing the search strategy, and critically appraising. Participants did not consider the following areas a priority: snowballing, GRADE-ing, writing the SR, deduplication, formulating the SR question, and performing meta-analysis. CONCLUSIONS: Data extraction was prioritized by the majority of participants as an area that needs more research/methods development. The quality of available language translation tools has increased dramatically over the years (Google Translate, DeepL). A promising new tool for snowballing has emerged (Citation Chaser). Automation cannot substitute for human judgement where complex decisions are needed (GRADE-ing). TRIAL REGISTRATION: The study protocol was registered at https://osf.io/bp2hu/
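
    The consensus rule reported above (a topic counts as prioritized when more than two thirds of respondents agree or strongly agree) can be made concrete with a small sketch. The snippet below is illustrative only; the topic names, response counts, and the `prioritized_topics` helper are hypothetical and not taken from the study.

```python
# Minimal sketch (hypothetical data) of a two-thirds Delphi consensus rule.
from fractions import Fraction

THRESHOLD = Fraction(2, 3)

def prioritized_topics(responses):
    """responses: dict mapping topic -> list of Likert answers (strings).

    Returns a dict mapping topic -> True if strictly more than two thirds of
    answers are "agree" or "strongly agree".
    """
    result = {}
    for topic, answers in responses.items():
        agree = sum(a in ("agree", "strongly agree") for a in answers)
        result[topic] = Fraction(agree, len(answers)) > THRESHOLD
    return result

# Hypothetical round-1 answers for two topics (24 respondents each).
round1 = {
    "extracting data": ["strongly agree"] * 20 + ["neutral"] * 4,
    "snowballing": ["agree"] * 10 + ["disagree"] * 14,
}
print(prioritized_topics(round1))
# {'extracting data': True, 'snowballing': False}
```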

    Searching two or more databases decreased the risk of missing relevant studies: a metaresearch study

    BACKGROUND AND OBJECTIVES: To assess changes in coverage, recall, review conclusions, and references not found when searching fewer databases. METHODS: In 60 randomly selected Cochrane reviews, we checked the included study publications' coverage (indexation) and recall (findability) using different search approaches with MEDLINE, Embase, and CENTRAL, and related them to the authors' conclusions and certainty. We assessed the characteristics of unfound references. RESULTS: Overall, 1,989/2,080 included references were indexed in ≥1 database (coverage = 96%). In reviews where using one of our search approaches would not change the conclusions and certainty (n = 44-54), median coverage and recall were highest (range 87.9%-100.0% and 78.2%-93.3%, respectively). Here, searching ≥2 databases reached >95% coverage and ≥87.9% recall. In reviews with unchanged conclusions but less certainty (n = 2-8), coverage was 63.3%-79.3% and recall 45.0%-75.0%. In reviews with opposite conclusions (n = 1-3), 63.3%-96.6% and 52.1%-78.7%. In reviews where a conclusion was no longer possible (n = 3-7), 60.6%-86.0% and 20.0%-53.8%. The 265 references that were indexed but not found more often lacked an abstract (30% vs. 11%) and were older (28% vs. 17% published before 1991) than found references. CONCLUSION: Searching ≥2 databases improves coverage and recall and decreases the risk of missing eligible studies. If researchers suspect that relevant articles are difficult to find, supplementary search methods should be used.
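
    As a worked illustration of the two metrics reported above, the sketch below computes coverage (the share of a review's included references indexed in at least one searched database) and recall (the share actually retrieved by a given search approach) for a single hypothetical review. The reference sets, the `coverage_and_recall` helper, and the numbers are assumptions for illustration; the study's exact denominators may differ.

```python
# Minimal sketch (hypothetical data): coverage and recall for one review
# under a given search approach.

def coverage_and_recall(included, indexed, found):
    """All arguments are sets of reference identifiers.

    coverage: share of included references indexed in >=1 searched database
    recall:   share of included references retrieved by the search approach
    """
    n = len(included)
    coverage = len(indexed & included) / n
    recall = len(found & included) / n
    return coverage, recall

# Hypothetical review with 20 included references: 19 indexed, 17 retrieved.
included = {f"ref{i}" for i in range(20)}
indexed = included - {"ref3"}
found = indexed - {"ref7", "ref12"}
cov, rec = coverage_and_recall(included, indexed, found)
print(f"coverage = {cov:.0%}, recall = {rec:.0%}")
# coverage = 95%, recall = 85%
```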

    Current methods for development of rapid reviews about diagnostic tests: an international survey

    Background: Rapid reviews (RRs) have emerged as an efficient alternative to time-consuming systematic reviews and can help meet the demand for accelerated evidence synthesis to inform decision-making in healthcare. The synthesis of diagnostic evidence poses important methodological challenges. Here, we performed an international survey to identify current practice in producing RRs for diagnostic tests. Methods: We developed and administered an online survey, inviting institutions from all over the world that perform RRs of diagnostic tests. Results: All participants (N = 25) reported implementing one or more methods to define the scope of the RR; however, only one strategy (defining a structured question) was used by ≥90% of participants. All participants used at least one methodological shortcut, including the use of a previous review as a starting point (92%) and the use of limits on the search (96%). Parallelization and automation of review tasks were not used extensively (48% and 20%, respectively). Conclusion: Our survey indicates greater use of shortcuts and limits in the conduct of diagnostic test RRs than reported in a recent scoping review of published RRs. Several shortcuts are used without knowing how their implementation affects the results of the evidence synthesis in the setting of diagnostic test reviews. Thus, a structured evaluation of the challenges and implications of adopting these RR methods is warranted.

    Abbreviated and comprehensive literature searches led to identical or very similar effect estimates: a meta-epidemiological study

    OBJECTIVES: The objective of this study was to assess the agreement of treatment effect estimates from meta-analyses based on abbreviated versus comprehensive literature searches. STUDY DESIGN AND SETTING: This was a meta-epidemiological study. We abbreviated 47 comprehensive Cochrane review searches and searched MEDLINE, Embase, and CENTRAL alone or in combination, with or without checking references (658 new searches). We compared one meta-analysis from each review with recalculated ones based on the abbreviated searches. RESULTS: The 47 original meta-analyses included 444 trials (median 6 per review, interquartile range (IQR) 3-11) with 360,045 participants (median 1,371 per review, IQR 685-8,041). Depending on the search approach, abbreviated searches led to identical effect estimates in 34-79% of meta-analyses, to different effect estimates with the same direction and level of statistical significance in 15-51%, and to opposite effects (or effects that could no longer be estimated) in 6-13%. The deviation of effect sizes was zero in 50% of the meta-analyses and not larger than 1.07-fold in 75%. Effect estimates from abbreviated searches were not consistently smaller or larger (median ratio of odds ratios 1, IQR 1-1.01) but were more imprecise (1.02-1.06-fold larger standard errors). CONCLUSION: Abbreviated literature searches often led to identical or very similar effect estimates as comprehensive searches, with slightly wider confidence intervals. Relevant deviations may occur.
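
    The comparison statistics named in this abstract can be sketched as follows: the ratio of odds ratios (ROR) between an abbreviated-search and a comprehensive-search meta-analysis, the symmetric "x-fold" deviation of effect sizes, and the ratio of standard errors as a measure of imprecision. The function name and the numbers below are hypothetical; this is an illustration, not the authors' analysis code.

```python
# Minimal sketch (hypothetical numbers) of comparing two pooled estimates.
import math

def compare_meta_estimates(or_full, se_full, or_abbrev, se_abbrev):
    """Compare an abbreviated-search meta-analysis with the comprehensive one.

    or_*: pooled odds ratios; se_*: standard errors of the log odds ratios.
    """
    ror = or_abbrev / or_full                      # ratio of odds ratios
    fold_deviation = math.exp(abs(math.log(ror)))  # e.g. 1.07 -> "1.07-fold" deviation
    se_ratio = se_abbrev / se_full                 # >1: abbreviated search is less precise
    return ror, fold_deviation, se_ratio

# Hypothetical pair of meta-analyses of the same review question.
ror, fold, se_ratio = compare_meta_estimates(or_full=0.80, se_full=0.10,
                                             or_abbrev=0.83, se_abbrev=0.105)
print(f"ROR = {ror:.3f}, deviation = {fold:.2f}-fold, SE ratio = {se_ratio:.2f}")
# roughly: ROR just above 1, ~1.04-fold deviation, ~1.05-fold larger SE
```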

    Challenges of rapid reviews for diagnostic test accuracy questions: a protocol for an international survey and expert consultation

    Background: Assessment of diagnostic tests, broadly defined as any element that aids in the collection of additional information for further clarification of a patient's health status, has increasingly become a critical issue in health policy and decision-making. Diagnostic evidence, including the accuracy of a medical test for a target condition, is commonly appraised using standard systematic review methodology. Owing to the considerable time and resources required to conduct these reviews, rapid reviews have emerged as a pragmatic alternative that tailors methods to the decision maker's circumstances. However, it is not known whether streamlining methodological aspects has an impact on the validity of the evidence synthesis. Furthermore, owing to the particular nature and complexity of appraising diagnostic accuracy, there is a need for detailed guidance on how to conduct rapid reviews of diagnostic tests. In this study, we aim to identify the methods currently used by rapid review developers to synthesize evidence on diagnostic test accuracy and to analyze potential shortcomings and challenges related to these methods. Methods: We will use a twofold approach: (1) an international survey of professionals working in organizations that develop rapid reviews of diagnostic tests, covering the methods and resources these agencies use when conducting rapid reviews, and (2) semi-structured interviews with senior-level individuals to further explore and validate the survey findings and to identify challenges in conducting rapid reviews. We will use STATA 15.0 for quantitative analyses and framework analysis for qualitative analyses. We will ensure the protection of data during all stages. Discussion: The main result of this research will be a map of the methods and resources currently used for conducting rapid reviews of diagnostic test accuracy, as well as of the methodological shortcomings and potential solutions in diagnostic knowledge synthesis that require further research.

    How to develop rapid reviews of diagnostic tests according to experts: A qualitative exploration of researcher views

    Background: Rapid reviews (RRs) have been used to provide timely evidence for policymakers, health providers, and the public in several healthcare scenarios, most recently during the coronavirus disease 2019 pandemic. Despite the essential role of diagnosis in clinical management, data about how to perform RRs of diagnostic tests are scarce. We aimed to explore the views and perceptions of experts in evidence synthesis and diagnostic evidence about the value of methods used to accelerate the review process. Methods: We performed semistructured interviews with a purposive sample of experts in evidence synthesis and diagnostic evidence. We carried out the interviews in English between July and December 2021. Initial reading and coding of the transcripts were performed using NVivo qualitative data analysis software. Results: Of 23 invited experts, 16 (70%) responded, and we interviewed all 16 participants, who represented key roles in evidence synthesis. We identified 14 recurring themes; the review question, characteristics of the review team, and use of automation were the topics with the highest number of quotes. Some participants considered several methodological "shortcuts" to be ineffective or risky, such as automating quality appraisal, using only one reviewer for diagnostic data extraction, and performing only descriptive analysis. The introduction of limits might depend on whether the test being assessed is new, the availability of alternative tests, the needs of providers and patients, and the availability of high-quality systematic reviews. Conclusions: Our findings suggest that organizational strategies (e.g., defining the review question, availability of a highly experienced team) may have a role in conducting RRs of diagnostic tests. Several methodological shortcuts were considered inadequate for accelerating the review process, though they need to be assessed in well-designed studies. Improved reporting of RRs would support evidence-based decision-making and help users of RRs understand their limitations.

    Workplace interventions to reduce the risk of SARS-CoV-2 infection outside of healthcare settings

    This is a protocol for a Cochrane Review (intervention). The objectives are as follows: To assess the benefits and harms of interventions in non-healthcare-related workplaces to reduce the risk of SARS-CoV-2 infection relative to other interventions or no intervention.
