
    Optimal search strategies for identifying sound clinical prediction studies in EMBASE

    BACKGROUND: Clinical prediction guides assist clinicians by pointing to specific elements of the patient's clinical presentation that should be considered when forming a diagnosis, prognosis or judgment regarding treatment outcome. The numbers of validated clinical prediction guides are growing in the medical literature, but their retrieval from large biomedical databases remains problematic and this presents a barrier to their uptake in medical practice. We undertook the systematic development of search strategies ("hedges") for retrieval of empirically tested clinical prediction guides from EMBASE. METHODS: An analytic survey was conducted, testing the retrieval performance of search strategies run in EMBASE against the gold standard of hand searching, using a sample of all 27,769 articles identified in 55 journals for the 2000 publishing year. All articles were categorized as original studies, review articles, general papers, or case reports. The original and review articles were then tagged as 'pass' or 'fail' for methodologic rigor in the areas of clinical prediction guides and other clinical topics. Search terms that depicted clinical prediction guides were selected from a pool of index terms and text words gathered in house and through request to clinicians, librarians and professional searchers. A total of 36,232 search strategies composed of single and multiple term phrases were trialed for retrieval of clinical prediction studies. The sensitivity, specificity, precision, and accuracy of search strategies were calculated to identify which were the best. RESULTS: 163 clinical prediction studies were identified, of which 69 (42.3%) passed criteria for scientific merit. A 3-term strategy optimized sensitivity at 91.3% and specificity at 90.2%. Higher sensitivity (97.1%) was reached with a different 3-term strategy, but with a 16% drop in specificity. 
The best specificity (98.8%) was achieved by a 2-term strategy, but with a considerable fall in sensitivity, to 60.9%. All single-term strategies performed less well than 2- and 3-term strategies. CONCLUSION: The retrieval of sound clinical prediction studies from EMBASE is supported by several search strategies.
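    The metrics above come directly from the 2x2 table produced when a search strategy is scored against the hand-search gold standard. A minimal sketch in Python follows; the cell counts are back-calculated assumptions chosen only to be consistent with the reported 91.3% sensitivity and 90.2% specificity (the abstract does not give the underlying 2x2 cells):

```python
# Sketch: diagnostic-style metrics for a search strategy scored against a
# hand-search gold standard. The counts below are illustrative assumptions,
# not data taken from the study.

def search_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, precision, and accuracy from 2x2 counts."""
    sensitivity = tp / (tp + fn)        # fraction of relevant articles retrieved
    specificity = tn / (tn + fp)        # fraction of irrelevant articles excluded
    precision = tp / (tp + fp)          # fraction of retrievals that are relevant
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, precision, accuracy

# 69 sound studies among 27,769 articles; assume 63 retrieved (63/69 = 91.3%)
# and 2,710 of the 27,700 non-sound articles retrieved (specificity ~90.2%).
sens, spec, prec, acc = search_metrics(tp=63, fp=2710, fn=6, tn=24990)
print(f"sensitivity={sens:.1%}  specificity={spec:.1%}  precision={prec:.1%}")
```

    Note that precision is tiny here even at high specificity, because sound studies are rare in the database; this is why searchers must still screen many retrievals.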

    Optimal search strategies for identifying mental health content in MEDLINE: an analytic survey

    OBJECTIVE: General practitioners, mental health practitioners, and researchers wishing to retrieve the best current research evidence in the content area of mental health may have a difficult time when searching large electronic databases such as MEDLINE. When MEDLINE is searched unaided, key articles are often missed, while many articles irrelevant to the search are retrieved. The objectives of this study were to develop optimal search strategies to detect articles with mental health content and to determine the effect of combining mental health content search strategies with methodologic search strategies calibrated to detect the best studies of treatment. METHOD: An analytic survey was conducted, comparing hand searches of 29 journals with retrievals from MEDLINE for 3,395 candidate search terms and 11,317 combinations. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated. RESULTS: 3,277 (26.8%) of the 12,233 articles classified in the 29 journals were considered to be of interest to the discipline of mental health. Search term combinations reached peak sensitivities of 98.4% with specificity at 50.0%, whereas combinations of search terms to optimize specificity reached peak specificities of 97.1% with sensitivity at 51.7%. Combining content search strategies with methodologic search strategies for treatment led to improved precision: substantial decreases in the number of articles that needed to be sorted through in order to find target articles. CONCLUSION: Empirically derived search strategies can achieve high sensitivity and specificity for retrieving mental health content from MEDLINE. Combining content search strategies with methodologic search strategies led to more precise searches.
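    Combining a content filter with a methodologic filter, as described above, amounts to ANDing two retrieval sets: the smaller intersection discards many irrelevant hits and so raises precision. A minimal sketch, with all article IDs and counts invented for illustration:

```python
# Sketch: combining a content search strategy with a methodologic search
# strategy, modeled as set intersection over retrieved article IDs.
# All IDs below are invented for illustration.

content_hits = {101, 102, 103, 104, 105, 106}   # mental-health content filter
methods_hits = {103, 104, 105, 200, 201}        # treatment-methods filter
relevant     = {103, 104, 300}                  # gold standard (hand search)

combined = content_hits & methods_hits          # AND the two strategies

def precision(retrieved, relevant):
    """Fraction of retrieved articles that are relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

# The combined set is smaller, so precision rises (fewer articles to screen):
print(precision(content_hits, relevant), precision(combined, relevant))
```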

    An overview of the design and methods for retrieving high-quality studies for clinical care

    BACKGROUND: With the information explosion, the retrieval of the best clinical evidence from large, general purpose bibliographic databases such as MEDLINE can be difficult. Both researchers conducting systematic reviews and clinicians faced with a patient care question are confronted with the daunting task of searching for the best medical literature in electronic databases. Many have advocated the use of search filters or "hedges" to assist with the searching process. OBJECTIVE: To describe the design and methods of a study that set out to develop optimal search strategies for retrieving sound clinical studies of health disorders in large electronic databases. DESIGN: An analytic survey comparing hand searches of 170 journals in the year 2000 with retrievals from MEDLINE, EMBASE, CINAHL, and PsycINFO for candidate search terms and combinations. The sensitivity, specificity, precision, and accuracy of unique search terms and combinations of search terms were calculated. CONCLUSION: A study design modeled after a diagnostic testing procedure, with a gold standard (the hand search of the literature) and a test (the search terms), is an effective way of developing, testing, and validating search strategies for use in large electronic databases.

    EMBASE search strategies for identifying methodologically sound diagnostic studies for use by clinicians and researchers

    BACKGROUND: Accurate diagnosis by clinicians is the cornerstone of decision making for recommending clinical interventions. The current best evidence from research concerning diagnostic tests changes unpredictably as science advances. Both clinicians and researchers need dependable access to published evidence concerning diagnostic accuracy. Bibliographic databases such as EMBASE provide the most widely available entrée to this literature. The objective of this study was to develop search strategies that optimize the retrieval of methodologically sound diagnostic studies from EMBASE for use by clinicians. METHODS: An analytic survey was conducted, comparing hand searches of 55 journals with retrievals from EMBASE for 4,843 candidate search terms and 6,574 combinations. All articles were rated using purpose and quality indicators, and clinically relevant diagnostic accuracy articles were categorized as 'pass' or 'fail' according to explicit criteria for scientific merit. Candidate search strategies were run in EMBASE, the retrievals being compared with the hand search data. The proposed search strategies were treated as "diagnostic tests" for sound studies and the manual review of the literature was treated as the "gold standard." The sensitivity, specificity, precision and accuracy of the search strategies were calculated. RESULTS: Of the 433 articles about diagnostic tests, 97 (22.4%) met basic criteria for scientific merit. Combinations of search terms reached peak sensitivities of 100% with specificity at 70.4%. Compared with best single terms, best multiple terms increased sensitivity for sound studies by 8.2% (absolute increase), but decreased specificity (absolute decrease 6%) when sensitivity was maximized. When terms were combined to maximize specificity, the single term "specificity.tw." (specificity of 98.2%) outperformed combinations of terms. 
CONCLUSION: Empirically derived search strategies combining index terms and text words can achieve high sensitivity and specificity for retrieving sound diagnostic studies from EMBASE. These search filters will enhance the searching efforts of clinicians.

    Sample size determination for bibliographic retrieval studies

    BACKGROUND: Research for developing search strategies to retrieve high-quality clinical journal articles from MEDLINE is expensive and time-consuming. The objective of this study was to determine the minimal number of high-quality articles in a journal subset that would need to be hand-searched to update or create new MEDLINE search strategies for treatment, diagnosis, and prognosis studies. METHODS: The desired width of the 95% confidence interval (W) for the lowest sensitivity among existing search strategies was used to calculate the number of high-quality articles needed to reliably update search strategies. New search strategies were derived in journal subsets formed by 2 approaches: random sampling of journals and top journals (having the most high-quality articles). The new strategies were tested both in the original large journal database and in a subset of low-yielding journals (having few high-quality articles). RESULTS: For treatment studies, if W was 10% or less for the lowest sensitivity among our existing search strategies, a subset of 15 randomly selected journals or 2 top journals was adequate for updating search strategies, based on each approach having at least 99 high-quality articles. The new strategies derived in 15 randomly selected journals or 2 top journals performed well in the original large journal database. Nevertheless, the new search strategies developed using the random sampling approach performed better than those developed using the top journal approach in a low-yielding journal subset. For studies of diagnosis and prognosis, no journal subset had enough high-quality articles to achieve the expected W (10%). CONCLUSION: Randomly sampling a small subset of journals that includes sufficient high-quality articles is an efficient way to update or create search strategies for high-quality articles on therapy in MEDLINE. The concentrations of diagnosis and prognosis articles are too low for this approach.
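    The sample-size logic above can be sketched with the usual normal approximation for a binomial proportion; whether the paper uses exactly this formula is an assumption, and the 0.93 sensitivity below is an invented example value:

```python
# Sketch: how many gold-standard (high-quality) articles are needed so the
# 95% CI for a strategy's sensitivity has full width at most W.
# Normal approximation for a binomial proportion; assumed, not the paper's
# exact method.
import math

def articles_needed(lowest_sensitivity: float, width: float, z: float = 1.96) -> int:
    """Smallest n such that the full CI width 2*z*sqrt(p(1-p)/n) <= width."""
    p = lowest_sensitivity
    n = (2 * z) ** 2 * p * (1 - p) / width ** 2
    return math.ceil(n)

# e.g. lowest existing sensitivity 0.93, desired full CI width 10%:
print(articles_needed(0.93, 0.10))
```

    Because p(1-p) shrinks as sensitivity approaches 1, strategies with very high baseline sensitivity need fewer gold-standard articles to re-estimate reliably.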

    Developing search strategies for clinical practice guidelines in SUMSearch and Google Scholar and assessing their retrieval performance

    BACKGROUND: Information overload, increasing time constraints, and inappropriate search strategies complicate the detection of clinical practice guidelines (CPGs). The aim of this study was to provide clinicians with recommendations for search strategies to efficiently identify relevant CPGs in SUMSearch and Google Scholar. METHODS: We compared the retrieval efficiency (retrieval performance) of search strategies to identify CPGs in SUMSearch and Google Scholar. For this purpose, a two-term GLAD (GuideLine And Disease) strategy was developed, combining a defined CPG term with a specific disease term (MeSH term). We used three different CPG terms and nine MeSH terms for nine selected diseases to identify the most efficient GLAD strategy for each search engine. The retrievals for the nine diseases were pooled. To compare GLAD strategies, we used a manual review of all retrievals as a reference standard. The CPGs detected had to fulfil predefined criteria, e.g., the inclusion of therapeutic recommendations. Retrieval performance was evaluated by calculating diagnostic parameters (sensitivity, specificity, and "Number Needed to Read" [NNR]) for the search strategies. RESULTS: The search yielded a total of 2,830 retrievals: 987 (34.9%) in Google Scholar and 1,843 (65.1%) in SUMSearch. Altogether, we found 119 unique and relevant guidelines for the nine diseases (reference standard). Overall, the GLAD strategies showed better retrieval performance in SUMSearch than in Google Scholar. The performance pattern was similar between search engines: search strategies including the term "guideline" yielded the highest sensitivity (SUMSearch: 81.5%; Google Scholar: 31.9%), while search strategies including the term "practice guideline" yielded the highest specificity (SUMSearch: 89.5%; Google Scholar: 95.7%) and the lowest NNR (SUMSearch: 7.0; Google Scholar: 9.3). CONCLUSION: SUMSearch is a useful tool for swiftly gaining an overview of available CPGs. Its retrieval performance is superior to that of Google Scholar, where a search is more time-consuming, as substantially more retrievals have to be reviewed to detect one relevant CPG. In both search engines, the CPG term "guideline" should be used to obtain a comprehensive overview of CPGs, and the term "practice guideline" should be used if a less time-consuming approach to the detection of CPGs is desired.
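    The Number Needed to Read reported above is simply the reciprocal of precision: how many retrievals must be screened, on average, to find one relevant guideline. A minimal sketch (counts invented for illustration):

```python
# Sketch: Number Needed to Read (NNR) = 1 / precision.
# Counts below are invented for illustration, not the study's data.

def nnr(relevant_retrieved: int, total_retrieved: int) -> float:
    """Average number of retrievals to screen per relevant article found."""
    precision = relevant_retrieved / total_retrieved
    return 1.0 / precision

# e.g. 100 relevant guidelines among 700 retrievals -> screen 7 per hit
print(nnr(100, 700))
```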

    Optimal search strategies for detecting cost and economic studies in EMBASE

    BACKGROUND: Economic evaluations in the medical literature compare competing diagnosis or treatment methods for their use of resources and their expected outcomes. The best evidence currently available from research regarding both cost and economic comparisons will continue to expand as this type of information becomes more important in today's clinical practice. Researchers and clinicians need quick, reliable ways to access this information. A key source of this type of information is large bibliographic databases such as EMBASE. The objective of this study was to develop search strategies that optimize the retrieval of health costs and economics studies from EMBASE. METHODS: We conducted an analytic survey, comparing hand searches of journals with retrievals from EMBASE for candidate search terms and combinations. Six research assistants read all issues of 55 journals indexed by EMBASE for the publishing year 2000. We rated all articles using purpose and quality indicators and categorized them into clinically relevant original studies, review articles, general papers, or case reports. The original and review articles were then categorized for purpose (i.e., cost and economics and other clinical topics) and, depending on the purpose, as 'pass' or 'fail' for methodologic rigor. Candidate search strategies were developed for economic and cost studies, then run in the 55 EMBASE journals, and the retrievals were compared with the hand search data. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated. RESULTS: Combinations of search terms for detecting cost and economic studies attained 100% sensitivity, with specificity levels of 92.9% and 92.3%, respectively. When maximizing for both sensitivity and specificity, the combination of terms for detecting cost studies increased sensitivity 2.2% over the single term, but at a slight decrease in specificity of 0.9%. The maximized combination of terms for economic studies saw no change in sensitivity from the single term and only a 0.1% increase in specificity. CONCLUSION: Selected terms have excellent performance in the retrieval of studies of health costs and economics from EMBASE.

    Systematic review identifies six metrics and one method for assessing literature search effectiveness but no consensus on appropriate use

    Objective: To identify the metrics or methods used by researchers to determine the effectiveness of literature searching where supplementary search methods are compared to bibliographic database searching. We also aimed to determine which metrics or methods are summative or formative and how researchers defined effectiveness in their studies. Study Design and Setting: Systematic review. We searched MEDLINE and EMBASE to identify published studies evaluating literature search effectiveness in health or allied topics. Results: Fifty studies met full-text inclusion criteria. Six metrics (Sensitivity, Specificity, Precision, Accuracy, Number Needed to Read, and Yield) and one method (capture-recapture) were identified. Conclusion: Studies evaluating effectiveness need to clearly identify the threshold at which they will define effectiveness and how the evaluation they report relates to this threshold. Studies that investigate literature search effectiveness should report confidence intervals, which aid interpretation of the uncertainty around a result, and the search methods used to derive effectiveness estimates should be clearly reported and validated.
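    The capture-recapture method identified above is commonly applied to literature searching via the Lincoln-Petersen estimator: two independent searches act as two "captures", and the overlap between them estimates the total number of relevant articles, including those neither search found. A minimal sketch (counts invented for illustration):

```python
# Sketch: Lincoln-Petersen capture-recapture estimate of the total number
# of relevant articles from two independent searches. Counts are invented.

def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    """Estimated total relevant articles: N = n1 * n2 / m,
    where m is the number of articles found by both searches."""
    if overlap == 0:
        raise ValueError("estimator undefined with no overlap")
    return n1 * n2 / overlap

# Search A found 40 relevant articles, search B found 30, with 20 in both:
print(lincoln_petersen(40, 30, 20))   # estimates 60 relevant articles exist
```

    The estimate exceeds the 50 distinct articles actually found (40 + 30 - 20), which is the point: it quantifies how much relevant literature the searches may have missed.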