3 research outputs found

    Analysis of queries sent to PubMed at the point of care: Observation of search behaviour in a medical teaching hospital

    BACKGROUND: The use of PubMed to answer questions arising in daily medical care is limited because retrieving a small set of relevant articles is challenging and time is restricted. Knowing which aspects of queries are likely to retrieve relevant articles can increase the effectiveness of PubMed searches. The objective of our study was to identify queries that are likely to retrieve relevant articles by relating PubMed search techniques and tools to the number of articles retrieved and the selection of articles for further reading.

    METHODS: This was a prospective observational study of queries about patient-related problems sent to PubMed by residents and internists in internal medicine working in an academic medical centre. Using a portal that mimics PubMed, we analyzed queries, search results, query tools (MeSH, Limits, wildcards, operators), and the selection of abstracts and full-text articles for further reading.

    RESULTS: PubMed was used to solve 1121 patient-related problems, resulting in 3205 distinct queries. Abstracts were viewed in 999 (31%) of these queries, and in 126 (39%) of the 321 queries using query tools. The average term count per query was 2.5. Abstracts were selected in more than 40% of queries using four or five terms, increasing to 63% when the use of four or five terms yielded 2-161 articles.

    CONCLUSION: Queries sent to PubMed by physicians at our hospital during daily medical care contain fewer than three terms on average. Queries using four or five terms and retrieving fewer than 161 article titles are most likely to result in abstract viewing. PubMed search tools are used infrequently by our population and are less effective than the use of four or five terms. Methods to facilitate the formulation of precise queries, using more relevant terms, should be the focus of education and research.
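    As a minimal sketch of the kind of query the study found most effective, the snippet below composes a four-term PubMed query (one MeSH-tagged term plus Boolean AND operators) and the corresponding NCBI E-utilities esearch URL. The clinical terms are hypothetical illustrations, not drawn from the study.

    ```python
    from urllib.parse import urlencode

    # Four search terms, the count the study associated with the highest
    # rate of abstract viewing. The terms themselves are made-up examples.
    terms = ['"atrial fibrillation"[MeSH Terms]', "warfarin", "stroke", "elderly"]
    query = " AND ".join(terms)

    # Build the NCBI E-utilities esearch URL for this query.
    base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    url = base + "?" + urlencode({"db": "pubmed", "term": query, "retmax": 20})

    print(query)
    print(url)
    ```

    Capping `retmax` keeps the retrieved title list small, in line with the study's observation that result sets under roughly 161 titles were the most likely to lead to abstract viewing.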

    Answers to Questions Posed During Daily Patient Care Are More Likely to Be Answered by UpToDate Than PubMed

    BACKGROUND: UpToDate and PubMed are popular sources of medical information. Data on the efficiency of PubMed and UpToDate in daily medical care are lacking.

    OBJECTIVE: The purpose of this observational study was to describe the percentage of answers retrieved from these information sources, comparing search results across medical topics and the time spent searching for an answer.

    METHODS: A total of 40 residents and 30 internists in internal medicine working in an academic medical center searched PubMed and UpToDate through an observation portal during daily medical care. The portal recorded the information source used and the time needed to find an answer. Searchers reported the topic of the question, the situation that triggered it, and whether an answer was found.

    RESULTS: We analyzed 1305 patient-related questions sent to PubMed and/or UpToDate between October 1, 2005 and March 31, 2007 using our portal. A complete answer was found for 594/1125 (53%) of questions sent to PubMed or UpToDate. A partial or full answer was obtained in 729/883 (83%) UpToDate searches and 152/242 (63%) PubMed searches (P < .001). UpToDate answered more questions than PubMed on all major medical topics, but the difference was significant only for questions related to etiology (P < .001) or therapy (P = .002). Time to answer was 241 seconds (SD 24) for UpToDate and 291 seconds (SD 7) for PubMed.

    CONCLUSIONS: Specialists and residents in internal medicine generally need less than 5 minutes to answer patient-related questions during daily care. More questions are answered using UpToDate than PubMed across all major medical topics.

    Recommendations for a uniform assessment of publication bias related to funding source

    BACKGROUND: Numerous studies on publication bias in clinical drug research have been undertaken, particularly on the association between sponsorship and favourable outcomes. However, no standardized methodology for classifying outcomes and sponsorship has been described. Dissimilarities and ambiguities in this assessment impede the ability to compare and summarize the results of studies on publication bias. To guide authors undertaking such studies, this paper provides recommendations for a uniform assessment of publication bias related to funding source.

    METHODS AND RESULTS: As part of ongoing research into publication bias, we reviewed 472 manuscripts on randomised controlled trials (RCTs) of drugs, submitted to eight medical journals from January 2010 through April 2012. Information on trial results and sponsorship was extracted from the manuscripts. At the start of this evaluation, we encountered several problems related to the classification of outcomes, the inclusion of post-hoc analyses and follow-up studies of RCTs in the study sample, and the assessment of the role of the funding source. A comprehensive list of recommendations addressing these problems was composed. To assess internal validity, the reliability and usability of these recommendations were tested by evaluating manuscripts submitted to the journals included in our study.

    CONCLUSIONS: The proposed recommendations represent a first step towards a uniform method of classifying trial outcomes and sponsorship. This is essential for drawing valid conclusions about the role of the funding source in publication bias and will ensure consistency across future studies.