2 research outputs found

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that covers a variety of research fields, such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH (RELISH) consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that each of these methods tends to produce a distinct collection of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for downloading annotation data and blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new, powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research. Peer reviewed.
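    The three baselines named above are classical lexical ranking methods. As a rough illustration of how such a method scores candidate articles against a seed article's title/abstract text, here is a minimal, self-contained BM25 sketch. It is not the consortium's implementation: the parameter defaults k1=1.5 and b=0.75 and the whitespace tokenizer are common simplifying assumptions, not values taken from the paper.

```python
# Minimal Okapi BM25 sketch: score each candidate article's title/abstract
# text against a seed article's text. Illustrative only; k1, b and the
# tokenizer are common defaults, not taken from the RELISH paper.
import math
from collections import Counter

def bm25_scores(seed_text, candidates, k1=1.5, b=0.75):
    """Return one BM25 score per candidate, using seed_text as the query."""
    docs = [c.lower().split() for c in candidates]
    query = set(seed_text.lower().split())
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n  # average document length
    # Document frequency of each query term across the candidate pool.
    df = {t: sum(1 for d in docs if t in d) for t in query}
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

# Hypothetical usage: rank candidates by descending score.
seed = "benchmarking document similarity in biomedical literature search"
pool = ["document similarity benchmark for literature search",
        "heart and lung point-of-care ultrasound guidelines"]
print(bm25_scores(seed, pool))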

    EFSUMB Clinical Practice Guidelines for Point-of-Care Ultrasound: Part One (Common Heart and Pulmonary Applications) SHORT VERSION

    Aims: To evaluate the evidence and produce a summary and recommendations for the most common heart and lung applications of point-of-care ultrasound (PoCUS).
    Methods: We reviewed 10 clinical domains/questions related to common heart and lung applications of PoCUS. Following review of the evidence, a summary and recommendation were produced for each question, including assignment of a level of evidence (LoE) and a recommendation graded using the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach. Thirty-eight international experts, the expert review group (ERG), were invited to review the evidence presented for each question. A level of agreement of over 75% was required to progress to the next section. The ERG then reviewed and indicated its level of agreement with the summary and recommendation for each question on a 5-point Likert scale; each was approved if a level of agreement of greater than 75% was reached. Agreement was defined as the combined share of "strongly agree" and "agree" responses on the Likert scale.
    Findings and Recommendations: One question achieved strong consensus, with an assigned LoE of 3 and a weak GRADE recommendation (question 1). The remaining 9 questions achieved broad agreement: one was assigned an LoE of 4 with a weak GRADE recommendation (question 2), three an LoE of 3 with a weak GRADE recommendation (questions 3–5), three an LoE of 3 with a strong GRADE recommendation (questions 6–8), and two an LoE of 2 with a strong GRADE recommendation (questions 9 and 10).
    Conclusion: These consensus-derived recommendations should aid clinical practice and highlight areas for further research on PoCUS in acute settings.
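    The approval rule described in the Methods is simple percentage arithmetic. As a hedged illustration (the function name, response encoding and example vote counts below are hypothetical, not taken from the guideline), it can be expressed as:

```python
# Illustrative computation of the consensus rule described above: a statement
# is approved when "strongly agree" + "agree" responses exceed 75% of votes.
# Names and example counts are hypothetical, not from the EFSUMB guideline.
from collections import Counter

def is_approved(responses, threshold=0.75):
    """Return True if the share of agreeing responses exceeds the threshold."""
    counts = Counter(r.lower() for r in responses)
    agreeing = counts["strongly agree"] + counts["agree"]
    return agreeing / len(responses) > threshold

# Example: 31 agreeing votes out of 38 gives about 81.6%, clearing 75%.
votes = ["strongly agree"] * 20 + ["agree"] * 11 + ["neutral"] * 7
print(is_approved(votes))  # True
```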
