    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Get PDF
    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research. Peer reviewed.
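    The three baseline methods named above (Okapi BM25, TF-IDF and PubMed Related Articles) all rank candidate articles by lexical similarity of their titles/abstracts to a seed article. As a rough illustration of how such a baseline operates (this is not the consortium's evaluation code; the whitespace tokenizer, the sample documents and the default k1/b parameters below are all illustrative assumptions), a minimal BM25 ranker might look like this:

```python
# Minimal BM25 sketch for ranking candidate titles/abstracts against a seed
# article. Illustrative only: documents, tokenizer and parameters are assumptions.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def bm25_rank(seed, candidates, k1=1.5, b=0.75):
    """Return candidate indices sorted by BM25 score against the seed text."""
    docs = [tokenize(c) for c in candidates]
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # Document frequency of each term across the candidate pool.
    df = Counter()
    for d in docs:
        df.update(set(d))
    # Smoothed inverse document frequency.
    idf = {t: math.log((n - f + 0.5) / (f + 0.5) + 1.0) for t, f in df.items()}

    scores = []
    query = tokenize(seed)
    for i, d in enumerate(docs):
        tf = Counter(d)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            num = tf[term] * (k1 + 1)
            den = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf.get(term, 0.0) * num / den
        scores.append((score, i))
    return [i for _, i in sorted(scores, reverse=True)]

if __name__ == "__main__":
    seed = "benchmarking document similarity detection in biomedical literature search"
    candidates = [
        "a gold-standard benchmark of relevant documents for literature search",
        "simulation-based workshops for surgical education",
        "tf-idf and bm25 baselines for pubmed article recommendation",
    ]
    print(bm25_rank(seed, candidates))  # highest-scoring candidate index first
```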

    Simulation-Based Comprehensive Cleft Care Workshops: A Reproducible Model for Sustainable Education

    No full text
    Objective: Evaluate simulation-based comprehensive cleft care workshops as a reproducible model for education with sustained impact. Design: Cross-sectional survey-based evaluation. Setting: Simulation-based comprehensive cleft care workshop. Participants: Total of 180 participants. Interventions: Three-day simulation-based comprehensive cleft care workshop. Main Outcome Measures: Number of workshop participants stratified by specialty, satisfaction with the workshop, satisfaction with simulation-based workshops as educational tools, impact on cleft surgery procedural confidence, short-term impact on clinical practice, and medium-term impact on clinical practice. Results: The workshop included 180 participants from 5 continents. The response rate was 54.5%, with participants reporting high satisfaction with all aspects of the workshop and with simulation-based workshops as educational tools. Participants reported a significant improvement in cleft lip (33.3 ± 5.7 vs 25.7 ± 7.6; P < .001) and palate (32.4 ± 7.1 vs 23.7 ± 6.6; P < .001) surgery procedural confidence following the simulation sessions. Participants also reported a positive short-term and medium-term impact on their clinical practices. Conclusion: Simulation-based comprehensive cleft care workshops are well received by participants, lead to improved cleft surgery procedural confidence, and have a sustained positive impact on participants’ clinical practices. Future efforts should focus on evaluating and quantifying this perceived positive impact, as well as reproducing these efforts in other areas of need.
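    The pre- versus post-workshop confidence comparison above (e.g., 25.7 ± 7.6 before vs 33.3 ± 5.7 after for cleft lip surgery) is the kind of result typically obtained from a paired test on matched scores from the same participants. The abstract does not state which test was used; the sketch below, using made-up scores and SciPy's paired t-test, only illustrates how such a comparison could be computed:

```python
# Illustrative sketch only: hypothetical pre/post confidence scores analyzed
# with a paired t-test; the abstract does not specify the test actually used.
from scipy import stats

pre_confidence = [24, 27, 22, 30, 26, 25, 21, 29]   # hypothetical pre-workshop scores
post_confidence = [31, 35, 30, 38, 33, 32, 29, 36]  # hypothetical post-workshop scores

t_stat, p_value = stats.ttest_rel(post_confidence, pre_confidence)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```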

    The First Hybrid International Educational Comprehensive Cleft Care Workshop.

    Get PDF
    OBJECTIVE Describe the first hybrid global simulation-based comprehensive cleft care workshop, evaluate its impact on participants, and compare experiences based on in-person versus virtual attendance. DESIGN Cross-sectional survey-based evaluation. SETTING International comprehensive cleft care workshop. PARTICIPANTS Total of 489 participants. INTERVENTIONS Three-day simulation-based hybrid comprehensive cleft care workshop. MAIN OUTCOME MEASURES Participant demographic data, perceived barriers and interventions needed for global comprehensive cleft care delivery, participant workshop satisfaction, and perceived short-term impact on practice stratified by in-person versus virtual attendance. RESULTS The workshop included 489 participants from 5 continents. The response rate was 39.9%. Participants perceived financial factors (30.3%) as the most significant barrier and improvement in training (39.8%) as the most important intervention to overcome barriers facing cleft care delivery in low- to middle-income countries. All participants reported a high level of satisfaction with the workshop and a strong positive perceived short-term impact on their practice. Importantly, while this was true for both in-person and virtual attendees, in-person attendees reported significantly higher satisfaction with the workshop (28.63 ± 3.08 vs 27.63 ± 3.93; P = .04) and perceived impact on their clinical practice (22.37 ± 3.42 vs 21.02 ± 3.45; P = .01). CONCLUSION Hybrid simulation-based educational comprehensive cleft care workshops are overall well received by participants and have a positive perceived impact on their clinical practices. In-person attendance is associated with significantly higher satisfaction and perceived impact on practice. Considering that financial and health constraints may limit live meeting attendance, future efforts will focus on making in-person and virtual attendance more comparable.
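    Unlike the pre/post comparison in the previous study, the in-person versus virtual comparison above (e.g., satisfaction of 28.63 ± 3.08 vs 27.63 ± 3.93; P = .04) involves two independent groups. Again, the abstract does not name the test used; the sketch below applies Welch's independent-samples t-test to made-up scores purely as an illustration:

```python
# Illustrative sketch only: hypothetical satisfaction scores for two independent
# groups compared with Welch's t-test; the abstract does not specify the test used.
from scipy import stats

in_person_satisfaction = [29, 31, 27, 30, 28, 32, 26, 30, 29, 31]  # hypothetical scores
virtual_satisfaction = [27, 25, 29, 26, 28, 24, 30, 27, 26, 25]    # hypothetical scores

t_stat, p_value = stats.ttest_ind(in_person_satisfaction, virtual_satisfaction, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```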
