9 research outputs found

    Community Engagement in Academic Health Centers: A Model for Capturing and Advancing Our Successes

    Academic health centers (AHCs) are under increased pressure to demonstrate the effectiveness of their community-engaged activities, but there are no common metrics for evaluating community engagement (CE) in AHCs. Eight AHCs piloted the Institutional Community Engagement Self-Assessment (ICESA), a two-phase project to assess CE efforts. The first phase uses a framework developed by the University of Rochester Medical Center that applies structure, process, and outcome criteria to map CE activities. The second phase uses the Community-Campus Partnerships for Health (CCPH) Self-Assessment to identify institutional resources for CE, and potential gaps, to inform CE goal-setting. The authors conducted a structured, directed content analysis to determine the effectiveness of the two-phase process at the participating AHCs. The findings suggest that the ICESA project assisted AHCs in three key areas and may provide a strategy for assessing CE in AHCs.

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago, largely because there has been no large offline gold-standard benchmark of relevant documents covering a variety of research fields against which newly developed literature search techniques can be compared, improved, and translated into practice. To overcome this bottleneck, we established the RElevant LIterature SearcH (RELISH) consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) articles. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields, or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency, and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The database server at https://relishdb.ict.griffith.edu.au is freely available for downloading the annotation data and for blind testing of new methods. We expect this benchmark to be useful for stimulating the development of new, powerful title- and title/abstract-based search techniques for relevant articles in biomedical research. Peer reviewed.
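
    As a rough illustration of the Term Frequency-Inverse Document Frequency baseline named above, the following is a minimal sketch, not taken from the paper, of ranking candidate abstracts against a seed abstract by cosine similarity of TF-IDF vectors using scikit-learn; the texts, variable names, and parameter choices are placeholder assumptions.

    # Illustrative sketch only: TF-IDF cosine-similarity ranking of placeholder
    # abstracts against a placeholder seed; not the benchmark's own implementation.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    seed = "Expert-curated benchmark for document similarity in biomedical literature search."
    candidates = [
        "Relevance annotations of PubMed-listed articles by their original authors.",
        "Community engagement self-assessment in academic health centers.",
    ]

    # Vectorise the seed together with the candidates so they share one vocabulary.
    vectoriser = TfidfVectorizer(stop_words="english")
    tfidf = vectoriser.fit_transform([seed] + candidates)

    # Score each candidate by cosine similarity to the seed and rank descending.
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    for score, text in sorted(zip(scores, candidates), reverse=True):
        print(f"{score:.3f}  {text}")

    In the benchmark itself, scores like these would be evaluated blind against the expert relevance annotations rather than inspected by hand.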

    Revisiting Jullien in an era of globalisation

    In this paper, we discuss some of the ways in which forces of globalisation have transformed the spaces in which educational policies are now developed and practices now enacted. We consider further the widely held claim that the emergence of these transnational spaces requires new ways of thinking about comparative education. We examine this claim, referring in particular to the questions proposed by Jullien almost two centuries ago. Taking these questions as a starting point, we reflect on their usefulness in understanding contemporary developments in education and discuss what kind of theoretical and methodological approaches are needed to address these questions in an era of globalisation. Affiliations: Beech, Jason (Universidad de San Andres, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, Argentina); Rizvi, Fazal Abbas (The University of Melbourne, Australia).
