    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents, covering a variety of research fields, against which newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH (RELISH) consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, each of these methods tends to produce a distinct collection of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles. The database server at https://relishdb.ict.griffith.edu.au is freely available for downloading the annotation data and for blind testing of new methods. We expect this benchmark to stimulate the development of powerful new techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
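    The three baselines named above are standard lexical ranking functions. As a rough sketch only, not the consortium's implementation, the Python snippet below scores candidate documents against a seed query with Okapi Best Matching 25; the toy documents, the whitespace tokenization, and the parameter values k1 = 1.5 and b = 0.75 are all assumptions for illustration.

    import math
    from collections import Counter

    def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
        # Score every candidate document against the query with Okapi BM25.
        # docs_tokens: list of token lists, one per candidate document.
        N = len(docs_tokens)
        avgdl = sum(len(d) for d in docs_tokens) / N
        df = Counter()  # document frequency: how many docs contain each term
        for d in docs_tokens:
            df.update(set(d))
        scores = []
        for i, d in enumerate(docs_tokens):
            tf = Counter(d)
            s = 0.0
            for t in query_tokens:
                if t not in tf:
                    continue
                idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
                s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
            scores.append((i, s))
        return sorted(scores, key=lambda x: x[1], reverse=True)

    # Hypothetical seed title and candidate abstracts, illustrative only.
    seed = "document similarity biomedical literature search".split()
    candidates = [
        "benchmarking document similarity in biomedical search engines".split(),
        "humoral immune response after mrna vaccination".split(),
        "collecting disease causing mutations for clinical use".split(),
    ]
    print(bm25_scores(seed, candidates))  # highest-scoring candidate first

    A Term Frequency-Inverse Document Frequency baseline differs mainly in the term weighting and length normalization, which may help explain why the baselines return overlapping but distinct recommendation sets.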

    Characterization of the significant decline in humoral immune response six months post‐SARS‐CoV‐2 mRNA vaccination: A systematic review

    Accumulating evidence shows a progressive decline in the efficacy of coronavirus disease 2019 (COVID-19) mRNA vaccines such as Pfizer-BioNTech (BNT162b2) and Moderna (mRNA-1273) in preventing breakthrough infections, due to diminishing humoral immunity over time. This review therefore characterizes the kinetics of anti-SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) antibodies after the second dose of a primary cycle of COVID-19 mRNA vaccination. A systematic search of the literature was performed, and a total of 18 articles (N = 15,980 participants) were identified and reviewed. The percent difference between the means of reported antibody titers was then calculated to determine the decline in humoral response after the peak levels post-vaccination. Findings revealed that the peak humoral response was reached 21-28 days after the second dose, after which serum levels progressively diminished through 4-6 months post-vaccination. Additionally, regardless of age, sex, serostatus and presence of comorbidities, longitudinal antibody measurements exhibited a decline of both anti-receptor-binding domain (RBD) IgG and anti-spike IgG, ranging from 94-95% at 90-180 days and 55-85% at 140-160 days, respectively, after the peak antibody response. This suggests that the rate of antibody decline may be independent of patient-related factors and peak antibody titers, and mainly a function of time and of antibody class/molecular target. This study therefore highlights the need for more efficient vaccination strategies that provide booster administration to attenuate the effects of waning immunity, especially given the emergence of new variants of concern (VoCs).
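    The decline metric described above is simple arithmetic: the percent difference between the mean peak titer and the mean titer at a later time point, relative to the peak. A minimal sketch follows; the titer values are hypothetical, not taken from the reviewed studies.

    def percent_decline(peak_mean, later_mean):
        # Percent difference of means, relative to the post-vaccination peak.
        return (peak_mean - later_mean) / peak_mean * 100

    # Hypothetical anti-RBD IgG means in arbitrary units: peak at ~28 days,
    # follow-up at ~180 days after the second dose (illustrative values only).
    peak_titer, followup_titer = 1200.0, 66.0
    print(f"{percent_decline(peak_titer, followup_titer):.1f}% decline")  # 94.5% decline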

    Capturing all disease-causing mutations for clinical and research use: toward an effortless system for the Human Variome Project

    Genetic variants that cause inherited disease (causative mutations) have been collected for decades for research and clinical purposes, albeit in an ad hoc way. More recently, access to collections of mutations causing specific diseases has become essential for appropriate genetic health care. As information has accumulated, it has become apparent that there are many gaps in our ability to correctly annotate all the changes that are being identified at ever-increasing rates. The Human Variome Project (www.humanvariomeproject.org) was initiated to facilitate integrated and systematic collection of, and access to, these data. This manuscript discusses how the collection of such data may be facilitated through new software and strategies in the clinical genetics and diagnostic laboratory communities.

    How to catch all those mutations-the report of the Third Human Variome Project Meeting, UNESCO Paris, May 2010

    The third Human Variome Project (HVP) Meeting, "Integration and Implementation", was held under UNESCO patronage at the UNESCO Headquarters in Paris, France, May 10-14, 2010. The major aims of the HVP are the collection, curation, and distribution of all human genetic variation affecting health. The HVP has drawn together disparate groups, by country, gene of interest, and expertise, who are working for the common good toward the shared goal of characterizing the human variome while collaborating to avoid unnecessary duplication. The meeting addressed the 12 key areas that form the current framework of HVP activities: Ethics; Nomenclature and Standards; Publication, Credit and Incentives; Data Collection from Clinics; Overall Data Integration and Access: Peripheral Systems/Software; Data Collection from Laboratories; Assessment of Pathogenicity; Country-Specific Collection; Translation to Healthcare and Personalized Medicine; Data Transfer, Databasing, and Curation; Overall Data Integration and Access: Central Systems; and Funding Mechanisms and Sustainability. In addition, three societies that support the goals and mission of the HVP held their own workshops with a view to advancing disease-specific variation data collection and utilization: the International Society for Gastrointestinal Hereditary Tumours, the Micronutrient Genomics Project, and the Neurogenetics Consortium.

    Planning the Human Variome Project: The Spain report.

    The remarkable progress in characterizing the human genome sequence, exemplified by the Human Genome Project and the HapMap Consortium, has led to the perception that existing knowledge and tools (e.g., microarrays) are sufficient for many, if not most, biomedical research efforts. A large amount of data from diverse studies proves this perception inaccurate at best and, at worst, an impediment to further efforts to characterize the variation in the human genome. Because variation in genotype and environment is the fundamental basis for understanding phenotypic variability and heritability at the population level, identifying the range of human genetic variation is crucial to the development of personalized nutrition and medicine. The Human Variome Project (HVP; http://www.humanvariomeproject.org/) was proposed initially to systematically collect mutations that cause human disease and to create a cyber infrastructure linking locus-specific databases (LSDBs). We report here the discussions and recommendations from the 2008 HVP planning meeting held in San Feliu de Guixols, Spain, in May 2008. Hum Mutat 30:496-510, 2009. © 2009 Wiley-Liss, Inc.
