Evidence Map of Pancreatic Surgery–A living systematic review with meta-analyses by the International Study Group of Pancreatic Surgery (ISGPS)
Background: Pancreatic surgery is associated with considerable morbidity and, consequently, offers a large and complex field for research. To prioritize relevant future scientific projects, it is of utmost importance to identify existing evidence and uncover research gaps. Thus, the aim of this project was to create a systematic and living Evidence Map of Pancreatic Surgery. Methods: PubMed, the Cochrane Central Register of Controlled Trials, and Web of Science were systematically searched for all randomized controlled trials and systematic reviews on pancreatic surgery. Outcomes from every existing randomized controlled trial were extracted, and trial quality was assessed. Systematic reviews were used to identify topics for which randomized controlled trials are lacking. Randomized controlled trials and systematic reviews on identical subjects were grouped according to research topics. A web-based evidence map modeled after a mind map was created to visualize existing evidence. Meta-analyses of specific outcomes of pancreatic surgery were performed for all research topics with more than 3 randomized controlled trials. For partial pancreatoduodenectomy and distal pancreatectomy, pooled benchmarks for outcomes were calculated with a 99% confidence interval. The evidence map undergoes regular updates. Results: Out of 30,860 articles reviewed, 328 randomized controlled trials on 35,600 patients and 332 systematic reviews were included and grouped into 76 research topics. Most randomized controlled trials were from Europe (46%) and most systematic reviews were from Asia (51%). A living meta-analysis of 21 out of 76 research topics (28%) was performed and included in the web-based evidence map. Evidence gaps were identified in 11 out of 76 research topics (14%). The benchmark for mortality was 2% (99% confidence interval: 1%–2%) for partial pancreatoduodenectomy and <1% (99% confidence interval: 0%–1%) for distal pancreatectomy. The benchmark for overall complications was 53% (99% confidence interval: 46%–61%) for partial pancreatoduodenectomy and 59% (99% confidence interval: 44%–80%) for distal pancreatectomy. Conclusion: The International Study Group of Pancreatic Surgery Evidence Map of Pancreatic Surgery, which is freely accessible via www.evidencemap.surgery and as a mobile phone app, provides a regularly updated overview of the available literature displayed in an intuitive fashion. Clinical decision making and evidence-based patient information are supported by the primary data provided, as well as by living meta-analyses. Researchers can use the systematic literature search and processed data for their own projects, and funding bodies can base their research priorities on evidence gaps that the map uncovers. © 2021 The Author
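The pooled benchmarks above come from meta-analysing outcome proportions across trials. As a rough illustration of how a pooled rate and its 99% confidence interval can be obtained, the sketch below applies simple fixed-effect, inverse-variance pooling on the logit scale to made-up trial counts; it is not the ISGPS analysis code, and the actual study may use a different (e.g. random-effects) model.

```python
import math

# Illustrative sketch only: fixed-effect, inverse-variance pooling of
# complication proportions on the logit scale with a 99% CI (z = 2.576).
# The (events, total) pairs are made-up example data, not study results.
studies = [(45, 90), (60, 110), (52, 100)]

weights, estimates = [], []
for events, total in studies:
    p = events / total
    logit = math.log(p / (1 - p))
    var = 1 / events + 1 / (total - events)   # variance of the logit estimate
    weights.append(1 / var)
    estimates.append(logit)

pooled_logit = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
se = math.sqrt(1 / sum(weights))
z = 2.576  # 99% confidence level

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

low = inv_logit(pooled_logit - z * se)
high = inv_logit(pooled_logit + z * se)
print(f"pooled rate {inv_logit(pooled_logit):.1%} (99% CI {low:.1%}-{high:.1%})")
```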
Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180,000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical science. © The Author(s) 2019. Published by Oxford University Press
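As a rough illustration of one of the baseline approaches named above, the sketch below ranks a few made-up candidate abstracts against a seed abstract using TF-IDF vectors and cosine similarity via scikit-learn. It is not the RELISH evaluation pipeline; the documents and parameters are placeholders chosen only to show the technique.

```python
# Illustrative sketch only: a minimal TF-IDF + cosine-similarity ranker of the
# kind used as a baseline in the abstract (not the RELISH benchmark code).
# Requires scikit-learn; the texts below are made-up placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed = "pancreatic surgery randomized controlled trial outcomes"
candidates = [
    "meta-analysis of outcomes after distal pancreatectomy",
    "document similarity detection in biomedical literature",
    "randomized trial of drainage after pancreatoduodenectomy",
]

# Fit TF-IDF on the seed plus all candidates so they share one vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([seed] + candidates)

# Rank candidates by cosine similarity to the seed article (row 0).
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
for score, text in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.3f}  {text}")
```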