
    Generating Natural Language Queries for More Effective Systematic Review Screening Prioritisation

    Screening prioritisation in medical systematic reviews aims to rank the set of documents retrieved by complex Boolean queries. The goal is to prioritise the most important documents so that subsequent review steps can be carried out more efficiently and effectively. The current state of the art uses the final title of the review as the query for ranking documents with BERT-based neural rankers. However, the final title is only formulated at the end of the review process, which makes this approach impractical as it relies on ex post facto information. At the time of screening, only a rough working title is available, with which the BERT-based ranker is significantly less effective than with the final title. In this paper, we explore alternative sources of queries for screening prioritisation, such as the Boolean query used to retrieve the set of documents to be screened, and queries generated by instruction-based generative large language models such as ChatGPT and Alpaca. Our best approach is not only practical given the information available at screening time, but is also similar in effectiveness to the final title.
    Comment: Preprint of accepted paper in SIGIR-AP-202
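The query-based ranking this abstract discusses can be illustrated with a classical lexical scorer. The sketch below uses BM25, a standard bag-of-words ranking function, as an illustrative stand-in for the BERT-based neural rankers the paper actually evaluates; the function name and parameters are assumptions made for this sketch, not taken from the paper.

```python
import math
from collections import Counter

def bm25_rank(query, docs, k1=1.2, b=0.75):
    """Rank documents against a query with Okapi BM25; returns doc indices,
    best first. A lexical stand-in for a neural ranker."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    N = len(tokenized)
    q_terms = query.lower().split()
    # document frequency of each query term
    df = {t: sum(1 for d in tokenized if t in d) for t in q_terms}
    scores = []
    for i, d in enumerate(tokenized):
        tf = Counter(d)
        s = 0.0
        for t in q_terms:
            if df[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append((s, i))
    return [i for s, i in sorted(scores, reverse=True)]
```

In the screening setting, `docs` would be the candidate studies retrieved by the Boolean query, and `query` the working title or an LLM-generated query.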

    Search strategy formulation for systematic reviews: Issues, challenges and opportunities

    Systematic literature reviews play a vital role in identifying the best available evidence for health and social care research, policy, and practice. The resources required to produce systematic reviews can be significant, and a key to the success of any review is the search strategy used to identify relevant literature. However, the methods used to construct search strategies can be complex, time-consuming, resource-intensive and error-prone. In this review, we examine the state of the art in resolving complex structured information needs, focusing primarily on the healthcare context. We analyse the literature to identify key challenges and issues and explore appropriate solutions and workarounds. From this analysis we propose a way forward to facilitate trust, and to aid explainability, transparency, reproducibility and replicability, through a set of key design principles for tools to support the development of search strategies in systematic literature reviews.

    Automated and Improved Search Query Effectiveness Design for Systematic Literature Reviews

    This research investigates strategies for automating the systematic literature review (SLR) process. SLR is a valuable research method that follows a comprehensive, transparent, and reproducible methodology. SLRs are at the heart of evidence-based research in various domains, from healthcare to software engineering. They allow researchers to systematically collect and integrate empirical evidence in response to a focused research question, setting the foundation for future research. SLRs are also beneficial to researchers in learning about the state of the art and enriching their knowledge of a topic. Given their demonstrated value, SLRs are becoming an increasingly popular type of publication in different disciplines. Despite the valuable contributions of SLRs to science, performing timely, reliable, comprehensive, and unbiased SLRs is a challenging endeavour. With the rapid growth in primary research published every year, SLRs might fail to provide complete coverage of existing evidence and may even be outdated by the time of publication. These challenges have sparked motivation and discussion in research communities to explore automation techniques to support the SLR process. In investigating automatic methods for supporting the systematic review process, this thesis develops three main areas. First, through a systematic literature review, we surveyed the state of the art in automation techniques employed to facilitate the systematic review process. Second, through an empirical study, we identified the challenges researchers face when conducting SLRs, the solutions that help researchers overcome these challenges, and researchers' concerns about adopting automation techniques in SLR practice.
    Finally, in the third study, we leveraged the findings of the previous studies to develop a solution that facilitates the SLR search process, and evaluated the proposed method experimentally.

    Information retrieval and text mining technologies for chemistry

    Efficient access to chemical information contained in scientific literature, patents, technical reports, or the web is a pressing need shared by researchers and patent attorneys from different chemical disciplines. Retrieval of important chemical information in most cases starts with finding relevant documents for a particular chemical compound or family. Targeted retrieval of chemical documents is closely connected to the automatic recognition of chemical entities in the text, which commonly involves the extraction of the entire list of chemicals mentioned in a document, including any associated information. In this Review, we provide a comprehensive and in-depth description of fundamental concepts, technical implementations, and current technologies for meeting these information demands. A strong focus is placed on community challenges addressing systems performance, more particularly the CHEMDNER and CHEMDNER patents tasks of BioCreative IV and V, respectively. Considering the growing interest in the construction of automatically annotated chemical knowledge bases that integrate chemical information and biological data, cheminformatics approaches for mapping the extracted chemical names into chemical structures and their subsequent annotation, together with text mining applications for linking chemistry with biological information, are also presented. Finally, future trends and current challenges are highlighted as a roadmap proposal for research in this emerging field.
    A.V. and M.K. acknowledge funding from the European Community’s Horizon 2020 Program (project reference: 654021 - OpenMinted). M.K. additionally acknowledges the Encomienda MINETAD-CNIO as part of the Plan for the Advancement of Language Technology. O.R. and J.O. thank the Foundation for Applied Medical Research (FIMA), University of Navarra (Pamplona, Spain). This work was partially funded by Consellería de Cultura, Educación e Ordenación Universitaria (Xunta de Galicia), and FEDER (European Union), and the Portuguese Foundation for Science and Technology (FCT) under the scope of the strategic funding of UID/BIO/04469/2013 unit and COMPETE 2020 (POCI-01-0145-FEDER-006684). We thank Iñigo García-Yoldi for useful feedback and discussions during the preparation of the manuscript.
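Dictionary lookup and suffix rules are the simplest baseline for the chemical entity recognition this review surveys. The toy tagger below illustrates the idea only; the lexicon, suffix pattern, and function name are assumptions for the sketch, and competitive CHEMDNER systems use statistical or neural sequence labellers instead.

```python
import re

# Tiny illustrative lexicon of trivial chemical names (an assumption).
CHEM_LEXICON = {"aspirin", "ibuprofen", "ethanol", "caffeine"}
# Illustrative suffix cues often seen in chemical nomenclature.
SUFFIX = re.compile(r"\b\w+(?:azole|amine|idine)\b", re.IGNORECASE)

def tag_chemicals(text):
    """Return chemical mentions found by lexicon lookup or suffix rules."""
    hits = set()
    for tok in re.findall(r"\b\w+\b", text.lower()):
        if tok in CHEM_LEXICON:
            hits.add(tok)
    hits.update(m.lower() for m in SUFFIX.findall(text))
    return sorted(hits)
```

Such rules have high precision on known names but miss novel systematic names, which is why the review emphasises learned taggers and their evaluation in the BioCreative challenges.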

    Identifying Relevant Evidence for Systematic Reviews and Review Updates

    Systematic reviews identify, assess and synthesise the evidence available to answer complex research questions. They are essential in healthcare, where the volume of evidence in scientific research publications is vast and cannot feasibly be identified or analysed by individual clinicians or decision makers. However, the process of creating a systematic review is time-consuming and expensive. The pace of scientific publication in medicine and related fields also means that evidence bases are continually changing and review conclusions can quickly become out of date. Developing methods to support the creation and updating of reviews is therefore essential to reduce the workload required and ensure that reviews remain up to date. This research aims to support systematic reviews, and thus improve healthcare, through natural language processing and information retrieval techniques. More specifically, this thesis aims to support the process of identifying relevant evidence for systematic reviews and review updates, reducing the workload required from researchers. The research proposes methods to improve study ranking for systematic reviews. In addition, this thesis describes a dataset of systematic review updates in the field of medicine, created using 25 Cochrane reviews. Moreover, it develops an algorithm to automatically refine the Boolean query to improve the identification of relevant studies for review updates. The research demonstrates that automating the process of identifying relevant evidence can reduce the workload of conducting and updating systematic reviews.
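One way to picture the Boolean query refinement the thesis describes is a greedy loop that adds candidate terms only when they retrieve additional known-relevant studies. The sketch below is a minimal illustration under that assumption; the function names, the substring-match retrieval, and the acceptance heuristic are all inventions for the sketch, not the thesis's actual algorithm.

```python
def retrieves(terms, doc):
    """A doc matches an OR-clause if any term occurs in it (toy matcher)."""
    return any(t in doc for t in terms)

def refine_query(terms, candidates, docs, relevant_ids):
    """Greedily extend an OR-clause with candidate terms, keeping a term
    only if it retrieves extra studies known to be relevant."""
    terms = list(terms)
    for cand in candidates:
        old = {i for i, d in enumerate(docs) if retrieves(terms, d)}
        new = {i for i, d in enumerate(docs) if retrieves(terms + [cand], d)}
        # accept the candidate only if the newly retrieved set contains
        # at least one known relevant study
        if (new - old) & relevant_ids:
            terms = terms + [cand]
    return terms
```

For review updates, `relevant_ids` could come from the studies included in the original review, so the refined query recovers them before screening new literature.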

    Managing tail latency in large scale information retrieval systems

    As both the availability of internet access and the prominence of smart devices continue to increase, data is being generated at a rate faster than ever before. This massive increase in data production comes with many challenges, including efficiency concerns for the storage and retrieval of such large-scale data. However, users have grown to expect the sub-second response times that are common in most modern search engines, creating a problem - how can such large amounts of data continue to be served efficiently enough to satisfy end users? This dissertation investigates several issues regarding tail latency in large-scale information retrieval systems. Tail latency corresponds to the high percentile latency that is observed from a system - in the case of search, this latency typically corresponds to how long it takes for a query to be processed. In particular, keeping tail latency as low as possible translates to a good experience for all users, as tail latency is directly related to the worst-case latency and hence, the worst possible user experience. The key idea in targeting tail latency is to move from questions such as "what is the median latency of our search engine?" to questions which more accurately capture user experience such as "how many queries take more than 200ms to return answers?" or "what is the worst case latency that a user may be subject to, and how often might it occur?" While various strategies exist for efficiently processing queries over large textual corpora, prior research has focused almost entirely on improvements to the average processing time or cost of search systems. As a first contribution, we examine some state-of-the-art retrieval algorithms for two popular index organizations, and discuss the trade-offs between them, paying special attention to the notion of tail latency. This research uncovers a number of observations that are subsequently leveraged for improved search efficiency and effectiveness. 
We then propose and solve a new problem, which involves processing a number of related queries together, known as multi-queries, to yield higher quality search results. We experiment with a number of algorithmic approaches to efficiently process these multi-queries, and report on the cost, efficiency, and effectiveness trade-offs present with each. Ultimately, we find that some solutions yield a low tail latency, and are hence suitable for use in real-time search environments. Finally, we examine how predictive models can be used to improve the tail latency and end-to-end cost of a commonly used multi-stage retrieval architecture without impacting result effectiveness. By combining ideas from numerous areas of information retrieval, we propose a prediction framework which can be used for training and evaluating several efficiency/effectiveness trade-off parameters, resulting in improved trade-offs between cost, result quality, and tail latency.
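The shift this abstract describes, from median latency to questions like "how many queries take more than 200ms?", amounts to computing high percentiles and threshold exceedance over observed per-query latencies. A minimal sketch, with nearest-rank percentiles and names assumed for illustration:

```python
import math

def percentile(latencies, p):
    """Nearest-rank p-th percentile of a list of latencies (milliseconds)."""
    s = sorted(latencies)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

def frac_over(latencies, threshold_ms):
    """Share of queries slower than threshold_ms (the 200ms-style question)."""
    return sum(1 for x in latencies if x > threshold_ms) / len(latencies)
```

Monitoring p95/p99 alongside the median exposes the worst-case behaviour that average-only reporting hides.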

    Towards automation of systematic reviews using natural language processing, machine learning, and deep learning: a comprehensive review.

    Systematic reviews (SRs) constitute a critical foundation for evidence-based decision-making and policy formulation across various disciplines, particularly in healthcare and beyond. However, the inherently rigorous and structured nature of the SR process renders it laborious for human reviewers. Moreover, the exponential growth in daily published literature exacerbates the challenge, as SRs risk failing to incorporate recent studies that could influence research outcomes. This pressing need to streamline and enhance the efficiency of SRs has prompted significant interest in leveraging Artificial Intelligence (AI) techniques to automate various stages of the SR process. This review paper provides a comprehensive overview of the current AI methods employed for SR automation, a subject area that has not been exhaustively covered in previous literature. Through an extensive analysis of 52 related works and an original online survey, the primary AI techniques and their applications in automating key SR stages, such as search, screening, data extraction, and risk of bias assessment, are identified. The survey results offer practical insights into the current practices, experiences, opinions, and expectations of SR practitioners and researchers regarding future SR automation. Synthesis of the literature review and survey findings highlights gaps and challenges in the current landscape of SR automation using AI techniques. Based on these insights, potential future directions are discussed. This review aims to equip researchers and practitioners with a foundational understanding of the basic concepts, primary methodologies and recent advancements in AI-driven SR automation, while guiding computer scientists in exploring novel techniques to further invigorate and advance this field.