27 research outputs found
Architectures of Meaning, A Systematic Corpus Analysis of NLP Systems
This paper proposes a novel statistical corpus analysis framework for
interpreting Natural Language Processing (NLP) architectural patterns at
scale. The proposed approach combines saturation-based lexicon construction,
statistical corpus analysis methods, and graph collocations to induce a
synthesis representation of NLP architectural patterns from corpora. The
framework is validated on the full corpus of SemEval tasks and demonstrates
coherent architectural patterns that can be used to answer architectural
questions in a data-driven fashion, providing a systematic mechanism to
interpret a largely dynamic and exponentially growing field.
Comment: 20 pages, 6 figures, 9 supplementary figures, Lexicon.txt in the appendix
SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for Clinical Trial Data
This paper describes the results of SemEval 2023 Task 7 -- Multi-Evidence
Natural Language Inference for Clinical Trial Data (NLI4CT) -- consisting of
two tasks: a Natural Language Inference (NLI) task and an evidence selection task
on clinical trial data. The proposed challenges require multi-hop biomedical
and numerical reasoning, which are of significant importance to the development
of systems capable of large-scale interpretation and retrieval of medical
evidence, to provide personalized evidence-based care.
Task 1, the entailment task, received 643 submissions from 40 participants,
and Task 2, the evidence selection task, received 364 submissions from 23
participants. The tasks are challenging: the majority of submitted systems
failed to significantly outperform the majority-class baseline on the
entailment task, and performance on the evidence selection task was
considerably better than on the entailment task. Increasing the number of
model parameters leads to a direct increase in performance, with a far
larger effect than biomedical pre-training. Future work could explore the
limitations of large models for generalization and numerical inference, and
investigate methods to augment clinical datasets to allow for more rigorous
testing and to facilitate fine-tuning.
We envisage that the dataset, models, and results of this task will be useful
to the biomedical NLI and evidence retrieval communities. The dataset,
competition leaderboard, and website are publicly available.
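The abstract notes that most submitted systems failed to beat the majority-class baseline on the entailment task. As a minimal sketch of that baseline (the function name and toy labels below are illustrative assumptions, not taken from the task data), it simply predicts the most frequent training label for every test instance:

```python
from collections import Counter

def majority_class_baseline(train_labels, test_labels):
    """Predict the most frequent training label for every test instance
    and return that label together with the resulting accuracy."""
    majority_label = Counter(train_labels).most_common(1)[0][0]
    correct = sum(1 for y in test_labels if y == majority_label)
    return majority_label, correct / len(test_labels)

# Toy entailment-style labels (illustrative only, not NLI4CT data)
train = ["entailment", "contradiction", "entailment", "entailment"]
test = ["entailment", "contradiction", "entailment"]
label, accuracy = majority_class_baseline(train, test)
```

Because class distributions in clinical NLI corpora are often skewed, this trivial predictor can be surprisingly hard to beat, which is why it is a standard reference point.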
NLI4CT: Multi-Evidence Natural Language Inference for Clinical Trial Reports
How can we interpret and retrieve medical evidence to support clinical
decisions? Clinical trial reports (CTR) amassed over the years contain
indispensable information for the development of personalized medicine.
However, it is practically infeasible to manually inspect more than 400,000
clinical trial reports in order to find the best evidence for experimental
treatments. Natural Language Inference (NLI) offers a potential solution to
this problem, by allowing the scalable computation of textual entailment.
However, existing NLI models perform poorly on biomedical corpora, and
previously published datasets fail to capture the full complexity of inference
over CTRs. In this work, we present a novel resource to advance research on NLI
for reasoning on CTRs. The resource includes two main tasks: firstly,
determining the inference relation between a natural language statement and a
CTR; secondly, retrieving supporting facts to justify the predicted relation.
We provide NLI4CT, a corpus of 2400 statements and CTRs, annotated for these
tasks. Baselines on this corpus expose the limitations of existing NLI models,
with 6 state-of-the-art NLI models achieving a maximum F1 score of 0.627. To
the best of our knowledge, we are the first to design a task that covers the
interpretation of full CTRs. To encourage further work on this challenging
dataset, we make the corpus, competition leaderboard, website and code to
replicate the baseline experiments available at:
https://github.com/ai-systems/nli4ct
Comment: 15 pages
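The reported ceiling of 0.627 F1 for state-of-the-art baselines can be read against the metric's definition: F1 is the harmonic mean of precision and recall. As a quick sketch (the counts below are hypothetical, chosen only to land near the reported scale, and do not come from any evaluated system):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Binary F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Hypothetical counts chosen only to illustrate the metric's scale;
# they are not taken from any NLI4CT submission.
print(round(f1_score(100, 60, 59), 3))  # → 0.627
```

An F1 of 0.627 on a balanced binary entailment task indicates substantial headroom, consistent with the paper's claim that existing NLI models struggle on full CTRs.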
Erratum to: Development and Evaluation of a New Technological Way of Engaging Patients and Enhancing Understanding of Drug Tolerability in Early Clinical Development: PROACT
INTRODUCTION: During early clinical testing of a new medication, it is critical to understand and characterize patient tolerability. However, in early clinical studies, it is difficult for patients to contribute directly to the sponsors’ understanding of a new compound. Patient reported opinions about clinical tolerability (PROACT) provides a new, simple and innovative way in which patients can collaborate using an application downloaded to a mobile computer or smartphone.
METHODS: PROACT was designed with special consideration given to patient confidentiality, patient engagement and data security. A pilot study was conducted to investigate patient uptake of PROACT and to characterize the clinical trial information it captured. Patients recruited to Phase I oncology trials at a UK center were eligible to participate but were required to have a tablet computer or smartphone. Patients used PROACT to upload audio/video messages that became available instantly to their clinical team, who were able to reply to the patient within PROACT. The patient’s message was also analyzed, personally identifiable information was removed, and the anonymized information was then made available to the sponsor in an analytics module for decision-making. In parallel, a patient focus group was engaged to provide feedback on communication needs during early clinical trials and on the PROACT concept.
RESULTS: Of the 16 patients informed of PROACT, 8 had a smart device and consented to take part. Use of PROACT varied, and all messages volunteered were relevant and informative for drug development. Topics disclosed included tolerability impacts, study design, and drug formulation. Alignment with the clinical study data provided a richer understanding of tolerability and treatment consequences. This information was available to be shared among the clinical team and the sponsor, to improve patient support and experience. Patient forum feedback endorsed the concept and provided further information to enhance the application.
CONCLUSION: Overall, PROACT achieved proof of concept in this small pilot study and delivered a secure end-to-end system that protected patient privacy and provided preliminary insight into patient experiences beyond the usual clinical trial data set. The use of mobile devices to interact actively with participants in clinical trials may be a new way of engaging and empowering patients. Further validation of this technology in larger patient cohorts is ongoing.
FUNDING: AstraZeneca.