Question answering systems for health professionals at the point of care -- a systematic review
Objective: Question answering (QA) systems have the potential to improve the
quality of clinical care by providing health professionals with the latest and
most relevant evidence. However, QA systems have not been widely adopted. This
systematic review aims to characterize current medical QA systems, assess their
suitability for healthcare, and identify areas for improvement.
Materials and methods: We searched PubMed, IEEE Xplore, ACM Digital Library,
ACL Anthology and forward and backward citations on 7th February 2023. We
included peer-reviewed journal and conference papers describing the design and
evaluation of biomedical QA systems. Two reviewers screened titles, abstracts,
and full-text articles. We conducted a narrative synthesis and risk of bias
assessment for each study. We assessed the utility of biomedical QA systems.
Results: We included 79 studies and identified themes, including question
realism, answer reliability, answer utility, clinical specialism, systems,
usability, and evaluation methods. Clinicians' questions used to train and
evaluate QA systems were restricted to certain sources, types and complexity
levels. No system communicated confidence levels in the answers or sources.
Many studies suffered from high risks of bias and applicability concerns. Only
8 studies completely satisfied any criterion for clinical utility, and only 7
reported user evaluations. Most systems were built with limited input from
clinicians.
Discussion: While machine learning methods have led to increased accuracy,
most studies imperfectly reflected real-world healthcare information needs. Key
research priorities include developing more realistic healthcare QA datasets
and considering the reliability of answer sources, rather than merely focusing
on accuracy.Comment: Accepted to the Journal of the American Medical Informatics
Association (JAMIA
Automatic medical knowledge acquisition using question-answering
We propose a rule generation approach to automatically acquire structured rules for use in decision support systems for drug prescription. We apply a question-answering (QA) engine to answer specific information requests. Rule generation is framed as an equation problem: the factors are the known items of the rule (e.g., an infectious disease caused by a given bacterium) and the solutions are answered by the engine (e.g., candidate antibiotics). A top precision of 0.64 is reported, meaning that for about two-thirds of the knowledge rules in the benchmark, one of the recommended antibiotics was automatically acquired by the rule generation method. These results suggest that a significant fraction of medical knowledge can be obtained by such an automatic text-mining approach.
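
The slot-filling idea in this abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the QA engine is a stub, and the rule schema, question template, and example facts are assumptions introduced for illustration.

```python
# Hypothetical sketch: rule generation as slot-filling via a QA engine.
# Known factors (disease, causative bacterium) fill one side of the
# "equation"; the QA engine supplies the unknown (antibiotics).

def qa_engine(question):
    # Stand-in for a real question-answering engine: maps an
    # information request to a list of candidate answers.
    knowledge = {
        "Which antibiotics treat an infection caused by Streptococcus pneumoniae?":
            ["amoxicillin", "ceftriaxone"],
    }
    return knowledge.get(question, [])

def generate_rule(disease, bacterium):
    # Known items of the rule: disease and bacterium.
    # Unknown item: the recommended antibiotics, asked of the engine.
    question = f"Which antibiotics treat an infection caused by {bacterium}?"
    return {"if": {"disease": disease, "bacterium": bacterium},
            "then": {"antibiotics": qa_engine(question)}}

def top_precision(generated_rules, benchmark_antibiotics):
    # A generated rule counts as acquired if any recommended antibiotic
    # overlaps the benchmark antibiotics for the same condition.
    hits = sum(1 for rule, gold in zip(generated_rules, benchmark_antibiotics)
               if set(rule["then"]["antibiotics"]) & set(gold))
    return hits / len(benchmark_antibiotics)

rule = generate_rule("pneumonia", "Streptococcus pneumoniae")
print(rule["then"]["antibiotics"])  # ['amoxicillin', 'ceftriaxone']
```

Evaluating `top_precision` over a benchmark of such rules would yield a score analogous to the 0.64 reported, though the paper's actual matching criteria are not specified in the abstract.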