359 research outputs found
A systematic review of natural language processing applied to radiology reports
NLP has a significant role in advancing healthcare and has been found to be
key in extracting structured information from radiology reports. Understanding
recent developments in the application of NLP to radiology is important, but
recent reviews on the topic are limited. This study systematically assesses recent
literature on NLP applied to radiology reports. Our automated literature search
yields 4,799 results, using automated filtering, metadata-enrichment steps, and
citation search combined with manual review. Our analysis is based on 21
variables including radiology characteristics, NLP methodology, performance,
study, and clinical application characteristics. We present a comprehensive
analysis of the 164 publications retrieved, with each categorised into one of 6
clinical application categories. Deep learning use is increasing, but conventional
machine learning approaches are still prevalent. Deep learning remains
challenged when data is scarce and there is little evidence of adoption into
clinical practice. Although 17% of studies report F1 scores greater than 0.85,
it is hard to comparatively evaluate these approaches given that most of them
use different datasets. Only 14 studies made their data available and 15 their
code, with only 10 externally validating their results. Automated understanding
of the clinical narratives in radiology reports has the potential to enhance
the healthcare process, but reproducibility and explainability of models are
important if the domain is to move applications into clinical use. More could
be done to share code, enabling validation of methods on different institutional
data, and to reduce heterogeneity in the reporting of study properties, allowing
inter-study comparisons. Our results are significant for researchers, providing
a systematic synthesis of existing work to build on and helping to identify gaps
and opportunities for collaboration and to avoid duplication.
A reporting and analysis framework for structured evaluation of COVID-19 clinical and imaging data
The COVID-19 pandemic has worldwide individual and socioeconomic consequences. Chest computed tomography has been found to support diagnostics and disease monitoring. A standardized approach to generate, collect, analyze, and share clinical and imaging information in the highest quality possible is urgently needed. We developed systematic, computer-assisted and context-guided electronic data capture on the FDA-approved mint Lesion™ software platform to enable cloud-based data collection and real-time analysis. The acquisition and annotation include radiological findings and radiomics performed directly on primary imaging data, together with information from the patient history and clinical data. As proof of concept, anonymized data of 283 patients with either suspected or confirmed SARS-CoV-2 infection from eight European medical centers were aggregated in data analysis dashboards. Aggregated data were compared to key findings of landmark research literature. This concept has been chosen for use in the national COVID-19 response of the radiological departments of all university hospitals in Germany.
The Revival of the Notes Field: Leveraging the Unstructured Content in Electronic Health Records
Problem: Clinical practice requires the production of a great amount of notes, which is time- and resource-consuming. They contain relevant information, but their secondary use is almost impossible due to their unstructured nature. Researchers are trying to address these problems with traditional and promising novel techniques. Application in real hospital settings does not yet seem possible, though, both because of relatively small and dirty datasets and because of the lack of language-specific pre-trained models. Aim: Our aim is to demonstrate the potential of the above techniques, but also to raise awareness of the still-open challenges that the scientific communities of IT and medical practitioners must jointly address to realize the full potential of the unstructured content that is produced and digitized daily in hospital settings, both to improve its data quality and to leverage the insights from data-driven predictive models. Methods: To this end, we present a narrative literature review of the most recent and relevant contributions on applying Natural Language Processing techniques to the free-text content of electronic patient records. In particular, we focus on four selected application domains, namely: data quality, information extraction, sentiment analysis and predictive models, and automated patient cohort selection. We then present a few empirical studies that we undertook at a major teaching hospital specializing in musculoskeletal diseases. Results: We provide the reader with some simple and affordable pipelines, which demonstrate the feasibility of reaching literature performance levels with a single-institution, non-English dataset. In this way, we bridge literature and real-world needs, taking a step further toward the revival of the notes field.
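As a hedged illustration of the kind of "simple and affordable pipeline" the abstract refers to, the sketch below shows one plausible setup: TF-IDF features feeding a linear classifier that labels free-text notes, e.g. for automated cohort selection. The notes, labels, and task are placeholders, not the authors' data or exact pipeline.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder free-text notes and a placeholder binary cohort label
# (1 = suspected prosthesis infection); real data would be institution-specific.
notes = [
    "post-operative knee pain improving, no signs of infection",
    "fever and swelling around prosthesis, suspect periprosthetic infection",
    "routine follow-up after hip replacement, mobilising well",
    "elevated CRP and wound discharge, infection cannot be excluded",
] * 10
labels = [0, 1, 0, 1] * 10

X_train, X_test, y_train, y_test = train_test_split(
    notes, labels, test_size=0.25, stratify=labels, random_state=0
)

# TF-IDF unigrams/bigrams feeding a regularised linear classifier.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))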
Differential diagnosis of neurodegenerative dementias with the explainable MRI based machine learning algorithm MUQUBIA
Biomarker-based differential diagnosis of the most common forms of dementia is becoming increasingly important. Machine learning (ML) may be able to address this challenge. The aim of this study was to develop and interpret an ML algorithm capable of differentiating Alzheimer's dementia, frontotemporal dementia, dementia with Lewy bodies and cognitively normal control subjects based on sociodemographic, clinical, and magnetic resonance imaging (MRI) variables. 506 subjects from 5 databases were included. MRI images were processed with FreeSurfer, LPA, and TRACULA to obtain brain volumes and thicknesses, white matter lesions and diffusion metrics. MRI metrics were used in conjunction with clinical and demographic data to perform differential diagnosis based on a Support Vector Machine model called MUQUBIA (Multimodal Quantification of Brain whIte matter biomArkers). Age, gender, Clinical Dementia Rating (CDR) Dementia Staging Instrument, and 19 imaging features formed the best set of discriminative features. The predictive model performed with an overall Area Under the Curve of 98%, high overall precision (88%), recall (88%), and F1 scores (88%) in the test group, and a good Label Ranking Average Precision score (0.95) in a subset of neuropathologically assessed patients. The results of MUQUBIA were explained by the SHapley Additive exPlanations (SHAP) method. The MUQUBIA algorithm successfully classified various dementias with good performance using cost-effective clinical and MRI information, and with independent validation, has the potential to assist physicians in their clinical diagnosis.
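A minimal sketch, assuming tabular input, of the modelling pattern the abstract describes: a Support Vector Machine trained on clinical and MRI-derived features, with SHAP used to explain the predictions. The features, data, and class labels below are synthetic placeholders, not the MUQUBIA feature set or cohort.

import numpy as np
import shap
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic placeholder features (e.g. age, CDR, imaging metrics) and
# placeholder labels for 4 diagnostic classes (AD, FTD, DLB, CN).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 4, size=200)

# SVM with probability outputs so SHAP can explain class probabilities.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X, y)

# Model-agnostic KernelExplainer; a small background sample keeps it tractable.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X[:5])  # per-class feature attributions for 5 subjects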
Malaysian Medical Device Regulation for Artificial Intelligence in Healthcare: Have all the pieces fallen into position?
The self-learning and adaptive ability of Artificial Intelligence (AI) has challenged medical device regulation in overseeing the safety and effectiveness of medical devices. Thus, this research aims to evaluate the adequacy of the pre-market requirements under the Medical Device Act 2012 in governing AI modification. Employing the doctrinal research methodology, the study produces systematic means of legal reasoning pertinent to AI for healthcare applications. Effective medical device regulation is pivotal to fostering trustworthiness in the governance and adoption of AI. However, the research findings indicate that the current conformity assessment for medical devices is deficient in addressing AI modifications.
Keywords: Artificial Intelligence and Law, Artificial Intelligence and Medical Device Regulation, Malaysian Medical Device Regulation
DOI: https://doi.org/10.21834/ebpj.v6i16.263
MultiModN- Multimodal, Multi-Task, Interpretable Modular Networks
Predicting multiple real-world tasks in a single model often requires a
particularly diverse feature space. Multimodal (MM) models aim to extract the
synergistic predictive potential of multiple data types to create a shared
feature space with aligned semantic meaning across inputs of drastically
varying sizes (i.e. images, text, sound). Most current MM architectures fuse
these representations in parallel, which not only limits their interpretability
but also creates a dependency on modality availability. We present MultiModN, a
multimodal, modular network that fuses latent representations in a sequence of
any number, combination, or type of modality while providing granular real-time
predictive feedback on any number or combination of predictive tasks.
MultiModN's composable pipeline is interpretable-by-design, as well as innately
multi-task and robust to the fundamental issue of biased missingness. We
perform four experiments on several benchmark MM datasets across 10 real-world
tasks (predicting medical diagnoses, academic performance, and weather), and
show that MultiModN's sequential MM fusion does not compromise performance
compared with a baseline of parallel fusion. By simulating the challenging bias
of missing not-at-random (MNAR), this work shows that, unlike MultiModN,
parallel fusion baselines erroneously learn MNAR and suffer catastrophic
failure when faced with different patterns of MNAR at inference. To the best of
our knowledge, this is the first inherently MNAR-resistant approach to MM
modeling. In conclusion, MultiModN provides granular insights, robustness, and
flexibility without compromising performance.
Comment: Accepted as a full paper at NeurIPS 2023 in New Orleans, US.
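For readers unfamiliar with sequential fusion, the sketch below illustrates the general pattern the abstract describes, not the released MultiModN implementation: each modality encoder updates a shared state in turn, task heads can read the state after any step, and a missing modality is simply skipped rather than imputed. All module names, dimensions, and the toy usage are assumptions.

from typing import Optional

import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Updates a shared state from one modality; passes the state through if the modality is absent."""
    def __init__(self, input_dim: int, state_dim: int):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(input_dim + state_dim, state_dim), nn.Tanh())

    def forward(self, state: torch.Tensor, x: Optional[torch.Tensor]) -> torch.Tensor:
        if x is None:  # missing modality: no update, no imputation
            return state
        return self.update(torch.cat([state, x], dim=-1))

class SequentialFusion(nn.Module):
    """Fuses modalities one at a time and emits per-task predictions after every step."""
    def __init__(self, modality_dims, state_dim: int, n_tasks: int):
        super().__init__()
        self.state0 = nn.Parameter(torch.zeros(state_dim))
        self.encoders = nn.ModuleList(ModalityEncoder(d, state_dim) for d in modality_dims)
        self.heads = nn.ModuleList(nn.Linear(state_dim, 1) for _ in range(n_tasks))

    def forward(self, inputs):
        batch = next(x.shape[0] for x in inputs if x is not None)
        state = self.state0.expand(batch, -1)
        stepwise = []  # granular feedback: per-task logits after each modality is fused
        for encoder, x in zip(self.encoders, inputs):
            state = encoder(state, x)
            stepwise.append([head(state) for head in self.heads])
        return stepwise

# Toy usage: three modalities (e.g. labs, text embedding, image embedding), two tasks;
# the second modality is missing for this batch.
model = SequentialFusion(modality_dims=[8, 16, 32], state_dim=24, n_tasks=2)
predictions = model([torch.randn(4, 8), None, torch.randn(4, 32)])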
- …