On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors
Deep learning based medical image classifiers have shown remarkable prowess
in various application areas like ophthalmology, dermatology, pathology, and
radiology. However, the acceptance of these Computer-Aided Diagnosis (CAD)
systems in real clinical settings is severely limited, primarily because their
decision-making process remains largely opaque. This work aims to elucidate
a deep learning based medical image classifier by verifying that the model
learns and utilizes similar disease-related concepts as described and employed
by dermatologists. We used a well-trained, high-performing neural network
developed by the REasoning for COmplex Data (RECOD) Lab for classification of
three skin tumours (Melanocytic Naevi, Melanoma, and Seborrheic Keratosis) and
performed a detailed analysis of its latent space. Two well-established and
publicly available skin disease datasets, PH2 and derm7pt, are used for
experimentation. Human-understandable concepts are mapped to the RECOD image
classification model with the help of Concept Activation Vectors (CAVs),
introducing a novel training and significance-testing paradigm for CAVs. Our
results on an independent evaluation set clearly show that the classifier
learns and encodes human-understandable concepts in its latent representation.
Additionally, TCAV scores (Testing with CAVs) suggest that the neural network
indeed makes use of disease-related concepts in the correct way when making
predictions. We anticipate that this work can not only increase the confidence
of medical practitioners in CAD systems but also serve as a stepping stone for
further development of CAV-based neural network interpretation methods.
Comment: Accepted for the IEEE International Joint Conference on Neural
Networks (IJCNN) 202
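The CAV/TCAV idea above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the activations and the model head are synthetic, and the CAV is approximated by a difference of class means rather than the linear classifier trained in the original TCAV method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for layer activations at some bottleneck of a trained classifier.
# Dimensions and distributions here are made up for illustration only.
d = 16
concept_acts = rng.normal(0.5, 1.0, size=(50, d))  # activations for concept images
random_acts = rng.normal(0.0, 1.0, size=(50, d))   # activations for random images

# CAV: a unit vector pointing from "random" toward "concept" in activation
# space (the TCAV paper uses the normal of a trained linear classifier;
# a mean-difference direction is a common simplification).
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

# Toy class head: logit = relu(a) @ w, so the gradient w.r.t. activations a
# is w masked by relu's derivative. Real TCAV uses the network's gradients.
w = rng.normal(size=d)
class_inputs = rng.normal(0.2, 1.0, size=(100, d))  # activations for one class
grads = np.where(class_inputs > 0, w, 0.0)

# Directional derivative of the logit along the CAV, per example.
sensitivities = grads @ cav

# TCAV score: fraction of class examples whose logit increases along the CAV.
tcav_score = float((sensitivities > 0).mean())
```

A score far from 0.5 (after significance testing against random CAVs, as the abstract's paradigm describes) suggests the concept direction systematically influences the class prediction.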
Artificial intelligence in dry eye disease
Dry eye disease (DED) has a prevalence of between 5 and 50%, depending on the diagnostic criteria used and the population under study. However, it remains one of the most underdiagnosed and undertreated conditions in ophthalmology. Many tests used in the diagnosis of DED rely on an experienced observer for image interpretation, which may be considered subjective and result in variation in diagnosis. Since artificial intelligence (AI) systems are capable of advanced problem solving, use of such techniques could lead to more objective diagnosis. Although the term ‘AI’ is commonly used, recent success in its applications to medicine is mainly due to advancements in the sub-field of machine learning, which has been used to automatically classify images and predict medical outcomes. Powerful machine learning techniques have been harnessed to understand nuances in patient data and medical images, aiming for consistent diagnosis and stratification of disease severity. This is the first literature review on the use of AI in DED. We provide a brief introduction to AI, report its current use in DED research and its potential for application in the clinic. Our review found that AI has been employed in a wide range of DED clinical tests and research applications, primarily for interpretation of interferometry, slit-lamp and meibography images. While initial results are promising, much work is still needed on model development, clinical testing and standardisation.
Towards Generalist Biomedical AI
Medicine is inherently multimodal, with rich data modalities spanning text,
imaging, genomics, and more. Generalist biomedical artificial intelligence (AI)
systems that flexibly encode, integrate, and interpret this data at scale can
potentially enable impactful applications ranging from scientific discovery to
care delivery. To enable the development of these models, we first curate
MultiMedBench, a new multimodal biomedical benchmark. MultiMedBench encompasses
14 diverse tasks such as medical question answering, mammography and
dermatology image interpretation, radiology report generation and
summarization, and genomic variant calling. We then introduce Med-PaLM
Multimodal (Med-PaLM M), our proof of concept for a generalist biomedical AI
system. Med-PaLM M is a large multimodal generative model that flexibly encodes
and interprets biomedical data including clinical language, imaging, and
genomics with the same set of model weights. Med-PaLM M reaches performance
competitive with or exceeding the state of the art on all MultiMedBench tasks,
often surpassing specialist models by a wide margin. We also report examples of
zero-shot generalization to novel medical concepts and tasks, positive transfer
learning across tasks, and emergent zero-shot medical reasoning. To further
probe the capabilities and limitations of Med-PaLM M, we conduct a radiologist
evaluation of model-generated (and human) chest X-ray reports and observe
encouraging performance across model scales. In a side-by-side ranking on 246
retrospective chest X-rays, clinicians express a pairwise preference for
Med-PaLM M reports over those produced by radiologists in up to 40.50% of
cases, suggesting potential clinical utility. While considerable work is needed
to validate these models in real-world use cases, our results represent a
milestone towards the development of generalist biomedical AI systems.
Structured computer-based training in the interpretation of neuroradiological images
Computer-based systems may be able to address a recognised need throughout the medical profession for a more structured approach to training. We describe a combined training system for neuroradiology, the MR Tutor, that differs from previous approaches to computer-assisted training in radiology in that it provides case-based tuition whereby the system and user communicate in terms of a well-founded Image Description Language. The system implements a novel method of visualisation and interaction with a library of fully described cases, utilising statistical models of similarity, typicality and disease categorisation of cases. We describe the rationale, knowledge representation and design of the system, and provide a formative evaluation of its usability and effectiveness.
Prospects for Theranostics in Neurosurgical Imaging: Empowering Confocal Laser Endomicroscopy Diagnostics via Deep Learning
Confocal laser endomicroscopy (CLE) is an advanced optical fluorescence
imaging technology that has the potential to increase intraoperative precision,
extend resection, and tailor surgery for malignant invasive brain tumors
because of its subcellular dimension resolution. Despite its promising
diagnostic potential, interpreting the gray tone fluorescence images can be
difficult for untrained users. In this review, we provide a detailed
description of a bioinformatics analysis methodology for CLE images that
begins to help the neurosurgeon and pathologist rapidly connect on-the-fly
intraoperative imaging, pathology, and surgical observation into a unified
system within the concept of theranostics. We present an overview
and discuss deep learning models for automatic detection of diagnostic CLE
images, various training regimes, and the effect of ensemble modeling on the
power of deep learning predictive models. Two major approaches reviewed in this
paper include the models that can automatically classify CLE images into
diagnostic/nondiagnostic, glioma/nonglioma, tumor/injury/normal categories and
models that can localize histological features on the CLE images using weakly
supervised methods. We also briefly review advances in the deep learning
approaches used for CLE image analysis in other organs. Significant advances
in the speed and precision of automated diagnostic frame selection would
augment the diagnostic potential of CLE and improve operative workflow and
integration into brain tumor surgery. Such technology and bioinformatics
analytics lend themselves to improved precision, personalization, and
theranostics in brain tumor treatment.
Comment: See the final version published in Frontiers in Oncology here:
https://www.frontiersin.org/articles/10.3389/fonc.2018.00240/ful
Feasibility Evaluation of Commercially Available Video Conferencing Devices to Technically Direct Untrained Nonmedical Personnel to Perform a Rapid Trauma Ultrasound Examination.
Introduction: Point-of-care ultrasound (POCUS) is a rapidly expanding discipline that has proven to be a valuable modality in the hospital setting. Recent evidence has demonstrated the utility of commercially available video conferencing technologies, namely, FaceTime (Apple Inc, Cupertino, CA, USA) and Google Glass (Google Inc, Mountain View, CA, USA), to allow an expert POCUS examiner to remotely guide a novice medical professional. However, few studies have evaluated the ability to use these teleultrasound technologies to guide a nonmedical novice to perform an acute care POCUS examination for cardiac, pulmonary, and abdominal assessments. Additionally, few studies have shown the ability of a POCUS-trained cardiac anesthesiologist to perform the role of an expert instructor. This study sought to evaluate the ability of a POCUS-trained anesthesiologist to remotely guide a nonmedically trained participant to perform an acute care POCUS examination. Methods: A total of 21 nonmedically trained undergraduate students who had no prior ultrasound experience were recruited to perform a three-part ultrasound examination on a standardized patient with the guidance of a remote expert who was a POCUS-trained cardiac anesthesiologist. The examination included the following acute care POCUS topics: (1) cardiac function via parasternal long/short axis views, (2) pneumothorax assessment via a pleural sliding exam on anterior lung views, and (3) abdominal free fluid exam via right upper quadrant abdominal view. Each participant was given a handout with static images of probe placement and actual ultrasound images for the three views. After a brief 8 min tutorial on the teleultrasound technologies, a connection was established with the expert, and the participants were guided through the acute care POCUS exam.
Each view was deemed to be complete when the expert sonographer was satisfied with the obtained image or if the expert sonographer determined that the image could not be obtained after 5 min. Image quality was scored on a previously validated 0 to 4 grading scale. The entire session was recorded, and the image quality was scored during the exam by the remote expert instructor as well as by a separate POCUS-trained, blinded expert anesthesiologist. Results: A total of 21 subjects completed the study. The average total time for the exam was 8.5 min (standard deviation = 4.6). A comparison between the live expert examiner and the blinded postexam reviewer showed 100% agreement between image interpretations. A review of the exams rated as three or higher demonstrated that 87% of abdominal, 90% of cardiac, and 95% of pulmonary exams achieved this level of image quality. A satisfaction survey of the novice users demonstrated higher ease of following commands for the cardiac and pulmonary exams compared to the abdominal exam. Conclusions: The results from this pilot study demonstrate that nonmedically trained individuals can be guided to complete a relevant ultrasound examination within a short period. The use of telemedicine technologies to promote POCUS warrants further evaluation.
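The two quality metrics reported above, inter-rater agreement and the fraction of exams at or above the quality threshold, amount to simple proportions. A minimal sketch, using made-up scores rather than the study's actual data:

```python
# Hypothetical 0-4 image-quality scores from the live expert and the blinded
# post-exam reviewer (illustrative values, not the study's data).
live_expert = [3, 4, 2, 4, 3, 1, 4, 3]
blinded_rater = [3, 4, 2, 4, 3, 1, 4, 3]

# Percent agreement: fraction of exams where both raters gave the same score.
agreement = sum(a == b for a, b in zip(live_expert, blinded_rater)) / len(live_expert)

# Fraction of exams rated 3 or higher, the threshold the study reports against.
high_quality = sum(s >= 3 for s in blinded_rater) / len(blinded_rater)
```

With identical score lists, `agreement` is 1.0 (the study's 100% agreement); a chance-corrected statistic such as Cohen's kappa would be a natural extension for larger studies.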
Medical imaging analysis with artificial neural networks
Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.