Uncertainty-aware multiple-instance learning for reliable classification: Application to optical coherence tomography
Deep learning classification models for medical image analysis often perform well on data from scanners that were used to acquire the training data. However, when these models are applied to data from different vendors, their performance tends to drop substantially. Artifacts that only occur within scans from specific scanners are major causes of this poor generalizability. We aimed to enhance the reliability of deep learning classification models using a novel method called Uncertainty-Based Instance eXclusion (UBIX). UBIX is an inference-time module that can be employed in multiple-instance learning (MIL) settings. MIL is a paradigm in which instances (generally crops or slices) of a bag (generally an image) contribute towards a bag-level output. Instead of assuming equal contribution of all instances to the bag-level output, UBIX detects instances corrupted by local artifacts on-the-fly using uncertainty estimation, reducing or fully ignoring their contributions before MIL pooling. In our experiments, instances are 2D slices and bags are volumetric images, but alternative definitions are also possible. Although UBIX is generally applicable to diverse classification tasks, we focused on the staging of age-related macular degeneration in optical coherence tomography. Our models were trained on data from a single scanner and tested on external datasets from different vendors, which included vendor-specific artifacts. UBIX showed reliable behavior, with a slight decrease in performance (a decrease of the quadratic weighted kappa (κw) from 0.861 to 0.708), when applied to images from different vendors containing artifacts; a state-of-the-art 3D neural network without UBIX suffered a severe drop in performance (κw from 0.852 to 0.084) on the same test set. We showed that instances with unseen artifacts can be identified with out-of-distribution (OOD) detection.
UBIX can reduce their contribution to the bag-level predictions, improving reliability without retraining on new data. This potentially increases the applicability of artificial intelligence models to data from scanners other than the ones for which they were developed. The source code for UBIX, including trained model weights, is publicly available through https://github.com/qurAI-amsterdam/ubix-for-reliable-classification.
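The core idea of excluding uncertain instances before MIL pooling can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: the actual UBIX module also supports soft down-weighting and other pooling operators, and the function name, threshold value, and mean-pooling choice here are assumptions.

```python
import numpy as np

def ubix_style_pooling(instance_scores, instance_uncertainty, threshold=0.5):
    """Sketch of "hard" uncertainty-based instance exclusion before MIL pooling.

    instance_scores: per-instance class scores, shape (n_instances, n_classes)
    instance_uncertainty: per-instance uncertainty in [0, 1], shape (n_instances,)
    Instances whose uncertainty exceeds `threshold` (e.g. artifact-corrupted
    slices) are dropped; the remainder are mean-pooled into a bag-level output.
    """
    keep = instance_uncertainty <= threshold
    if not keep.any():          # degenerate case: nothing survives, keep all
        keep = np.ones_like(keep)
    return instance_scores[keep].mean(axis=0)

# 3 slices (instances) of one volume (bag), 2 classes; the last slice is
# corrupted by an artifact and therefore has high predictive uncertainty.
scores = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])
unc = np.array([0.1, 0.2, 0.95])
bag_pred = ubix_style_pooling(scores, unc)  # -> [0.85, 0.15]
```

Without the exclusion step, plain mean pooling would give [0.6, 0.4], letting the single corrupted slice pull the bag-level prediction toward the wrong class.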
What limits performance of weakly supervised deep learning for chest CT classification?
Weakly supervised learning with noisy data has drawn attention in the medical
imaging community due to the sparsity of high-quality disease labels. However,
little is known about the limitations of such weakly supervised learning and
the effect of these constraints on disease classification performance. In this
paper, we test the effects of such weak supervision by examining model
tolerance for three conditions. First, we examined model tolerance for noisy
data by incrementally increasing error in the labels within the training data.
Second, we assessed the impact of dataset size by varying the amount of
training data. Third, we compared performance differences between binary and
multi-label classification. Results demonstrated that the model could endure up
to 10% added label error before experiencing a decline in disease
classification performance. Disease classification performance steadily rose as
the amount of training data was increased for all disease classes, before
experiencing a plateau in performance at 75% of training data. Lastly, the binary
model outperformed the multi-label model in every disease category. However,
such interpretations may be misleading, as the binary model was heavily
influenced by co-occurring diseases and may not have learned the specific
features of the disease in the image. In conclusion, this study may help the
medical imaging community understand the benefits and risks of weak supervision
with noisy labels. Such studies demonstrate the need to build diverse,
large-scale datasets and to develop explainable and responsible AI.
Comment: 16 pages, 8 figures. arXiv admin note: text overlap with arXiv:2202.1170
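The incremental label-error condition described above amounts to flipping a controlled fraction of training labels. A minimal sketch of such a corruption step (the function name and uniform-flip scheme are assumptions; the paper's exact noise model may differ):

```python
import numpy as np

def corrupt_labels(labels, error_rate, n_classes, rng=None):
    """Flip a fraction `error_rate` of labels to a different random class,
    mimicking an incremental label-noise experiment."""
    rng = np.random.default_rng(rng)
    labels = labels.copy()
    n_flip = int(round(error_rate * len(labels)))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    for i in idx:
        # choose any class other than the current (true) one
        choices = [c for c in range(n_classes) if c != labels[i]]
        labels[i] = rng.choice(choices)
    return labels

y = np.zeros(100, dtype=int)                       # 100 clean binary labels
y_noisy = corrupt_labels(y, error_rate=0.10, n_classes=2, rng=0)
assert (y_noisy != y).sum() == 10                  # exactly 10% corrupted
```

Sweeping `error_rate` from 0 upward and retraining at each level reproduces the kind of tolerance curve the study reports, with degradation appearing beyond roughly 10% added error.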
Supervised and weakly supervised counting-by-segmentation: the fluorescent microscopy use case
This thesis focuses on automating the time-consuming task of manually counting activated neurons in fluorescent microscopy images, which is used to study the mechanisms underlying torpor. Manual annotation can introduce bias and delay experimental outcomes, so the author investigates a deep-learning-based procedure to automate this task. Two state-of-the-art convolutional neural network (CNN) architectures are explored, UNet and the ResUNet family, using a counting-by-segmentation strategy that provides a justification for the objects considered during counting. The author also explores a weakly supervised learning strategy that exploits only dot annotations, and quantifies the data reduction and counting-performance gains obtainable with a transfer-learning approach, specifically a fine-tuning procedure. The dataset used for the supervised use case and all pre-trained models are released, along with a web application that shares both the counting pipeline developed in this work and the models pre-trained on the analyzed dataset.
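The counting step of a counting-by-segmentation strategy can be sketched as follows: threshold the network's predicted segmentation map, label connected components, and count those above a minimum size. This is an illustrative post-processing sketch, not the thesis's actual pipeline; `count_by_segmentation`, the threshold, and `min_size` are assumptions.

```python
import numpy as np
from scipy import ndimage

def count_by_segmentation(prob_map, threshold=0.5, min_size=3):
    """Count objects from a predicted segmentation probability map."""
    mask = prob_map >= threshold
    labeled, n = ndimage.label(mask)              # connected components
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    return int((np.asarray(sizes) >= min_size).sum())

# Toy probability map with two detections, one too small to count.
p = np.zeros((10, 10))
p[1:4, 1:4] = 0.9        # 9-pixel blob -> counted as one neuron
p[7, 7] = 0.9            # 1-pixel speck -> filtered out as noise
count = count_by_segmentation(p)   # -> 1
```

Because each count is backed by a labeled region in the image, the prediction can be visually justified, which is the advantage of counting-by-segmentation over direct regression of a number.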
Novel Image Acquisition and Reconstruction Methods: Towards Autonomous MRI
Magnetic Resonance Imaging (MR Imaging, or MRI) offers superior soft-tissue contrast compared to other medical imaging modalities. However, access to MRI across developing countries ranges from prohibitive to scarcely available. The lack of educational facilities and the excessive costs involved in imparting technical training have resulted in a lack of skilled human resources required to operate MRI systems in developing countries.
While diagnostic medical imaging improves the utilization of facility-based rural health services and impacts management decisions, MRI requires technical expertise to set up the patient, acquire, visualize, and interpret data. The availability of such local expertise in underserved geographies is challenging. Inefficient workflows and usage of MRI result in challenges related to financial and temporal access in countries with higher scanner densities than the global average of 5.3 per million people.
MRI is routinely employed for neuroimaging and, in particular, for dementia screening. Dementia affected 50 million people worldwide in 2018, with an estimated economic impact of US $1 trillion a year, and Alzheimer’s Disease (AD) accounts for up to 60–80% of dementia cases. However, AD-imaging using MRI is time-consuming, and protocol optimization to accelerate MR Imaging requires local expertise since each pulse sequence involves multiple configurable parameters that need optimization for acquisition time, image contrast, and image quality. The lack of this expertise contributes to the highly inefficient utilization of MRI services, diminishing their clinical value.
Augmenting human capabilities can tackle these challenges and standardize the practice. Autonomous and time-efficient acquisition, reconstruction, and visualization schemes to maximize MRI hardware usage and solutions that reduce reliance on human operation of MRI systems could alleviate some of the challenges associated with the requirement/absence of skilled human resources.
We first present a preliminary demonstration of AMRI that simplifies the end-to-end MRI workflow of registering the subject, setting up and invoking an imaging session, acquiring and reconstructing the data, and visualizing the images. Our initial implementation of AMRI separates the required intelligence and user interaction from the acquisition hardware. AMRI performs intelligent protocolling and intelligent slice planning. Intelligent protocolling optimizes contrast value while satisfying signal-to-noise ratio and acquisition time constraints. We acquired data from four healthy volunteers across three experiments that differed in acquisition time constraints. AMRI achieved comparable image quality across all experiments despite optimizing for acquisition duration, therefore indirectly optimizing for MR Value – a metric to quantify the value of MRI. We believe we have demonstrated the first Autonomous MRI of the brain. We also present preliminary results from a deep learning (DL) tool for generating first-read text-based radiological reports directly from input brain images. It can potentially alleviate the burden on radiologists who experience the seventh-highest levels of burnout among all physicians, according to a 2015 survey.
Next, we accelerate the routine brain imaging protocol employed at the Columbia University Irving Medical Center and leverage DL methods to boost image quality via image-denoising. Since MR physics dictates that the volume of the object being imaged influences the amount of signal received, we also demonstrate subject-specific image-denoising. The accelerated protocol resulted in a factor of 1.94 gain in imaging throughput, translating to a 72.51% increase in MR Value. We also demonstrate that this accelerated protocol can potentially be employed for AD imaging.
Finally, we present ArtifactID – a DL tool to identify Gibbs ringing in low-field (0.36 T) and high-field (1.5 T and 3.0 T) brain MRI. We train separate binary classification models for low-field and high-field data, and visual explanations are generated via the Grad-CAM explainable AI method to help develop trust in the models’ predictions. We also demonstrate detecting motion using an accelerometer in a low-field MRI scanner since low-field MRI is prone to artifacts.
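Gibbs ringing arises when high-frequency k-space samples are truncated before reconstruction. A common way to synthesize ringing-corrupted training examples for such an artifact classifier (not necessarily how ArtifactID's training data was produced; `add_gibbs_ringing` and `keep_fraction` are illustrative) is to discard the outer k-space region and apply an inverse FFT:

```python
import numpy as np

def add_gibbs_ringing(image, keep_fraction=0.4):
    """Simulate Gibbs (truncation) ringing by keeping only the central
    fraction of k-space and reconstructing with an inverse 2D FFT."""
    k = np.fft.fftshift(np.fft.fft2(image))     # k-space, DC at center
    ny, nx = image.shape
    mask = np.zeros_like(k)
    cy, cx = ny // 2, nx // 2
    hy, hx = int(ny * keep_fraction / 2), int(nx * keep_fraction / 2)
    mask[cy - hy:cy + hy, cx - hx:cx + hx] = 1  # central low-frequency window
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0             # sharp-edged square phantom
ringed = add_gibbs_ringing(img)     # edges now show oscillating overshoot
```

Pairing such synthetically corrupted images with their clean originals yields labeled data for a binary ringing-vs-clean classifier of the kind described above.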
In conclusion, our novel contributions in this work include: i) a software framework to demonstrate an initial implementation of autonomous brain imaging; ii) an end-to-end framework that leverages intelligent protocolling and DL-based image-denoising that can potentially be employed for accelerated AD imaging; and iii) a DL-based tool for automated identification of Gibbs ringing artifacts that may interfere with diagnosis at the time of radiological reading.
We envision AMRI augmenting human expertise to alleviate the challenges associated with the scarcity of skilled human resources and contributing to globally accessible MRI.
Intelligent Sensors for Human Motion Analysis
The book, "Intelligent Sensors for Human Motion Analysis," contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects related to the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems.
Computational Pathology: A Survey Review and The Way Forward
Computational Pathology (CPath) is an interdisciplinary science that applies computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop infrastructure and workflows for digital diagnostics as an assistive CAD system for clinical pathology, facilitating the transformational changes in the diagnosis and treatment of cancer that CPath tools mainly address. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms in clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model-card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We review this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch remaining challenges and provide directions for future technical developments and clinical integration of CPath (https://github.com/AtlasAnalyticsLab/CPath_Survey).
Comment: Accepted in Elsevier Journal of Pathology Informatics (JPI) 202
Symbiotic deep learning for medical image analysis with applications in real-time diagnosis for fetal ultrasound screening
The last hundred years have seen a monumental rise in the power and capability of machines to perform intelligent tasks in place of human operators. This rise is not expected to slow down any time soon, and what it means for society and humanity as a whole remains to be seen. The prevailing notion is that, with the right goals in mind, the growing influence of machines on our everyday tasks will enable humanity to give more attention to the truly groundbreaking challenges that we all face together. This will usher in a new age of human-machine collaboration in which humans and machines work side by side to achieve greater heights for all of humanity. Intelligent systems are useful in isolation, but their true benefits come to the fore in complex systems where the interaction between humans and machines can be made seamless, and it is this goal of symbiosis between human and machine, which may democratise complex knowledge, that motivates this thesis. In the recent past, data-driven methods have come to the fore and now represent the state of the art in many different fields. Alongside the shift from rule-based towards data-driven methods, we have also seen a shift in how humans interact with these technologies. Human-computer interaction is changing in response to data-driven methods, and new techniques must be developed to enable the same symbiosis between man and machine for data-driven methods as for previous formula-driven technology.
We address five key challenges which need to be overcome for data-driven human-in-the-loop computing to reach maturity. These are (1) the ’Categorisation Challenge’, where we examine existing work and form a taxonomy of the different methods being utilised for data-driven human-in-the-loop computing; (2) the ’Confidence Challenge’, where data-driven methods must communicate interpretable beliefs about how confident their predictions are; (3) the ’Complexity Challenge’, where reasoned communication becomes increasingly important as the complexity of tasks and of the methods used to solve them increases; (4) the ’Classification Challenge’, in which we look at how complex methods can be decomposed in order to provide greater reasoning in complex classification tasks; and finally (5) the ’Curation Challenge’, where we challenge the assumptions around bottleneck creation for the development of supervised learning methods.
Intelligent Biosignal Processing in Wearable and Implantable Sensors
This reprint provides a collection of papers illustrating the state-of-the-art of smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples are brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.