Towards continual learning in medical imaging
This work investigates continual learning of two segmentation tasks in brain MRI with neural networks. To explore the capabilities of current methods for countering catastrophic forgetting of the first task when a new one is learned, we investigate elastic weight consolidation, a recently proposed method based on Fisher information, originally evaluated on reinforcement learning of Atari games. We use it to sequentially learn segmentation of normal brain structures and then segmentation of white matter lesions. Our findings show that this method reduces catastrophic forgetting, although large room for improvement remains in these challenging settings for continual learning.
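As a concrete illustration, elastic weight consolidation adds a quadratic penalty that anchors parameters carrying high Fisher information for the old task. A minimal sketch in plain Python, assuming the common diagonal-Fisher simplification; the function name and inputs are illustrative, not taken from the paper:

```python
def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Elastic weight consolidation penalty:
        (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2
    where F_i is the (diagonal) Fisher information estimated on the old
    task and theta* are the parameters found after learning it.
    During training on the new task, this term is added to the new
    task's loss, so parameters important for the old task resist change."""
    return 0.5 * lam * sum(
        f * (t - ts) ** 2 for f, t, ts in zip(fisher, theta, theta_star))
```

Parameters with large Fisher values (important for the old task) incur a large penalty when moved, while unimportant ones remain free to adapt to the new task.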
Euro-American discussion document on entry and advanced level practice in nuclear medicine
The European Association of Nuclear Medicine Technologist Committee (EANMTC) and the Society of Nuclear Medicine Technologist Section (SNMTS) meet biannually to consider matters of mutual importance. These meetings are held during the SNM and EANM annual conferences. For several years, within these meetings, EANMTC and SNMTS have considered the value of a Euro-American initiative to define entry-level and advanced practice competencies for nuclear medicine radiographers (NMRs) and nuclear medicine technologists (NMTs). In June 2009, during the SNM annual conference in Toronto, it was agreed that a Euro-American working party would be established to consider advanced practice. It was recognized that any definition of advanced practice would be predicated on an understanding or definition of entry-level practice; as a result, both types of practice would have to be considered. This discussion document outlines some of the background issues associated with advanced practice generally and specifically within nuclear medicine. Its primary purpose is to stimulate debate, on a Euro-American level, about the perceived value of advanced practice for NMRs and NMTs within nuclear medicine and to develop an internationally accepted list of entry-level competencies and a scope of practice for NMRs and NMTs within nuclear medicine.
Training recurrent neural networks robust to incomplete data: application to Alzheimer's disease progression modeling
Disease progression modeling (DPM) using longitudinal data is a challenging
machine learning task. Existing DPM algorithms neglect temporal dependencies
among measurements, make parametric assumptions about biomarker trajectories,
do not model multiple biomarkers jointly, and need an alignment of subjects'
trajectories. In this paper, recurrent neural networks (RNNs) are utilized to
address these issues. However, in many cases, longitudinal cohorts contain
incomplete data, which hinders the application of standard RNNs and requires a
pre-processing step such as imputation of the missing values. Instead, we
propose a generalized training rule for the most widely used RNN architecture,
long short-term memory (LSTM) networks, that can handle both missing predictor
and target values. The proposed LSTM algorithm is applied to model the
progression of Alzheimer's disease (AD) using six volumetric magnetic resonance
imaging (MRI) biomarkers, i.e., volumes of ventricles, hippocampus, whole
brain, fusiform, middle temporal gyrus, and entorhinal cortex, and it is
compared to standard LSTM networks with data imputation and a parametric,
regression-based DPM method. The results show that the proposed algorithm
achieves a significantly lower mean absolute error (MAE) than the alternatives
with p < 0.05 using a Wilcoxon signed-rank test in predicting values of almost
all of the MRI biomarkers. Moreover, a linear discriminant analysis (LDA)
classifier applied to the predicted biomarker values produces a significantly
larger AUC of 0.90 vs. at most 0.84, with p < 0.001 using McNemar's test, for
clinical diagnosis of AD. Inspection of MAE curves as a function of the amount
of missing data reveals that the proposed LSTM algorithm achieves the best
performance until the proportion of missing values exceeds 74%. Finally, it is illustrated
how the method can successfully be applied to data with varying time intervals.
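One standard ingredient for training on incomplete sequences, in the spirit of the generalized rule described above, is to mask the loss so that missing targets contribute nothing to the gradient. A minimal, hypothetical sketch of target masking (the paper's generalized LSTM rule also handles missing predictor values inside the recurrent cell, which this does not show):

```python
def masked_mae(pred, target, mask):
    """Mean absolute error computed only over observed targets
    (mask[i] == 1). Missing entries neither contribute error nor
    require imputation, so the model trains directly on what was
    actually measured."""
    errs = [abs(p - t) for p, t, m in zip(pred, target, mask) if m]
    return sum(errs) / len(errs)
```

With this loss, a standard imputation pre-processing step for the targets becomes unnecessary, since unobserved time points are simply skipped.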
Knowing what you know in brain segmentation using Bayesian deep neural networks
In this paper, we describe a Bayesian deep neural network (DNN) for
predicting FreeSurfer segmentations of structural MRI volumes, in minutes
rather than hours. The network was trained and evaluated on a large dataset (n
= 11,480), obtained by combining data from more than a hundred different sites,
and also evaluated on another completely held-out dataset (n = 418). The
network was trained using a novel spike-and-slab dropout-based variational
inference approach. We show that, on these datasets, the proposed Bayesian DNN
outperforms previously proposed methods, in terms of the similarity between the
segmentation predictions and the FreeSurfer labels, and the usefulness of the
estimated uncertainty of these predictions. In particular, we demonstrate that
the prediction uncertainty of this network at each voxel is a good indicator of
whether the network has made an error, and that the uncertainty across the whole
brain can predict the manual quality-control ratings of a scan. The proposed
Bayesian DNN method should be applicable to any new network architecture for
addressing the segmentation problem.
Comment: Submitted to Frontiers in Neuroinformatics
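A common way to turn stochastic forward passes (such as those produced by dropout-based variational inference) into a per-voxel uncertainty score is the entropy of the averaged class probabilities. A minimal sketch under that assumption; the paper's spike-and-slab estimator may differ in its details:

```python
import math

def predictive_entropy(prob_samples):
    """prob_samples: a list of T softmax probability vectors for one
    voxel, one per stochastic forward pass. Returns the mean
    probabilities and the entropy of that mean, a standard proxy for
    predictive uncertainty: near 0 when all passes agree confidently,
    larger when they disagree or hedge."""
    t = len(prob_samples)
    c = len(prob_samples[0])
    mean_p = [sum(s[i] for s in prob_samples) / t for i in range(c)]
    entropy = -sum(p * math.log(p) for p in mean_p if p > 0)
    return mean_p, entropy
```

Aggregating such per-voxel scores over the brain (for example, by averaging) gives the kind of scan-level uncertainty that the abstract reports as predictive of manual quality-control ratings.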
Continual Learning in Medical Image Analysis: A Comprehensive Review of Recent Advancements and Future Prospects
Medical imaging analysis has witnessed remarkable advancements even
surpassing human-level performance in recent years, driven by the rapid
development of advanced deep-learning algorithms. However, when the inference
dataset slightly differs from what the model has seen during one-time training,
the model's performance is greatly compromised. The situation requires restarting
the training process using both the old and the new data, which is
computationally costly, does not align with the human learning process, and
imposes storage constraints and privacy concerns. Alternatively, continual
learning has emerged as a crucial approach for developing unified and
sustainable deep models to deal with new classes, tasks, and the drifting
nature of data in non-stationary environments for various application areas.
Continual learning techniques enable models to adapt and accumulate knowledge
over time, which is essential for maintaining performance on evolving datasets
and novel tasks. This systematic review paper provides a comprehensive overview
of the state-of-the-art in continual learning techniques applied to medical
imaging analysis. We present an extensive survey of existing research, covering
topics including catastrophic forgetting, data drifts, stability, and
plasticity requirements. Further, an in-depth discussion of key components of a
continual learning framework such as continual learning scenarios, techniques,
evaluation schemes, and metrics is provided. Continual learning techniques
encompass various categories, including rehearsal, regularization,
architectural, and hybrid strategies. We assess the popularity and
applicability of continual learning categories in various medical sub-fields
like radiology and histopathology.
How robotic surgery is changing our understanding of anatomy
The most recent revolution in our understanding and knowledge of the human body is the introduction of new technologies allowing direct magnified vision of internal organs, as in laparoscopy and robotics. The possibility of viewing an anatomical detail, until now not directly visible during open surgical operations and only partially during dissections of cadavers, has created a 'new surgical anatomy'. Consequent refinements of operative techniques, combined with better views of the surgical field, have given rise to continual and significant decreases in complication rates and improved functional and oncological outcomes. The possibility of exploring new ways of approaching organs to be treated now allows us to reinforce our anatomical knowledge and plan novel surgical approaches. The present review aims to clarify some of these issues. © 2017 Arab Association of Urology
Continual Self-supervised Learning: Towards Universal Multi-modal Medical Data Representation Learning
Self-supervised learning is an efficient pre-training method for medical
image analysis. However, current research is mostly confined to
specific-modality data pre-training, consuming considerable time and resources
without achieving universality across different modalities. A straightforward
solution is combining all modality data for joint self-supervised pre-training,
which poses practical challenges. Firstly, our experiments reveal conflicts in
representation learning as the number of modalities increases. Secondly,
multi-modal data collected in advance cannot cover all real-world scenarios. In
this paper, we reconsider versatile self-supervised learning from the
perspective of continual learning and propose MedCoSS, a continuous
self-supervised learning approach for multi-modal medical data. Unlike joint
self-supervised learning, MedCoSS assigns different modality data to different
training stages, forming a multi-stage pre-training process. To balance modal
conflicts and prevent catastrophic forgetting, we propose a rehearsal-based
continual learning method. We introduce the k-means sampling strategy to retain
data from previous modalities and rehearse it when learning new modalities.
Instead of executing the pretext task on buffer data, a feature distillation
strategy and an intra-modal mixup strategy are applied to these data for
knowledge retention. We conduct continuous self-supervised pre-training on a
large-scale multi-modal unlabeled dataset, including clinical reports, X-rays,
CT scans, MRI scans, and pathological images. Experimental results demonstrate
MedCoSS's exceptional generalization ability across nine downstream datasets
and its significant scalability in integrating new modality data. Code and
pre-trained weights are available at https://github.com/yeerwen/MedCoSS
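The k-means sampling strategy described above can be sketched as: cluster the stored feature vectors of a previous modality, then keep the real sample nearest each centroid as the rehearsal buffer. A pure-Python toy version using Lloyd's algorithm; the function names and details are illustrative, not MedCoSS's actual implementation:

```python
import random

def _dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def _mean(points):
    """Component-wise mean of a non-empty list of feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def kmeans_sample(features, k, iters=10, seed=0):
    """Select k representative samples for rehearsal: run k-means on the
    feature vectors, then return the actual sample closest to each
    centroid. Keeping real samples (not centroids) means the buffer can
    be fed back through the model when learning a new modality."""
    rng = random.Random(seed)
    centroids = rng.sample(features, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in features:
            j = min(range(k), key=lambda c: _dist2(x, centroids[c]))
            clusters[j].append(x)
        centroids = [_mean(cl) if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return [min(features, key=lambda x: _dist2(x, c)) for c in centroids]
```

Choosing cluster-nearest samples rather than a uniform random subset aims to preserve the diversity of the old modality's data in a fixed-size buffer.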