Domain generalization via model-agnostic learning of semantic features
Generalization capability to unseen domains is crucial for machine learning models when deployed in real-world conditions. We investigate the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics. We adopt a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift. Further, we introduce two complementary losses which explicitly regularize the semantic structure of the feature space. Globally, we align a derived soft confusion matrix to preserve general knowledge about inter-class relationships. Locally, we promote domain-independent class-specific cohesion and separation of sample features with a metric-learning component. The effectiveness of our method is demonstrated with new state-of-the-art results on two common object recognition benchmarks. Our method also shows consistent improvement on a medical image segmentation task.
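The episodic meta-train/meta-test split described in this abstract can be sketched as follows. The `sample_episode` helper and domain names are hypothetical illustrations; the actual method additionally performs gradient-based inner-loop updates and applies the two semantic regularization losses.

```python
import random

def sample_episode(domains, n_meta_test=1):
    """Randomly partition the source domains into meta-train and meta-test
    subsets, so each optimization step is exposed to a simulated domain shift.
    (Hypothetical sketch; the full method adds inner-loop gradient updates
    plus the global and local semantic regularizers.)"""
    shuffled = random.sample(domains, len(domains))
    return shuffled[n_meta_test:], shuffled[:n_meta_test]

# Example with three hypothetical source domains.
meta_train, meta_test = sample_episode(["photo", "sketch", "cartoon"])
```

Holding out a virtual target domain at every step is what exposes the optimization to domain shift, rather than fitting the pooled source data directly.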
AtrialGeneral: Domain Generalization for Left Atrial Segmentation of Multi-Center LGE MRIs
Left atrial (LA) segmentation from late gadolinium enhanced magnetic
resonance imaging (LGE MRI) is a crucial step needed for planning the treatment
of atrial fibrillation. However, automatic LA segmentation from LGE MRI is
still challenging, due to the poor image quality, high variability in LA
shapes, and unclear LA boundary. Though deep learning-based methods can provide
promising LA segmentation results, they often generalize poorly to unseen
domains, such as data from different scanners and/or sites. In this work, we
collect 210 LGE MRIs from different centers with different levels of image
quality. To evaluate the domain generalization ability of models on the LA
segmentation task, we employ four commonly used semantic segmentation networks
for the LA segmentation from multi-center LGE MRIs. In addition, we
investigate three domain generalization strategies, i.e., histogram matching,
mutual information based disentangled representation, and random style
transfer, among which simple histogram matching proves to be the most
effective. Comment: 10 pages, 4 figures, MICCAI202
Contrast Adaptive Tissue Classification by Alternating Segmentation and Synthesis
Deep learning approaches to the segmentation of magnetic resonance images
have shown significant promise in automating the quantitative analysis of brain
images. However, a continuing challenge has been its sensitivity to the
variability of acquisition protocols. Attempting to segment images that have
different contrast properties from those within the training data generally
leads to significantly reduced performance. Furthermore, heterogeneous data
sets cannot be easily evaluated because the quantitative variation due to
acquisition differences often dwarfs the variation due to the biological
differences that one seeks to measure. In this work, we describe an approach
using alternating segmentation and synthesis steps that adapts the contrast
properties of the training data to the input image. This allows input images
that do not resemble the training data to be more consistently segmented. A
notable advantage of this approach is that only a single example of the
acquisition protocol is required to adapt to its contrast properties. We
demonstrate the efficacy of our approach using brain images from a set of
human subjects scanned with two different T1-weighted volumetric protocols. Comment: 10 pages. MICCAI SASHIMI Workshop 202
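The alternation between segmentation and synthesis described above can be sketched as a control-flow skeleton. The `segment` and `synthesize` callables are placeholders for the paper's trained networks; this is a hypothetical sketch of the loop structure only.

```python
def adapt_contrast(image, segment, synthesize, n_iters=3):
    """Alternate segmentation and synthesis (hypothetical control-flow sketch):
    segment the input, synthesize an image with training-data contrast from
    the current labels, and re-segment, so the training contrast adapts
    toward the input image's contrast properties."""
    labels = segment(image)
    for _ in range(n_iters):
        harmonized = synthesize(labels, image)  # labels -> adapted-contrast image
        labels = segment(harmonized)
    return labels
```

Because the synthesis step conditions only on the single input image, one example of a new acquisition protocol suffices for adaptation, as the abstract notes.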
Random Style Transfer based Domain Generalization Networks Integrating Shape and Spatial Information
Deep learning (DL)-based models have demonstrated good performance in medical
image segmentation. However, models trained on a known dataset often fail
when applied to an unseen dataset collected from different centers, vendors
and disease populations. In this work, we present a random style transfer
network to tackle the domain generalization problem for multi-vendor and center
cardiac image segmentation. Style transfer is used to generate training data
with a wider distribution/heterogeneity, namely domain augmentation. As the
target domain could be unknown, we randomly generate a modality vector for the
target modality in the style transfer stage, to simulate the domain shift for
unknown domains. The model can be trained in a semi-supervised manner by
simultaneously optimizing a supervised segmentation and an unsupervised style
translation objective. In addition, the framework incorporates the spatial
information and shape prior of the target by introducing two regularization
terms. We evaluated the proposed framework on 40 subjects from the M&Ms
challenge 2020, and obtained promising segmentation performance on data
from unknown vendors and centers. Comment: 11 pages
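The random target-modality code used to simulate unknown domains in the style-transfer stage can be sketched as below. The code dimension and unit normalization are illustrative assumptions, not details from the paper.

```python
import numpy as np

def random_modality_vector(dim=8, rng=None):
    """Sample a random target-modality code for the style-transfer stage,
    simulating an unknown target domain during domain augmentation.
    (Hypothetical sketch; dimension and normalization are assumptions.)"""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)  # unit-norm modality code
```

Drawing a fresh code per training image widens the distribution of synthesized styles, which is the domain-augmentation effect the abstract describes.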
Quantifying Graft Detachment after Descemet's Membrane Endothelial Keratoplasty with Deep Convolutional Neural Networks
Purpose: We developed a method to automatically locate and quantify graft
detachment after Descemet's Membrane Endothelial Keratoplasty (DMEK) in
Anterior Segment Optical Coherence Tomography (AS-OCT) scans. Methods: 1280
AS-OCT B-scans were annotated by a DMEK expert. Using the annotations, a deep
learning pipeline was developed to localize scleral spur, center the AS-OCT
B-scans and segment the detached graft sections. Detachment segmentation model
performance was evaluated per B-scan by comparing (1) length of detachment and
(2) horizontal projection of the detached sections with the expert annotations.
Horizontal projections were used to construct graft detachment maps. All final
evaluations were done on a test set that was set apart during training of the
models. A second DMEK expert annotated the test set to determine inter-rater
performance. Results: Mean scleral spur localization error was 0.155 mm,
whereas the inter-rater difference was 0.090 mm. The estimated graft detachment
lengths were in 69% of the cases within a 10-pixel (~150 µm) difference from
the ground truth (77% for the second DMEK expert). Dice scores for the
horizontal projections of all B-scans with detachments were 0.896 and 0.880 for
our model and the second DMEK expert respectively. Conclusion: Our deep
learning model can be used to automatically and instantly localize graft
detachment in AS-OCT B-scans. Horizontal detachment projections can be
determined with the same accuracy as a human DMEK expert, allowing for the
construction of accurate graft detachment maps. Translational Relevance:
Automated localization and quantification of graft detachment can support DMEK
research and standardize clinical decision making. Comment: To be published in Translational Vision Science & Technology
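The Dice scores reported above for the horizontal detachment projections can be computed with the standard definition below (a generic implementation, not code from the paper).

```python
import numpy as np

def dice_score(a, b, eps=1e-8):
    """Dice overlap between two binary masks, as used to compare horizontal
    detachment projections against expert annotations."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + eps)
```

A score of 1.0 indicates perfect overlap and 0.0 no overlap, so the reported 0.896 (model) versus 0.880 (second expert) supports the claim of human-level projection accuracy.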